KR20160054751A - System for editing a text and method thereof - Google Patents

System for editing a text and method thereof Download PDF

Info

Publication number
KR20160054751A
KR20160054751A (application KR1020140154127A)
Authority
KR
South Korea
Prior art keywords
candidate
editing
text
user
region
Prior art date
Application number
KR1020140154127A
Other languages
Korean (ko)
Inventor
신종훈
Original Assignee
한국전자통신연구원
Priority date (The priority date is an assumption and is not a legal conclusion. Google has not performed a legal analysis and makes no representation as to the accuracy of the date listed.)
Filing date
Publication date
Application filed by 한국전자통신연구원 filed Critical 한국전자통신연구원
Priority to KR1020140154127A priority Critical patent/KR20160054751A/en
Publication of KR20160054751A publication Critical patent/KR20160054751A/en

Links

Images

Classifications

    • GPHYSICS
    • G06COMPUTING; CALCULATING OR COUNTING
    • G06FELECTRIC DIGITAL DATA PROCESSING
    • G06F3/00Input arrangements for transferring data to be processed into a form capable of being handled by the computer; Output arrangements for transferring data from processing unit to output unit, e.g. interface arrangements
    • G06F3/01Input arrangements or combined input and output arrangements for interaction between user and computer
    • G06F3/048Interaction techniques based on graphical user interfaces [GUI]
    • G06F3/0484Interaction techniques based on graphical user interfaces [GUI] for the control of specific functions or operations, e.g. selecting or manipulating an object, an image or a displayed text element, setting a parameter value or selecting a range
    • GPHYSICS
    • G06COMPUTING; CALCULATING OR COUNTING
    • G06FELECTRIC DIGITAL DATA PROCESSING
    • G06F3/00Input arrangements for transferring data to be processed into a form capable of being handled by the computer; Output arrangements for transferring data from processing unit to output unit, e.g. interface arrangements
    • G06F3/16Sound input; Sound output
    • GPHYSICS
    • G10MUSICAL INSTRUMENTS; ACOUSTICS
    • G10LSPEECH ANALYSIS TECHNIQUES OR SPEECH SYNTHESIS; SPEECH RECOGNITION; SPEECH OR VOICE PROCESSING TECHNIQUES; SPEECH OR AUDIO CODING OR DECODING
    • G10L15/00Speech recognition
    • G10L15/26Speech to text systems

Landscapes

  • Engineering & Computer Science (AREA)
  • Theoretical Computer Science (AREA)
  • Physics & Mathematics (AREA)
  • General Engineering & Computer Science (AREA)
  • Human Computer Interaction (AREA)
  • General Physics & Mathematics (AREA)
  • Audiology, Speech & Language Pathology (AREA)
  • Health & Medical Sciences (AREA)
  • Acoustics & Sound (AREA)
  • Multimedia (AREA)
  • Computational Linguistics (AREA)
  • General Health & Medical Sciences (AREA)
  • Machine Translation (AREA)

Abstract

A text editing system according to an embodiment of the present invention includes a user interface capable of touch-based input and display; and a text editing module that designates an editing area, through the user interface, for an error in text displayed on the user interface, presents candidate regions related to the designated editing area in a tiled manner through the user interface, and corrects the designated editing area with a candidate of a candidate region selected by the user.

Description

[0001] The present invention relates to a text editing system and a method thereof.

BACKGROUND OF THE INVENTION

1. Field of the Invention

The present invention relates to a text editing system and a method thereof, and more particularly, to a technique for providing a user interface capable of correcting text errors in automatic interpretation, speech recognition, and text input.

The automatic translation and interpretation system translates the source language (e.g., Korean) entered by the user into the target language (e.g., English, Chinese, etc.). To receive the source language, such a system lets the user key in text through a keyboard, or lets the user utter speech through the microphone of the device, in which case the uttered voice data is converted into text.

However, when speech is converted into text by speech recognition, the input is often recognized as characters or sentences the user did not intend. Even when speech recognition succeeds, the result expressed in the target language may fail to reflect the user's intention because of lexical ambiguity and errors in the automatic translation and interpretation subsystem.

In addition, because electronic devices such as cellular phones and PDAs (Personal Digital Assistants) have become smaller, input mistakes are common even when the user types directly on the keyboard.

When an input error occurs, whether the text was produced by speech recognition, interpretation, or direct user input, the user has to move to the area where the error occurred and correct the wrong word or character through keyboard input.

In addition, when the translated text does not match the user's intention, the user has to trigger a re-translation, either by resolving the word-sense ambiguity of the specific vocabulary or by directly selecting another word.

However, such conventional text editing techniques have the following problems.

First, correction by direct keyboard input is cumbersome because it requires every step of selecting the location, deleting the misrecognized part, and typing in the new text.

Second, one of the most frequent errors in speech recognition is failure to recognize vocabulary such as proper nouns, for example a person's name or a business name. With an interface that does not take this into account, the user must perform additional steps (such as searching the address book).

Third, when candidates are presented instead of requiring direct input, the small screen of a portable computing device means that presenting candidates as a simple list increases the number of touches and makes candidate browsing cumbersome.

Patent Publication No. KR 2013-0106226

An embodiment of the present invention provides a text editing system and method that offer a new touch-based interface, usable on a screen of limited size, for correcting speech recognition or translation results.

A text editing system according to an embodiment of the present invention includes a user interface capable of touch-based input and display; and a text editing module that designates an editing area, through the user interface, for an error in text displayed on the user interface, presents candidate regions related to the designated editing area in a tiled manner through the user interface, and corrects the designated editing area with a candidate of a candidate region selected by the user.

The apparatus may further include a voice recognition module for converting the input voice into text when the voice is input from the user.

In addition, the text editing module may correct the text error resulting from the speech recognition module in association with the speech recognition module.

The apparatus may further include an automatic translation module for translating the source language input by the user into the target language and outputting the result through the user interface.

In addition, the text editing module can correct the error of the text, which is the result of the automatic translation module, in conjunction with the automatic translation module.

The text editing module may include: a touch recognition unit for recognizing whether a contact point of a user's finger occurs on the user interface; a candidate providing unit for searching for candidates having high relevance to the editing area designated by the user and providing them through the user interface in a tiled manner; and a correcting unit for replacing the editing area with the candidate the user selects from among the candidates provided by the candidate providing unit.

The candidate providing unit may form a plurality of candidate regions around the designated editing area and, when a contact-movement pause of a predetermined time occurs while the contact point of the user's finger moves across the plurality of candidate regions, form another candidate region around the candidate region where the pause occurred and present other candidates.

In addition, the candidate providing unit may determine a candidate region at a point where the contact point of the user's finger is removed as a final candidate.

When forming the plurality of candidate areas around the designated editing area, the candidate providing unit may also form a direct input area through which the user can input text directly and an address book search area through which the address book can be searched.

The candidate providing unit may form the candidate regions in at least one of a hexagonal shape, a square shape, and a circular shape.

If the number of candidate regions is large, the candidate providing unit may classify and display the candidates under superordinate concepts, and when the user selects a tile of a superordinate concept, candidate regions of the subordinate concepts for that tile can be provided.

According to an embodiment of the present invention, there is provided a text editing method comprising: receiving designation of an editing area to be edited within text displayed on a user interface; displaying candidate regions for the designated editing area in a tiled manner around the editing area; receiving the user's selection of one of the candidate regions; and correcting the editing area with the candidate of the selected candidate region.

Also, the step of displaying on the periphery of the editing area may search for and provide candidates having high relevance to the designated editing area.

The displaying around the editing area may include displaying a plurality of candidate regions around the designated editing area and, when the user's finger moves while maintaining contact with the plurality of candidate regions and a contact-movement pause of a predetermined time occurs, forming another candidate region around the candidate region where the pause occurred and presenting other candidates.

In addition, the step of selecting one of the candidate regions from the user may select the candidate region at the point where the finger is lifted.

In addition, the step of displaying on the periphery of the editing area may display the candidate area with at least one of a hexagon shape, a square shape, and a circular shape.

If the number of candidate regions is large, the displaying around the editing area may classify and display the candidates under superordinate concepts, and when the user selects a tile of a superordinate concept, the candidate regions of the subordinate concepts for that tile can be displayed.

This technology presents candidates for correcting errors in speech recognition or automatic translation and interpretation through a tile-type user interface, so that the user can easily edit the errors and the perceived quality of the speech recognition or automatic translation and interpretation result is improved.

FIG. 1 is a configuration diagram of a text editing system according to an embodiment of the present invention.
FIG. 2 is a flowchart illustrating a text editing method according to an embodiment of the present invention.
FIG. 3 is a view illustrating an example of selecting an area to be corrected in the result text according to an embodiment of the present invention.
FIG. 4 is another exemplary diagram of selecting an area to be corrected in the output text according to an embodiment of the present invention.
FIG. 5 is an exemplary diagram illustrating candidates according to an embodiment of the present invention.
FIG. 6 is an exemplary diagram illustrating candidate selection and correction to the selected candidate according to an embodiment of the present invention.
FIG. 7 is a diagram illustrating an example of selecting a region to be corrected in a translation result according to an embodiment of the present invention.
FIG. 8 is another exemplary diagram showing candidates according to an embodiment of the present invention.

DETAILED DESCRIPTION OF THE PREFERRED EMBODIMENTS

Hereinafter, preferred embodiments of the present invention will be described in detail with reference to the accompanying drawings, so that a person skilled in the art can easily carry out the technical idea of the present invention.

The present invention provides a user interface for correcting a text error due to automatic translation, speech recognition, text input, and the like.

Hereinafter, embodiments of the present invention will be described in detail with reference to FIGS. 1 to 8.

FIG. 1 is a configuration diagram of a text editing system according to an embodiment of the present invention.

A text editing system according to an embodiment of the present invention includes a user interface 100, a voice recognition module 200, an automatic translation module 300, and a text editing module 400.

The user interface 100 is based on a touch interface through which the user can input commands by touch with a finger or the like, and may display an automatic translation result or a speech recognition result, a candidate presentation screen for text editing, and the corrected result. In addition, the user interface 100 may be provided in a mobile terminal such as a PDA, or may include a communication interface capable of wired or wireless communication with a PMP or a notebook PC.

The present invention describes an example in which the user interface 100, the speech recognition module 200, the automatic translation module 300, and the text editing module 400 are implemented in one device or system. However, a device or system may instead include only the user interface 100 and the text editing module 400 and perform text editing in cooperation with the voice recognition module 200 and the automatic translation module 300 through wireless or wired communication.

The voice recognition module 200 converts input voice into text based on the language model storage unit 220 and the voice model storage unit 230, and outputs the text to the user interface 100. To this end, the voice recognition module 200 includes a voice recognition processing unit 210, the language model storage unit 220, and the voice model storage unit 230.

The language model storage unit 220 stores word strings W = w_1 w_2 w_3 ... w_n, and the voice model storage unit 230 stores a voice model for each language model.

The voice recognition processing unit 210 converts input voice into text based on the language model storage unit 220 and the voice model storage unit 230. That is, the voice recognition processing unit 210 displays, as the final result, the recognition hypothesis with the highest probability value. However, the word with the highest probability in the monolingual corpus built in the language model storage unit 220 may differ from what the user intended.

The automatic translation module 300 translates a source language (e.g., Korean) into a target language (e.g., English) and outputs the result to the user interface 100. To this end, the automatic translation module 300 includes an automatic translation processing unit 310, a lexical semantic dictionary storage unit 320, and a thesaurus storage unit 330.

The automatic translation processing unit 310 translates the input source language into the target language based on the lexical semantic dictionary storage unit 320 and the thesaurus storage unit 330. The lexical semantic dictionary storage unit 320 stores a lexical semantic dictionary, and the thesaurus storage unit 330 stores a thesaurus. FIG. 7 shows an example of a translation performed by the automatic translation module 300.

Unlike the result of the speech recognition module 200, the output of the automatic translation module 300 includes word alignment information linking the target-language text to the source-language text. Such alignment information is available regardless of the translation method, whether rule-based, statistics-based, or example-based, and the editing area can be selected by using it. The editing area is selected through the alignment information because a user who relies on the automatic translation module 300 generally cannot read the target-language sentence; to check whether the translation result matches his or her intention, the user has to choose the location based on the original source text. According to the present invention, the target vocabulary is selected based on this characteristic.

Referring to FIG. 7, consider as an example of word sense disambiguation a Korean sentence meaning "Daegu is close to Gyeongju" that the automatic translation module 300 translates as "Taegu and race is near." The original Korean vocabulary "Gyeongju" (경주) is ambiguous: it is a place name, but it can also mean "race," as in a car race. Through the alignment information described above, "Daegu" is linked to "Taegu," the conjunction to "and," "Gyeongju" to "race," and "is close" to "be near." Based on this connection information, when the user places a finger on the screen at the location of a vocabulary item, that vocabulary becomes the target of editing. For the candidate output used to resolve the lexical ambiguity, the text editing module 400 draws on the lexical semantic dictionary storage unit 320 and the thesaurus storage unit 330 of the automatic translation module 300, and the candidates can be represented as square tiles of superordinate concepts; in FIG. 8, the superordinate square tiles are displayed in association with "race." For example, FIG. 8 provides tiles for the place name 610, the race 620, concentration 630 (another homonym), and direct input of another target word 640. Accordingly, when the place-name tile 610 is touched, subordinate candidates related to the place name are presented, and when the race tile 620 is touched, subordinate candidates related to races can be presented.
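The alignment-driven selection described above can be made concrete with a minimal Python sketch that maps a touched source-language word to its aligned target-language counterpart, which then becomes the editing region. The tokenization and the (source index, target index) alignment pairs are illustrative assumptions, not data from this publication.

```python
# A hypothetical word-alignment lookup: given (source_index, target_index)
# pairs such as a statistical aligner might emit, find the target words
# linked to the source word the user touched.

source = ["Daegu", "and", "Gyeongju", "is-close"]   # loose tokenization of the example
target = ["Taegu", "and", "race", "is", "near"]

alignment = [(0, 0), (1, 1), (2, 2), (3, 3), (3, 4)]  # assumed alignment pairs

def aligned_target_indices(src_idx: int) -> list[int]:
    """Return indices of target words aligned to the touched source word."""
    return [t for s, t in alignment if s == src_idx]

# The user touches "Gyeongju" (source index 2); the editor highlights the
# aligned target word "race" as the editing region.
print([target[t] for t in aligned_target_indices(2)])  # ['race']
```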

Accordingly, when the user touches and selects one of the superordinate square tiles, the candidates of the subordinate concepts for the selected tile can be presented in a tiled manner.

The text editing module 400 corrects errors in the speech recognition result produced by the speech recognition module 200, in the result automatically translated by the automatic translation module 300, and in text input through the user interface 100. The text editing module 400 includes a touch recognition unit 410, a candidate providing unit 420, and a correction unit 430.

The touch recognition unit 410 receives designation of an editing area from the user, through the user interface 100, within the speech recognition result, translation result, or input text displayed on the screen of the user interface 100. At this time, the editing area can be designated by enlarging the corresponding area as shown in FIG. 3, or specified by shading the editing area 410 as shown in FIG. 4.

Also, the touch recognition unit 410 checks whether a finger contact by the user occurs or disappears in the candidate regions presented on the user interface 100, thereby determining whether the finger is touching.

The candidate providing unit 420 searches for candidates for the editing region designated by the user and outputs them to the user interface 100. At this time, if the editing target is a speech recognition result, the candidate providing unit 420 searches for candidates through the language model storage unit 220 or the voice model storage unit 230 of the speech recognition module 200; if the editing target is an automatic translation result, it searches for candidates for the designated editing region through the lexical semantic dictionary storage unit 320 and the thesaurus storage unit 330 of the automatic translation module 300. Here, candidates close to the designated editing region mean similar-vocabulary candidates and related-vocabulary candidates, and they can be determined by relevance of concept, form, and the like.

The method by which the candidate providing unit 420 searches for candidates is described below.

First, the language model storage unit 220 stores a word string W = w_1 w_2 w_3 ... w_n, which can be expressed as Equation (1) below.

[Equation 1 — rendered as an image in the original publication and not reproduced here.]

Assume that the character string included in the designated editing area in the language model storage unit 220 is W_x, and that the adjacent words not included in the editing area are W_{x-1} and W_{x+1}. The candidate providing unit 420 can then present candidates in descending order of the probability p(W_{x+1} | W_1 ... W_{x-1} W_x). If the use of a language model is unsuitable, an analysis method dependent on the particular language may be used, or candidates may be presented from a pre-built dictionary list. When the user first selects the candidate closest to his or her intention from among the candidates presented in this way, new candidates are recalculated by the same method around the selected candidate. At this time, to inform the user that these are new candidates, they may be output in a slightly different color or shape. As shown in FIG. 4, when the user touches the candidate "Kim Soo Hyun" positioned at the upper center to form a contact point, candidates are recomputed around "Kim Soo Hyun."
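As a concrete illustration of this ranking step, the following minimal Python sketch orders replacement candidates for the editing region W_x by how well they fit the neighboring words under an add-alpha smoothed bigram model. The toy corpus, the smoothing constant, and the candidate list are illustrative assumptions; the text above only specifies ordering candidates by a language-model probability.

```python
from collections import Counter

# Toy corpus standing in for the monolingual corpus in the language model
# storage unit 220; in practice this would be far larger.
corpus = "my name is kim soo hyun . her name is kim soo yeon .".split()
bigrams = Counter(zip(corpus, corpus[1:]))
unigrams = Counter(corpus)

def p(w2: str, w1: str, alpha: float = 0.1) -> float:
    """Add-alpha smoothed bigram probability p(w2 | w1)."""
    return (bigrams[(w1, w2)] + alpha) / (unigrams[w1] + alpha * len(unigrams))

def rank(candidates, left: str, right: str):
    """Order candidates c for the editing region by p(c | left) * p(right | c)."""
    return sorted(candidates, key=lambda c: p(c, left) * p(right, c), reverse=True)

# Editing region W_x sits between W_{x-1} = "soo" and W_{x+1} = ".".
print(rank(["hyun", "yeon", "jin"], left="soo", right="."))
```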

Also, when providing the candidates, the candidate providing unit 420 may form a plurality of candidate regions around the designated editing region (excluding the editing region itself) and display the candidates within those regions. As shown in FIG. 5, the editing region 420 and the candidate regions may be displayed as hexagonal (honeycomb) tiles. Alongside the candidate regions, a direct input region through which the user can type on the keyboard and an address book search menu for looking up names in the address book may be displayed together. The tiles can be implemented in various shapes such as hexagons, rectangles, and circles.
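The honeycomb placement can be sketched as follows: a minimal Python example, under assumed screen coordinates and tile spacing, that computes the six tile centers of the first hexagonal ring around the editing region. The candidate labels and the reserved direct-input and address-book slots are likewise illustrative.

```python
import math

def hex_ring(cx: float, cy: float, spacing: float):
    """Return the six tile centers surrounding a central hexagonal tile."""
    centers = []
    for k in range(6):
        angle = math.radians(60 * k + 30)        # neighbor directions
        centers.append((cx + spacing * math.cos(angle),
                        cy + spacing * math.sin(angle)))
    return centers

# Four candidates plus the direct-input and address-book tiles fill the ring.
tiles = ["Kim Soo Hyun", "Kim Soo Jin", "Kim Soo Young",
         "Kim Su Yeon", "direct input", "address book"]
for label, (x, y) in zip(tiles, hex_ring(200, 300, 96)):
    print(f"{label:>13}: ({x:.0f}, {y:.0f})")
```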

At this time, the candidate providing unit 420 can represent the candidates as square tiles as shown in FIG. 8, and if the number of candidates is large, not all candidates are presented on individual tiles; they are grouped under superordinate categories instead. For example, if the selected vocabulary is the Korean homograph "눈," the user can move from broad category meanings such as 'eye', 'eyesight', 'snow', 'bud (part of a plant)', and 'mesh (of a net)' down to the lower level and select the final intended meaning.
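A minimal sketch of this two-level presentation follows, assuming a hand-built sense inventory for the homograph "눈"; in the system itself, the senses would come from the lexical semantic dictionary storage unit 320 and the thesaurus storage unit 330.

```python
# Hypothetical two-level sense inventory for the Korean word "눈": tiles
# first show superordinate categories; tapping one reveals its subordinate
# candidate senses.
senses = {
    "eye":      ["eye", "eyeball"],
    "eyesight": ["eyesight", "vision"],
    "snow":     ["snow", "snowfall"],
    "bud":      ["bud", "sprout"],
    "mesh":     ["mesh", "scale of a net"],
}

def top_level_tiles() -> list[str]:
    """First screen: one tile per superordinate category."""
    return list(senses)

def expand(category: str) -> list[str]:
    """User taps a superordinate tile; show its subordinate candidates."""
    return senses.get(category, [])

print(top_level_tiles())   # ['eye', 'eyesight', 'snow', 'bud', 'mesh']
print(expand("snow"))      # ['snow', 'snowfall']
```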

Also, the candidate providing unit 420 may expand and present candidates related to the currently selected region around the region where the user's finger contact occurs, that is, around the region (editing region or candidate region) being touched by the user. As shown in FIG. 6, the user can move the finger contact by dragging, and candidates associated with the current selection region are formed consecutively around the selection region where the contact lies. This expansion of candidates can continue until the user finds the desired candidate.

Also, the candidate providing unit 420 determines that the candidate at the point where the finger contact disappears, as detected by the touch recognition unit 410, is the selected one, and transmits information on the selected final candidate to the correcting unit 430. Accordingly, the user must maintain the contact point between finger and screen until the desired final form of the correction is reached. Referring to FIG. 6, the contact point is moved to another candidate by moving the user's finger over the candidate regions. Unlike the first selection, a repeated candidate selection requires a dragging pause of a predetermined time (for example, 0.5 second) on a candidate; whenever such a contact-movement pause occurs, the candidate providing unit 420 presents candidates different from the previous options, and this can be repeated.
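The drag-and-dwell behavior just described can be sketched as a small event loop: while the finger stays down, pausing on a tile for the predetermined time (0.5 second in the example above) triggers expansion of further candidates around that tile, and lifting the finger commits the tile under it as the final candidate. The event tuple format and the expansion callback are illustrative assumptions.

```python
DWELL_SECONDS = 0.5   # the predetermined pause from the text

def run_selection(events, expand_around):
    """events: iterable of (kind, tile, timestamp); kind is 'move' or 'up'."""
    current_tile, entered_at, expanded = None, 0.0, False
    for kind, tile, ts in events:
        if kind == "up":
            return current_tile                  # finger lifted: final candidate
        if tile != current_tile:                 # contact entered a new tile
            current_tile, entered_at, expanded = tile, ts, False
        elif not expanded and ts - entered_at >= DWELL_SECONDS:
            expand_around(tile)                  # dwell: present more candidates
            expanded = True
    return None

events = [("move", "Kim Soo Hyun", 0.0), ("move", "Kim Soo Hyun", 0.6),
          ("up", "Kim Soo Hyun", 0.8)]
print(run_selection(events, lambda t: print("expand around", t)))
```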

The correcting unit 430 corrects the editing area designated by the user with the final candidate received from the candidate providing unit 420. Referring to FIG. 6, it can be seen that the editing region "Kim Soo Yeon" has been corrected to the final candidate "Kim Soo Hyun." In the case of a correction to a translation result, the text editing module 400 can perform re-translation in conjunction with the automatic translation module 300.

The text editing system having such a configuration can be implemented in a portable terminal and can be mounted on various portable terminals such as a PDA, a smart phone, and a mobile phone.

Hereinafter, a text editing method according to an embodiment of the present invention will be described with reference to FIG. 2.

The speech recognition module 200 converts input speech into text and outputs it through the user interface 100, or the automatic translation module 300 translates the source language into the target language and transmits the translation result to the user interface 100 (S101).

Accordingly, the touch recognition unit 410 of the text editing module 400 receives designation of an editing area from the user, through the user interface 100, within the speech recognition result or translation result displayed on the screen (S102). The editing area can be enlarged as shown in FIG. 3, or shaded as shown for the editing area 410 in FIG. 4.

The candidate providing unit 420 of the text editing module 400 searches for candidates for the editing area designated by the user and outputs them to the user interface 100 (S103). At this time, if the editing target is a speech recognition result, the candidate providing unit 420 searches for candidates for the designated editing region through the language model storage unit 220 or the voice model storage unit 230 of the voice recognition module 200; if the editing target is an automatic translation result, it searches through the lexical semantic dictionary storage unit 320 and the thesaurus storage unit 330 of the automatic translation module 300.

At this time, when providing candidates, the candidate providing unit 420 may form a plurality of candidate regions around the designated editing region and display each candidate in one of them. As shown in FIG. 5, the editing region 420 and the candidate regions may be displayed as hexagonal (honeycomb) tiles, and a direct input region through which the user can type on the keyboard may be displayed alongside the candidate regions.

Then, the touch recognition unit 410 determines whether a finger contact by a user has occurred in a candidate region presented in the user interface 100 (S104). That is, the touch recognition unit 410 determines whether the user has touched the editing area or the candidate area.

The candidate providing unit 420 then expands and presents candidates related to the selected region around the region where the user's finger contact occurred, that is, around the region (editing region or candidate region) being touched by the user (S105). As shown in FIG. 6, the user can move the finger contact by dragging, and candidates associated with the current selection region are formed consecutively around the selection region where the contact lies. This expansion of candidates can continue until the user finds the desired candidate.

Then, the touch recognition unit 410 determines whether the user's finger contact has disappeared (S106). If the contact has disappeared, the candidate providing unit 420 determines the candidate region at the point where the finger was lifted as the final candidate (S107).

The correcting unit 430 replaces the editing area designated in step S102 with the final candidate selected in step S107. In the case of a correction to a translation result, the text editing module 400 can perform re-translation in conjunction with the automatic translation module 300.

As described above, according to the present invention, when there is an error in a translation result, a speech recognition result, or a text input result, the user does not have to retype the desired result; instead, the user selects the desired region by dragging a finger, and the text can be modified to suit the user's intention.

While the present invention has been particularly shown and described with reference to exemplary embodiments thereof, the invention is not limited to the disclosed embodiments; various modifications that do not depart from the technical idea of the present invention should be regarded as falling within the scope of the claims.

Claims (17)

A text editing system comprising:
a user interface capable of touch-based input and display; and
a text editing module configured to designate an editing area, through the user interface, for an error of text displayed on the user interface, to present candidate regions related to the designated editing area in a tiled manner through the user interface, and to correct the designated editing area with a candidate of a candidate region selected by a user.
The text editing system according to claim 1,
Further comprising a voice recognition module for converting input voice into text when voice is input from the user.
The text editing system of claim 2,
Wherein the text editing module corrects an error of text resulting from the voice recognition module in association with the voice recognition module.
The text editing system according to claim 1,
Further comprising an automatic translation module for translating a source language input by the user into a target language and outputting the result through the user interface.
The text editing system of claim 4,
Wherein the text editing module corrects an error of text resulting from the automatic translation module in association with the automatic translation module.
The text editing system according to claim 1,
Wherein the text editing module comprises:
A touch recognition unit for recognizing whether or not a contact point of a user's finger is generated in the user interface;
A candidate providing unit for searching for candidates having high relevance to the editing region designated by the user through the user interface and providing the candidates through the user interface in a tiled manner; And
And a correcting unit for replacing the editing region with a candidate selected by the user from among the candidates provided by the candidate providing unit.
The text editing system of claim 6,
The candidate providing unit,
Forms a plurality of candidate regions around the designated editing region and, when a contact-movement pause of a predetermined time occurs while the contact point of the user's finger moves across the plurality of candidate regions while maintaining contact, forms another candidate region around the candidate region where the pause occurred and presents other candidates.
The text editing system of claim 7,
The candidate providing unit,
Determines the candidate region at the point where the contact of the user's finger is removed as a final candidate.
The text editing system of claim 7,
The candidate providing unit,
Wherein, when forming the plurality of candidate areas around the designated editing area, a direct input area through which the user can input directly is formed together with an address book search area through which an address book can be searched.
The text editing system of claim 6,
The candidate providing unit,
Wherein the candidate region is formed in at least one of a hexagonal shape, a square shape, and a circular shape.
The text editing system of claim 6,
The candidate providing unit,
If the number of the candidate regions is large, classifies and displays the candidate regions under superordinate concepts and, when the user selects a tile of a superordinate concept, provides candidate regions of the subordinate concepts for the superordinate-concept tile.
A text editing method comprising the steps of:
receiving designation of an editing area to be edited within text displayed on a user interface;
displaying candidate regions for the designated editing area in a tiled manner around the editing area;
receiving a user's selection of one of the candidate regions; and
correcting the editing area with a candidate of the selected candidate region.
The method of claim 12,
Wherein the step of displaying on the periphery of the editing area comprises:
Searching for and providing candidates having high relevance to the designated editing region.
The method of claim 12,
Wherein the step of displaying on the periphery of the editing area comprises:
Displaying a plurality of candidate regions around the designated editing region and, when the user's finger moves while maintaining contact with the plurality of candidate regions and a contact-movement pause of a predetermined time occurs, forming another candidate region around the candidate region where the pause occurred and presenting other candidates.
The method of claim 14,
Wherein the step of selecting one of the candidate regions from the user comprises:
And selecting the candidate region at the point where the finger is lifted off.
The method of claim 12,
Wherein the step of displaying on the periphery of the editing area comprises:
Wherein the candidate region is displayed in at least one of a hexagonal shape, a rectangular shape, and a circular shape.
The method of claim 12,
Wherein the step of displaying on the periphery of the editing area comprises:
Wherein, if the number of the candidate regions is large, the candidate regions are classified and displayed under superordinate concepts, and when the user selects a tile of a superordinate concept, candidate regions of the subordinate concepts for the superordinate-concept tile are displayed.
KR1020140154127A 2014-11-07 2014-11-07 System for editing a text and method thereof KR20160054751A (en)

Priority Applications (1)

Application Number Priority Date Filing Date Title
KR1020140154127A KR20160054751A (en) 2014-11-07 2014-11-07 System for editing a text and method thereof

Applications Claiming Priority (1)

Application Number Priority Date Filing Date Title
KR1020140154127A KR20160054751A (en) 2014-11-07 2014-11-07 System for editing a text and method thereof

Publications (1)

Publication Number Publication Date
KR20160054751A true KR20160054751A (en) 2016-05-17

Family

ID=56109391

Family Applications (1)

Application Number Title Priority Date Filing Date
KR1020140154127A KR20160054751A (en) 2014-11-07 2014-11-07 System for editing a text and method thereof

Country Status (1)

Country Link
KR (1) KR20160054751A (en)


Cited By (4)

* Cited by examiner, † Cited by third party
Publication number Priority date Publication date Assignee Title
CN109271094A (en) * 2017-07-18 2019-01-25 北京搜狗科技发展有限公司 A kind of method, device and equipment of text editing
CN109271094B (en) * 2017-07-18 2022-02-22 北京搜狗科技发展有限公司 Text editing method, device and equipment
CN110619119A (en) * 2019-07-23 2019-12-27 平安科技(深圳)有限公司 Intelligent text editing method and device and computer readable storage medium
CN110619119B (en) * 2019-07-23 2022-06-10 平安科技(深圳)有限公司 Intelligent text editing method and device and computer readable storage medium

Similar Documents

Publication Publication Date Title
JP4829901B2 (en) Method and apparatus for confirming manually entered indeterminate text input using speech input
US9026428B2 (en) Text/character input system, such as for use with touch screens on mobile phones
US9977779B2 (en) Automatic supplementation of word correction dictionaries
US7395203B2 (en) System and method for disambiguating phonetic input
JP5362095B2 (en) Input method editor
US11640503B2 (en) Input method, input device and apparatus for input
US8311829B2 (en) Multimodal disambiguation of speech recognition
US20050027534A1 (en) Phonetic and stroke input methods of Chinese characters and phrases
US9484034B2 (en) Voice conversation support apparatus, voice conversation support method, and computer readable medium
US11030418B2 (en) Translation device and system with utterance reinput request notification
US20150169537A1 (en) Using statistical language models to improve text input
TW200538969A (en) Handwriting and voice input with automatic correction
JP2011254553A (en) Japanese language input mechanism for small keypad
US20170270092A1 (en) System and method for predictive text entry using n-gram language model
JP2007538299A (en) Virtual keyboard system with automatic correction function
US20070288240A1 (en) User interface for text-to-phone conversion and method for correcting the same
US10025772B2 (en) Information processing apparatus, information processing method, and program
CN113268981A (en) Information processing method and device and electronic equipment
KR20160054751A (en) System for editing a text and method thereof
KR20170009486A (en) Database generating method for chunk-based language learning and electronic device performing the same
CN105683891A (en) Inputting tone and diacritic marks by gesture
KR20130128172A (en) Mobile terminal and inputting keying method for the disabled
US20150277752A1 (en) Providing for text entry by a user of a computing device
JP2005018442A (en) Display processing apparatus, method and program, and recording medium
JP2021168020A (en) Voice input device

Legal Events

Date Code Title Description
WITN Withdrawal due to no request for examination