WO2005004041A1 - Handwriting input device, handwriting input method, handwriting input program, and program recording medium - Google Patents
- Publication number
- WO2005004041A1 PCT/JP2004/009767 JP2004009767W
- Authority
- WO
- WIPO (PCT)
- Prior art keywords
- handwritten information
- prediction candidate
- input
- search target
- information
- Prior art date
Classifications
-
- G—PHYSICS
- G06—COMPUTING; CALCULATING OR COUNTING
- G06V—IMAGE OR VIDEO RECOGNITION OR UNDERSTANDING
- G06V30/00—Character recognition; Recognising digital ink; Document-oriented image-based pattern recognition
- G06V30/10—Character recognition
- G06V30/14—Image acquisition
- G06V30/142—Image acquisition using hand-held instruments; Constructional details of the instruments
- G06V30/1423—Image acquisition using hand-held instruments; Constructional details of the instruments the instrument generating sequences of position coordinates corresponding to handwriting
Definitions
- Handwriting input device, handwriting input method, handwriting input program, and program recording medium
- the present invention relates to a handwriting input device, a handwriting input method, a handwriting input program, and a program recording medium.
- Patent Document 1 discloses a text input device that, when an abbreviation symbol is input in the middle or at the continuation of a character string being input, searches a word dictionary, an example dictionary, and the like, and presents character string candidates in which the omitted part has been completed.
- Patent Document 2 discloses a handwritten character/symbol processing device that, when searching for previously input handwritten information, searches a handwritten information dictionary for handwritten information similar to the handwritten information input by the operator as a search specification.
- the conventional text input device and handwritten character/symbol processing device have the following problems. The text input device described in Patent Document 1 searches a word dictionary, an example dictionary, and the like to complete the omitted part, and can therefore reduce the burden of inputting characters when the final output is printed text. However, it cannot be used when the handwritten information input by the operator is to be processed as an image.
- an object of the present invention is to provide a handwriting input device, a handwriting input method, a handwriting input program, and a program recording medium capable of reducing the operator's load during handwriting input when the handwritten information input by the operator is processed as an image.
- a handwriting input device comprises:
- a prediction candidate generation unit (3) for generating prediction candidates of the handwritten information intended for input, based on the handwritten information input from the input unit (1);
- the display unit (5) is configured to display the selected candidate when a prediction candidate is selected by the prediction candidate selection unit (4).
- the “handwritten information” in this specification means information relating to a set of one or more strokes constituting a character, a graphic, or the like.
- the “prediction candidate” refers to each candidate obtained when predicting the handwritten information that the operator finally intends to input.
- the operator inputs handwritten information from the input unit until the desired prediction candidate is displayed on the display unit, and then selects the displayed desired prediction candidate.
- characters and figures can be input without actually inputting all the handwritten information (characters and figures) intended for input. Therefore, when processing the input handwritten information as an image, the labor of handwriting input is greatly reduced, and the input efficiency is greatly improved.
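As a rough illustration, the input flow described above can be sketched as follows. This is a minimal sketch: the string-prefix dictionary and all function names are hypothetical simplifications of the prediction candidate generation unit (3), which in the actual device matches stroke shapes rather than text.

```python
# Minimal sketch of the predictive handwriting-input loop (assumed names).
def generate_candidates(strokes, dictionary):
    """Return dictionary entries whose beginning resembles the input so far.

    'Resemblance' is simplified here to a string prefix test; the device
    described above uses pattern matching on stroke coordinates instead.
    """
    prefix = "".join(strokes)
    return [e for e in dictionary if e.startswith(prefix) and e != prefix]

def handwriting_input(stroke_sequence, desired, dictionary):
    """Write strokes one by one until the desired candidate appears, then select it."""
    entered = []
    for stroke in stroke_sequence:
        entered.append(stroke)                        # operator writes one stroke
        candidates = generate_candidates(entered, dictionary)
        if desired in candidates:                     # desired candidate displayed
            return desired, len(entered)              # operator selects it
    return "".join(entered), len(entered)             # had to write everything

dictionary = ["handwriting", "handwritten", "handle"]
result, strokes_written = handwriting_input(list("handwriting"), "handwriting", dictionary)
print(result, strokes_written)  # handwriting 1
```

The point of the sketch is the loop structure: the full target is obtained after writing only part of it, which is the labor reduction claimed above.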
- the prediction candidate generation unit (3) uses, as search target handwritten information, part or all of the handwritten information input from the input unit (1).
- the input handwritten information includes the search target handwritten information,
- the display unit (5) displays the selected prediction candidate together with the search target handwritten information.
- the operator inputs handwritten information from the input unit until the desired prediction candidate is displayed on the display unit, and then selects the displayed desired prediction candidate.
- characters and figures can be input without actually inputting by hand all the handwritten information (characters and figures) intended for input. Therefore, when the input handwritten information is processed as an image, the time and effort of handwriting input are greatly reduced, and input efficiency is greatly improved.
- a handwritten information dictionary in which various handwritten information is registered as a dictionary entry is provided, and the prediction candidate generation unit (3) uses part or all of the handwritten information input from the input unit (1) as search target handwritten information.
- a dictionary entry partially similar to the search target handwritten information is extracted from the handwritten information dictionary, and prediction candidates of the handwritten information intended for input are generated based on the extracted dictionary entry; the input handwritten information includes the search target handwritten information;
- the display unit (5) deletes, from the selected prediction candidate, the part similar to the search target handwritten information, and displays the result together with the search target handwritten information.
- the operator inputs handwritten information from the input unit until the desired prediction candidate is displayed on the display unit, and then selects the displayed desired prediction candidate.
- a handwritten information dictionary in which various handwritten information is registered as dictionary entries is provided, and the prediction candidate generation unit (3) uses part or all of the handwritten information input from the input unit (1) as search target handwritten information, extracts from the handwritten information dictionary a dictionary entry partially similar to the search target handwritten information, and generates prediction candidates of the handwritten information intended for input based on the extracted dictionary entry;
- the input handwritten information includes the search target handwritten information,
- the display unit (5) sets the already displayed search target handwritten information to a non-display state and displays the selected prediction candidate.
- the operator inputs handwritten information from the input unit until the desired prediction candidate is displayed on the display unit, and then selects the displayed desired prediction candidate.
- characters and figures can be input without actually inputting by hand all the handwritten information (characters and figures) intended for input. Therefore, when the input handwritten information is processed as an image, the time and effort of handwriting input are greatly reduced, and input efficiency is greatly improved.
- the prediction candidate generation unit (3) changes the size and direction of the entire extracted dictionary entry so that the size and direction of the portion similar to the search target handwritten information match the size and direction of the search target handwritten information, and then generates the prediction candidate based on the dictionary entry whose overall size and direction have been changed.
- when displaying the prediction candidate generated by the prediction candidate generation unit (3), the display unit (5) displays it so that the position of the part of the prediction candidate similar to the search target handwritten information, or the position where that part existed, matches the position of the search target handwritten information.
- the search target handwritten information and the prediction candidate are displayed in a continuous state. Therefore, the operator can check in advance how the image will look when the prediction candidate is selected.
- when displaying the prediction candidate selected by the prediction candidate selection unit (4), the display unit (5) determines the degree of deviation, in size and direction, of the part of the dictionary entry used to generate the selected prediction candidate that is similar to the search target handwritten information, relative to the size and direction of the search target handwritten information. After changing the size and direction of the entire selected prediction candidate in accordance with this degree of deviation, the display unit displays it so that the position of the part similar to the search target handwritten information, or the position where that part existed, matches the position of the search target handwritten information.
- the display unit (5) displays the prediction candidate in a display form different from the input handwritten information.
- the prediction candidate selection section (4) is composed of buttons or menus displayed near the prediction candidates by the display section (5).
- the operation of selecting a desired prediction candidate from the prediction candidates displayed on the display unit is facilitated, and operability is improved.
- the prediction candidate selection section (4) is composed of a button or a menu displayed near the writing end point of the search target handwritten information by the display section (5). According to this embodiment, the operation for selecting a desired prediction candidate from the prediction candidates displayed on the display unit is facilitated, and the prediction candidates after handwriting input can be quickly selected. And operability is improved.
- the prediction candidate selection section (4) is composed of a button or a menu displayed near the writing end point of the prediction candidate on the display section (5).
- the operation for selecting a desired prediction candidate from the prediction candidates displayed on the display unit is facilitated, and the transition to the next handwriting input after the selection of the prediction candidates is facilitated. And operability is improved.
- the handwriting input method of the present invention includes:
- characters and figures are input without actually inputting by hand all the handwritten information (characters and figures) intended for input. Therefore, when the input handwritten information is processed as an image, the time and effort of handwriting input are greatly reduced, and input efficiency is greatly improved.
- the prediction candidate generation step extracts, from the handwritten information dictionary, a dictionary entry partially similar to the search target handwritten information, using part or all of the input handwritten information as the search target handwritten information, and, by removing from the extracted dictionary entry the part similar to the search target handwritten information, generates prediction candidates of the handwritten information intended for input,
- the handwritten information displayed in the display step includes the search target handwritten information, and the selected prediction candidate display step displays the selected prediction candidate together with the search target handwritten information.
- characters and figures are input without actually inputting by hand all of the handwritten information (characters and figures) intended for input. Therefore, when the input handwritten information is processed as an image, the time and effort of handwriting input are greatly reduced, and input efficiency is greatly improved.
- the prediction candidate generation step extracts a dictionary entry partially similar to the search target handwritten information from the handwriting information dictionary, using part or all of the input handwritten information as search target handwritten information.
- the handwritten information displayed in the display step includes the search target handwritten information, and the selected prediction candidate display step deletes the portion similar to the search target handwritten information from the selected prediction candidate and displays the result together with the search target handwritten information.
- the prediction candidate generation step extracts a dictionary entry partially similar to the search target handwritten information from the handwritten information dictionary, using part or all of the input handwritten information as the search target handwritten information, and generates prediction candidates of the handwritten information intended for input based on the extracted dictionary entry;
- the handwritten information displayed in the display step includes the search target handwritten information, and the selected prediction candidate display step sets the already displayed search target handwritten information to a non-display state and displays the selected prediction candidate.
- characters and figures are input without actually inputting by hand all of the handwritten information (characters and figures) intended for input. Therefore, when the input handwritten information is processed as an image, the time and effort of handwriting input are greatly reduced, and input efficiency is greatly improved.
- the handwriting input program of the present invention comprises:
- causes a computer to function as the input unit, the prediction candidate generation unit, the display unit, and the prediction candidate selection unit in the handwriting input device of the present invention.
- characters and figures can be input only by manually inputting a part of handwritten information (characters and figures) intended to be input. Therefore, when processing the input handwritten information as an image, the labor of handwritten input is greatly reduced, and the input efficiency is greatly improved.
- the program recording medium of the present invention is a medium on which the handwriting input program of the present invention is recorded.
- FIG. 1 is a functional block diagram of the handwriting input device of the present invention.
- FIG. 2 is a diagram showing a specific hardware configuration of the handwriting input device shown in FIG.
- FIG. 3 is a flowchart of the handwriting input processing operation executed under the control of the CPU in FIG.
- FIGS. 4A, 4B, 4C, and 4D show examples of handwritten information prepared in advance in the handwritten information dictionary.
- FIG. 5 is a flowchart of the prediction candidate generation processing operation executed in the handwriting input processing operation shown in FIG.
- FIGS. 6A, 6B, 6C, and 6D are diagrams showing an example of search target handwritten information and a dictionary entry to be subjected to DP matching.
- FIGS. 7A and 7B are diagrams showing the correspondence between the search target handwritten information shown in FIGS. 6A, 6B, 6C and 6D and dictionary entries.
- FIGS. 8A and 8B are diagrams illustrating an example of search target handwritten information and a dictionary entry that differ in size and direction.
- FIGS. 9A and 9B are explanatory diagrams in the case where prediction candidates are generated by matching the size and direction of the dictionary entry in FIGS. 8A and 8B with the handwritten information to be searched.
- FIGS. 10A and 10B show an example of a case where selected prediction candidates are displayed in a window.
- FIGS. 11A and 11B are diagrams showing display examples of prediction candidates different from those in FIGS. 10A and 10B.
- FIG. 12 is a diagram illustrating an example of a case where the candidate selecting unit in FIG. 1 is configured by a button area displayed on the input unit.
- FIGS. 13A and 13B are diagrams illustrating a configuration example of a button area different from that in FIG.
- FIGS. 14A and 14B are explanatory diagrams of determining the display position of the button area shown in FIGS. 13A and 13B.
- FIG. 15 is a diagram illustrating an example of a method of displaying the selection prediction candidates.
- FIG. 16 is a diagram illustrating a method of displaying a selection prediction candidate different from that of FIG.
- FIG. 17 is a diagram illustrating a display example of the selected prediction candidates when the prediction candidates from which the portion corresponding to the search target handwritten information has been deleted are displayed in a window.
- FIGS. 18A and 18B are diagrams showing display examples of the selected prediction candidate when a prediction candidate including the part corresponding to the search target handwritten information is displayed in accordance with the position, size, and direction of the search target handwritten information.
- FIG. 19 is a diagram showing a display example of the selected prediction candidate when a prediction candidate from which the portion corresponding to the search target handwritten information has been deleted is displayed in accordance with the position and size of the search target handwritten information.
- FIG. 1 is a functional block diagram showing a functional configuration of the handwriting input device according to the present embodiment.
- This handwriting input device is implemented on a computer, and includes an input unit 1, a storage unit 2, a candidate generation unit 3, a candidate selection unit 4, and a display unit 5.
- the input unit 1 includes a tablet and the like, and strokes such as characters and figures are input by handwriting by an operator. Then, handwritten information including a set of strokes input by handwriting from the input unit 1 is stored in the storage unit 2.
- the candidate generation unit 3 generates prediction candidates of the handwritten information intended for input (that is, the handwritten information that the operator ultimately wants to input) based on the input handwritten information stored in the storage unit 2.
- the candidate selection unit 4 is configured by a specific area of the tablet or the like that constitutes the input unit 1, or by hardware separate from the input unit 1 such as a keyboard or buttons, and a prediction candidate is selected according to an instruction given by the operator.
- the display unit 5 displays the handwritten information input from the input unit 1 and the prediction candidates generated by the candidate generation unit 3.
- the prediction candidates selected by the candidate selection unit 4 are displayed.
- the above-mentioned stroke refers to the trajectory of a handwriting obtained from the time the pen or the like touches the input surface of the input unit 1 until the contact ends.
- the stroke information refers to the coordinate sequence information, vector sequence information, or image information of the stroke obtained from the input unit 1.
- the stroke information may include time information such as input time and time required for writing, writing speed information, writing pressure information, circumscribed rectangle information, stroke length information, and the like.
- handwritten information refers to information on a set of one or more strokes, and may include, in addition to information on the included strokes, circumscribed rectangle information and various types of information used for shape matching of handwritten information.
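As an illustration only, the stroke and handwritten-information structures described above might be represented as follows. The class and field names are assumptions for this sketch, not the patent's implementation; the derived quantities (circumscribed rectangle, stroke length) correspond to the optional information mentioned above.

```python
# Illustrative data structures for strokes and handwritten information (assumed names).
from dataclasses import dataclass, field
from typing import List, Tuple

@dataclass
class Stroke:
    points: List[Tuple[float, float]]   # coordinate sequence from pen-down to pen-up
    start_time: float = 0.0             # optional time information

    def bounding_box(self) -> Tuple[float, float, float, float]:
        """Circumscribed rectangle of the stroke: (min_x, min_y, max_x, max_y)."""
        xs = [p[0] for p in self.points]
        ys = [p[1] for p in self.points]
        return min(xs), min(ys), max(xs), max(ys)

    def length(self) -> float:
        """Polyline length of the stroke."""
        return sum(((x2 - x1) ** 2 + (y2 - y1) ** 2) ** 0.5
                   for (x1, y1), (x2, y2) in zip(self.points, self.points[1:]))

@dataclass
class HandwrittenInfo:
    strokes: List[Stroke] = field(default_factory=list)   # one or more strokes

    def bounding_box(self) -> Tuple[float, float, float, float]:
        """Circumscribed rectangle over all strokes."""
        boxes = [s.bounding_box() for s in self.strokes]
        return (min(b[0] for b in boxes), min(b[1] for b in boxes),
                max(b[2] for b in boxes), max(b[3] for b in boxes))

s = Stroke(points=[(0, 0), (3, 4)])
info = HandwrittenInfo(strokes=[s])
print(s.length(), info.bounding_box())  # 5.0 (0, 0, 3, 4)
```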
- the input unit 1 and the display unit 5 may be provided separately, or may be provided as a display-integrated tablet formed by stacking transparent tablets on a display device. Further, the input unit 1 may be configured by a device that acquires a handwriting using a camera or the like, or a device that acquires a handwriting by acquiring a three-dimensional position such as a pen or a fingertip.
- FIG. 2 shows the hardware configuration of the handwriting input device shown in FIG.
- the handwriting input device includes a program memory 11 for storing various programs, including programs for executing the processes described later, and a data memory 12 constituting the storage unit 2 for storing various information including the handwritten information.
- an external storage medium 17, such as a CD-ROM (compact disc read-only memory), is set in an external storage medium driver 18, which accesses this external storage medium 17 and the program memory 11.
- CPU (Central Processing Unit)
- FIG. 3 is a flowchart of the handwriting input processing operation executed under the control of the CPU 19.
- this handwriting input processing operation starts each time the operator begins inputting a stroke to the input unit 1.
- in step S1, the stroke being input is displayed on the display unit 5 in real time.
- the storage unit 2 stores the stroke information.
- in step S2, it is determined based on information from the input unit 1 whether or not the stroke input has been completed. If it has been completed, the process proceeds to step S3; if not, the process returns to step S1 to continue processing the input stroke.
- step S1 is repeated until the end of the stroke input by the operator is detected.
- in step S3, it is determined whether or not the candidate generation unit 3 is to start generating prediction candidates. If generation of prediction candidates is to be started, the process proceeds to step S4; otherwise, the handwriting input processing operation ends.
- the method of determining the start of generation of the prediction candidate is not particularly limited, but, for example, the following method is effective.
- any one of the above-described discrimination methods may be used, or any combination of two or more discrimination methods may be used.
- a gesture that is input by handwriting to the input unit 1 in the same manner as handwritten information may be used.
- the aforementioned gesture is defined by a shape composed of one or more predetermined strokes, and serves as a trigger for executing a predetermined process.
- there are two modes for distinguishing the handwritten information of characters and figures from the above-described gestures: a character/figure input mode for inputting characters and figures to the input unit 1, and a gesture input mode for inputting the gestures.
- in step S4, prediction candidates are generated by the candidate generation unit 3 based on the handwritten information stored in the storage unit 2. The method of generating the prediction candidates will be described later.
- in step S5, the prediction candidates generated in step S4 are displayed on the display unit 5. Then, when a desired prediction candidate is selected with the candidate selection unit 4, the selected prediction candidate is displayed. The method of displaying the selected prediction candidate will be described later. After that, the handwriting input processing operation ends.
- the plurality of predetermined values may be set so as to increase at regular intervals, such as 5 → 10 → 15, or the amount of increase may be varied, as in 5 → 8 (+3) → 10 (+2).
- for the second and subsequent predetermined values (in the above example, “10” and “15”, or “8” and “10”), prediction candidates can be generated so as to exclude previously generated candidates. For example, when prediction candidates are generated every three strokes, if the operator does not find the intended information among the candidates generated and displayed at one generation point, the candidates generated and displayed at the next generation point exclude the previously generated candidates (that is, candidates that were not intended). This increases the accuracy rate of the generated prediction candidates.
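The interval-based generation and exclusion behavior described above can be sketched as follows. The stroke-count schedule and helper names are illustrative assumptions, not the patent's implementation.

```python
# Sketch of interval-based candidate generation with exclusion (assumed names).
def candidate_schedule(start=5, step=5):
    """Yield the stroke counts at which prediction candidates are generated."""
    n = start
    while True:
        yield n
        n += step

def generate_with_exclusion(all_matches, already_shown):
    """Exclude candidates the operator has already seen and not selected."""
    return [c for c in all_matches if c not in already_shown]

sched = candidate_schedule(5, 5)
checkpoints = [next(sched) for _ in range(3)]
print(checkpoints)  # [5, 10, 15]

shown = set()
first = generate_with_exclusion(["cat", "car", "card"], shown)  # first generation point
shown.update(first)                                             # none were selected

# At the next generation point the rejected candidates are excluded,
# raising the chance that a displayed candidate is the intended one.
second = generate_with_exclusion(["cat", "cart", "carbon"], shown)
print(second)  # ['cart', 'carbon']
```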
- next, the prediction candidate generation processing operation executed in step S4 of the handwriting input processing operation shown in FIG. 3 will be described in more detail.
- there is no particular limitation on the method of generating prediction candidates, but a method of generating them by applying a handwritten information search technique, as follows, is conceivable. That is, handwritten information to be generated as prediction candidates is registered as dictionary entries, and a prediction candidate is generated by searching, from the handwritten information dictionary in which these dictionary entries are collected, for a dictionary entry similar to the input handwritten information.
- the handwritten information dictionary is provided so as to be searchable by the candidate generation unit 3.
- all the dictionary entries may be prepared in the handwritten information dictionary in advance, or the dictionary entries may be updated or deleted as needed according to the handwriting input by the operator.
- FIG. 5 is a flowchart of the above-mentioned prediction candidate generation processing operation.
- the prediction candidate generation processing operation is started.
- in step S11, search information used when searching the handwritten information dictionary is generated from the input handwritten information stored in the storage unit 2.
- the method of determining the input handwritten information to be searched from the stored input handwritten information is roughly divided into the following two methods.
- in extraction methods (B2a) and (B2b), it is necessary to obtain the input time order of each stroke. This can be realized by storing each stroke in the storage unit 2 in chronological order in a data structure such as an array or a list, or by storing the input time in the storage unit 2 as part of the stroke information.
- the instruction from the search target handwritten information instruction unit by the operator is the same as the “instruction to the input unit 1” in the determination method (A5).
- the search target handwritten information is determined according to the positional relationship between the stroke determined to be the gesture and the strokes of handwritten characters or figures other than the gesture.
- in step S12, the handwritten information dictionary is searched based on the generated search information, and dictionary entries that partially match the search target handwritten information serving as the search information are extracted.
- the method of matching the search target handwritten information with the handwritten information dictionary is not particularly limited; various pattern matching methods used in online character recognition and OCR (Optical Character Reader) can be applied.
- FIG. 6A shows search target handwritten information
- FIG. 6B shows one dictionary entry.
- the black circles on the strokes indicate the coordinates.
- DP matching may be performed directly using these coordinates.
- in general, a polygonal-line approximation process such as sampling is performed first, and DP matching is then performed using the coordinates of the end points and bend points of the polygonal line.
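One common polygonal-line approximation is the Ramer-Douglas-Peucker method; the document does not prescribe a specific algorithm, so this is a sketch of one reasonable choice:

```python
# Ramer-Douglas-Peucker polygonal-line approximation (one common choice,
# not prescribed by the document): reduce a stroke's coordinate sequence
# to its end points and bend points.
def rdp(points, epsilon):
    if len(points) < 3:
        return list(points)
    (x1, y1), (x2, y2) = points[0], points[-1]
    # distance of each interior point from the chord joining the end points
    norm = ((x2 - x1) ** 2 + (y2 - y1) ** 2) ** 0.5 or 1.0
    dmax, idx = 0.0, 0
    for i, (x, y) in enumerate(points[1:-1], start=1):
        d = abs((y2 - y1) * x - (x2 - x1) * y + x2 * y1 - y2 * x1) / norm
        if d > dmax:
            dmax, idx = d, i
    if dmax <= epsilon:
        return [points[0], points[-1]]       # the chord is a good enough fit
    # otherwise split at the farthest point and recurse on both halves
    left = rdp(points[:idx + 1], epsilon)
    right = rdp(points[idx:], epsilon)
    return left[:-1] + right

# An L-shaped stroke collapses to its two end points and the corner point
stroke = [(0, 0), (0, 1), (0, 2), (1, 2), (2, 2)]
print(rdp(stroke, 0.1))  # [(0, 0), (0, 2), (2, 2)]
```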
- the coordinates shown in FIG. 6C are associated with the coordinates shown in FIG. 6D. Since this match may be a partial match, DP matching may be terminated at the point where the coordinates shown in FIG. 6C have each been associated with some of the coordinates shown in FIG. 6D.
- FIG. 7A shows the correspondence between the search target handwritten information and the dictionary entry in two dimensions: the vertical axis represents each coordinate in FIG. 6C, the horizontal axis represents each coordinate in FIG. 6D, and the black circles show the correspondence. That is, from FIG. 7A, the correspondence shown in FIG. 7B is obtained. With the above matching method, correspondence may be obtained only up to a coordinate partway through a stroke in the dictionary entry, so the association may instead be carried through to the end point of one of the strokes.
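The partial DP matching described above can be sketched as follows, using a simplified Euclidean point cost. The cost function and path constraints here are assumptions for illustration, not the patent's exact formulation; the key property is that the query (search target) must be fully consumed while the dictionary entry need only match up to some prefix.

```python
# Sketch of DP matching with partial (prefix) match against a dictionary entry.
def dp_partial_match(query, entry):
    """Return (cost, end_index): the best alignment cost of `query` against a
    prefix of `entry`, and the entry index where the alignment ends."""
    INF = float("inf")
    n, m = len(query), len(entry)
    d = lambda a, b: ((a[0] - b[0]) ** 2 + (a[1] - b[1]) ** 2) ** 0.5
    # dp[i][j] = cost of aligning query[:i+1] with entry[:j+1]
    dp = [[INF] * m for _ in range(n)]
    dp[0][0] = d(query[0], entry[0])
    for i in range(n):
        for j in range(m):
            if i == 0 and j == 0:
                continue
            prev = min(dp[i - 1][j] if i > 0 else INF,
                       dp[i][j - 1] if j > 0 else INF,
                       dp[i - 1][j - 1] if i > 0 and j > 0 else INF)
            dp[i][j] = prev + d(query[i], entry[j])
    # partial match: the query must be consumed, the entry need not be;
    # pick the entry prefix end with the lowest alignment cost
    end = min(range(m), key=lambda j: dp[n - 1][j])
    return dp[n - 1][end], end

query = [(0, 0), (1, 0)]
entry = [(0, 0), (1, 0), (1, 1), (0, 1)]   # query matches the entry's first half
cost, end = dp_partial_match(query, entry)
print(cost, end)  # 0.0 1
```

The remainder of the entry beyond `end` is exactly the part that would be offered as the prediction candidate.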
- in step S13, prediction candidates are generated based on the extracted dictionary entries, and the process then returns to step S5 of the handwriting input processing operation shown in FIG. 3.
- the prediction candidate may be generated using the strokes of the extracted dictionary entry as they are, or may be generated after deleting the part that matched the strokes of the search target handwritten information in the DP matching of step S12.
- in this case, the prediction candidate need only be generated based on the coordinate sequence Pd9 to Pd27 before the polygonal-line approximation.
- when a prediction candidate is generated using the strokes of the extracted dictionary entry as they are, then when the generated prediction candidate is selected, either the part matching the strokes of the search target handwritten information is deleted before display, or the search target handwritten information is deleted and only the prediction candidate is displayed. This will be described later.
- a pattern matching method is known that can extract even a dictionary entry (FIG. 8B) having a different size and direction but a similar shape.
- in this case, the size and direction of the entire extracted dictionary entry may be changed so that the size and direction of the portion similar to the search target handwritten information match the size and direction of the search target handwritten information, and the prediction candidate may be generated based on the dictionary entry with the changed overall size and direction.
- the contents of the handwritten information dictionary may be updated based on the change in the size and direction of the dictionary entry, or the size and direction of the dictionary entry may be changed only temporarily to generate the prediction candidates, without updating the contents of the handwritten information dictionary.
- the change of the size and direction of the dictionary entry can be realized by generating the prediction candidate based on a rotated and resized version of the dictionary entry such that the vector v1, which connects the writing start point of the search target handwritten information to the coordinate point farthest from it, and the corresponding vector v2 in the extracted dictionary entry become equal (that is, the directions and magnitudes of both vectors v1 and v2 are equal).
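The rotation and resizing based on vectors v1 and v2 can be sketched as follows, assuming for simplicity that the writing start point sits at the origin. Complex multiplication implements the combined rotation and uniform scaling; the function name is an illustrative assumption.

```python
# Sketch: rotate and scale a dictionary entry so its vector v2 maps onto the
# search target's vector v1 (both taken from the writing start point, assumed
# to be at the origin here).
def normalize_entry(entry_points, v1, v2):
    z1 = complex(*v1)   # search target: start point -> farthest coordinate point
    z2 = complex(*v2)   # corresponding vector in the dictionary entry
    k = z1 / z2         # one complex factor = rotation + uniform scaling
    out = []
    for x, y in entry_points:
        w = k * complex(x, y)
        out.append((w.real, w.imag))
    return out

# Entry drawn twice as large and rotated 90 degrees relative to the target:
# v1 points along +x with length 1, v2 points along +y with length 2.
entry = [(0.0, 0.0), (0.0, 2.0), (2.0, 2.0)]
result = normalize_entry(entry, v1=(1.0, 0.0), v2=(0.0, 2.0))
print([(round(x, 6), round(y, 6)) for x, y in result])
# [(0.0, 0.0), (1.0, 0.0), (1.0, -1.0)]
```

After this transform, v2 coincides with v1, so the candidate is displayed at the same scale and orientation as the operator's handwriting.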
- the prediction candidate display processing executed in step S5 in the handwriting input processing operation shown in FIG. 3 will be described in further detail.
- the display method of the prediction candidate is not limited, but the following display method is conceivable.
- (C1) A method of displaying all prediction candidates in a specific area.
- (C2) A method of displaying only the first prediction candidate according to the position, size, and direction of the search target handwritten information.
- (C3) A method in which the first prediction candidate is displayed according to the position, size, and direction of the handwritten information to be searched, and the second and subsequent prediction candidates are displayed in a specific area.
- FIG. 10A shows a display result when prediction candidates including the portion corresponding to the search target handwritten information are generated. That is, when the input character “te” 21 is the search target handwritten information, the prediction candidates 22 to 24, each including the part corresponding to the search target handwritten information “te”, are displayed.
- FIG. 10B shows a display result when a prediction candidate in which a portion corresponding to the search target handwritten information is deleted is generated.
- the prediction candidates 26 to 28 in which the portion corresponding to the character “te” 25 being input is deleted are displayed.
- The buttons 29 and 30 in FIG. 10A and the buttons 31 and 32 in FIG. 10B are buttons for displaying prediction candidates other than the currently displayed prediction candidates 22 to 24 and 26 to 28.
- Here, the prediction candidates generated according to the size and direction of the search target handwritten information are displayed, but prediction candidates that keep the size and direction of the extracted dictionary entries as they are may be displayed instead. Alternatively, reduced prediction candidates may be displayed in consideration of screen size restrictions and the listability of a plurality of prediction candidates.
- FIGS. 11A and 11B are display examples according to the display method (C2).
- FIG. 11A shows a display result when a prediction candidate including a portion corresponding to the search target handwritten information is generated.
- The first prediction candidate “sharp” (the thin line in FIG. 11A), which includes the part corresponding to the character “shi” being input (the thick line in FIG. 11A: the search target handwritten information), is displayed superimposed on the search target handwritten information.
- FIG. 11B shows the display result when a prediction candidate in which the part corresponding to the search target handwritten information is deleted is generated.
- The first prediction candidate (the thin line in FIG. 11B), that is, “sharp” with the part corresponding to the search target handwritten information deleted, is displayed so as to continue from the character “shi” being input.
- The button 41 in FIG. 11A and the button 42 in FIG. 11B are buttons for displaying prediction candidates other than the currently displayed prediction candidate.
- In FIG. 11A, the search target handwritten information and the prediction candidate described above are displayed so that they can be distinguished by line thickness; distinguishing them by color or density is also effective.
- When writing is continued without selecting a displayed prediction candidate, hiding the displayed prediction candidates or displaying them in a different form, for example with a lighter color or density, improves visibility during writing.
- The candidate selection unit 4 is configured by a specific area of the tablet or the like constituting the input unit 1, or by hardware different from the input unit 1, such as a keyboard or buttons.
- The specific area of the input unit 1 is, for example, a rectangular area that includes each of the prediction candidates 22 to 24 and 26 to 28 in FIGS. 10A and 10B.
- Alternatively, the button areas 43 to 45, each displayed in a rectangular area including one of the plurality of prediction candidates, may be set as the specific area of the input unit 1.
- When a button area is pressed, the prediction candidate in the rectangular area corresponding to that button area is selected. In this case, portions within the rectangular area other than the button areas 43 to 45 can still be used for handwriting input, so problems are unlikely to occur even if the rectangular area is set near the display position of the handwritten information.
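A minimal sketch of this dispatch, assuming each button area is an axis-aligned rectangle; the function name, rectangle representation, and return values are illustrative assumptions, not from the patent.

```python
def classify_pen_down(point, button_rects):
    """Decide how a pen-down event in the candidate display area is
    handled: inside one of the button areas (43-45) it selects the
    corresponding candidate; anywhere else it is treated as handwriting
    input.  button_rects maps a candidate index to (x0, y0, x1, y1)."""
    x, y = point
    for index, (x0, y0, x1, y1) in button_rects.items():
        if x0 <= x <= x1 and y0 <= y <= y1:
            return ("select", index)   # button area pressed
    return ("handwrite", None)         # ordinary handwriting input
```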
- (D1) As shown in FIG. 13A, a method of displaying a button 46 for selecting a prediction candidate near the writing end point of the search target handwritten information.
- (D2) As shown in FIG. 13B, a method of displaying a button 47 for selecting a prediction candidate near the writing end point Pe2 of the displayed prediction candidate.
- The writing end point of a prediction candidate in the above selection method (D2) refers to the writing end point of the last stroke when the handwritten information of the prediction candidate is input by actual writing. For example, if the stroke information and stroke coordinate sequence information included in handwritten information are stored in chronological order, it corresponds to the last coordinate of the last stroke.
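Under the chronological storage assumption above, retrieving the writing end point is a one-liner; the list-of-coordinate-lists layout is an illustrative assumption.

```python
def writing_end_point(handwritten_info):
    """Return the writing end point: the last coordinate of the last
    stroke, given strokes stored in chronological order as lists of
    (x, y) coordinate sequences."""
    return handwritten_info[-1][-1]
```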
- In the selection method (D1), since the distance from the writing end point of the search target handwritten information (the handwriting input end point) to the prediction candidate selection button 46 is short, a prediction candidate can be selected quickly after handwriting input, improving operability.
- In the selection method (D2), the selection of a prediction candidate is completed near the writing end point that would result if the intended handwritten information were actually written from the input unit 1 without selecting a candidate. The operator can therefore start the next handwriting input from that vicinity, improving input efficiency and the operational feeling.
- Alternatively, only the circumscribed rectangle of the search target handwritten information may be targeted.
- In that case, the prediction candidate selection button and the prediction candidate may touch each other.
- The button may then be displayed at the closest point that does not touch the prediction candidate, or, if the button is displayed touching the candidate, it needs to be displayed translucently.
- This display position determination method (E1) requires only a small amount of processing.
- Its drawback is that the button may be displayed far away even when there is space around the writing end point.
- (E2) The display position may instead be set to a vertex of a regular polygon centered on the writing end point of the search target handwritten information.
- In FIG. 14B, a regular hexagon is used. If every vertex of the regular polygon touches the search target handwritten information or a stroke of the prediction candidate, the same processing is performed for the vertices at distance d2 (> d1) from the writing end point. If these also all touch, the same processing is performed at distance d3 (> d2).
- The same processing is repeated with increasing distance from the writing end point until a point with no contact is found, and the button is displayed at the position of the point finally obtained.
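The vertex search of method (E2) can be sketched as follows. The radii, the vertex count, and the obstacle test are illustrative assumptions; the patent only specifies growing distances d1 < d2 < d3 and the vertices of a regular polygon.

```python
import math

def button_position(end_point, touches, d1=10.0, step=10.0, n=6, max_tries=50):
    """Search for a button display position per method (E2): try the n
    vertices of a regular n-gon of radius d centered on the writing end
    point; if every vertex touches an obstacle, grow d (d1 < d2 < ...)
    and retry.  `touches(p)` returns True if point p touches the search
    target handwritten information or a prediction candidate stroke."""
    ex, ey = end_point
    d = d1
    for _ in range(max_tries):
        for k in range(n):
            angle = 2 * math.pi * k / n
            p = (ex + d * math.cos(angle), ey + d * math.sin(angle))
            if not touches(p):
                return p          # first contact-free vertex found
        d += step                 # all vertices touched: increase distance
    return None                   # give up after max_tries radii
```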
- The display position determination method (E2) requires a larger amount of processing than the method (E1), but if there is space near the writing end point, the button can be displayed near that end point.
- In the configuration of the candidate selection unit 4 described above, it is also effective to use a menu or the like capable of giving processing instructions other than selection, instead of the candidate selection button.
- A display example in the case of the display processing method (F1) is shown in FIG. 15.
- The display processing method (F1) is a method in which the strokes actually input by handwriting are used as they are.
- FIG. 16 shows a display example in the case of the display processing method (F2).
- The display processing method (F2) is a method of using the prediction candidate instead of the input strokes.
- Here, the size and direction of the selected prediction candidate are changed in accordance with the search target handwritten information, but this is unnecessary if it has already been done when the prediction candidate was generated. It is of course also possible to leave the size and direction of the prediction candidate unchanged.
- Next, consider the case where, as in FIG. 10B, prediction candidates in which the portion corresponding to the search target handwritten information is deleted are displayed in a window, or a similar display is performed in a predetermined display area. The display processing method of the selected prediction candidate in this case is described below.
- In this case, it suffices to display the selected prediction candidate in the same display form as the search target handwritten information, in accordance with the position where the search target handwritten information was present.
- As in the case of FIG. 10A, it is unnecessary to change the size and direction when they have already been adjusted at the time of generating the prediction candidate described above.
- FIG. 17 shows a display example when this display processing method is used.
- Next, the case where a prediction candidate including the portion corresponding to the search target handwritten information is selected will be described.
- There are the following two display processing methods for the selected prediction candidate.
- (G1) A method in which the portion corresponding to the search target handwritten information is deleted from the prediction candidate, and the remainder is displayed in the same display form as the input handwritten information, in accordance with the position of the search target handwritten information.
- (G2) A method in which the search target handwritten information is deleted (hidden), and the prediction candidate is displayed in the same display form as the input handwritten information, in accordance with the position where the search target handwritten information was present.
- FIGS. 18A and 18B show display examples by the above display processing methods (G1) and (G2).
- Method 1 in FIGS. 18A and 18B corresponds to the above display processing method (G1): the first character “shi” of the displayed prediction candidate “sharp” has been replaced by the “shi” of the search target handwritten information.
- Method 2 corresponds to the above display processing method (G2), and the displayed prediction candidate “sharp” is displayed as it is.
- Next, as in FIG. 11B, a display processing method for displaying a prediction candidate in which the portion corresponding to the search target handwritten information is deleted, in accordance with the position, size, and direction of the search target handwritten information, will be described. In this case, it is only necessary to display the selected prediction candidate in the same display form as the search target handwritten information.
- FIG. 19 shows a display example.
- The method of changing the size and direction of the selected prediction candidate, performed when the selected prediction candidate is displayed and aligned in position with the search target handwritten information, is not particularly limited; for example, it may be performed as follows.
- First, the dictionary entry used when the above-mentioned candidate generation unit 3 generated the selected prediction candidate is read out.
- Then, the vector v2 (shown in FIG. 9B) of the part of the dictionary entry corresponding to the search target handwritten information is compared with the vector v1 (shown in FIG. 9A) of the search target handwritten information, and the degree of deviation of the size and direction of v1 from the size and direction of v2 is obtained.
- Next, the size and direction of the entire selected prediction candidate are changed according to this degree of deviation, so that they substantially match the size and direction of the search target handwritten information.
- Finally, the position of the part of the selected prediction candidate corresponding to the search target handwritten information (or the position where that part was present) is aligned with the position where the search target handwritten information was present (or the position of the search target handwritten information), and the candidate is displayed.
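The alignment steps above (read the entry, compare v1 with v2, rescale and rotate, then align positions) can be sketched as follows; the anchor-point handling, function name, and data layout are illustrative assumptions, not from the patent.

```python
import math

def align_selected_candidate(candidate, v1, v2, anchor_src, anchor_dst):
    """Scale and rotate the whole selected candidate by the deviation of
    v1 (search target) from v2 (the matching part of the dictionary
    entry), then translate it so that anchor_src (e.g. the writing start
    point of the matching part) lands on anchor_dst (the corresponding
    point of the search target handwritten information)."""
    s = math.hypot(*v1) / math.hypot(*v2)
    theta = math.atan2(v1[1], v1[0]) - math.atan2(v2[1], v2[0])
    cos_t, sin_t = math.cos(theta), math.sin(theta)

    def tx(p):
        # rotate/scale about anchor_src, then move onto anchor_dst
        dx, dy = p[0] - anchor_src[0], p[1] - anchor_src[1]
        return (anchor_dst[0] + s * (dx * cos_t - dy * sin_t),
                anchor_dst[1] + s * (dx * sin_t + dy * cos_t))

    return [[tx(p) for p in stroke] for stroke in candidate]
```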
- Information relating to a set of strokes input by handwriting from the input unit 1 is stored in the storage unit 2 as handwritten information.
- A handwritten information dictionary is formed using the input handwritten information and handwritten information prepared in advance as dictionary entries, and the candidate generation unit 3 stores this dictionary in the storage unit 2.
- The search target handwritten information is determined from the handwritten information. The handwritten information dictionary is then searched based on the search target handwritten information, and prediction candidates are generated based on dictionary entries that match it. Then, among the prediction candidates displayed on the display unit 5, the selected prediction candidate chosen through the candidate selection unit 4 is displayed on the display unit 5 as if it were handwritten information input from the input unit 1.
- Accordingly, even when information input by handwriting from the input unit 1 is processed as an image as it is, handwriting input from the input unit 1 need only be performed until a desired prediction candidate is generated and displayed.
- A prediction candidate corresponding to the handwritten information intended for input can be obtained at the stage when the candidates are displayed, and the burden of handwriting input on the operator can thus be reduced.
- In this embodiment, a plurality of processing methods are exemplified for the prediction candidate generation processing and the prediction candidate display processing by the candidate generation unit 3 and for the prediction candidate selection processing by the candidate selection unit 4; these may be executed in combination or selectively.
- The present invention is not limited to this, and may have a configuration in which the methods can be switched as appropriate according to instructions from the operator.
- This embodiment relates to a handwriting input program for realizing the various functions described in the first embodiment, and to a computer-readable recording medium on which the handwriting input program is recorded.
- The recording medium may be the program memory 11 itself, which is necessary for the handwriting input device shown in FIG. 2 to perform the handwriting input processing, or it may be a program medium (a magnetic tape, a CD-ROM, or the like) that is mounted in the external storage device 18 serving as a program reading device and read from it.
- In either case, the program stored in the program medium may be directly accessed and executed by the CPU 19, or it may first be read out and loaded into a predetermined program storage area (for example, a program storage area of the program memory 11) and then read and executed by the CPU 19. In the latter case, it is assumed that the loading program is stored in the device in advance.
- The above-mentioned program medium is configured to be separable from the main body of the apparatus, and may be any medium that carries a program in a fixed manner, including tape systems such as magnetic tapes and cassette tapes; magnetic disks such as flexible disks and hard disks; optical discs such as CD-ROMs, MO (magneto-optical) discs, MDs (mini discs), and DVDs (digital versatile discs); card systems such as IC (integrated circuit) cards (including memory cards) and optical cards; and semiconductor memory systems such as mask ROM, EPROM (ultraviolet-erasable ROM), EEPROM (electrically erasable ROM), and flash ROM.
- The recording medium in the present embodiment is a program medium having the above configuration on which a handwriting input program for executing the handwriting input processing operation shown in FIG. 3 is recorded.
- In the present embodiment, the handwriting input device has a configuration connectable to a communication network 15, including the Internet, via a communication I/F 16. Accordingly, the program medium may be a medium that carries the program in a fluid manner, such as by downloading it from the communication network 15. In this case, it is assumed that a download program for downloading from the communication network 15 is stored in the device in advance, or is installed in the apparatus main body in advance from another recording medium.
- What is recorded on the recording medium is not limited to a program; data can also be recorded.
Applications Claiming Priority (2)
Application Number | Priority Date | Filing Date | Title |
---|---|---|---|
JP2003191169A JP2005025566A (ja) | 2003-07-03 | 2003-07-03 | 手書き入力装置、手書き入力方法、手書き入力プログラム、および、プログラム記録媒体 |
JP2003-191169 | 2003-07-03 |
Publications (1)
Publication Number | Publication Date |
---|---|
WO2005004041A1 true WO2005004041A1 (ja) | 2005-01-13 |
Family
ID=33562350
Family Applications (1)
Application Number | Title | Priority Date | Filing Date |
---|---|---|---|
PCT/JP2004/009767 WO2005004041A1 (ja) | 2003-07-03 | 2004-07-02 | 手書き入力装置、手書き入力方法、手書き入力プログラム、および、プログラム記録媒体 |
Country Status (2)
Country | Link |
---|---|
JP (1) | JP2005025566A (ja) |
WO (1) | WO2005004041A1 (ja) |
Cited By (5)
Publication number | Priority date | Publication date | Assignee | Title |
---|---|---|---|---|
CN103049204A (zh) * | 2012-12-17 | 2013-04-17 | 上海海知信息技术有限公司 | 在笔迹图像上移动光标的方法、输入法以及输入系统 |
CN103218153A (zh) * | 2012-12-17 | 2013-07-24 | 上海海知信息技术有限公司 | 一种在笔迹上图像进行换行操作的方法 |
US9274704B2 (en) | 2013-08-02 | 2016-03-01 | Kabushiki Kaisha Toshiba | Electronic apparatus, method and storage medium |
US9298366B2 (en) | 2014-03-28 | 2016-03-29 | Kabushiki Kaisha Toshiba | Electronic device, method and computer readable medium |
US9606981B2 (en) | 2013-11-08 | 2017-03-28 | Kabushiki Kaisha Toshiba | Electronic apparatus and method |
Families Citing this family (9)
Publication number | Priority date | Publication date | Assignee | Title |
---|---|---|---|---|
WO2011115313A1 (ko) * | 2010-03-18 | 2011-09-22 | Kim Jin-Wook | 터치스크린을 이용한 문자입력방법 및 장치 |
KR100983194B1 (ko) | 2010-03-18 | 2010-09-20 | 김진욱 | 터치스크린을 이용한 문자입력방법 및 장치 |
JP2013025390A (ja) * | 2011-07-15 | 2013-02-04 | Metamoji Corp | 手書き入力方法 |
JP5832980B2 (ja) * | 2012-09-25 | 2015-12-16 | 株式会社東芝 | 手書き入力支援装置、方法およびプログラム |
JP6189451B2 (ja) * | 2013-12-06 | 2017-08-30 | 株式会社東芝 | 手書き文書情報を処理するための電子機器および方法 |
JP6010253B2 (ja) * | 2014-03-11 | 2016-10-19 | 株式会社東芝 | 電子機器、方法およびプログラム |
JP6270565B2 (ja) * | 2014-03-18 | 2018-01-31 | 株式会社東芝 | 電子機器および方法 |
JP6062487B2 (ja) * | 2015-05-13 | 2017-01-18 | 株式会社東芝 | 電子機器、方法及びプログラム |
CN105094381B (zh) * | 2015-07-21 | 2018-01-09 | 网易(杭州)网络有限公司 | 一种书写处理方法和装置 |
Citations (8)
Publication number | Priority date | Publication date | Assignee | Title |
---|---|---|---|---|
JPH02186491A (ja) * | 1989-01-13 | 1990-07-20 | Toshiba Corp | 文字図形認識装置 |
JPH07225663A (ja) * | 1994-02-09 | 1995-08-22 | Fujitsu Ltd | 手書情報入力装置 |
JPH08106513A (ja) * | 1994-10-07 | 1996-04-23 | Pfu Ltd | 手書き文字認識装置 |
JPH096893A (ja) * | 1995-06-19 | 1997-01-10 | Canon Inc | 情報処理装置及び方法 |
JPH10154224A (ja) * | 1996-11-26 | 1998-06-09 | Sharp Corp | データ処理装置 |
JPH10214267A (ja) * | 1997-01-29 | 1998-08-11 | Sharp Corp | 手書き文字記号処理装置および手書き文字記号処理装置の制御プログラムを記録した媒体 |
JP2000076302A (ja) * | 1998-08-27 | 2000-03-14 | Casio Comput Co Ltd | 画像検索方法,画像検索装置および電子スチルカメラ |
JP2000148794A (ja) * | 1998-08-31 | 2000-05-30 | Canon Inc | 画像検索装置及びその方法、コンピュ―タ可読メモリ |
- 2003-07-03: JP JP2003191169A patent application filed; published as JP2005025566A (status: Pending)
- 2004-07-02: WO PCT/JP2004/009767 patent application filed; published as WO2005004041A1 (Application Filing)
Non-Patent Citations (1)
Title |
---|
FUKUSHIMA, S. ET AL.: "Yosoku Pen Nyuryoku Interface to Sono Tegaki Sosa Sakugen Koka", TRANSACTIONS OF INFORMATION PROCESSING SOCIETY OF JAPAN, vol. 37, no. 1, 15 January 1996 (1996-01-15), pages 23 - 30, XP002985319 * |
Also Published As
Publication number | Publication date |
---|---|
JP2005025566A (ja) | 2005-01-27 |
Legal Events
Date | Code | Title | Description |
---|---|---|---|
AK | Designated states |
Kind code of ref document: A1 Designated state(s): AE AG AL AM AT AU AZ BA BB BG BR BW BY BZ CA CH CN CO CR CU CZ DE DK DM DZ EC EE EG ES FI GB GD GE GH GM HR HU ID IL IN IS KE KG KP KR KZ LC LK LR LS LT LU LV MA MD MG MK MN MW MX MZ NA NI NO NZ OM PG PH PL PT RO RU SC SD SE SG SK SL SY TJ TM TN TR TT TZ UA UG US UZ VC VN YU ZA ZM ZW |
|
AL | Designated countries for regional patents |
Kind code of ref document: A1 Designated state(s): BW GH GM KE LS MW MZ NA SD SL SZ TZ UG ZM ZW AM AZ BY KG KZ MD RU TJ TM AT BE BG CH CY CZ DE DK EE ES FI FR GB GR HU IE IT LU MC NL PL PT RO SE SI SK TR BF BJ CF CG CI CM GA GN GQ GW ML MR NE SN TD TG |
|
121 | Ep: the epo has been informed by wipo that ep was designated in this application | ||
DPEN | Request for preliminary examination filed prior to expiration of 19th month from priority date (pct application filed from 20040101) | ||
122 | Ep: pct application non-entry in european phase |