CN103268198A - Gesture input method and device - Google Patents


Info

Publication number
CN103268198A
Authority
CN
China
Prior art keywords
input
stroke
gesture
track
gesture operation
Prior art date
Legal status (The legal status is an assumption and is not a legal conclusion. Google has not performed a legal analysis and makes no representation as to the accuracy of the status listed.)
Pending
Application number
CN2013101976401A
Other languages
Chinese (zh)
Inventor
高精鍊
周尼克
Current Assignee (The listed assignees may be inaccurate. Google has not performed a legal analysis and makes no representation or warranty as to the accuracy of the list.)
Guangdong Guobi Technology Co Ltd
Original Assignee
Guangdong Guobi Technology Co Ltd
Priority date (The priority date is an assumption and is not a legal conclusion. Google has not performed a legal analysis and makes no representation as to the accuracy of the date listed.)
Filing date
Publication date
Application filed by Guangdong Guobi Technology Co Ltd filed Critical Guangdong Guobi Technology Co Ltd
Priority to CN2013101976401A priority Critical patent/CN103268198A/en
Publication of CN103268198A publication Critical patent/CN103268198A/en
Pending legal-status Critical Current

Landscapes

  • Character Discrimination (AREA)

Abstract

The invention discloses a gesture input method comprising the following steps: step 1, a terminal receives a gesture operation input; step 2, whether the input is character-stroke input is determined according to the number and characteristics of the touch points of the gesture operation input; step 3, if it is character-stroke input, the track of the input strokes is displayed on the screen interface and the stroke track is processed accordingly; step 4, otherwise, the gesture operation input is processed in the default manner of the system or application. The invention further discloses a gesture input device that intelligently distinguishes the types of gesture operation input. With the method and device, two-finger handwriting of strokes can be performed directly on a touch screen without being confused with ordinary gesture operations.

Description

Gesture input method and device
Technical Field
The invention relates to the field of intelligent terminals, in particular to a gesture input method and a gesture input device.
Background
At present, intelligent terminals such as mobile phones and tablet computers are equipped with capacitive screens that support multi-point touch. A user usually performs gesture operations such as screen sliding, button clicking, and handwriting input with one finger, and can also use two fingers for specific gesture operations such as enlarging and reducing pictures.
As shown in fig. 1, handwriting input is generally available only after clicking a text edit box to select a handwriting input method, or after entering a dedicated gesture-search application; that is, handwriting can be performed only in a text input mode. Gesture input cannot be performed directly on the desktop or other application interfaces, because the system cannot distinguish whether such input is a normal gesture operation of the graphical user interface (GUI) or handwritten stroke input.
Disclosure of Invention
To address the defects of the prior art, the primary object of the present invention is to provide a gesture input method that intelligently identifies the type of gesture operation input, so that a user can perform handwriting input directly on an application interface while remaining compatible with common gesture operations, without confusion between the two.
Another object of the present invention is to provide a gesture input device that likewise intelligently identifies the type of gesture operation input, so that a user can perform handwriting input directly on an application interface, compatible with common gesture operations, without entering a text input mode.
In order to achieve the purposes, the invention adopts the following technical scheme:
a first aspect of the present invention is a gesture input method, comprising the following steps: the terminal receives gesture operation input;
determining whether the input is character-stroke input according to the number and characteristics of the touch points of the gesture operation input;
if the character stroke input is judged, displaying the input stroke track on a screen interface, and carrying out corresponding processing on the stroke track;
otherwise, processing the gesture operation input in a default mode of a system or an application.
Further, the specific step of determining whether the input is character strokes according to the number and characteristics of the touch points of the gesture operation input comprises:
if the number of touch points is two and the gesture characteristics of the two touch points are consistent, judging that the input is character-stroke input.
Further, the specific step of detecting that the gesture features of the two contacts have consistency includes:
when a second contact is detected to be pressed within a first predetermined time, if the gesture movement distance of the first contact is within a first predetermined threshold, and the distance between the two contacts is within a second predetermined threshold at that time, and thereafter the change in distance between the two contacts within a second predetermined time is within a third predetermined threshold, then the gesture characteristics of the two contacts are determined to be consistent.
Further, the specific step of detecting that the gesture features of the two contacts have consistency includes:
after detecting two contact points, if the distance change between the two contact points is within a predetermined threshold within a predetermined time, determining that the gesture features of the two contact points have consistency.
Further, if it is detected that either of the two touch points is lifted and no two touch points are pressed again within a predetermined time, the character-stroke input ends;
recognizing characters corresponding to the stroke tracks, and performing local and/or network retrieval according to the recognized characters; or,
the stroke track is saved and associated with the content in the current interface.
Further, when it is detected that both the contacts have been lifted and it is not detected that both the contacts have been pressed within a predetermined time, the character stroke input is ended;
if the distance between the two contact points at the time of ending the input and the distance between the two contact points at the time of starting the input differ by more than a predetermined threshold value, the subsequent processing is cancelled.
Further, the gesture moving track of any one of the two contact points is used as the track of character stroke input;
or taking the average value of the gesture moving tracks of the two touch points as the track of the character stroke input.
Further, after the character stroke input is finished, the component which obtains the focus in the current interface receives and processes the stroke track and/or the character recognized according to the stroke track. Or after the character stroke input is finished, receiving and processing the stroke track and/or the character recognized according to the stroke track by a component closest to the handwriting starting position in the current interface. Or displaying a processing mode option after the character stroke input is finished, and processing the stroke track and/or the character recognized according to the stroke track according to the selected processing mode option.
A second aspect of the present invention is a gesture input device, including:
an input unit for receiving gesture operation input, the types of which comprise common gesture operations and character-stroke input;
a recognition unit for judging the type of the gesture operation input according to the number and characteristics of its touch points;
and an execution unit for executing corresponding processing according to the judgment of the recognition unit: if character-stroke input is judged, the input stroke track is displayed on the screen interface and processed accordingly.
Further, the recognition unit judges character-stroke input when it detects that the number of touch points is two and the gesture characteristics of the two touch points are consistent.
Further, the recognition unit also recognizes the characters corresponding to the stroke track, and the execution unit further includes the following module:
a retrieval module for performing local or network retrieval using the recognized characters as keywords, after the recognition unit has recognized the characters corresponding to the stroke track, and displaying the matching results on the screen interface.
Further, the apparatus further comprises:
a stroke-recognition switch option: the recognition unit judges the type of the gesture operation input only when this option is detected to be on; otherwise the gesture operation input is processed in the default manner of the system or application.
Further, after character-stroke input is finished, the stroke track and/or the characters recognized from it are received and processed by the component in the device's current interface that has focus, or by the component closest to the handwriting start position; alternatively, processing-mode options are displayed by the device, and the stroke track and/or the recognized characters are processed according to the selected option.
Compared with the prior art, the method and device intelligently identify the input type from the number and characteristics of the touch points of the gesture operation input and judge whether it is character strokes, so that the user can perform handwriting input directly on the terminal desktop or an application interface with two fingers, without confusing it with common GUI gesture operations such as screen sliding, clicking, and zooming, making input quicker and more convenient. In particular, handwriting directly on the desktop allows local content (such as the address book) or network content (such as commodity information) to be retrieved more quickly, greatly improving the efficiency of human-computer interaction. Besides handwriting characters, the method can also be used to input and store the original handwriting of strokes, for example to annotate documents or pictures, without confusing the original-handwriting input with GUI operations such as page turning or picture enlargement.
The advantages of the present invention are not limited to those listed above; for reasons of space they are not all repeated here. It is further emphasized that any other technical changes that may result from implementing the solution proposed by the invention, and the advantages brought by such changes, although not explicitly described herein, are obvious to those skilled in the art.
The invention is described in detail below with reference to the following figures and examples:
Drawings
FIG. 1 is a diagram illustrating an interface for handwriting input of text in the prior art;
FIG. 2 is a schematic structural diagram of a gesture input apparatus according to an embodiment of the present invention;
FIG. 3 is a flowchart illustrating a gesture input method according to an embodiment of the invention;
fig. 4 is a schematic diagram of a touch screen double-finger handwriting operation according to an embodiment of the present invention.
Detailed Description
The gesture input method is implemented in software on an intelligent terminal and can run on various hardware devices, such as smart phones, tablet computers, smart televisions, and game consoles. For the most common GUI gesture inputs, such as screen sliding or clicking, the intelligent terminal system or application has a default processing mode, e.g. sliding the screen, launching the clicked item, or handling long-press and double-click operations. If handwriting input is to be performed directly on the desktop or other application interfaces, the key is to distinguish common gesture operations from stroke input. In embodiments of the present invention, if the gesture input is not determined to be text-stroke input, it can be handled in the system or application default manner.
FIG. 2 shows an example of a gesture input device according to the present invention. The gesture input device in this embodiment includes: an input unit 1 configured to receive gesture operation input, whose type may be a common gesture operation or character-stroke input; a recognition unit 2 for judging the type of the gesture operation input according to the number and characteristics of its touch points, and further recognizing the characters corresponding to a stroke track; and an execution unit 3 for executing corresponding processing according to the judgment of recognition unit 2. If character-stroke input is judged, the input stroke track is displayed on the screen interface and processed accordingly: if recognition unit 2 recognizes the characters corresponding to the stroke track, processing proceeds according to the recognized characters; or the stroke track is stored directly, for example as an original-handwriting remark on the current content, to be called up the next time the content is viewed.
Further, the execution unit 3 may include a retrieval module 31 configured to perform local or network retrieval using the recognized text as a keyword and present the matching results on the screen interface, for example searching the contacts of the local address book, short messages, applications, and so on. If the current interface is a shopping application, item information related to the keyword may be retrieved from a network database.
Considering that some users may be unaccustomed to, or incapable of, entering text by handwriting, or that an application may not need this functionality, it is desirable to be able to disable the handwriting-recognition mode. The intelligent terminal can therefore provide a stroke-recognition switch option: only when this option is detected to be on does the recognition unit judge the type of gesture operation input in the current interface window; otherwise input is processed according to the system's default operation instructions, such as screen sliding or clicking. The switch option may be system-level, set in the system preferences, or application-level, turned on within the application. In addition, if a common input method is detected to be currently active, two-finger handwriting can be automatically disabled so that the user can handwrite with a single finger.
In this embodiment, implemented on an intelligent terminal equipped with a touch screen, two-finger handwriting is adopted, for example with the index finger and middle finger. The recognition unit judges character-stroke input when it detects that the number of contacts is two and the gesture features of the two contacts are consistent. When a user handwrites with two fingers, the characteristics of the two fingers' touch points on the screen are consistent, and the invention uses this consistency of gesture features to intelligently distinguish two-finger handwriting from common graphical user interface (GUI) gesture operations. In a common two-finger gesture such as zooming in or out, the distance between the two touch points gradually and obviously increases or decreases, whereas during two-finger handwriting the relative positions of the two touch points are basically stable and their distance does not change significantly.
Fig. 3 is a schematic flow chart of an example of the gesture input method of the present invention, implemented on a smart phone, comprising the following steps:
301. The intelligent terminal receives gesture operation input on the touch screen. For example, gesture operation input on the touch screen can be monitored on the system desktop, the application list, the short-message list, the contact list, or the lock screen of the intelligent terminal.
302. Whether the input is character strokes is determined according to the number and characteristics of the touch points of the gesture operation input. Specifically, if the number of touch points is two and the gesture characteristics of the two touch points are consistent, character-stroke input is judged. The preferred embodiment uses two-finger handwriting, but three-finger handwriting may also be used; character-stroke input is determined as long as the gesture features of the multiple touch points are judged consistent. Fig. 4 is a schematic diagram of two-finger handwriting in this embodiment: there are two contact points, of the index finger and the middle finger, and the track of the index finger on the left can be used as the handwriting stroke track.
The consistency detection may specifically work as follows: if a press of the second contact is detected within a first predetermined time (e.g. 0.2 seconds), the gesture movement distance of the first contact is within a first predetermined threshold (e.g. 60 pixels), the distance between the two contacts at that moment is within a second predetermined threshold (e.g. 25 pixels), and thereafter the change in the distance between the two contacts within a second predetermined time (e.g. 0.2 seconds) is within a third predetermined threshold (e.g. 20 pixels), then the gesture characteristics of the two contacts are determined to be consistent.
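The consistency test above can be sketched as a small predicate. This is an illustrative sketch only: the function names and event representation are hypothetical, and the threshold values are taken from the example figures in the text, not from any actual implementation.

```python
import math

# Illustrative thresholds, using the example values from the text.
FIRST_TIME = 0.2         # s: window in which the second contact must land
FIRST_THRESHOLD = 60.0   # px: max movement of the first contact so far
SECOND_THRESHOLD = 25.0  # px: max distance between the two contacts
THIRD_THRESHOLD = 20.0   # px: max later change of that distance
SECOND_TIME = 0.2        # s: window over which the change is observed

def dist(a, b):
    return math.hypot(a[0] - b[0], a[1] - b[1])

def contacts_consistent(t2_down, move1, p1, p2, later_distances):
    """Return True if two contacts look like two-finger handwriting.

    t2_down: time (s) after the first touch at which the second landed
    move1:   distance the first contact had moved by then
    p1, p2:  positions of the two contacts when the second landed
    later_distances: inter-contact distances sampled during SECOND_TIME
    """
    if t2_down > FIRST_TIME:
        return False          # second finger came too late
    if move1 > FIRST_THRESHOLD:
        return False          # first finger was already mid-swipe
    d0 = dist(p1, p2)
    if d0 > SECOND_THRESHOLD:
        return False          # fingers too far apart
    # The distance between the fingers must stay roughly constant.
    return all(abs(d - d0) <= THIRD_THRESHOLD for d in later_distances)
```

A stable inter-finger distance is what separates handwriting from a pinch, where the distance changes quickly and monotonically.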
If no press of a second contact is detected within the first predetermined time, the input is judged to be a single-finger common GUI gesture operation, and character-stroke input is not triggered even if a second contact press is detected later. Optionally, the gesture movement distance of the first contact can also be limited: if the first contact has already moved a long distance (e.g. 60 pixels) when the second contact is pressed, character-stroke input is likewise excluded and the input is handled as a normal gesture operation.
Further, to distinguish zooming gesture operations (in a zoom, the two fingers are close together at first and their distance then increases), the initial distance between the two contacts can be recorded when they are detected, and the difference between the current distance and the initial distance monitored while the two fingers move; if the change exceeds the predetermined threshold within the second predetermined time, the input is determined to be a common GUI gesture operation. The second predetermined time should not be set too long, otherwise the user perceives a noticeable delay in the stroke display during two-finger handwriting; within 1 second is preferred. In addition, to avoid the delay caused by the time needed to judge whether the input is character-stroke input, a lighter, semi-transparent stroke track can be displayed while the type of the two-finger operation is still undetermined; once the type is determined, the display of the stroke track (including the part already drawn) is strengthened if it is character-stroke input, and otherwise the track is erased.
A simpler consistency judgment can also be adopted: after two touch points are detected, if the change in the distance between them within a predetermined time is within a predetermined threshold, the gesture characteristics of the two touch points are determined to be consistent. This likewise distinguishes the most common two-finger zoom GUI operations.
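The simpler variant reduces to checking that the inter-contact distance stays within one band around its initial value. A minimal sketch; the function name and the 20-pixel default are illustrative assumptions:

```python
def simple_consistency(distances, threshold=20.0):
    """Simpler consistency test: after both contacts are down, their
    mutual distance (sampled over the predetermined time) must not
    drift from its initial value by more than `threshold` px."""
    if not distances:
        return False
    d0 = distances[0]
    return all(abs(d - d0) <= threshold for d in distances)
```

A pinch-zoom fails this test almost immediately, since the distance grows or shrinks well past the band.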
303. If the input is judged to be character-stroke input, the input stroke track is displayed on the screen interface and processed accordingly. If either of the two touch points is detected to be lifted and no two touch points are pressed again within the predetermined time, the character-stroke input ends; the characters corresponding to the stroke track can then be recognized and local and/or network retrieval performed according to the recognized characters, or the stroke track can be stored directly.
The character-stroke input may also be ended upon detecting that both contacts have been lifted and that no two contacts are pressed again within a predetermined time. Entering characters, for example Chinese characters and words or English words, generally requires multiple successive gesture operations. If the interval between the end of one gesture operation input (when all contacts are lifted) and the start of the next is within a set threshold (e.g. 1 second), the two successive two-finger inputs can be determined to belong to the same group of gesture operation inputs.
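The grouping of successive strokes into one character input can be sketched as follows; the tuple representation and function name are hypothetical, and the 1-second gap is the example value from the text:

```python
def group_strokes(strokes, max_gap=1.0):
    """Group successive two-finger strokes into character inputs.

    `strokes` is a time-ordered list of (start_time, end_time) pairs,
    one per gesture input (all contacts lifted marks the end). A stroke
    starting within `max_gap` seconds of the previous stroke's end
    joins the same group.
    """
    groups = []
    for stroke in strokes:
        if groups and stroke[0] - groups[-1][-1][1] <= max_gap:
            groups[-1].append(stroke)   # quick succession: same group
        else:
            groups.append([stroke])     # long pause: new group
    return groups
```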
If the difference between the distance of the two contacts at the end of input and their distance at the start of input is smaller than a predetermined threshold, the characters corresponding to the stroke track are recognized and local and/or network retrieval is performed according to the recognized characters; otherwise, subsequent processing is cancelled. Thus, if the user wants to cancel the characters currently being handwritten, the user simply spreads the two fingers on the screen so that the contact distance grows beyond the threshold, cancelling the input with a very simple operation.
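The cancel-by-spreading rule is a one-line comparison. A sketch; the 40-pixel threshold is an illustrative assumption, not a value given in the text:

```python
def should_cancel(start_distance, end_distance, threshold=40.0):
    """Cancel handling of the finished stroke if the user deliberately
    spread (or pinched) the two fingers before lifting, i.e. the final
    inter-contact distance differs from the initial one by more than
    `threshold` px."""
    return abs(end_distance - start_distance) > threshold
```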
After the handwriting input is completed, the gesture movement track of either of the two touch points can be used as the character-stroke track, for example the track of the touch point on the left or lower side. Alternatively, the average of the two touch points' movement tracks can be used: the coordinates of the two touch points at each moment during handwriting are averaged, and the averaged coordinate track serves as the character-stroke track.
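The averaging alternative can be sketched directly; the list-of-pairs representation is a hypothetical one, assuming both contacts are sampled at the same instants:

```python
def averaged_track(track1, track2):
    """Derive the stroke track by averaging the two contacts'
    coordinates sample-by-sample."""
    return [((x1 + x2) / 2.0, (y1 + y2) / 2.0)
            for (x1, y1), (x2, y2) in zip(track1, track2)]
```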
After the character-stroke input is finished, the component that has focus in the current interface can receive and process the stroke track and/or the characters recognized from it. For example, if the currently focused component is a text edit box, the characters recognized from the stroke track are inserted into it.
Alternatively, the stroke track and/or recognized characters are received and processed by the component closest to the handwriting start position in the current interface. For example, if the handwriting starts within a picture, the resulting stroke track is stored and associated with that picture, enabling original-handwriting annotation of the picture. More flexibly, after the character-stroke input is finished, processing-mode options can be displayed, and the stroke track and/or recognized characters are processed according to the option selected.
304. Otherwise, the gesture operation input is processed in the default manner of the system or application. If the input is judged to be a common GUI gesture operation, the screen interface slides in the corresponding direction (left/right or up/down), or a click, double-click, long press, zoom, or similar operation is performed.
In addition, since this embodiment mainly uses two-finger handwriting, accidental two-contact situations are further analyzed. When a user clicks an icon with a single finger, another finger may inadvertently touch the screen, usually very briefly. Therefore, the length of the second contact's track can be checked at the end of the gesture operation input; if it is below a predetermined threshold (e.g. 5 pixels), the input is processed in the default manner of the system or application.
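The accidental-touch filter is a simple length check. A sketch with a hypothetical function name; the 5-pixel default is the example value from the text:

```python
def is_accidental_second_contact(track_length, threshold=5.0):
    """Treat a second fingertip whose whole track is shorter than the
    threshold as an accidental touch, so the gesture falls back to the
    default single-finger handling."""
    return track_length < threshold
```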
In addition, for a system that recognizes gesture operations with a camera, such as a smart television or video game console, gesture operations such as selection and page turning are performed by movements of the whole palm, unlike touch-screen operation on a mobile phone. The way of determining whether the input is character strokes according to the number and characteristics of the touch points therefore differs: since camera recognition involves no actual contact, the number of fingers in the gesture can be used as the number of touch points. Specifically, handwriting input can be performed with the index finger: if one touch point is detected and its movement track within a predetermined time is greater than a predetermined threshold, handwritten stroke input can be determined. When the number of touch points is detected to be zero, i.e. all fingers are retracted, the handwriting input ends.
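The camera-based variant above can be sketched as a small classifier. The function name, return labels, and the 30-pixel movement threshold are illustrative assumptions:

```python
def classify_camera_gesture(finger_count, movement, min_move=30.0):
    """For camera-based recognition there are no physical contacts, so
    the number of extended fingers stands in for the contact count.
    Returns 'end' when all fingers are retracted, 'handwriting' when a
    single finger has moved farther than min_move within the
    observation window, and 'other' for everything else."""
    if finger_count == 0:
        return "end"
    if finger_count == 1 and movement > min_move:
        return "handwriting"
    return "other"
```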
The above disclosure describes only preferred embodiments of the present invention and is not intended to limit its scope; the scope of the invention is defined by the appended claims.

Claims (15)

1. A method of gesture input, the method comprising the steps of:
the terminal receives gesture operation input;
determining whether the input is character-stroke input according to the number and characteristics of the touch points of the gesture operation input;
if the character stroke input is judged, displaying the input stroke track on a screen interface, and carrying out corresponding processing on the stroke track;
otherwise, processing the gesture operation input in a default mode of a system or an application.
2. The method according to claim 1, wherein the specific step of determining whether the gesture operation input is a character stroke according to the number and the characteristics of the contacts comprises:
and if the number of the contact points is two and the gesture characteristics of the two contact points have consistency, judging that the character stroke is input.
3. The method according to claim 2, wherein the specific step of detecting that the gesture features of the two contact points have consistency comprises:
when a second contact is detected to be pressed within a first predetermined time, if the gesture movement distance of the first contact is within a first predetermined threshold, and the distance between the two contacts is within a second predetermined threshold at that time, and thereafter the change in distance between the two contacts within a second predetermined time is within a third predetermined threshold, then the gesture characteristics of the two contacts are determined to be consistent.
4. The method according to claim 2, wherein the specific step of detecting that the gesture features of the two contact points have consistency comprises:
after detecting two contact points, if the distance change between the two contact points is within a predetermined threshold within a predetermined time, determining that the gesture features of the two contact points have consistency.
5. The method according to claim 3 or 4, wherein if a lift of either of the two touch points is detected and a press of both touch points is not detected within a predetermined time, the text stroke input is ended;
recognizing characters corresponding to the stroke tracks, and performing local and/or network retrieval according to the recognized characters; or,
the stroke track is saved and associated with the content in the current interface.
6. The method of claim 3 or 4, wherein the text stroke input is terminated upon detecting that both contacts have been lifted and that no two contacts have been pressed within a predetermined time;
if the distance between the two contact points at the time of ending the input and the distance between the two contact points at the time of starting the input differ by more than a predetermined threshold value, the subsequent processing is cancelled.
7. The method according to claim 3 or 4, characterized in that the gesture movement track of any one of the two contact points is used as the track of character stroke input;
or taking the average value of the gesture moving tracks of the two touch points as the track of the character stroke input.
8. The method of claim 2, wherein the stroke track and/or the text recognized from the stroke track is received and processed by a component that obtains focus in a current interface after text stroke input is completed.
9. The method of claim 2, wherein the stroke track and/or the text recognized from the stroke track is received and processed by a component of the current interface that is closest in distance to the handwritten starting location after the text stroke input is completed.
10. The method of claim 2, wherein after the character stroke input is completed, a processing mode option is displayed, and the stroke track and/or the character recognized according to the stroke track is processed according to the selected processing mode option.
11. A gesture input apparatus, characterized in that the apparatus comprises:
the input unit is used for receiving gesture operation input, and the types of the gesture operation input comprise common gesture operation and character stroke input;
the recognition unit is used for judging the type of the gesture operation input according to the number and the characteristics of the contacts input by the gesture operation;
and the execution unit is used for executing corresponding processing according to the judgment result of the identification unit, and if the character stroke input is judged, displaying the input stroke track on a screen interface and performing corresponding processing.
12. The apparatus according to claim 11, wherein the recognition unit determines that the input is character stroke input if the number of contact points is two and the gesture features of the two contact points are consistent.
13. The apparatus of claim 11, wherein the recognition unit further recognizes the text corresponding to the stroke track, and wherein the execution unit further comprises:
a retrieval module, configured to perform a local or network search using the recognized text as the keyword after the recognition unit has recognized the text corresponding to the stroke track, and to display the matching results on the screen interface.
14. The apparatus of claim 11, further comprising:
a stroke-recognition switch option, wherein the recognition unit judges the type of the gesture operation input only when the stroke-recognition switch option is detected to be on; otherwise, the gesture operation input is processed in the default manner of the system or application.
15. The device of claim 11, wherein, after the character stroke input is finished, the stroke track and/or the text recognized from it is received and processed by the component in the device's current interface that has focus or is closest to the handwriting start position; or the device displays a processing-mode option and processes the stroke track and/or the recognized text according to the selected option.
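The recognition unit of claims 11–12 classifies input as character-stroke only when there are exactly two contacts whose gesture features are consistent. One plausible reading of "consistency" is that both contacts move in roughly the same direction; the sketch below implements that reading as an assumption, with a hypothetical angle tolerance, not the patent's actual criterion.

```python
import math

def classify(tracks, angle_tol=math.pi / 6):
    """Classify multi-touch input as 'stroke' or 'ordinary gesture'.

    tracks: list of per-contact point lists, e.g. [[(x, y), ...], ...].
    Two contacts moving in (roughly) the same direction are treated as
    character-stroke input; anything else is an ordinary gesture.
    """
    if len(tracks) != 2:
        return "ordinary gesture"  # stroke input requires exactly two contacts
    headings = []
    for t in tracks:
        dx, dy = t[-1][0] - t[0][0], t[-1][1] - t[0][1]
        headings.append(math.atan2(dy, dx))
    diff = abs(headings[0] - headings[1])
    diff = min(diff, 2 * math.pi - diff)  # wrap the angle into [0, pi]
    return "stroke" if diff <= angle_tol else "ordinary gesture"
```

Under this reading, two fingers dragging in parallel write a stroke, while opposed motions (a pinch or rotate) fall through to ordinary gesture handling.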
CN2013101976401A 2013-05-24 2013-05-24 Gesture input method and device Pending CN103268198A (en)

Priority Applications (1)

Application Number Priority Date Filing Date Title
CN2013101976401A CN103268198A (en) 2013-05-24 2013-05-24 Gesture input method and device


Publications (1)

Publication Number Publication Date
CN103268198A true CN103268198A (en) 2013-08-28

Family

ID=49011833

Family Applications (1)

Application Number Title Priority Date Filing Date
CN2013101976401A Pending CN103268198A (en) 2013-05-24 2013-05-24 Gesture input method and device

Country Status (1)

Country Link
CN (1) CN103268198A (en)

Citations (3)

* Cited by examiner, † Cited by third party
Publication number Priority date Publication date Assignee Title
US20090322692A1 (en) * 2008-06-25 2009-12-31 Samsung Electronics Co., Ltd. Character input apparatus and character input method
CN102135838A (en) * 2011-05-05 2011-07-27 汉王科技股份有限公司 Method and system for partitioned input of handwritten character string
CN102880422A (en) * 2012-09-27 2013-01-16 深圳Tcl新技术有限公司 Method and device for processing words of touch screen by aid of intelligent equipment


Cited By (11)

* Cited by examiner, † Cited by third party
Publication number Priority date Publication date Assignee Title
CN104199608A (en) * 2014-08-20 2014-12-10 Tcl通讯(宁波)有限公司 Method for fast starting recording on touch terminal and touch terminal
CN104199608B (en) * 2014-08-20 2019-01-04 Tcl通讯(宁波)有限公司 The method of quick open record and touch terminal on touching terminal
CN104808897A (en) * 2015-04-08 2015-07-29 苏州三星电子电脑有限公司 Electronic device unlocking method and device
TWI596525B (en) * 2015-08-03 2017-08-21 聯想(新加坡)私人有限公司 Information handling method and information handling device
US10007421B2 (en) 2015-08-03 2018-06-26 Lenovo (Singapore) Pte. Ltd. Natural handwriting detection on a touch surface
CN110413187A (en) * 2018-04-26 2019-11-05 广州视源电子科技股份有限公司 Method and device for processing annotations of interactive intelligent equipment
CN110413187B (en) * 2018-04-26 2021-12-03 广州视源电子科技股份有限公司 Method and device for processing annotations of interactive intelligent equipment
CN112306242A (en) * 2020-11-09 2021-02-02 幻境虚拟现实(广州)智能科技研究院有限公司 Interaction method and system based on book-space gestures
CN113849106A (en) * 2021-08-27 2021-12-28 北京鸿合爱学教育科技有限公司 Page-turning handwriting processing method and device, electronic device and storage medium
CN113849106B (en) * 2021-08-27 2023-12-29 北京鸿合爱学教育科技有限公司 Page turning handwriting processing method, device, electronic device and storage medium
CN114415931A (en) * 2022-01-13 2022-04-29 湖南新云网科技有限公司 Electronic whiteboard display method and device, electronic whiteboard and storage medium

Similar Documents

Publication Publication Date Title
US11592980B2 (en) Techniques for image-based search using touch controls
US10489047B2 (en) Text processing method and device
CN103064620B (en) Touch screen operation method and touch screen terminal
EP2874383B1 (en) System and method for controlling slide operation auxiliary input in portable terminal devices
CN103268198A (en) Gesture input method and device
EP3002664B1 (en) Text processing method and touchscreen device
US9898111B2 (en) Touch sensitive device and method of touch-based manipulation for contents
US20130263013A1 (en) Touch-Based Method and Apparatus for Sending Information
US10248635B2 (en) Method for inserting characters in a character string and the corresponding digital service
KR20130024220A (en) Input device and method on terminal equipment having a touch module
CN107688399B (en) Input method and device and input device
US11379116B2 (en) Electronic apparatus and method for executing application thereof
CN103116616B (en) webpage collection method and communication terminal
CN103218160A (en) Man-machine interaction method and terminal
CN106415472A (en) Gesture control method, device, terminal apparatus and storage medium
CN108710457B (en) Interaction method and terminal equipment
CN104571866A (en) Screen capture method
KR20150023151A (en) Electronic device and method for executing application thereof
CN105930062B (en) Check the method and device of file in file
CN103309612A (en) Method, device and equipment for processing information of graphic interface text field of mobile equipment
CN105573653A (en) Multi-object selecting method and terminal
EP2899623A2 (en) Information processing apparatus, information processing method, and program
CN105446629A (en) Content pane switching method, device and terminal
CN102778999B (en) Mobile terminal and full screen handwriting processing method thereof
CN104182240A (en) Method and device for starting application programs and mobile terminal

Legal Events

Date Code Title Description
C06 Publication
PB01 Publication
C10 Entry into substantive examination
SE01 Entry into force of request for substantive examination
C02 Deemed withdrawal of patent application after publication (patent law 2001)
WD01 Invention patent application deemed withdrawn after publication

Application publication date: 20130828