CN111949132A - Gesture control method based on touch and talk pen and touch and talk pen - Google Patents

Info

Publication number
CN111949132A
Authority
CN
China
Prior art keywords
stroke
preset
gesture
action
touch
Prior art date
Legal status
Pending
Application number
CN202010837741.0A
Other languages
Chinese (zh)
Inventor
胡峰
Current Assignee
Maipian Technology Shenzhen Co ltd
Original Assignee
Maipian Technology Shenzhen Co ltd
Priority date
Filing date
Publication date
Application filed by Maipian Technology Shenzhen Co ltd filed Critical Maipian Technology Shenzhen Co ltd
Priority to CN202010837741.0A
Publication of CN111949132A
Legal status: Pending


Classifications

    • GPHYSICS
    • G06COMPUTING; CALCULATING OR COUNTING
    • G06FELECTRIC DIGITAL DATA PROCESSING
    • G06F3/00Input arrangements for transferring data to be processed into a form capable of being handled by the computer; Output arrangements for transferring data from processing unit to output unit, e.g. interface arrangements
    • G06F3/01Input arrangements or combined input and output arrangements for interaction between user and computer
    • G06F3/017Gesture based interaction, e.g. based on a set of recognized hand gestures
    • GPHYSICS
    • G06COMPUTING; CALCULATING OR COUNTING
    • G06FELECTRIC DIGITAL DATA PROCESSING
    • G06F3/00Input arrangements for transferring data to be processed into a form capable of being handled by the computer; Output arrangements for transferring data from processing unit to output unit, e.g. interface arrangements
    • G06F3/01Input arrangements or combined input and output arrangements for interaction between user and computer
    • G06F3/03Arrangements for converting the position or the displacement of a member into a coded form
    • G06F3/033Pointing devices displaced or positioned by the user, e.g. mice, trackballs, pens or joysticks; Accessories therefor
    • G06F3/0354Pointing devices displaced or positioned by the user, e.g. mice, trackballs, pens or joysticks; Accessories therefor with detection of 2D relative movements between the device, or an operating part thereof, and a plane or surface, e.g. 2D mice, trackballs, pens or pucks
    • G06F3/03545Pens or stylus
    • GPHYSICS
    • G06COMPUTING; CALCULATING OR COUNTING
    • G06VIMAGE OR VIDEO RECOGNITION OR UNDERSTANDING
    • G06V40/00Recognition of biometric, human-related or animal-related patterns in image or video data
    • G06V40/20Movements or behaviour, e.g. gesture recognition
    • G06V40/28Recognition of hand or arm movements, e.g. recognition of deaf sign language

Landscapes

  • Engineering & Computer Science (AREA)
  • Theoretical Computer Science (AREA)
  • General Engineering & Computer Science (AREA)
  • Human Computer Interaction (AREA)
  • General Physics & Mathematics (AREA)
  • Physics & Mathematics (AREA)
  • General Health & Medical Sciences (AREA)
  • Social Psychology (AREA)
  • Psychiatry (AREA)
  • Multimedia (AREA)
  • Health & Medical Sciences (AREA)
  • Computer Vision & Pattern Recognition (AREA)
  • User Interface Of Digital Computer (AREA)

Abstract

The invention discloses a gesture control method based on a touch and talk pen, and a touch and talk pen. The method comprises the following steps: associating preset gestures with preset functions in advance and storing the correspondence, wherein the preset gestures comprise point-read gestures and non-point-read gestures, and the preset functions correspondingly comprise point-read functions and non-point-read functions; detecting a stroke action performed by the touch and talk pen on a point-reading book; judging whether the stroke action is a preset gesture, and if so, acquiring the preset function according to the preset gesture and the stored correspondence; and controlling the touch and talk pen to execute the preset function. With the invention, point-read and non-point-read actions are combined naturally while strokes are performed, without changing the pen-holding posture or interrupting the current point-read operation, which is convenient for the user. In addition, stroke actions are combined with position judgment, so that the same operation performed at different positions on the point-reading book realizes different functions.

Description

Gesture control method based on touch and talk pen and touch and talk pen
Technical Field
The invention relates to the technical field of point-reading devices, and in particular to a gesture control method based on a touch and talk pen, and a touch and talk pen.
Background
In the prior art, the touch and talk pen is a new generation of intelligent point-reading learning tool that is popular among young children and their parents. Its basic principle is that the user performs a selection operation on a matched point-reading book, and the pen produces different sounds according to the selected position on the book.
To meet various user requirements, manufacturers have added additional functions on top of the basic point-reading function, such as listening to music and stories on the pen, read-after-me comparison, and spoken-language evaluation. To control these functions, products currently on the market generally use the following means:
1. Keys on the pen (possibly combined with a screen display): the user selects a function through key operations. This can expose many functions, but operating the keys requires the user to change the pen-holding posture, which interferes with the normal point-reading process.
2. A touch screen: this has problems similar to keys; moreover, the screen size is limited, making operation even less convenient, and enlarging the screen makes the pen bulkier.
3. Voice: the user presses a dedicated voice key and speaks a command. This is more convenient, but speech recognition has limited accuracy and is easily affected by the environment.
4. A paired mobile phone or tablet: the pen connects to a smart device via Bluetooth or Wi-Fi, and instructions are sent from the device. This is also inconvenient, because a companion phone or tablet is required.
5. Special command areas printed on the point-reading book: clicking the corresponding area performs a specific operation. The set of instructions that can be implemented this way is limited.
Disclosure of Invention
The invention aims to provide a gesture control method based on a touch and talk pen, and a touch and talk pen, so as to solve problems such as the inconvenient operation of existing touch and talk pens.
In a first aspect, an embodiment of the present invention provides a gesture control method based on a touch and talk pen, including:
associating a preset gesture with a preset function in advance and storing a corresponding relation, wherein the preset gesture comprises a touch and read gesture and a non-touch and read gesture, and the preset function comprises a touch and read function and a non-touch and read function correspondingly;
detecting stroke actions executed by a reading pen on a reading book;
judging whether the stroke action is a preset gesture, if so, acquiring a preset function according to the preset gesture and the corresponding relation;
and controlling the touch and talk pen to execute a preset function.
In a second aspect, an embodiment of the present invention provides a touch and talk pen-based device, including a memory, a processor, and a computer program stored on the memory and executable on the processor, where the processor implements the gesture control method according to the first aspect when executing the computer program.
In a third aspect, an embodiment of the present invention provides a touch and talk pen, which includes:
the storage unit is used for associating preset gestures with preset functions in advance and storing corresponding relations, wherein the preset gestures comprise click-to-read gestures and non-click-to-read gestures, and the preset functions comprise click-to-read functions and non-click-to-read functions correspondingly;
the detection unit is used for detecting stroke actions executed by the point reading pen on the point reading book;
the judging unit is used for judging whether the stroke action is a preset gesture or not, and if so, acquiring a preset function according to the preset gesture and the corresponding relation;
and the execution unit is used for controlling the touch and talk pen to execute a preset function.
The embodiment of the invention provides a gesture control method based on a touch and talk pen, and a touch and talk pen. The method comprises the following steps: associating preset gestures with preset functions in advance and storing the correspondence, wherein the preset gestures comprise point-read gestures and non-point-read gestures, and the preset functions correspondingly comprise point-read functions and non-point-read functions; detecting a stroke action performed by the touch and talk pen on a point-reading book; judging whether the stroke action is a preset gesture, and if so, acquiring the preset function according to the preset gesture and the stored correspondence; and controlling the touch and talk pen to execute the preset function. With the method provided by the embodiment of the invention, the user performing a stroke action does not need to change the pen-holding posture or interrupt the current point-read operation; point-read and non-point-read actions are combined naturally, which is convenient for the user. In addition, stroke actions are combined with position judgment, so that the same operation performed at different positions on the point-reading book realizes different functions.
Drawings
In order to illustrate the technical solutions of the embodiments of the present invention more clearly, the drawings needed in the description of the embodiments are briefly introduced below. The drawings described below show only some embodiments of the invention; those skilled in the art can obtain other drawings from them without creative effort.
Fig. 1 is a schematic flowchart of a gesture control method based on a touch and talk pen according to an embodiment of the present invention;
FIGS. 2a-2f are schematic diagrams of a first type of stroke actions according to an embodiment of the present invention;
FIGS. 3a-3s are schematic diagrams of a second type of stroke actions according to embodiments of the present invention;
fig. 4 is a schematic block diagram of a touch and talk pen according to an embodiment of the present invention.
Detailed Description
The technical solutions in the embodiments of the present invention will be clearly and completely described below with reference to the drawings in the embodiments of the present invention, and it is obvious that the described embodiments are some, not all, embodiments of the present invention. All other embodiments, which can be derived by a person skilled in the art from the embodiments given herein without making any creative effort, shall fall within the protection scope of the present invention.
It will be understood that the terms "comprises" and/or "comprising," when used in this specification and the appended claims, specify the presence of stated features, integers, steps, operations, elements, and/or components, but do not preclude the presence or addition of one or more other features, integers, steps, operations, elements, components, and/or groups thereof.
It is also to be understood that the terminology used in the description of the invention herein is for the purpose of describing particular embodiments only and is not intended to be limiting of the invention. As used in the specification of the present invention and the appended claims, the singular forms "a," "an," and "the" are intended to include the plural forms as well, unless the context clearly indicates otherwise.
It should be further understood that the term "and/or" as used in this specification and the appended claims refers to and includes any and all possible combinations of one or more of the associated listed items.
Referring to fig. 1, fig. 1 is a schematic flowchart of a gesture control method based on a touch and talk pen according to an embodiment of the present invention, which includes steps S101 to S104:
s101, associating preset gestures with preset functions in advance and storing corresponding relations, wherein the preset gestures comprise click-to-read gestures and non-click-to-read gestures, and the preset functions comprise click-to-read functions and non-click-to-read functions correspondingly;
s102, detecting stroke actions executed by a reading pen on a reading book;
s103, judging whether the stroke action is a preset gesture or not, and if so, acquiring a preset function according to the preset gesture and the corresponding relation;
and S104, controlling the touch and talk pen to execute a preset function.
The method of the embodiment of the invention provides a simple and convenient user-interaction control method. Compared with the function-operation and switching methods of traditional touch and talk pens, the user performing a stroke action does not need to change the pen-holding posture or interrupt the current point-read operation; point-read and non-point-read actions are combined naturally, which is convenient for the user. In other embodiments, when the user performs a stroke action on the point-reading book, more intelligent functions can be realized according to the position of the stroke, the specific stroke action, and so on.
Specifically, in step S101, a predetermined gesture is associated with a predetermined function, so that when it is detected that the stroke motion of the user operating the touch and talk pen is the predetermined gesture, the predetermined function can be acquired and executed.
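As a minimal sketch (in Python, purely illustrative — the patent does not specify an implementation, and the gesture and function names below are assumptions), the correspondence stored in step S101 can be a simple lookup table:

```python
class GestureRegistry:
    """Stores the preset gesture -> preset function
    correspondence described in step S101."""

    def __init__(self):
        self._table = {}

    def associate(self, gesture, function):
        # Pre-associate a gesture with a function and store the correspondence.
        self._table[gesture] = function

    def lookup(self, gesture):
        # S103: if the stroke action is a preset gesture, return the
        # associated function; otherwise return None.
        return self._table.get(gesture)


registry = GestureRegistry()
registry.associate("tap", "point_read")            # point-read gesture
registry.associate("swipe_up", "oral_evaluation")  # non-point-read gesture
```

Step S104 would then simply dispatch on the returned function name.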
In the embodiment of the invention, the preset gestures comprise point-read gestures and non-point-read gestures, and the preset functions comprise point-read functions and non-point-read functions. In the simplest case, the point-read gesture is the click gesture of an existing touch and talk pen, and the corresponding point-read function is the point-read function of an existing touch and talk pen.
In general, point-reading with an existing pen is completed by a click operation: the pen is briefly placed on the book and then lifted, and the pen tip barely moves on the page. The invention exploits this characteristic to add non-point-read gestures and distinguish them from point-read gestures. A non-point-read gesture means that the pen performs a stroke action while placed on the book, i.e., the pen tip moves across the page.
In step S102, there are various ways to detect the stroke motion performed by the reading pen on the reading book.
In one embodiment, the step S102 includes:
the first method comprises the following steps: shooting a stroke image of the reading pen on the reading book through a camera built in the reading pen, and identifying the stroke image to obtain a stroke action;
and the second method comprises the following steps: and detecting stroke actions of the point reading pen on the point reading book through a motion sensor arranged in the point reading pen.
For the first mode, a camera is built into the touch and talk pen; it may be mounted at the pen tip or, as required, at another position such as the pen holder. The built-in camera continuously captures images of the strokes on the point-reading book (or of the micro-lattice printed on it). By analyzing the content of, and the relative changes between, the series of captured images, the specific stroke shape, its possible position on the book, and its movement track are obtained; the stroke image, position, and track are then recognized to obtain the corresponding stroke action. In the prior art, a camera placed outside the pen photographs the motion of the pen or of the hand; by contrast, in the first mode of this embodiment the pen itself photographs the stroke, or the changes of the micro-lattice (also called a macro dot matrix), on the book and infers the stroke action — that is, the gesture — from it. This is entirely different from the prior-art approach: gesture recognition is completed by the pen alone, which reduces recognition difficulty and makes recognition more accurate. Moreover, the user can complete all operations with the pen alone, without carrying or placing any other equipment.
For the second mode, a motion sensor, such as a gyroscope, is built into the touch and talk pen. The sensor detects the stroke actions of the pen on the point-reading book, such as lifting, lowering, and moving, thereby realizing the detection function.
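Whichever detection mode is used, the pen ends up with a sequence of pen-tip positions. A hedged sketch of how such a trajectory might be reduced to a coarse stroke action (the threshold value and the coordinate convention are assumptions, not taken from the patent):

```python
def classify_stroke(points, tap_radius=2.0):
    """Classify a trajectory of (x, y) pen-tip positions.

    Returns 'tap' when the tip barely moves (a point-read gesture),
    otherwise the dominant direction of travel (a non-point-read gesture).
    Assumes page coordinates with y growing downward.
    """
    (x0, y0), (x1, y1) = points[0], points[-1]
    dx, dy = x1 - x0, y1 - y0
    if (dx * dx + dy * dy) ** 0.5 < tap_radius:
        return "tap"  # pen tip barely moved: treat as point-read
    if abs(dx) >= abs(dy):
        return "right" if dx > 0 else "left"
    return "down" if dy > 0 else "up"
```

A real recognizer would of course consider the whole track, not just its endpoints, in order to tell apart polylines, arcs, and 'W'-shaped strokes.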
In addition, the embodiment of the present invention may combine the two modes: the first mode determines the contact position of the pen tip on the book, while the second mode distinguishes certain gestures — for example, the pen tip remaining stationary while the pen body swings. Such stroke actions can correspond to certain preset functions, so that those functions are triggered by the stroke action.
In step S103, it is determined whether the stroke motion is a predetermined gesture, and if yes, a predetermined function is obtained according to the predetermined gesture and the corresponding relationship.
If the preset gesture is a click-to-read gesture, the corresponding traditional click-to-read function is obtained, and if the preset gesture is a non-click-to-read gesture, the corresponding non-click-to-read function is obtained.
In one embodiment, a non-point-read gesture may be defined as a relatively simple stroke action, as shown in figs. 2a-2f (the solid line represents the actual stroke; the dotted line indicates its direction of travel): a left stroke, right stroke, up stroke, down stroke, oblique up stroke, oblique down stroke, and so on. It may also be a composite stroke action, as shown in figs. 3a-3s: left stroke + polyline, right stroke + polyline, up stroke + polyline, down stroke + polyline, a tick (check mark), a 'W'-shaped stroke, a '>'-shaped stroke, an 'N'-shaped stroke, the mirror image of 'N', an upper-right arc + polyline, a lower-right arc + polyline, and an 'x'-shaped stroke formed by two connected strokes. Other two-stroke or multi-stroke actions are of course possible. In practice, stroke actions should be easy for users to understand and memorize, and easy to recognize. For two-stroke and multi-stroke actions, the interval between strokes is compared with a preset threshold: if the interval is greater than the threshold, the strokes are judged as separate single-stroke actions; if the interval is less than the threshold, they may be treated as one two-stroke or multi-stroke action.
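The interval-threshold rule for separating single-stroke actions from two- or multi-stroke actions could be sketched as follows (the 0.5-second threshold is an assumed value, not from the patent):

```python
def group_multistroke(strokes, max_gap=0.5):
    """strokes: list of (start_time, end_time) pairs, in order.

    Consecutive strokes whose inter-stroke gap is below max_gap are grouped
    into one multi-stroke gesture; a gap above the threshold starts a new
    gesture, so each stroke is judged on its own.
    """
    groups, current = [], [0]
    for i in range(1, len(strokes)):
        gap = strokes[i][0] - strokes[i - 1][1]
        if gap < max_gap:
            current.append(i)
        else:
            groups.append(current)
            current = [i]
    groups.append(current)
    return groups
```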
Corresponding functions can then be associated with these gestures. For example: left stroke + polyline may correspond to the 'left' key; right stroke + polyline to the 'right' key; up stroke + polyline to the 'up' key; down stroke + polyline to the 'down' key; a tick to the 'confirm' key; a 'W'-shaped stroke to a double press of the 'confirm' key; a '>'-shaped stroke to 'play'; an 'N'-shaped stroke, or its mirror image, to 'pause'; an upper-right arc + polyline to 'volume +'; a lower-right arc + polyline to 'volume -'; and an 'x'-shaped connected stroke to 'cancel'. It should be noted that these stroke actions are only examples; other embodiments may adopt stroke actions not shown or described here to achieve the same or other functions.
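Written out as a table, the example associations above might look as follows (the stroke names are free translations of the figures; none of the identifiers is mandated by the patent):

```python
# Example stroke-action -> simulated-key correspondence.
STROKE_TO_KEY = {
    "left_stroke_polyline": "KEY_LEFT",
    "right_stroke_polyline": "KEY_RIGHT",
    "up_stroke_polyline": "KEY_UP",
    "down_stroke_polyline": "KEY_DOWN",
    "tick": "KEY_CONFIRM",
    "w_stroke": "KEY_CONFIRM_DOUBLE",
    "gt_stroke": "KEY_PLAY",
    "n_stroke": "KEY_PAUSE",
    "mirrored_n_stroke": "KEY_PAUSE",
    "upper_right_arc_polyline": "KEY_VOLUME_UP",
    "lower_right_arc_polyline": "KEY_VOLUME_DOWN",
    "x_stroke": "KEY_CANCEL",
}

def simulated_key(stroke_action):
    # Unlisted stroke actions fall through to 'no key'.
    return STROKE_TO_KEY.get(stroke_action, "KEY_NONE")
```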
Thus, while using the pen, the user can execute a stroke action without changing the current pen-holding posture, achieving the purpose of simulating function keys and executing a given function, for a natural and smooth experience. For example, when reading a (foreign-language or Chinese) book, a user who wants to access the pen's spoken-language evaluation function can execute an up stroke or a down stroke (or a left stroke, a right stroke, etc.) to enter it.
In one embodiment, the step S103 includes:
judging whether the stroke action is a preset gesture or not and whether the stroke action is in a character range or not;
and if the stroke action is a preset gesture and the stroke action is in a character range, acquiring a preset function aiming at the character.
In this embodiment, both the stroke motion and the position of the stroke motion need to be perceived. If the stroke motion is a predetermined gesture and the stroke motion is within a character range, a predetermined function for the character is acquired.
A stroke action within a character range may mean that the stroke surrounds the character, for example as a circle, a square, or another pattern, and the stroke may be closed or not closed. Further, 'surrounding a character' may be understood as enclosing the whole character, or as enclosing a substantial portion of it. A stroke action may also relate to the character range in other ways, as explained in the following embodiments.
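One plausible way to test 'the stroke surrounds a substantial portion of the character' is to compare bounding boxes (a sketch under assumed conventions; the 80% coverage figure is an invented parameter):

```python
def encloses_character(stroke_points, char_box, coverage=0.8):
    """True if the stroke's bounding box covers at least `coverage` of the
    character's box (cx0, cy0, cx1, cy1) -- works for closed and nearly
    closed figures alike, since only the extent of the stroke is used."""
    xs = [p[0] for p in stroke_points]
    ys = [p[1] for p in stroke_points]
    sx0, sy0, sx1, sy1 = min(xs), min(ys), max(xs), max(ys)
    cx0, cy0, cx1, cy1 = char_box
    overlap_x = max(0.0, min(sx1, cx1) - max(sx0, cx0))
    overlap_y = max(0.0, min(sy1, cy1) - max(sy0, cy0))
    char_area = (cx1 - cx0) * (cy1 - cy0)
    return char_area > 0 and overlap_x * overlap_y >= coverage * char_area
```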
For example, a point-reading book may offer several pronunciations, such as Chinese, English, a male voice, a female voice, or a dialect, and switching among them is convenient with this embodiment: a traditional point-read operation (i.e., a point-read gesture) produces sound normally, while drawing a small circle around a character switches the pronunciation mode. Besides changing the voice, other context-aware intelligent functions can be realized through different stroke actions and stroke positions.
That is, stroke actions in the invention are combined with position judgment: the same operation performed at different positions on the point-reading book realizes different functions. Because the position of the pen on the book can be determined, the same gesture operation (stroke action) may express different meanings (i.e., realize different functions) at different positions.
In an embodiment, if the stroke motion is a predetermined gesture and the stroke motion is within a character range, acquiring a predetermined function for the character includes:
if the stroke action is a circle and the stroke action surrounds a preset range of the character, acquiring a specified pronunciation function aiming at the character;
and if the stroke action is an underline and the stroke action is positioned below the character, acquiring a translation function aiming at the character.
In the embodiment of the present invention, the stroke action in the character range may mean that the stroke action surrounds a predetermined range of the character, or that the stroke action is located below the character.
If the stroke is a circle and it encloses a predetermined range of the character, a specified pronunciation function for the character is obtained, such as pronouncing it in English. The 'circle' here is a broad concept: it may be any closed figure, or a figure close to closed. The function corresponding to the stroke action can also be adjusted.
As described in the foregoing embodiment, a plain left or right stroke corresponds to the 'left' or 'right' key; in this embodiment, however, the stroke action is combined with the point-read position to realize a predetermined function. Specifically, although an underline is likewise drawn left or right, it is only necessary to judge whether the stroke action is located below a character; if so, the translation function for that character is obtained, and the character is translated when the function is subsequently executed.
In the embodiment of the invention, after the stroke action is recognized and determined to be located below a character, the position of the character can be obtained and compared with position information edited in advance, so that the character content — a word, phrase, or sentence — is obtained, and the corresponding translation operation can subsequently be executed.
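That lookup against pre-edited position information might be sketched like this (the layout entries and thresholds are invented for illustration; real products would ship such data with the book):

```python
# Hypothetical pre-edited layout for one page of a point-reading book:
# bounding box (x0, y0, x1, y1) -> character content.
LAYOUT = [
    ((10, 10, 60, 20), "hello"),
    ((70, 10, 120, 20), "world"),
]

def text_above_underline(y, x0, x1, layout=LAYOUT, max_gap=5):
    """Given an underline at height y spanning [x0, x1], return the
    character content sitting just above it, if any."""
    for (bx0, by0, bx1, by1), text in layout:
        overlaps_horizontally = x0 < bx1 and x1 > bx0
        sits_just_below = 0 <= y - by1 <= max_gap
        if overlaps_horizontally and sits_just_below:
            return text
    return None
```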
In the prior art there are also scanning pens with a translation function, but their principle is to slide across and scan a word or sentence on the book, stitch together the images captured by a camera at the pen tip, and perform OCR recognition to obtain the scanned word or sentence and translate it.
The principle of the embodiment of the invention is completely different from that of the scanning pen: the scanning pen identifies character information in the spliced image through OCR and then realizes a translation function; the embodiment of the invention inquires the corresponding character content and translates the character content by identifying the stroke action and the position of the stroke action for reading the book and comparing the stroke action with the position information edited in advance.
Meanwhile, compared with a scanning pen, the touch and talk pen of this embodiment can realize more flexible functions, for example finer-grained operations such as circling a word or ticking a word.
For example, a plain underline may be assigned the translation function, while 'underline + circle the word' may indicate that the user attaches particular importance to the content: the character content can be added to a list of key words, or sent to a companion mobile-phone app, where a dedicated word book stores the key content for learning and memorization.
Similarly, 'underline + tick the word' (or a plain tick on a word) may indicate that the user has already memorized and understood the word: the character content can be added to a completed list, or sent to the paired mobile-phone app, where a dedicated word book stores the memorized content for later review.
In one embodiment, the step S103 includes:
acquiring the last stroke action before the current stroke action;
judging whether the current stroke action is a preset gesture and whether the last stroke action and the current stroke action are associated actions; if the current stroke action is a preset gesture and the two actions are associated, acquiring, according to the preset gesture and the correspondence, a preset function associated with the last stroke action; if the current stroke action is a preset gesture but the two actions are not associated, acquiring, according to the preset gesture and the correspondence, a preset function not associated with the last stroke action.
In the present embodiment, if too many stroke actions are defined, the user must memorize both the actions and the functions they correspond to, which increases the burden on the user. Embodiments of the invention can therefore determine the specific operation to be performed based on context.
For example, if the stroke action is a tick, the last stroke action is acquired first. If the last action was an up-down or left-right slide simulating a function-key operation, the tick is interpreted as confirming that slide. If there was no such slide before, and the tick was performed on a character area, the tick is interpreted as the corresponding word-tick function, such as translation or recording.
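That context rule might be expressed as follows (a sketch; the action names are assumptions introduced here):

```python
def interpret_tick(previous_action, on_character):
    """Disambiguate a tick stroke using context.

    previous_action: name of the last stroke action, or None.
    on_character: whether the tick was performed on a character area.
    """
    slide_actions = {"slide_up", "slide_down", "slide_left", "slide_right"}
    if previous_action in slide_actions:
        # The tick confirms the preceding simulated-key slide.
        return "confirm_previous_selection"
    if on_character:
        # No preceding slide: treat as the word-tick function
        # (e.g. translate or record the word).
        return "word_tick"
    return "unrecognized"
```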
In one embodiment, the step S103 includes:
and judging whether the stroke action is a double-click action according to the click time interval, and if so, acquiring the function corresponding to the double-click action according to the double-click action and the corresponding relation.
In the embodiment of the present invention, a double-click action generally carries a special meaning, so the embodiment treats it as a separate stroke action: it judges whether the stroke action is a double-click, and defines a corresponding function for the double-click in advance.
In this embodiment, whether the stroke action is a double-click is determined by the interval between two clicks: if the interval is smaller than a preset time threshold, the stroke action is judged to be a double-click, and the function corresponding to the double-click can be obtained.
In one embodiment, the step S103 includes:
and judging whether the stroke action is a double-click action according to the click time interval and the contact ratio of the click position, and if so, acquiring the function corresponding to the double-click action according to the double-click action and the corresponding relation.
In the previous embodiment, whether an action is a double-click is determined solely by the time interval between two click operations. This embodiment adds a further condition on the contact ratio of the click positions: a stroke action is determined to be a double-click action only when the click time interval is smaller than a preset time threshold and the contact ratio of the click positions is greater than a preset contact ratio threshold. The function corresponding to the double-click action is then obtained. This additional condition prevents misjudgment.
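The two double-click criteria above (time interval alone, and time interval plus position contact ratio) can be sketched together. The threshold values and the use of Euclidean distance as the position-coincidence measure are assumptions; the patent leaves both unspecified.

```python
import math

# Hypothetical thresholds; the patent does not specify exact values.
TIME_THRESHOLD_S = 0.4      # max interval between the two clicks
DISTANCE_THRESHOLD = 10.0   # max distance for positions to coincide

def is_double_click(t1, pos1, t2, pos2):
    """Two clicks form a double-click when the time interval is short
    enough and the click positions (nearly) coincide."""
    interval_ok = (t2 - t1) < TIME_THRESHOLD_S
    # Contact ratio is approximated here by closeness of click centers.
    position_ok = math.dist(pos1, pos2) < DISTANCE_THRESHOLD
    return interval_ok and position_ok
```

Dropping the `position_ok` term recovers the simpler time-interval-only embodiment.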
The double-click action can be used in many scenarios. One use that fits common habits is to emphasize the importance of the touched area and execute a different operation. For example, a single click may trigger the normal touch-and-read operation, while a double click on the reading area indicates that the area, or the characters or pictures at the click position, is important: the corresponding text content can then be stored, or sent to a companion mobile phone app, in which a dedicated word book can be set up to store that content.
In one embodiment, the step S103 includes:
judging whether the stroke action is a preset gesture or not and whether the position of the stroke action is in a preset area or not, and if so, acquiring a function of switching modes;
the predetermined area includes one or more of an upper left corner, an upper right corner, a lower left corner, and a lower right corner of the book.
Different touch-and-talk books carry different content. In general, however, the upper left, upper right, lower left, and lower right corners of a page are unlikely to hold actual or important content. Therefore, in the embodiment of the present invention, one or more of these four positions may be used as a special function region; the upper left corner region is preferred. When the user performs a predetermined gesture there, such as a single click or double click, a mode-switching function can be triggered. For example, if the stroke action is a double-click action, the pen can switch between a normal textbook reading mode and a word learning mode.
Of course, in the embodiment of the present invention, multiple groups of gestures and corresponding functions may be set for different modes, and the gestures in different modes may be the same or different. The same stroke action can therefore achieve completely different functions in different modes. The advantage of this design is that the user is not burdened with too many gestures: the user only needs to remember a few commonly used gestures, and different effects are achieved depending on the current mode.
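The corner hot zone and the per-mode gesture tables described above can be combined into one dispatch routine. This is a sketch under stated assumptions: the 10% corner fraction, the mode names, and the gesture-to-function mappings are all hypothetical.

```python
# Illustrative per-mode gesture tables; contents are hypothetical.
GESTURE_MAPS = {
    "textbook": {"single_click": "read_aloud", "double_click": "save_word"},
    "word_learning": {"single_click": "spell_word", "double_click": "quiz"},
}

def in_top_left_corner(x, y, page_w, page_h, frac=0.1):
    """True when (x, y) falls in the top-left hot zone of the page."""
    return x < page_w * frac and y < page_h * frac

def handle_stroke(mode, gesture, x, y, page_w, page_h):
    # A double-click inside the corner hot zone switches modes
    # instead of executing a content function.
    if gesture == "double_click" and in_top_left_corner(x, y, page_w, page_h):
        return ("switch_mode",
                "word_learning" if mode == "textbook" else "textbook")
    # Otherwise look the gesture up in the table for the current mode.
    return ("execute", GESTURE_MAPS[mode].get(gesture, "no_op"))
```

Note how the same `double_click` gesture yields `save_word`, `quiz`, or a mode switch depending only on mode and position, mirroring the design advantage described above.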
Referring to fig. 4, an embodiment of the invention further provides a touch and talk pen 400, which includes:
a storage unit 401, configured to associate a predetermined gesture with a predetermined function in advance and store a corresponding relationship, where the predetermined gesture includes a touch and talk gesture and a non-touch and talk gesture, and the predetermined function includes a touch and talk function and a non-touch and talk function correspondingly;
a detection unit 402 for detecting a stroke action performed by the reading pen on the reading book;
a determining unit 403, configured to determine whether the stroke motion is a predetermined gesture, and if so, obtain a predetermined function according to the predetermined gesture and the corresponding relationship;
an execution unit 404, configured to control the touch and talk pen to execute a predetermined function.
The specific technical details of the above-mentioned touch and talk pen have been described in the foregoing method, and are not described herein again.
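The four units of the touch and talk pen 400 (storage, detection, judging, execution) can be sketched as one minimal pipeline. The gesture names, functions, and the pre-classified stroke input are hypothetical placeholders; a real detection unit would read the built-in camera or motion sensor.

```python
# Minimal sketch of the four units from Fig. 4 wired as a pipeline.

class TouchAndTalkPen:
    def __init__(self):
        # Storage unit 401: predetermined gesture -> predetermined function.
        self.correspondence = {
            "single_click": "read_aloud",    # touch-and-talk gesture
            "circle": "pronounce_word",      # non-touch-and-talk gesture
            "underline": "translate_word",
        }

    def detect(self, raw_stroke):
        # Detection unit 402: in a real pen this classifies camera or
        # motion-sensor data; here the stroke arrives pre-classified.
        return raw_stroke

    def judge(self, stroke):
        # Judging unit 403: is the stroke a predetermined gesture?
        # Returns the predetermined function, or None.
        return self.correspondence.get(stroke)

    def execute(self, stroke):
        # Execution unit 404: run the predetermined function, if any.
        function = self.judge(self.detect(stroke))
        return function if function else "ignore"
```

A stroke not present in the correspondence table simply falls through to `"ignore"`, matching the claim structure in which only predetermined gestures acquire functions.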
The embodiment of the invention also provides a touch and talk pen, which comprises a memory, a processor and a computer program which is stored on the memory and can run on the processor, wherein the processor realizes the gesture control method when executing the computer program.
The embodiments are described in a progressive manner in the specification, each embodiment focuses on differences from other embodiments, and the same and similar parts among the embodiments are referred to each other. It should be noted that, for those skilled in the art, it is possible to make various improvements and modifications to the present invention without departing from the principle of the present invention, and those improvements and modifications also fall within the scope of the claims of the present invention.
It is further noted that, in the present specification, relational terms such as first and second, and the like are used solely to distinguish one entity or action from another entity or action without necessarily requiring or implying any actual such relationship or order between such entities or actions. Also, the terms "comprises," "comprising," or any other variation thereof, are intended to cover a non-exclusive inclusion, such that a process, method, article, or apparatus that comprises a list of elements does not include only those elements but may include other elements not expressly listed or inherent to such process, method, article, or apparatus. Without further limitation, an element defined by the phrase "comprising a … …" does not exclude the presence of other like elements in a process, method, article, or apparatus that comprises the element.

Claims (10)

1. A gesture control method based on a point-and-read pen is characterized by comprising the following steps:
associating a preset gesture with a preset function in advance and storing a corresponding relation, wherein the preset gesture comprises a touch and read gesture and a non-touch and read gesture, and the preset function comprises a touch and read function and a non-touch and read function correspondingly;
detecting stroke actions executed by a reading pen on a reading book;
judging whether the stroke action is a preset gesture, if so, acquiring a preset function according to the preset gesture and the corresponding relation;
and controlling the touch and talk pen to execute a preset function.
2. The method for controlling gestures based on a touch and talk pen according to claim 1, wherein the detecting the stroke actions performed by the touch and talk pen on the touch and talk book comprises:
shooting a stroke image of the reading pen on the reading book through a camera built in the reading pen, and identifying the stroke image to obtain a stroke action;
and/or detecting stroke actions of the reading pen on the reading book through a motion sensor built in the reading pen.
3. The method as claimed in claim 1, wherein the determining whether the stroke action is a preset gesture, and if so, acquiring a preset function according to the preset gesture and the corresponding relation comprises:
judging whether the stroke action is a preset gesture or not and whether the stroke action is in a character range or not;
and if the stroke action is a preset gesture and the stroke action is in a character range, acquiring a preset function aiming at the character.
4. The method for controlling gestures based on a touch and talk pen according to claim 3, wherein if the stroke action is a preset gesture and the stroke action is within a character range, acquiring a preset function for the character comprises:
if the stroke action is a circle and the stroke action surrounds a preset range of the character, acquiring a specified pronunciation function aiming at the character;
and if the stroke action is an underline and the stroke action is positioned below the character, acquiring a translation function aiming at the character.
5. The method as claimed in claim 1, wherein the determining whether the stroke action is a preset gesture, and if so, acquiring a preset function according to the preset gesture and the corresponding relation comprises:
acquiring the last stroke action before the current stroke action;
judging whether the current stroke action is a preset gesture and whether the last stroke action and the current stroke action are related actions, if the current stroke action is the preset gesture and the last stroke action and the current stroke action are related actions, acquiring a preset function related to the last stroke action according to the preset gesture and the corresponding relation, and if the current stroke action is the preset gesture and the last stroke action and the current stroke action are not related actions, acquiring a preset function which is not related to the last stroke action according to the preset gesture and the corresponding relation.
6. The method as claimed in claim 1, wherein the determining whether the stroke action is a preset gesture, and if so, acquiring a preset function according to the preset gesture and the corresponding relation comprises:
judging whether the stroke action is a double-click action according to the click time interval, and if so, acquiring a function corresponding to the double-click action according to the double-click action and the corresponding relation;
or judging whether the stroke action is a double-click action according to the click time interval and the contact ratio of the click position, and if so, acquiring the function corresponding to the double-click action according to the double-click action and the corresponding relation.
7. The method for controlling gestures based on a touch and talk pen according to claim 1, wherein the detecting the stroke actions performed by the touch and talk pen on the touch and talk book comprises:
comparing the interval time between strokes to a predetermined threshold;
if the interval time is greater than the threshold, the stroke action is determined to be a one-stroke action; if the interval time is less than the threshold, the stroke is determined to be a two-stroke action or a multi-stroke action.
8. The method as claimed in claim 1, wherein the determining whether the stroke action is a preset gesture, and if so, acquiring a preset function according to the preset gesture and the corresponding relation comprises:
judging whether the stroke action is a preset gesture or not and whether the position of the stroke action is in a preset area or not, and if so, acquiring a function of switching modes;
the predetermined area includes one or more of an upper left corner, an upper right corner, a lower left corner, and a lower right corner of the book.
9. A stylus comprising a memory, a processor and a computer program stored on the memory and executable on the processor, wherein the processor implements the gesture control method of any one of claims 1 to 8 when executing the computer program.
10. A point-and-read pen, comprising:
the storage unit is used for associating preset gestures with preset functions in advance and storing corresponding relations, wherein the preset gestures comprise click-to-read gestures and non-click-to-read gestures, and the preset functions comprise click-to-read functions and non-click-to-read functions correspondingly;
the detection unit is used for detecting stroke actions executed by the point reading pen on the point reading book;
the judging unit is used for judging whether the stroke action is a preset gesture or not, and if so, acquiring a preset function according to the preset gesture and the corresponding relation;
and the execution unit is used for controlling the touch and talk pen to execute a preset function.
CN202010837741.0A 2020-08-19 2020-08-19 Gesture control method based on touch and talk pen and touch and talk pen Pending CN111949132A (en)

Priority Applications (1)

Application Number Priority Date Filing Date Title
CN202010837741.0A CN111949132A (en) 2020-08-19 2020-08-19 Gesture control method based on touch and talk pen and touch and talk pen


Publications (1)

Publication Number Publication Date
CN111949132A (en) 2020-11-17

Family

ID=73358510

Family Applications (1)

Application Number Title Priority Date Filing Date
CN202010837741.0A Pending CN111949132A (en) 2020-08-19 2020-08-19 Gesture control method based on touch and talk pen and touch and talk pen

Country Status (1)

Country Link
CN (1) CN111949132A (en)

Citations (8)

* Cited by examiner, † Cited by third party
Publication number Priority date Publication date Assignee Title
EP0689124A1 (en) * 1994-06-21 1995-12-27 Canon Kabushiki Kaisha Handwritten information processing apparatus and method
CN1955889A (en) * 2005-10-25 2007-05-02 尤卫建 Pen-type character input operation method and device
CN101317148A (en) * 2005-12-30 2008-12-03 国际商业机器公司 Hand-written input method and apparatus based on video
CN102455869A (en) * 2011-09-29 2012-05-16 北京壹人壹本信息科技有限公司 Method and device for editing characters by using gestures
CN103186268A (en) * 2011-12-29 2013-07-03 盛乐信息技术(上海)有限公司 Handwriting input method and system
CN103809791A (en) * 2012-11-12 2014-05-21 广东小天才科技有限公司 Multifunctional reading method and system
CN108052938A (en) * 2017-12-28 2018-05-18 广州酷狗计算机科技有限公司 A kind of point-of-reading device
CN208216358U (en) * 2018-02-05 2018-12-11 武汉商贸职业学院 A kind of English teaching Multifunctional template ruler pen


Similar Documents

Publication Publication Date Title
US9104306B2 (en) Translation of directional input to gesture
US10146318B2 (en) Techniques for using gesture recognition to effectuate character selection
US9740399B2 (en) Text entry using shapewriting on a touch-sensitive input panel
KR101795574B1 (en) Electronic device controled by a motion, and control method thereof
EP2680110B1 (en) Method and apparatus for processing multiple inputs
CN107436691B (en) Method, client, server and device for correcting errors of input method
KR102284238B1 (en) Input display device, input display method, and program
US20120050530A1 (en) Use camera to augment input for portable electronic device
US20110205148A1 (en) Facial Tracking Electronic Reader
CN109614845A (en) Manage real-time handwriting recognition
JP2010067104A (en) Digital photo-frame, information processing system, control method, program, and information storage medium
JP2011141905A (en) Integrated keypad system
US8326597B2 (en) Translation apparatus, method, and computer program product for detecting language discrepancy
KR20240059509A (en) Display method and apparatus, dictionary pen, electronic appliance, and storage medium
CN112799530A (en) Touch screen control method and device, electronic equipment and storage medium
CN112163513A (en) Information selection method, system, device, electronic equipment and storage medium
US20140297257A1 (en) Motion sensor-based portable automatic interpretation apparatus and control method thereof
CN111949132A (en) Gesture control method based on touch and talk pen and touch and talk pen
CN111553356B (en) Character recognition method and device, learning device and computer readable storage medium
JP6710893B2 (en) Electronics and programs
CN113709322A (en) Scanning method and related equipment thereof
WO2022071448A1 (en) Display apparatus, display method, and program
CN111522488B (en) Interaction method for calling task panel by mobile phone terminal
CN111176439A (en) Reading control method based on visual tracking, intelligent glasses and system
KR20130128143A (en) Apparatus and method for controlling interface using hand gesture and computer-readable recording medium with program therefor

Legal Events

Date Code Title Description
PB01 Publication
SE01 Entry into force of request for substantive examination