CN108874139B - Target interaction method and system cooperatively driven by visual focus and hand motion tracking - Google Patents

Target interaction method and system cooperatively driven by visual focus and hand motion tracking

Info

Publication number
CN108874139B
Authority
CN
China
Prior art keywords: cursor, visual focus, user, target, selection
Prior art date
Legal status: Active
Application number
CN201810636848.1A
Other languages
Chinese (zh)
Other versions
CN108874139A (en)
Inventor
程时伟 (Cheng Shiwei)
朱安杰 (Zhu Anjie)
范菁 (Fan Jing)
Current Assignee
Zhejiang University of Technology ZJUT
Original Assignee
Zhejiang University of Technology ZJUT
Priority date
Filing date
Publication date
Application filed by Zhejiang University of Technology ZJUT filed Critical Zhejiang University of Technology ZJUT
Priority to CN201810636848.1A
Publication of CN108874139A
Application granted
Publication of CN108874139B

Classifications

    • G — PHYSICS
    • G06 — COMPUTING; CALCULATING OR COUNTING
    • G06F — ELECTRIC DIGITAL DATA PROCESSING
    • G06F 3/00 — Input arrangements for transferring data to be processed into a form capable of being handled by the computer; Output arrangements for transferring data from processing unit to output unit, e.g. interface arrangements
    • G06F 3/01 — Input arrangements or combined input and output arrangements for interaction between user and computer
    • G06F 3/011 — Arrangements for interaction with the human body, e.g. for user immersion in virtual reality
    • G06F 3/013 — Eye tracking input arrangements
    • G06F 3/017 — Gesture based interaction, e.g. based on a set of recognized hand gestures

Abstract

A target interaction method cooperatively driven by visual focus and hand motion tracking comprises the following steps: visual focus tracking, hand motion tracking, and target interaction fusing visual focus and hand motion. The invention also provides a system for realizing the method, comprising the following sequentially connected, data-feeding modules: a visual focus tracking module, a hand motion tracking module, and a target interaction module fusing visual focus and hand motion.

Description

Target interaction method and system cooperatively driven by visual focus and hand motion tracking
Technical Field
The invention relates to a target interaction method and system for large-scale display interfaces.
Background
With the growing adoption of projection screens, large LED displays, and similar devices in work and daily life, human-computer interaction for large display interfaces has developed rapidly. Target selection is an important operation in large-display interaction scenarios and generally proceeds in two steps: 1) locating the target to be selected; 2) confirming the currently located target. With a conventional mouse, the user must drag the cursor across long distances on a large display interface, which increases the operation load. Against this background, interaction methods based on human eye visual focus tracking have increasingly been applied to target selection on large display interfaces: because visual focus tracking is strongly directional, it can locate and select a target quickly. Moreover, the visual focus reflects the user's interaction intention to some extent, which further improves the naturalness of implicit interaction. The present invention therefore provides a target interaction method and system cooperatively driven by visual focus and hand motion tracking, realizing target selection driven by multi-channel fusion. Targets on a large display interface are selected through an interaction mechanism in which the visual focus points at a target and hand movement confirms it. In particular, when targets are small and closely spaced, the selection range of the visual focus tracking cursor is adjusted through hand movement, which improves the effective pointing precision of visual focus tracking and reduces operation difficulty; combined with cursor stabilization, secondary selection, and an associated optimization mechanism, target selection becomes faster and more accurate.
Disclosure of Invention
The invention overcomes the defects of the prior art by providing a target interaction method cooperatively driven by visual focus and hand motion tracking.
The target interaction method cooperatively driven by visual focus and hand motion tracking according to the invention comprises the following steps:
(1) visual focus tracking;
(2) hand motion tracking;
(3) target interaction fusing visual focus and hand motion.
The invention also provides a target interaction system cooperatively driven by visual focus and hand motion tracking, comprising the following sequentially connected, data-feeding modules:
(1) a visual focus tracking module;
(2) a hand motion tracking module;
(3) a target interaction module fusing visual focus and hand motion.
The advantages of the invention are as follows: a target interaction method cooperatively driven by visual focus and hand motion tracking is provided, in which the visual focus points at a target and a hand movement confirms the selection, enabling efficient and natural target selection on large display interfaces. When targets are small and closely spaced, the interaction process is comprehensively improved through visual-focus cursor stabilization, secondary selection, and a corresponding optimization mechanism, which effectively relaxes the precision requirement on visual focus tracking and markedly improves the efficiency and accuracy of interaction tasks on large display interfaces.
Drawings
FIG. 1 is a schematic flow chart of the method of the present invention.
FIGS. 2a-2b are schematic diagrams of the scalable visual focus tracking cursor of the method of the present invention, wherein FIG. 2a shows the visual focus tracking cursor in its initial state and FIG. 2b shows the cursor enlarged by the "zoom" gesture.
FIGS. 3a-3b are schematic diagrams of the stabilized visual focus tracking cursor of the method of the present invention, wherein FIG. 3a shows the visual focus tracking cursor in its initial state and FIG. 3b shows the cursor moved to a new position after a large shift of the visual focus.
FIG. 4 is a schematic diagram of the secondary selection function of the method of the present invention.
FIGS. 5a-5b are schematic diagrams of target selection based on the pre-selection list according to the method of the present invention, wherein FIG. 5a shows the visual focus tracking cursor covering several closely spaced targets with the currently selected target at the far left, and FIG. 5b shows the same cursor and targets after the pre-selection optimization mechanism is introduced, with the currently selected target at the far right.
FIG. 6 is a schematic diagram of the basic logical structure of the system of the present invention.
Detailed Description
The present invention will be described more fully hereinafter with reference to the accompanying drawings, in which some, but not all, embodiments of the invention are shown; the invention is not limited to the disclosed embodiments. All other embodiments obtained by a person skilled in the art from the embodiments given herein without creative effort shall fall within the protection scope of the present invention.
Referring to FIG. 1, a schematic flow chart of the target interaction method cooperatively driven by visual focus and hand motion tracking according to an embodiment of the present invention, the steps of the method are described in detail as follows:
(1) visual focus tracking;
the method comprises the steps of shooting a human eye image by using a camera device with an infrared light source, carrying out binarization processing on the human eye image, and then filtering by using a Gaussian kernel function to remove noise in the human eye image. And further extracting features of the human eye image to obtain the center of the pupil of the human eye and the center of an infrared light reflection spot (also called Purkinje spot), calculating a vector formed by the center and the infrared light reflection spot, finally performing a calibration process, establishing a mapping relation between the vector and the visual focus of the user on the large display interface, and further obtaining plane coordinates of other visual focuses through fitting calculation.
(2) Hand motion tracking;
the method comprises the steps of acquiring a hand motion model of a person by using an existing hand motion tracking device, and tracking and identifying different gestures such as ' lifting up ', ' zooming ', waving ' and the like according to hand motion characteristics. A "lift" gesture is characterized by a hand motion range greater than a certain distance H (e.g., H may be set to 20 cm) and a motion direction that is vertical from low to high; a "zoom" gesture is characterized by a hand from a fist-closed state to a palm-opened state (open), or from a palm-opened state to a fist-closed state (contracted), and setting the eyeball radius at fist-closed to Rw and the eyeball radius at palm-opened to Rz, requiring Rw/Rz > K (e.g., K may be set to 60%); a "hand-waving" gesture is characterized by a hand motion direction that is horizontal moving from left to right or right to left, and a motion speed that is greater than S (e.g., S may be set to 30 cm/sec).
(3) Fusing target interaction of visual focus and hand motion;
As shown in FIG. 2a, the cursor is a circle with center point a and radius r. The user controls the movement of the circular cursor with the visual focus; once the cursor covers the target, the user confirms the selection with a "lift" gesture, finally selecting the target. In addition, when the target is too small for the cursor to cover it accurately, the user can enlarge the cursor radius r with a "zoom" gesture, as shown in FIG. 2b.
Cursor stabilization. As shown in FIG. 3a, let g be the user's current actual visual focus and d the distance between the actual visual focus and the cursor center. While the user's visual focus stays within the cursor radius (d ≤ r), the cursor position does not change. Only when the visual focus shifts by a larger amount, beyond the cursor range (d > r), does the cursor position change accordingly, as shown in FIG. 3b. This stabilization eliminates jitter caused by visual focus tracking accuracy errors (the user cares about cursor stability when selecting a target) while still adjusting the cursor position promptly after a large shift of the visual focus (the user cares about cursor flexibility when moving the cursor), effectively helping the user select small targets.
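The scalable, stabilized cursor reduces to a small state object. The sketch below is illustrative (the class name, default radius, and 1.5x growth factor are assumptions): the center follows the gaze point only when the gaze leaves the circle (d > r), and a "zoom" gesture enlarges r.

```python
import math

class GazeCursor:
    def __init__(self, x, y, r=50.0):
        self.x, self.y, self.r = x, y, r  # center point a and radius r

    def on_zoom(self, factor=1.5):
        self.r *= factor                  # "zoom" gesture enlarges r (FIG. 2b)

    def on_gaze(self, gx, gy):
        d = math.hypot(gx - self.x, gy - self.y)
        if d <= self.r:
            return                        # jitter inside the circle: stay put (FIG. 3a)
        self.x, self.y = gx, gy           # large shift: follow the focus (FIG. 3b)

    def covers(self, tx, ty):
        return math.hypot(tx - self.x, ty - self.y) <= self.r
```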
Secondary selection. As shown in FIG. 4, a large number of small targets are closely arranged on the large display interface; after initial positioning through visual focus tracking, the user completes selection of the given target (the thick-bordered rectangle in FIG. 4) through a secondary selection using hand motion tracking. First, the user enlarges the cursor with a "zoom" gesture, relaxing the visual focus tracking precision required for selecting small targets. All targets covered by the cursor are added to a pre-selection list (e.g., the 4 targets covered by the cursor in FIG. 4). The pre-selection list has a selection pointer; when the user confirms the selection with a "lift" gesture, the target pointed to by the pointer is selected and turns dark. The user moves the selection pointer with the "wave" gesture and, once it points to the desired target, confirms with another "lift" gesture, completing the precise selection of the set target.
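The secondary selection maps naturally onto a small state machine. The sketch below reuses the GazeCursor sketch above; the Target objects (with x and y attributes) and the event-handler names are assumptions for illustration.

```python
class SecondarySelector:
    def __init__(self):
        self.preselection = []  # targets covered by the enlarged cursor
        self.pointer = 0        # the selection pointer in the list

    def begin(self, cursor, targets):
        """After the "zoom" gesture: collect every target under the cursor."""
        self.preselection = [t for t in targets if cursor.covers(t.x, t.y)]
        self.pointer = 0

    def on_wave(self, step=1):
        """A "wave" gesture moves the selection pointer."""
        if self.preselection:
            self.pointer = (self.pointer + step) % len(self.preselection)

    def on_lift(self):
        """A "lift" gesture confirms and returns the pointed-to target."""
        return self.preselection[self.pointer] if self.preselection else None
```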
Optimization mechanism. As shown in FIG. 5a, when targets are distributed too densely, the cursor covers too many targets at once, so screening them through the secondary selection takes too long (e.g., "waving" through the targets one by one from left to right), and selection efficiency drops noticeably. Therefore, priorities are assigned according to how likely each target is to be selected, and higher-priority targets are easier to select. Specifically: while the user makes a selection with the visual focus cursor, the coordinates of all the user's visual focuses over the past period T (e.g., T may be set to 2 seconds) are recorded, and their average is taken as the user's most likely current visual focus coordinate (called the predicted visual focus). The closer a target is to the predicted visual focus, the higher its priority, and vice versa. The target priorities in the pre-selection list also change dynamically as the predicted visual focus moves. As shown in FIG. 5b, i is the predicted visual focus; it is closest to the rightmost target, so that target has the highest priority, is placed first in the pre-selection list, and is automatically preselected. The user then need not wave to switch multiple times, but directly confirms the selection with a "lift" gesture, noticeably improving selection efficiency.
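The optimization mechanism reduces to a sliding-window average plus a sort by distance. In the sketch below, the class and function names are assumptions; T uses the 2-second example value from the text.

```python
import math
import time
from collections import deque

T_SECONDS = 2.0  # averaging window T (example value from the text)

class FocusPredictor:
    """Averages the gaze samples of the last T seconds into a predicted focus."""
    def __init__(self):
        self.samples = deque()  # entries: (timestamp, x, y)

    def add(self, x, y, now=None):
        now = time.monotonic() if now is None else now
        self.samples.append((now, x, y))
        while self.samples and now - self.samples[0][0] > T_SECONDS:
            self.samples.popleft()  # drop samples older than T

    def predicted_focus(self):
        if not self.samples:
            return None
        n = len(self.samples)
        return (sum(s[1] for s in self.samples) / n,
                sum(s[2] for s in self.samples) / n)

def prioritize(preselection, focus):
    """Nearest to the predicted focus = highest priority = first in the list."""
    fx, fy = focus
    return sorted(preselection, key=lambda t: math.hypot(t.x - fx, t.y - fy))
```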
FIG. 6 shows the basic logical structure of the target interaction system cooperatively driven by visual focus and hand motion tracking according to an embodiment of the present invention. For convenience of explanation, only the portions related to the embodiment are shown. The functional modules/units in the system can be hardware or software modules/units, and mainly comprise the following sequentially connected, data-feeding modules:
(1) the visual focus tracking module, which extracts the pupil and the Purkinje spot from the eye image, computes their center coordinates, establishes the pupil-cornea reflection vector from the pupil center and the Purkinje-spot center, builds the visual focus mapping model, and computes the coordinates of the visual focus on the large display interface;
(2) the hand motion tracking module, which acquires a model of the user's two hands, detects the user's hand motion information in real time from the model's motion features, and recognizes and distinguishes different gestures;
(3) the target interaction module fusing visual focus and hand motion, which receives the visual focus coordinate data computed in real time together with the detected hand motion information and associated gestures, fuses them into corresponding interaction instructions, controls cursor movement on the large display interface, and in particular provides the cursor stabilization, secondary selection, and corresponding selection optimization functions; a minimal fusion loop is sketched below.
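Tying the modules together, a per-frame fusion loop might look like the following minimal sketch, which reuses the classes sketched in the method description; the tracker interfaces (focus(), poll()) and the event names are assumptions for illustration.

```python
class InteractionSystem:
    """Modules (1)-(3) wired in sequence, feeding data per frame."""
    def __init__(self, gaze_tracker, hand_tracker):
        self.gaze_tracker = gaze_tracker    # module (1): visual focus tracking
        self.hand_tracker = hand_tracker    # module (2): hand motion tracking
        self.cursor = GazeCursor(0.0, 0.0)  # module (3): fusion state
        self.selector = SecondarySelector()
        self.predictor = FocusPredictor()

    def tick(self, targets):
        gx, gy = self.gaze_tracker.focus()  # visual focus on the display
        self.predictor.add(gx, gy)
        self.cursor.on_gaze(gx, gy)         # stabilized cursor update
        gesture = self.hand_tracker.poll()  # "lift" / "zoom" / "wave" / None
        if gesture == "zoom":
            self.cursor.on_zoom()
            self.selector.begin(self.cursor, targets)
            focus = self.predictor.predicted_focus()
            if focus is not None:           # optimization: reorder by priority
                self.selector.preselection = prioritize(
                    self.selector.preselection, focus)
        elif gesture == "wave":
            self.selector.on_wave()
        elif gesture == "lift":
            return self.selector.on_lift()  # the confirmed target, if any
        return None
```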
In the embodiment of the present invention, the modules may be integrated into a whole, deployed separately, or further split into multiple sub-modules. The modules may be distributed in the system as described in the embodiment, or placed, with corresponding changes, in one or more systems different from that of the embodiment.
As will be appreciated by one skilled in the art, embodiments of the present invention may be provided as a method, system, or computer program product.
The modules or steps of the embodiments of the present invention may be implemented with a general-purpose computing device; alternatively, they may be implemented with program code executable by such a device, so that they can be stored in a storage device and executed by the computing device, or fabricated as individual integrated circuit modules, or multiple modules or steps may be fabricated as a single integrated circuit module. Thus, embodiments of the invention are not limited to any specific combination of hardware and software.
The embodiments described in this specification merely illustrate implementations of the inventive concept; the scope of the present invention should not be considered limited to the specific forms set forth in the embodiments, but also covers equivalents conceivable by those skilled in the art based on the inventive concept.

Claims (2)

1. A target interaction method cooperatively driven by visual focus and hand motion tracking, comprising the following steps:
(1) visual focus tracking;
shooting an image of the human eye with a camera device having an infrared light source, binarizing the image, and filtering with a Gaussian kernel to remove noise; extracting features from the eye image to obtain the center of the pupil and the center of the infrared reflection spot, and computing the vector formed by the pupil center and the reflection-spot center; finally performing a calibration process to establish the mapping between this vector and the user's visual focus on the large display interface, and obtaining the plane coordinates of other visual focuses through fitting calculation;
(2) hand motion tracking;
acquiring a hand motion model of the user with an existing hand motion tracking device, and tracking and recognizing the "lift", "zoom" and "wave" gestures from hand motion features; the "lift" gesture is characterized in that the hand movement range is greater than a distance H and the movement direction is vertically from low to high; the "zoom" gesture is characterized in that the hand goes from a closed-fist state to an open-palm state or from an open-palm state to a closed-fist state, the radius of the hand's enclosing circle with the fist closed being Rw and with the palm open being Rz, with Rw/Rz required to be greater than K; the "wave" gesture is characterized in that the hand moves horizontally from left to right or from right to left at a speed greater than S;
(3) target interaction fusing visual focus and hand motion;
the circle center of the cursor is point a and the radius of the cursor is r; the user controls the movement of the circular cursor with the visual focus, and after the cursor covers the target, the user confirms the selection with a "lift" gesture, finally selecting the target; when the target is too small for the cursor to cover it accurately, the user can enlarge the cursor radius r with a "zoom" gesture;
cursor stabilization: the user's current actual visual focus is g and the distance between the actual visual focus and the cursor center is d; when the user's visual focus is within the cursor radius, i.e. d ≤ r, the cursor position does not change; only when the user's visual focus shifts over a larger range, beyond the cursor range, i.e. d > r, does the cursor position change accordingly; the cursor thus eliminates instability caused by visual focus tracking accuracy errors, i.e. the user's concern for cursor stability when selecting a target, while still adjusting the cursor position promptly after a large shift of the visual focus, i.e. the user's concern for cursor flexibility when moving the cursor, effectively helping the user select small targets;
secondary selection: a large number of small targets are closely arranged on the large display interface; after initial positioning through visual focus tracking, the user completes selection of the set target through a secondary selection using hand motion tracking; first, the user enlarges the cursor with a "zoom" gesture, relaxing the visual focus tracking precision required for selecting small targets; all targets covered by the cursor are added to a pre-selection list; the pre-selection list has a selection pointer, and when the user confirms with a "lift" gesture, the target pointed to by the pointer is selected and turns dark; the user moves the selection pointer with the "wave" gesture and, when it points to the desired target, confirms with another "lift" gesture, completing the precise selection of the set target;
optimization mechanism: when targets are distributed too densely, the cursor covers too many targets at once, so screening targets through the secondary selection takes too long and selection efficiency drops noticeably; therefore, priorities are set according to the probability that each target is selected, and higher-priority targets are easier to select; specifically: while the user makes a selection with the visual focus cursor, the coordinates of all the user's visual focuses over a past period T are recorded, and their average is taken as the user's most likely current visual focus coordinate, called the predicted visual focus; the closer a target is to the predicted visual focus, the higher its priority, and vice versa; the target priorities in the pre-selection list also change dynamically with the predicted visual focus position; i being the predicted visual focus and closest to the rightmost target, the rightmost target has the highest priority, is set first in the pre-selection list, and is automatically preselected, whereupon the user need not wave to switch multiple times but directly confirms the selection with a "lift" gesture, improving selection efficiency.
2. A system for implementing the target interaction method cooperatively driven by visual focus and hand motion tracking as claimed in claim 1, comprising the following sequentially connected, data-feeding modules:
the visual focus tracking module, which extracts the pupil and the Purkinje spot from the eye image, computes their center coordinates, establishes the pupil-cornea reflection vector from the pupil center and the Purkinje-spot center, builds the visual focus mapping model, and computes the coordinates of the visual focus on the large display interface;
the hand motion tracking module, which acquires a model of the user's two hands, detects the user's hand motion information in real time from the model's motion features, and recognizes and distinguishes different gestures;
and the target interaction module fusing visual focus and hand motion, which receives the visual focus coordinate data computed in real time together with the detected hand motion information and associated gestures, fuses them into corresponding interaction instructions, controls cursor movement on the large display interface, and in particular provides the cursor stabilization, secondary selection, and corresponding selection optimization functions.
CN201810636848.1A 2018-06-20 2018-06-20 Target interaction method and system cooperatively driven by visual focus and hand motion tracking Active CN108874139B (en)

Priority Applications (1)

Application Number Priority Date Filing Date Title
CN201810636848.1A CN108874139B (en) 2018-06-20 2018-06-20 Target interaction method and system cooperatively driven by visual focus and hand motion tracking


Publications (2)

Publication Number Publication Date
CN108874139A CN108874139A (en) 2018-11-23
CN108874139B true CN108874139B (en) 2021-02-02

Family

ID=64340756

Family Applications (1)

Application Number Title Priority Date Filing Date
CN201810636848.1A Active CN108874139B (en) 2018-06-20 2018-06-20 Target interaction method and system cooperatively driven by visual focus and hand motion tracking

Country Status (1)

Country Link
CN (1) CN108874139B (en)

Families Citing this family (3)

* Cited by examiner, † Cited by third party
Publication number Priority date Publication date Assignee Title
CN109917914B (en) * 2019-03-05 2022-06-17 河海大学常州校区 Interactive interface analysis and optimization method based on visual field position
CN110244844A (en) * 2019-06-10 2019-09-17 Oppo广东移动通信有限公司 Control method and relevant apparatus
CN111290575A (en) * 2020-01-21 2020-06-16 中国人民解放军空军工程大学 Multichannel interactive control system of air defense anti-pilot weapon

Patent Citations (6)

* Cited by examiner, † Cited by third party
Publication number Priority date Publication date Assignee Title
CN201638151U (en) * 2010-01-29 2010-11-17 周幼宁 Device for realizing virtual display and virtual interactive operation
WO2011106797A1 (en) * 2010-02-28 2011-09-01 Osterhout Group, Inc. Projection triggering through an external marker in an augmented reality eyepiece
CN102447835A (en) * 2011-10-29 2012-05-09 合肥博微安全电子科技有限公司 Non-blind-area multi-target cooperative tracking method and system
CN103513771A (en) * 2013-10-09 2014-01-15 深圳先进技术研究院 System and method for using intelligent glasses in cooperation with interactive pen
CN103777754A (en) * 2014-01-10 2014-05-07 上海大学 Hand motion tracking device and method based on binocular infrared vision
CN105204629A (en) * 2015-09-02 2015-12-30 成都上生活网络科技有限公司 3D (3-dimensional) gesture recognition method

Non-Patent Citations (2)

* Cited by examiner, † Cited by third party
Title
Designing eye gaze visualizations for online shopping social recommendations; Shiwei Cheng et al.; In Proceedings of the 2013 Conference on Computer Supported Cooperative Work Companion; Dec. 31, 2013; pp. 125-128 *
Eye movement data visualization annotation method for reading instruction (面向阅读教学的眼动数据可视化批注方法); Cheng Shiwei et al.; Journal of Zhejiang University of Technology (浙江工业大学学报); Dec. 31, 2017; Vol. 45, No. 6; pp. 610-614 *

Also Published As

Publication number Publication date
CN108874139A (en) 2018-11-23

Similar Documents

Publication Publication Date Title
US10913463B2 (en) Gesture based control of autonomous vehicles
US9569010B2 (en) Gesture-based human machine interface
RU2439653C2 (en) Virtual controller for display images
US9360965B2 (en) Combined touch input and offset non-touch gesture
EP3090331B1 (en) Systems with techniques for user interface control
AU2013329127B2 (en) Touchless input for a user interface
EP4193244A1 (en) Devices, methods, and graphical user interfaces for interacting with three-dimensional environments
US11782514B2 (en) Wearable device and control method thereof, gesture recognition method, and control system
CN108874139B (en) Target interaction method and system cooperatively driven by visual focus and hand motion tracking
IL261580A (en) System and method for deep learning based hand gesture recognition in first person view
US20140123077A1 (en) System and method for user interaction and control of electronic devices
US20140333585A1 (en) Electronic apparatus, information processing method, and storage medium
US20230013169A1 (en) Method and device for adjusting the control-display gain of a gesture controlled electronic device
CN103425409B (en) The control method and device of Projection Display
CN105912101B (en) Projection control method and electronic equipment
WO2020080107A1 (en) Information processing device, information processing method, and program
US10416761B2 (en) Zoom effect in gaze tracking interface
EP4339746A1 (en) Touchless user-interface control method including time-controlled fading
US11860373B2 (en) User interfaces provided by wearable smart eye-glasses
WO2020170851A1 (en) Information processing device, information processing method, and program
CN117917620A (en) Light projection system and method
Shintani et al. Evaluation of a pointing interface for a large screen based on regression model with image features
CN116166161A (en) Interaction method based on multi-level menu and related equipment
Onuki et al. Combined use of rear touch gestures and facial feature detection to achieve single-handed navigation of mobile devices
CN109246286A (en) Control method, system, equipment and the storage medium of intelligent terminal application operating

Legal Events

Date Code Title Description
PB01 Publication
SE01 Entry into force of request for substantive examination
GR01 Patent grant