CN109696958A - Gesture control method and system based on depth sensor gesture recognition - Google Patents

Gesture control method and system based on depth sensor gesture recognition

Info

Publication number
CN109696958A
CN109696958A (application CN201811432459.3A)
Authority
CN
China
Prior art keywords
hand
characteristic
image
control method
depth
Prior art date
Legal status (The legal status is an assumption and is not a legal conclusion. Google has not performed a legal analysis and makes no representation as to the accuracy of the status listed.)
Pending
Application number
CN201811432459.3A
Other languages
Chinese (zh)
Inventor
朱德清
黄骏
周晓军
余建男
洪哲鸣
李骊
王行
盛赞
李朔
杨淼
Current Assignee (The listed assignees may be inaccurate. Google has not performed a legal analysis and makes no representation or warranty as to the accuracy of the list.)
Nanjing Huajie Imi Software Technology Co Ltd
Original Assignee
Nanjing Huajie Imi Software Technology Co Ltd
Priority date (The priority date is an assumption and is not a legal conclusion. Google has not performed a legal analysis and makes no representation as to the accuracy of the date listed.)
Filing date
Publication date
Application filed by Nanjing Huajie Imi Software Technology Co Ltd filed Critical Nanjing Huajie Imi Software Technology Co Ltd
Priority to CN201811432459.3A priority Critical patent/CN109696958A/en
Publication of CN109696958A publication Critical patent/CN109696958A/en
Pending legal-status Critical Current

Classifications

    • G: PHYSICS
    • G06: COMPUTING; CALCULATING OR COUNTING
    • G06F: ELECTRIC DIGITAL DATA PROCESSING
    • G06F 3/00: Input arrangements for transferring data to be processed into a form capable of being handled by the computer; Output arrangements for transferring data from processing unit to output unit, e.g. interface arrangements
    • G06F 3/01: Input arrangements or combined input and output arrangements for interaction between user and computer
    • G06F 3/017: Gesture based interaction, e.g. based on a set of recognized hand gestures
    • G: PHYSICS
    • G06: COMPUTING; CALCULATING OR COUNTING
    • G06F: ELECTRIC DIGITAL DATA PROCESSING
    • G06F 3/00: Input arrangements for transferring data to be processed into a form capable of being handled by the computer; Output arrangements for transferring data from processing unit to output unit, e.g. interface arrangements
    • G06F 3/01: Input arrangements or combined input and output arrangements for interaction between user and computer
    • G06F 3/048: Interaction techniques based on graphical user interfaces [GUI]
    • G06F 3/0481: Interaction techniques based on graphical user interfaces [GUI] based on specific properties of the displayed interaction object or a metaphor-based environment, e.g. interaction with desktop elements like windows or icons, or assisted by a cursor's changing behaviour or appearance
    • G06F 3/04817: Interaction techniques based on graphical user interfaces [GUI] based on specific properties of the displayed interaction object or a metaphor-based environment, e.g. interaction with desktop elements like windows or icons, or assisted by a cursor's changing behaviour or appearance, using icons
    • G: PHYSICS
    • G06: COMPUTING; CALCULATING OR COUNTING
    • G06F: ELECTRIC DIGITAL DATA PROCESSING
    • G06F 3/00: Input arrangements for transferring data to be processed into a form capable of being handled by the computer; Output arrangements for transferring data from processing unit to output unit, e.g. interface arrangements
    • G06F 3/01: Input arrangements or combined input and output arrangements for interaction between user and computer
    • G06F 3/048: Interaction techniques based on graphical user interfaces [GUI]
    • G06F 3/0487: Interaction techniques based on graphical user interfaces [GUI] using specific features provided by the input device, e.g. functions controlled by the rotation of a mouse with dual sensing arrangements, or of the nature of the input device, e.g. tap gestures based on pressure sensed by a digitiser
    • G06F 3/0488: Interaction techniques based on graphical user interfaces [GUI] using specific features provided by the input device, using a touch-screen or digitiser, e.g. input of commands through traced gestures
    • G06F 3/04883: Interaction techniques based on graphical user interfaces [GUI] using a touch-screen or digitiser for inputting data by handwriting, e.g. gesture or text
    • G: PHYSICS
    • G06: COMPUTING; CALCULATING OR COUNTING
    • G06V: IMAGE OR VIDEO RECOGNITION OR UNDERSTANDING
    • G06V 40/00: Recognition of biometric, human-related or animal-related patterns in image or video data
    • G06V 40/10: Human or animal bodies, e.g. vehicle occupants or pedestrians; Body parts, e.g. hands
    • G06V 40/107: Static hand or arm
    • G06V 40/113: Recognition of static hand signs

Landscapes

  • Engineering & Computer Science (AREA)
  • Theoretical Computer Science (AREA)
  • General Engineering & Computer Science (AREA)
  • Human Computer Interaction (AREA)
  • Physics & Mathematics (AREA)
  • General Physics & Mathematics (AREA)
  • Multimedia (AREA)
  • User Interface Of Digital Computer (AREA)

Abstract

The invention discloses a gesture control method and system based on depth sensor gesture recognition, comprising activation and mouse pointer position control. Specifically, depth images are acquired; hand features are extracted from each depth-image frame and matched against the preset hand features of a first hand shape, and activation occurs when the matching condition is met. When the matching condition for a second hand shape is met, a virtual input event controlling the mouse pointer position is sent to the system. While consecutive frames continue to meet the matching condition, the mouse pointer coordinates move to follow the changes in hand position. By detecting changes in palm shape, combining them with the system's virtual-input-event mechanism and remapping to screen coordinates, the invention moves an icon on the screen to a target position; the functions can also be extended to other operations such as clicking. This control method is convenient to operate, accurate in recognition, undemanding of the environment, and consistent with users' operating habits.

Description

Gesture control method and system based on depth sensor gesture recognition
Technical field
The present invention relates to interactive input control, and in particular to a gesture control method and system based on depth sensor gesture recognition.
Background art
With the development of motion-sensing technology, remote manipulation of smart devices using human motion has been realized, and in particular, simple remote manipulation of smart devices with graphical interfaces has been implemented. In addition, ultra-large display screens are now widely used, and the need to operate images on large screens with gestures has arisen accordingly.
However, prior-art methods that control the screen through changes in the relative position of the human body are neither precise nor accurate, and they rely on whole-body movement for control, which not only places high demands on the environment but is also inconvenient for the user; response speed and convenience are unsatisfactory.
Summary of the invention
Object of the invention: in view of the above drawbacks of the prior art, the present invention aims to provide a gesture control method and system based on depth sensor gesture recognition.
Technical solution: a gesture control method based on depth sensor gesture recognition, comprising S1: activation; and S2: mouse pointer position control;
wherein S1 specifically comprises: acquiring depth images, extracting hand features from each depth-image frame and matching them against the preset hand features of a first hand shape, activating when the matching condition is met, and displaying a mouse icon on the display after activation;
S2 specifically comprises:
S2.1: after activation, extracting hand features from each depth-image frame and matching them against the preset hand features of a second hand shape, and, when the matching condition is met, sending to the system a virtual input event that controls the mouse pointer position;
S2.2: continuing to extract hand features from each depth-image frame and match them against the preset hand features of the second hand shape; while consecutive frames continue to meet the matching condition, calculating the hand displacement between consecutive frames and mapping it to the screen coordinate system, so that the mouse pointer coordinates move as the hand position changes, and displaying the movement of the mouse icon on the display.
Further, the mouse pointer position control method comprises: after consecutive frames have continued to meet the matching condition in step S2.2, if the hand features extracted from a depth image no longer meet the matching condition against the preset hand features of the second hand shape, sending to the system a virtual input event that releases control of the mouse pointer position.
Further, in the mouse pointer position control method, after the virtual input event releasing control of the mouse pointer position is sent to the system, a static mouse icon is shown on the display, and the display of the mouse icon is cancelled once a preset time limit is exceeded.
Further, a click method is also included: after activation, hand features are extracted from each depth-image frame and matched against the preset hand features of a third hand shape; when the matching condition is met, a mouse-click virtual input event is sent to the system.
Further, the mouse-click virtual input event is a left-button click, a double click, or a right-button click.
Further, a double-click method is also included: after activation, hand features are extracted from each depth-image frame and matched against the preset hand features of a fourth hand shape; when two spaced frames of hand features meet the matching condition and the interval between them is less than a set threshold, a double-click virtual input event is sent to the system.
Further, a step S0 precedes S1: recording gesture models, presetting the hand features of each hand shape, and presetting the mapping from different gestures to virtual input events.
Further, extracting hand features from each depth-image frame and matching them against the preset hand features of the first hand shape specifically comprises: acquiring each depth-image frame, identifying the coordinates of each hand key point in the image, and calculating the gesture degree-of-freedom parameters of each frame; and determining, from the per-frame gesture degree-of-freedom parameters, the similarity of the whole frame to the preset parameters and comparing it with a preset similarity threshold, the matching condition being met when the preset similarity threshold is satisfied.
Further, identifying the coordinates of each hand key point in the image specifically comprises: marking the initial position coordinates of each hand key point in the current depth image; normalizing the depth image; obtaining, from the normalized depth image and a preset multilayer convolutional neural network model, multiple candidate regions containing each hand key point; performing non-maximum suppression on the candidate regions to obtain the optimal candidate region; and performing image post-processing on the optimal candidate region to obtain the final position coordinates of each hand key point in the depth image.
A gesture control system based on depth sensor gesture recognition comprises a processing module for executing the above gesture control method based on depth sensor gesture recognition, a depth camera for capturing depth images, and a display.
Beneficial effects: by detecting changes in palm shape, combining them with the system's virtual-input-event mechanism and remapping to screen coordinates, the present invention moves an icon on the screen to a target position; the functions can also be extended to other operations such as clicking. This control method is convenient to operate, accurate in recognition, undemanding of the environment, and consistent with users' operating habits.
Specific embodiment
The technical solution is described in detail below through a preferred embodiment.
As the precision and accuracy of gesture recognition technology have improved substantially, its wide adoption has made gesture control practical. The present invention provides a gesture control method and system based on depth sensor gesture recognition. The system comprises a depth camera for capturing depth images, a display, and a processing module; the processing module executes the following method, which specifically comprises the following steps:
S0: record gesture models, preset the hand features of each hand shape, and preset the mapping from different gestures to virtual input events.
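For illustration only, step S0 could store each preset hand shape as a template of gesture degree-of-freedom parameters together with the virtual input event it maps to, as in the minimal sketch below; the hand-shape names, event names and per-finger values are assumptions, not details given in the specification.

```python
from dataclasses import dataclass

@dataclass
class GestureModel:
    name: str                  # hand-shape name, e.g. "open_hand" (first hand shape)
    dof_template: list         # preset gesture degree-of-freedom parameters (illustrative)
    event: str                 # virtual input event this gesture maps to
    similarity_threshold: float = 0.85  # preset similarity threshold (assumed value)

# Hypothetical S0 registry: different hand shapes map to different virtual input events.
GESTURE_REGISTRY = {
    "open_hand": GestureModel("open_hand", [0.9, 0.9, 0.9, 0.9, 0.9], "ACTIVATE"),
    "grasp":     GestureModel("grasp",     [0.1, 0.1, 0.1, 0.1, 0.1], "POINTER_DOWN"),
    "click":     GestureModel("click",     [0.1, 0.9, 0.9, 0.9, 0.9], "MOUSE_CLICK"),
}
```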
S1: activation, which specifically comprises: acquiring depth images, extracting hand features from each depth-image frame and matching them against the preset hand features of the first hand shape, activating when the matching condition is met, and displaying a mouse icon on the display after activation. In this embodiment the first hand shape is a dedicated hand shape: within the system environment it is used exclusively for activation and for no other purpose, so as to avoid false triggering. The system can thus detect this motion at any time and then display the mouse, or another control that represents the gesture.
Extracting hand features from each depth-image frame and matching them against the preset hand features of the first hand shape specifically comprises: acquiring each depth-image frame, identifying the coordinates of each hand key point in the image, and calculating the gesture degree-of-freedom parameters of each frame; and determining, from the per-frame gesture degree-of-freedom parameters, the similarity of the whole frame to the preset parameters and comparing it with the preset similarity threshold, the matching condition being met when the preset similarity threshold is satisfied.
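A minimal sketch of this matching step, assuming the per-frame gesture degree-of-freedom parameters form a fixed-length numeric vector and that similarity is derived from the Euclidean distance to the preset template (the specification does not name a metric, so this choice is an assumption):

```python
import numpy as np

def matches_hand_shape(frame_dof, template_dof, similarity_threshold=0.85):
    """Return True when the frame's gesture degree-of-freedom parameters are
    similar enough to the preset hand-shape template (matching condition met)."""
    frame = np.asarray(frame_dof, dtype=float)
    template = np.asarray(template_dof, dtype=float)
    # Assumed similarity measure: 1 / (1 + Euclidean distance), in (0, 1].
    similarity = 1.0 / (1.0 + float(np.linalg.norm(frame - template)))
    return similarity >= similarity_threshold
```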
Identifying the coordinates of each hand key point in the image specifically comprises: marking the initial position coordinates of each hand key point in the current depth image; normalizing the depth image; obtaining, from the normalized depth image and a preset multilayer convolutional neural network model, multiple candidate regions containing each hand key point; performing non-maximum suppression on the candidate regions to obtain the optimal candidate region; and performing image post-processing on the optimal candidate region to obtain the final position coordinates of each hand key point in the depth image.
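The non-maximum suppression step can be sketched as follows; the box format (x1, y1, x2, y2, score) and the IoU threshold are assumptions, since the specification only names the operation.

```python
def non_max_suppression(boxes, iou_threshold=0.5):
    """Keep the highest-scoring candidate regions, discarding any region that
    overlaps an already-kept region by more than iou_threshold.
    Each box is a tuple (x1, y1, x2, y2, score)."""
    def iou(a, b):
        ix1, iy1 = max(a[0], b[0]), max(a[1], b[1])
        ix2, iy2 = min(a[2], b[2]), min(a[3], b[3])
        inter = max(0, ix2 - ix1) * max(0, iy2 - iy1)
        area_a = (a[2] - a[0]) * (a[3] - a[1])
        area_b = (b[2] - b[0]) * (b[3] - b[1])
        return inter / (area_a + area_b - inter + 1e-9)

    kept = []
    for box in sorted(boxes, key=lambda b: b[4], reverse=True):
        if all(iou(box, k) <= iou_threshold for k in kept):
            kept.append(box)
    return kept  # kept[0] is the optimal candidate region for this key point
```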
The same method is also used below when matching against the second, third, fourth and other hand shapes; only the hand features of the respective hand shape are substituted for comparison.
S2: mouse pointer position control, which specifically comprises:
S2.1: after activation, hand features are extracted from each depth-image frame and matched against the preset hand features of the second hand shape; when the matching condition is met, a virtual input event controlling the mouse pointer position is sent to the system. Here, the processing system "sending a virtual input event to the system" means calling the relevant API of the system's input-event module to deliver touch, acceleration and similar events. The event types vary with the operating-system platform; on the Android platform, for example, down, move and up events are reported, corresponding to activation, movement, and stopping movement.
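The platform-specific event reporting can be hidden behind a small abstraction such as the sketch below; the class and method names are illustrative assumptions, and in a real system the placeholder call would go through the platform's input-injection API (on Android, the down/move/up events mentioned above) rather than a print statement.

```python
class VirtualInputReporter:
    """Illustrative wrapper around the platform input-event module.
    On Android the three methods correspond to down / move / up events."""

    def pointer_down(self, x, y):
        self._inject("down", x, y)   # take control of the mouse pointer

    def pointer_move(self, x, y):
        self._inject("move", x, y)   # move the mouse pointer

    def pointer_up(self, x, y):
        self._inject("up", x, y)     # release control of the mouse pointer

    def _inject(self, kind, x, y):
        # Placeholder: a real implementation would call the OS input-event API here.
        print(f"virtual input event: {kind} at ({x}, {y})")
```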
S2.2: hand features continue to be extracted from each depth-image frame and matched against the preset hand features of the second hand shape; while consecutive frames continue to meet the matching condition, the hand displacement between consecutive frames is calculated and mapped to the screen coordinate system, the mouse pointer coordinates move as the hand position changes, and the movement of the mouse icon is shown on the display.
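The mapping from hand displacement to pointer movement might look like the sketch below; the camera resolution, screen resolution and gain factor are assumed values used only to illustrate remapping into the screen coordinate system.

```python
def map_hand_to_screen(prev_hand, curr_hand,
                       cam_size=(640, 480), screen_size=(1920, 1080), gain=1.0):
    """Convert the hand displacement between two consecutive depth frames
    (camera pixel coordinates) into a pointer displacement in screen coordinates."""
    dx = (curr_hand[0] - prev_hand[0]) * screen_size[0] / cam_size[0] * gain
    dy = (curr_hand[1] - prev_hand[1]) * screen_size[1] / cam_size[1] * gain
    return dx, dy

def clamp_to_screen(x, y, screen_size=(1920, 1080)):
    """Keep the mouse pointer coordinates inside the screen."""
    return (min(max(x, 0), screen_size[0] - 1),
            min(max(y, 0), screen_size[1] - 1))
```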
The mouse pointer position control method comprises: after consecutive frames have continued to meet the matching condition in step S2.2, if the hand features extracted from a depth image no longer meet the matching condition against the preset hand features of the second hand shape, a virtual input event releasing control of the mouse pointer position is sent to the system.
Each hand shape refers to a gesture model of a different palm form, for example the palm open with the fingers together, the palm open with the fingers spread, a grasp, a fist with the thumb up, a scissors hand, and so on. In actual use, as long as no conflict arises between different hand shapes, the same hand shape can also represent operations at different moments.
In this embodiment the first hand shape is defined as an open hand, covering both the palm open with fingers together and the palm open with fingers spread, either of which matches; the first hand shape is set as the default state, and the mouse cursor shown on the screen is an arrow. The second hand shape is defined as a grasp; when the user makes the second hand shape, the mouse cursor shown on the screen is a circle.
After the virtual input event releasing control of the mouse pointer position is sent to the system, a static mouse icon is shown on the display, and the display of the mouse icon is cancelled once the preset time limit is exceeded.
Cancelling the display of the mouse icon when control of the mouse pointer position is released benefits the overall visual presentation; the control can be reactivated when the mouse pointer needs to be controlled again.
A click method is also included: after activation, hand features are extracted from each depth-image frame and matched against the preset hand features of the third hand shape; when the matching condition is met, a mouse-click virtual input event is sent to the system. The mouse-click virtual input event is a left-button click, a double click, or a right-button click. Different hand shapes can be chosen as needed; this method can also be used to switch windows, simply by setting aside one hand shape for window switching in the same way.
A double-click method is also included: after activation, hand features are extracted from each depth-image frame and matched against the preset hand features of the fourth hand shape; when two spaced frames of hand features meet the matching condition and the interval between them is less than a set threshold, a double-click virtual input event is sent to the system.
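The interval check can be implemented with a simple timestamp comparison, as sketched below; the 0.5 s threshold is an assumed value, since the specification only speaks of a set threshold.

```python
import time

class DoubleClickDetector:
    """Report a double-click virtual input event when two matching frames of the
    fourth hand shape occur within a set time threshold."""

    def __init__(self, max_interval=0.5):   # assumed threshold, in seconds
        self.max_interval = max_interval
        self.last_match_time = None

    def on_frame_matched(self):
        """Call once per frame whose hand features match the fourth hand shape."""
        now = time.monotonic()
        if (self.last_match_time is not None
                and now - self.last_match_time < self.max_interval):
            self.last_match_time = None
            return "DOUBLE_CLICK"            # send double-click virtual input event
        self.last_match_time = now
        return None
```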
The system of the invention also includes a motion-sensing action-recognition module. When a person makes a specific action, for example raising the right hand, the action-recognition module normalizes the world coordinate system of the hand and maps it to the screen coordinate system, and a custom mouse control can then be shown on the screen.
Illustratively, if the user waves both hands continuously for 5 seconds, the system pops up an arrow-shaped mouse control; this means the gesture has been activated, and gestures can then be used to move images on the screen.
A screen image, such as a system desktop icon, is shown on the display, and the user wants to move it. The user raises a hand and, keeping the first hand shape, moves the pointer to the position of the screen image. Then, when the user transforms the hand into the second hand shape, the grasp, the camera captures the continuous hand-shape transformation, namely the process of the palm changing from the open form to the grasping form; this triggers the processing system to show a circular cursor on the display and, at the same time, to report a virtual press (down) input event. Finally, the user moves the hand while keeping the second hand shape, and the mouse icon on the screen moves with the hand. Once the desired position on the screen is reached, the user transforms the hand from a grasp to five spread fingers; after the camera captures this continuous information, the processing system is triggered to report a virtual release (up) input event. The screen image has thus been moved to the target position.
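The drag interaction just described can be summarized as a small state machine; the state names and hand-shape strings below are illustrative assumptions, and the reporter object is the VirtualInputReporter sketched earlier in this section.

```python
def drag_state_machine(state, hand_shape, reporter, pointer_xy):
    """Illustrative state transitions for the open-hand / grasp / release flow:
    IDLE -> POINTING on the open hand (activation, arrow cursor),
    POINTING -> DRAGGING on the grasp (press event, circular cursor),
    DRAGGING -> POINTING when the hand opens again (release event)."""
    if state == "IDLE" and hand_shape == "open_hand":
        return "POINTING"                         # activated, pointer follows the hand
    if state == "POINTING" and hand_shape == "grasp":
        reporter.pointer_down(*pointer_xy)        # press reported, drag begins
        return "DRAGGING"
    if state == "DRAGGING" and hand_shape == "open_hand":
        reporter.pointer_up(*pointer_xy)          # release reported, drag finished
        return "POINTING"
    return state                                  # no transition for this frame
```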
The above is only a preferred embodiment of the present invention. For those skilled in the art, several improvements and modifications can also be made without departing from the principle of the invention, and these improvements and modifications should likewise be regarded as falling within the protection scope of the invention.

Claims (10)

1. A gesture control method based on depth sensor gesture recognition, characterized by comprising S1: activation; and S2: mouse pointer position control;
wherein S1 specifically comprises: acquiring depth images, extracting hand features from each depth-image frame and matching them against the preset hand features of a first hand shape, activating when the matching condition is met, and displaying a mouse icon on the display after activation;
S2 specifically comprises:
S2.1: after activation, extracting hand features from each depth-image frame and matching them against the preset hand features of a second hand shape, and, when the matching condition is met, sending to the system a virtual input event that controls the mouse pointer position;
S2.2: continuing to extract hand features from each depth-image frame and match them against the preset hand features of the second hand shape; while consecutive frames continue to meet the matching condition, calculating the hand displacement between consecutive frames and mapping it to the screen coordinate system, so that the mouse pointer coordinates move as the hand position changes, and displaying the movement of the mouse icon on the display.
2. The gesture control method based on depth sensor gesture recognition according to claim 1, characterized in that the mouse pointer position control method comprises: after consecutive frames have continued to meet the matching condition in step S2.2, if the hand features extracted from a depth image no longer meet the matching condition against the preset hand features of the second hand shape, sending to the system a virtual input event that releases control of the mouse pointer position.
3. The gesture control method based on depth sensor gesture recognition according to claim 2, characterized in that, in the mouse pointer position control method, after the virtual input event releasing control of the mouse pointer position is sent to the system, a static mouse icon is shown on the display and the display of the mouse icon is cancelled once a preset time limit is exceeded.
4. The gesture control method based on depth sensor gesture recognition according to claim 1, characterized by further comprising a click method: after activation, extracting hand features from each depth-image frame and matching them against the preset hand features of a third hand shape, and, when the matching condition is met, sending a mouse-click virtual input event to the system.
5. The gesture control method based on depth sensor gesture recognition according to claim 4, characterized in that the mouse-click virtual input event is a left-button click, a double click, or a right-button click.
6. The gesture control method based on depth sensor gesture recognition according to claim 1, characterized by further comprising a double-click method: after activation, extracting hand features from each depth-image frame and matching them against the preset hand features of a fourth hand shape, and, when two spaced frames of hand features meet the matching condition and the interval between them is less than a set threshold, sending a double-click virtual input event to the system.
7. The gesture control method based on depth sensor gesture recognition according to claim 1, characterized by further comprising, before step S1, a step S0: recording gesture models, presetting the hand features of each hand shape, and presetting the mapping from different gestures to virtual input events.
8. The gesture control method based on depth sensor gesture recognition according to claim 1, characterized in that extracting hand features from each depth-image frame and matching them against the preset hand features of the first hand shape specifically comprises: acquiring each depth-image frame, identifying the coordinates of each hand key point in the image, and calculating the gesture degree-of-freedom parameters of each frame; and determining, from the per-frame gesture degree-of-freedom parameters, the similarity of the whole frame to the preset parameters and comparing it with a preset similarity threshold, the matching condition being met when the preset similarity threshold is satisfied.
9. The gesture control method based on depth sensor gesture recognition according to claim 8, characterized in that identifying the coordinates of each hand key point in the image specifically comprises: marking the initial position coordinates of each hand key point in the current depth image; normalizing the depth image; obtaining, from the normalized depth image and a preset multilayer convolutional neural network model, multiple candidate regions containing each hand key point; performing non-maximum suppression on the candidate regions to obtain the optimal candidate region; and performing image post-processing on the optimal candidate region to obtain the final position coordinates of each hand key point in the depth image.
10. A gesture control system based on depth sensor gesture recognition, characterized by comprising a processing module for executing the gesture control method based on depth sensor gesture recognition according to any one of claims 1-9, a depth camera for capturing depth images, and a display.
CN201811432459.3A 2018-11-28 2018-11-28 Gesture control method and system based on depth sensor gesture recognition Pending CN109696958A (en)

Priority Applications (1)

Application Number Priority Date Filing Date Title
CN201811432459.3A CN109696958A (en) 2018-11-28 2018-11-28 Gesture control method and system based on depth sensor gesture recognition

Applications Claiming Priority (1)

Application Number Priority Date Filing Date Title
CN201811432459.3A CN109696958A (en) 2018-11-28 2018-11-28 Gesture control method and system based on depth sensor gesture recognition

Publications (1)

Publication Number Publication Date
CN109696958A true CN109696958A (en) 2019-04-30

Family

ID=66230182

Family Applications (1)

Application Number Title Priority Date Filing Date
CN201811432459.3A Pending CN109696958A (en) 2018-11-28 2018-11-28 Gesture control method and system based on depth sensor gesture recognition

Country Status (1)

Country Link
CN (1) CN109696958A (en)

Patent Citations (5)

* Cited by examiner, † Cited by third party
Publication number Priority date Publication date Assignee Title
US20110234492A1 (en) * 2010-03-29 2011-09-29 Ajmera Rahul Gesture processing
CN103150116A (en) * 2013-03-05 2013-06-12 福建升腾资讯有限公司 RDP-based method for magnification display of cloud desktop
CN108446073A (en) * 2018-03-12 2018-08-24 阿里巴巴集团控股有限公司 A kind of method, apparatus and terminal for simulating mouse action using gesture
CN108509049A (en) * 2018-04-19 2018-09-07 北京华捷艾米科技有限公司 The method and system of typing gesture function
CN108549878A (en) * 2018-04-27 2018-09-18 北京华捷艾米科技有限公司 Hand detection method based on depth information and system

Cited By (7)

* Cited by examiner, † Cited by third party
Publication number Priority date Publication date Assignee Title
CN110928432A (en) * 2019-10-24 2020-03-27 中国人民解放军军事科学院国防科技创新研究院 Ring mouse, mouse control device and mouse control system
CN110928432B (en) * 2019-10-24 2023-06-23 中国人民解放军军事科学院国防科技创新研究院 Finger ring mouse, mouse control device and mouse control system
WO2021244650A1 (en) * 2020-06-05 2021-12-09 北京字节跳动网络技术有限公司 Control method and device, terminal and storage medium
CN114327229A (en) * 2020-09-25 2022-04-12 宏碁股份有限公司 Image-based gesture control method and electronic device using same
CN112817443A (en) * 2021-01-22 2021-05-18 歌尔科技有限公司 Display interface control method, device and equipment based on gestures and storage medium
CN113095243A (en) * 2021-04-16 2021-07-09 推想医疗科技股份有限公司 Mouse control method and device, computer equipment and medium
CN114148838A (en) * 2021-12-29 2022-03-08 淮阴工学院 Elevator non-contact virtual button operation method

Similar Documents

Publication Publication Date Title
CN109696958A (en) Gesture control method and system based on depth sensor gesture recognition
US11617942B2 (en) Information processing device, control method of information processing device, and program
EP2049976B1 (en) Virtual controller for visual displays
CN109074217B (en) Application for multi-touch input detection
JP5631535B2 (en) System and method for a gesture-based control system
KR101844390B1 (en) Systems and techniques for user interface control
CN106502570A (en) A kind of method of gesture identification, device and onboard system
CN107209582A (en) The method and apparatus of high intuitive man-machine interface
WO2021035646A1 (en) Wearable device and control method therefor, gesture recognition method, and control system
WO2016026365A1 (en) Man-machine interaction method and system for achieving contactless mouse control
CN111656313A (en) Screen display switching method, display device and movable platform
CN109144598A (en) Electronics mask man-machine interaction method and system based on gesture
Bordegoni et al. A dynamic gesture language and graphical feedback for interaction in a 3d user interface
CN111343341B (en) One-hand mode implementation method based on mobile equipment
CN107367966B (en) Man-machine interaction method and device
CN105242795A (en) Method for inputting English letters by azimuth gesture
CN114148838A (en) Elevator non-contact virtual button operation method
CN101446859B (en) Machine vision based input method and system thereof
JPH09237151A (en) Graphical user interface
JP5788853B2 (en) System and method for a gesture-based control system
CN111477054A (en) Traffic police commands gesture training system based on Kinect
JP2021009552A (en) Information processing apparatus, information processing method, and program
JPS62150423A (en) Coordinate input device
CN111694432B (en) Virtual hand position correction method and system based on virtual hand interaction
CN115437499A (en) Virtual video identification control system and method

Legal Events

Date Code Title Description
PB01 Publication
SE01 Entry into force of request for substantive examination
RJ01 Rejection of invention patent application after publication (application publication date: 20190430)