WO2018076848A1 - Gesture control method for a virtual reality head-mounted display - Google Patents


Info

Publication number
WO2018076848A1
WO2018076848A1 (PCT/CN2017/095028)
Authority
WO
WIPO (PCT)
Prior art keywords
hand
gesture
mouse
control method
fist
Application number
PCT/CN2017/095028
Other languages
English (en)
Chinese (zh)
Inventor
胡青剑
Original Assignee
蔚来汽车有限公司
Priority date (The priority date is an assumption and is not a legal conclusion. Google has not performed a legal analysis and makes no representation as to the accuracy of the date listed.)
Application filed by 蔚来汽车有限公司
Publication of WO2018076848A1

Classifications

    • G - PHYSICS
    • G06 - COMPUTING; CALCULATING OR COUNTING
    • G06F - ELECTRIC DIGITAL DATA PROCESSING
    • G06F 18/00 - Pattern recognition
    • G06F 18/20 - Analysing
    • G06F 18/21 - Design or setup of recognition systems or techniques; Extraction of features in feature space; Blind source separation
    • G06F 18/214 - Generating training patterns; Bootstrap methods, e.g. bagging or boosting
    • G - PHYSICS
    • G06 - COMPUTING; CALCULATING OR COUNTING
    • G06F - ELECTRIC DIGITAL DATA PROCESSING
    • G06F 18/00 - Pattern recognition
    • G06F 18/20 - Analysing
    • G06F 18/24 - Classification techniques
    • G06F 18/241 - Classification techniques relating to the classification model, e.g. parametric or non-parametric approaches
    • G - PHYSICS
    • G06 - COMPUTING; CALCULATING OR COUNTING
    • G06F - ELECTRIC DIGITAL DATA PROCESSING
    • G06F 3/00 - Input arrangements for transferring data to be processed into a form capable of being handled by the computer; Output arrangements for transferring data from processing unit to output unit, e.g. interface arrangements
    • G06F 3/01 - Input arrangements or combined input and output arrangements for interaction between user and computer
    • G06F 3/017 - Gesture based interaction, e.g. based on a set of recognized hand gestures
    • G - PHYSICS
    • G06 - COMPUTING; CALCULATING OR COUNTING
    • G06F - ELECTRIC DIGITAL DATA PROCESSING
    • G06F 3/00 - Input arrangements for transferring data to be processed into a form capable of being handled by the computer; Output arrangements for transferring data from processing unit to output unit, e.g. interface arrangements
    • G06F 3/01 - Input arrangements or combined input and output arrangements for interaction between user and computer
    • G06F 3/048 - Interaction techniques based on graphical user interfaces [GUI]
    • G06F 3/0487 - Interaction techniques based on graphical user interfaces [GUI] using specific features provided by the input device, e.g. functions controlled by the rotation of a mouse with dual sensing arrangements, or of the nature of the input device, e.g. tap gestures based on pressure sensed by a digitiser
    • G - PHYSICS
    • G06 - COMPUTING; CALCULATING OR COUNTING
    • G06V - IMAGE OR VIDEO RECOGNITION OR UNDERSTANDING
    • G06V 40/00 - Recognition of biometric, human-related or animal-related patterns in image or video data
    • G06V 40/20 - Movements or behaviour, e.g. gesture recognition
    • G06V 40/28 - Recognition of hand or arm movements, e.g. recognition of deaf sign language

Definitions

  • The present invention relates to the field of human-computer interaction, and in particular to a gesture control method for a virtual reality head-mounted display.
  • A virtual reality head-mounted display is a new generation of display device, such as a virtual reality helmet, that provides an immersive mode of human-computer interaction: a computer generates a three-dimensional virtual world that is presented in the head-mounted display.
  • Existing virtual reality head-mounted displays usually rely on conventional input devices such as a mouse or a Bluetooth controller for human-computer interaction, which is costly and inconvenient to carry.
  • The present invention provides a gesture control method for a virtual reality head-mounted display that reproduces mouse actions through hand movements alone, dispensing with auxiliary hardware input devices.
  • The gesture control method for a virtual reality head-mounted display comprises the following steps:
  • Step S11: perform hand recognition through the video capture device, and decide whether to enter the mouse-follow state according to how long the hand rests in the capture area;
  • Step S12: after entering the mouse-follow state, capture the hand motion frame by frame through the video capture device, and make the mouse in the display system follow the hand according to the hand's position in each consecutive frame;
  • Step S13: in the mouse-follow state, recognize the hand in the images captured by the video capture device, detect changes of the hand gesture, and carry out the click-confirmation operation.
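Steps S11-S13 amount to a small state machine: a dwell timer gates entry into the mouse-follow state, per-frame hand positions then drive the cursor, and a non-fist-to-fist transition starts the click confirmation. A minimal Python sketch, assuming hand detection and fist classification are available upstream; all names and the dwell threshold are illustrative, not taken from the patent:

```python
import time
from dataclasses import dataclass

@dataclass
class HandObservation:
    x: float          # normalised hand position in the camera frame
    y: float
    is_fist: bool     # True when the gesture classifier reports a fist

class GestureMouse:
    """Steps S11-S13 as a tiny state machine (illustrative names and values)."""

    def __init__(self, dwell_seconds=1.0):
        self.dwell_seconds = dwell_seconds  # hypothetical rest time for S11
        self.following = False
        self.cursor = (0.5, 0.5)
        self._dwell_start = None
        self._was_fist = False

    def update(self, obs, moved, now=None):
        """Feed one per-frame observation; `moved` says whether the hand moved
        noticeably since the last frame. Returns True when a click candidate
        (open hand closing into a fist) is seen in the follow state."""
        now = time.monotonic() if now is None else now
        if obs is None:               # hand left the capture area
            self._dwell_start = None
            return False
        if not self.following:
            # S11: enter the mouse-follow state after the hand rests in view
            if moved:
                self._dwell_start = None
            else:
                if self._dwell_start is None:
                    self._dwell_start = now
                if now - self._dwell_start >= self.dwell_seconds:
                    self.following = True
            return False
        # S12: the cursor follows the hand position frame by frame
        self.cursor = (obs.x, obs.y)
        # S13: a non-fist -> fist transition starts the click confirmation
        clicked = obs.is_fist and not self._was_fist
        self._was_fist = obs.is_fist
        return clicked
```

Passing `now` explicitly keeps the dwell logic deterministic in tests; a real pipeline would simply use the frame timestamps.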
  • The hand is recognized in the captured images using a strong classifier constructed with the AdaBoost algorithm.
  • The strong classifier is constructed with the AdaBoost algorithm as follows:
  • Step S22: initialize the weights w_{t,i} of the training samples, where t is the iteration (cycle) index;
  • Step S231: normalize the weights;
  • Step S232: for each feature f, train a classifier h(x_i, f, p, θ), i.e. estimate its weighted error under the current weights q_t;
  • Step S233: select the classifier h_t with the smallest error;
  • Step S234: update the weights, where β_t is the error corresponding to the classifier h_t selected in step S233;
  • Step S24: finally combine the weak classifiers into a strong classifier.
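The loop in steps S22-S24 matches the Viola-Jones flavour of AdaBoost. A compact sketch over one-dimensional threshold stumps (the patent would train on image features such as Haar-like responses; all function and variable names here are ours):

```python
import numpy as np

def train_adaboost(X, y, T=10):
    """Viola-Jones-style AdaBoost over 1-D threshold stumps (steps S22-S24).

    X: (n_samples, n_features) float array; y: labels in {0, 1}."""
    n, d = X.shape
    w = np.full(n, 1.0 / n)              # S22: initialise sample weights
    stumps, alphas = [], []
    for _ in range(T):
        w = w / w.sum()                  # S231: normalise the weights
        best = None
        for f in range(d):               # S232: train h(x, f, p, theta) per feature
            for theta in np.unique(X[:, f]):
                for p in (1, -1):        # polarity of the inequality
                    pred = (p * X[:, f] < p * theta).astype(int)
                    err = float(np.dot(w, pred != y))
                    if best is None or err < best[0]:
                        best = (err, f, theta, p, pred)
        err, f, theta, p, pred = best    # S233: smallest weighted error wins
        err = min(max(err, 1e-10), 1 - 1e-10)
        beta = err / (1.0 - err)
        w = w * beta ** (pred == y)      # S234: shrink weights of correct samples
        stumps.append((f, theta, p))
        alphas.append(np.log(1.0 / beta))

    def strong(x):                       # S24: weighted vote of the weak learners
        s = sum(a * int(p * x[f] < p * t)
                for (f, t, p), a in zip(stumps, alphas))
        return int(s >= 0.5 * sum(alphas))
    return strong
```

In a detector the strong classifier would be slid over image windows; here it simply labels a feature vector.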
  • The hand gestures recognized by the method comprise two gestures, a fist gesture and a non-fist gesture. The non-fist gesture controls the mouse to follow the hand movement in the display system, and the fist gesture triggers the click-confirmation operation of the mouse in the display system.
  • The fist gesture triggers the click-confirmation operation as follows: the hand changes from the non-fist state to the fist state and moves forward; when the moving distance is below a set distance threshold and the moving speed is below a set speed threshold, the click-confirmation operation of the mouse is executed in the display system.
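The forward-click test can be sketched as a check on a short trajectory recorded after the hand closes into a fist. The threshold values below are illustrative placeholders; the patent does not state concrete numbers:

```python
def is_click(track, dist_max=0.15, speed_max=0.5):
    """Confirm a click from samples recorded after the fist closes.

    track: list of (t, z) pairs, where z is the forward (toward-camera)
    coordinate in arbitrary units. The click is confirmed only when the
    fist moved forward by less than `dist_max` and slower than `speed_max`,
    mirroring the distance and speed thresholds described in the patent."""
    if len(track) < 2:
        return False
    (t0, z0), (t1, z1) = track[0], track[-1]
    dist = z1 - z0                               # forward displacement
    speed = abs(dist) / max(t1 - t0, 1e-9)       # average speed over the push
    return 0 < dist < dist_max and speed < speed_max
```

A backward or sideways motion, an overly long push, or a fast jab all fail the test, which filters out accidental fist closures.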
  • Recognizing the hand in the captured image requires an image-segmentation pre-processing step that separates the hand from the background.
  • One pre-processing method performs region recognition based on the difference between the skin color and the background color.
  • Another pre-processing method is contour-based image segmentation.
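A common concrete form of the skin-color/background-color pre-processing is thresholding in YCbCr space. The Cb/Cr ranges below are the widely used generic skin intervals, not values taken from the patent:

```python
import numpy as np

def skin_mask(rgb):
    """Skin-color segmentation in YCbCr space (a stand-in for the patent's
    color-difference pre-processing; thresholds are the common generic
    Cb in [77, 127], Cr in [133, 173] skin ranges).

    rgb: uint8 array of shape (H, W, 3); returns a boolean mask."""
    r = rgb[..., 0].astype(np.float64)
    g = rgb[..., 1].astype(np.float64)
    b = rgb[..., 2].astype(np.float64)
    # ITU-R BT.601 RGB -> Cb/Cr conversion
    cb = 128 - 0.168736 * r - 0.331264 * g + 0.5 * b
    cr = 128 + 0.5 * r - 0.418688 * g - 0.081312 * b
    return (77 <= cb) & (cb <= 127) & (133 <= cr) & (cr <= 173)
```

The resulting mask would typically be cleaned with morphological operations before the hand contour is extracted.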
  • The invention recognizes the hand through the video capture device, controls the mouse in the display system to follow the hand according to the hand's position in consecutive frames, and performs the click-confirmation operation by judging changes of the hand gesture.
  • It dispenses with auxiliary hardware input devices, which greatly improves convenience for the user and the feel of the operation.
  • FIG. 1 is a schematic flow chart of the gesture control method for a virtual reality head-mounted display according to the present invention.
  • The invention is designed for current virtual reality head-mounted displays and is suitable for any such device equipped with a video capture device (such as a phone camera, an external camera, or a corresponding sensor), for example VR glasses or a virtual reality helmet.
  • When an operation needs to be performed in the scene shown by the head-mounted display, the camera recognizes the hand and renders a virtual mouse arrow (or a similar virtual marker) in the VR scene; by recognizing the movement track of the hand, selection and confirmation operations are carried out in the VR scene.
  • Movement and selection are driven by the recognized hand trajectory; the confirmation operation is a forward "click", detected by computing the difference in distance between frames.
  • The gesture control method for a virtual reality head-mounted display proposed by the present invention includes the following steps:
  • Step S11: perform hand recognition through the video capture device, and decide whether to enter the mouse-follow state according to how long the hand rests in the capture area;
  • Step S12: after entering the mouse-follow state, capture the hand motion frame by frame through the video capture device, and make the mouse in the display system follow the hand according to the hand's position in each consecutive frame;
  • Step S13: in the mouse-follow state, recognize the hand in the images captured by the video capture device, detect changes of the hand gesture, and carry out the click-confirmation operation.
  • The hand gestures recognized in this embodiment comprise two gestures: a fist gesture and a non-fist gesture.
  • The non-fist gesture controls the mouse to follow the hand movement in the display system, and the fist gesture triggers the click-confirmation operation of the mouse in the display system.
  • The fist gesture triggers the click-confirmation operation as follows: the hand changes from the non-fist state to the fist state and moves forward; when the moving distance is below the set distance threshold and the moving speed is below the set speed threshold, the click-confirmation operation of the mouse is executed in the display system.
  • The non-fist state means that at least one finger is open. The recognition rate for the fist and non-fist states is very high, which supports fast and accurate control decisions.
  • Recognizing the hand in the captured image requires an image-segmentation pre-processing step that separates the hand from the background.
  • The pre-processing can perform region recognition based on the difference between the skin color and the background color, or can use a contour-based image segmentation method.
  • In this embodiment, the hand is recognized in the captured images using a strong classifier constructed with the AdaBoost algorithm, as follows:
  • Step S22: initialize the weights w_{t,i} of the training samples; t is the iteration (cycle) index;
  • Step S231: normalize the weights, as shown in formula (1);
  • Step S232: for each feature f, train a classifier h(x_i, f, p, θ), i.e. estimate its weighted error under the current weights q_t, as shown in formula (2);
  • Step S233: select the classifier h_t with the smallest error, as shown in formulas (3) and (4);
  • Step S234: update the weights, as shown in formulas (5) and (6), where β_t is the error corresponding to the classifier h_t(x) selected in step S233;
  • Step S24: finally combine the weak classifiers into a strong classifier, as shown in formulas (7) and (8).
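Formulas (1)-(8) referenced above are not reproduced in this text. For orientation, the standard Viola-Jones AdaBoost formulation, which the steps mirror, reads as follows; this is a reconstruction under that assumption, not the patent's own equations:

```latex
% (1) weight normalisation (step S231)
w_{t,i} \leftarrow \frac{w_{t,i}}{\sum_{j=1}^{n} w_{t,j}}

% (2)-(4) weighted error of each weak classifier and selection of h_t
% (steps S232-S233)
\varepsilon_t = \min_{f,\,p,\,\theta} \sum_{i} w_{t,i}\,
  \bigl|\, h(x_i, f, p, \theta) - y_i \,\bigr|

% (5)-(6) weight update (step S234), with e_i = 0 iff x_i is classified
% correctly by h_t
\beta_t = \frac{\varepsilon_t}{1 - \varepsilon_t}, \qquad
w_{t+1,i} = w_{t,i}\,\beta_t^{\,1 - e_i}

% (7)-(8) final strong classifier (step S24)
\alpha_t = \log\frac{1}{\beta_t}, \qquad
C(x) =
\begin{cases}
1 & \text{if } \sum_{t=1}^{T} \alpha_t h_t(x) \ge \tfrac{1}{2}\sum_{t=1}^{T} \alpha_t \\
0 & \text{otherwise}
\end{cases}
```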

Landscapes

  • Engineering & Computer Science (AREA)
  • Theoretical Computer Science (AREA)
  • General Engineering & Computer Science (AREA)
  • Physics & Mathematics (AREA)
  • General Physics & Mathematics (AREA)
  • Data Mining & Analysis (AREA)
  • Human Computer Interaction (AREA)
  • Computer Vision & Pattern Recognition (AREA)
  • Bioinformatics & Computational Biology (AREA)
  • Evolutionary Computation (AREA)
  • Evolutionary Biology (AREA)
  • Bioinformatics & Cheminformatics (AREA)
  • Artificial Intelligence (AREA)
  • Life Sciences & Earth Sciences (AREA)
  • Health & Medical Sciences (AREA)
  • General Health & Medical Sciences (AREA)
  • Psychiatry (AREA)
  • Social Psychology (AREA)
  • Multimedia (AREA)
  • User Interface Of Digital Computer (AREA)

Abstract

A gesture control method for a virtual reality head-mounted display, the method comprising: step S11, recognizing a hand by means of a video capture device, and determining, according to how long the hand stays still within the capture area, whether to enter a mouse-follow state; step S12, upon entering the mouse-follow state, capturing the hand's movements frame by frame by means of the video capture device and, according to the hand's position in the successive frames, controlling the mouse in the display system to follow the moving hand; and step S13, in the mouse-follow state, determining changes of the hand gesture by recognizing the hand in the images captured by the video capture device, so as to perform the click-confirmation operation. The present invention implements mouse actions in a virtual reality head-mounted display through hand movements, without requiring an auxiliary hardware input device.
PCT/CN2017/095028 2016-10-27 2017-07-28 Gesture control method for a virtual reality head-mounted display WO2018076848A1 (fr)

Applications Claiming Priority (2)

Application Number Priority Date Filing Date Title
CN201610958045.9A CN107015636A (zh) 2016-10-27 2016-10-27 Gesture control method for a virtual reality head-mounted display
CN201610958045.9 2016-10-27

Publications (1)

Publication Number Publication Date
WO2018076848A1 (fr) 2018-05-03

Family

ID=59439481

Family Applications (1)

Application Number Title Priority Date Filing Date
PCT/CN2017/095028 WO2018076848A1 (fr) 2017-07-28 Gesture control method for a virtual reality head-mounted display

Country Status (2)

Country Link
CN (1) CN107015636A (fr)
WO (1) WO2018076848A1 (fr)

Citations (3)

* Cited by examiner, † Cited by third party
Publication number Priority date Publication date Assignee Title
CN101344816A (zh) * 2008-08-15 2009-01-14 华南理工大学 Human-computer interaction method and device based on gaze tracking and gesture recognition
CN101763515A (zh) * 2009-09-23 2010-06-30 中国科学院自动化研究所 Real-time gesture interaction method based on computer vision
CN103530613A (zh) * 2013-10-15 2014-01-22 无锡易视腾科技有限公司 Gesture interaction method for a target person based on monocular video sequences

Family Cites Families (2)

* Cited by examiner, † Cited by third party
Publication number Priority date Publication date Assignee Title
TWI454966B (zh) * 2012-04-24 2014-10-01 Wistron Corp Gesture control method and gesture control device
CN104182132B (zh) * 2014-08-07 2017-11-14 天津三星电子有限公司 Gesture control method for an intelligent terminal, and intelligent terminal


Also Published As

Publication number Publication date
CN107015636A (zh) 2017-08-04

Similar Documents

Publication Publication Date Title
Kumar et al. A multimodal framework for sensor based sign language recognition
Sharma et al. Human computer interaction using hand gesture
US10095033B2 (en) Multimodal interaction with near-to-eye display
CN105739702B (zh) 用于自然人机交互的多姿态指尖跟踪方法
Agrawal et al. A survey on manual and non-manual sign language recognition for isolated and continuous sign
US9063573B2 (en) Method and system for touch-free control of devices
Qi et al. Computer vision-based hand gesture recognition for human-robot interaction: a review
JP2009265809A (ja) 情報端末装置
CN107357414B (zh) 一种点击动作的识别方法及点击动作识别装置
Yanik et al. Use of kinect depth data and growing neural gas for gesture based robot control
Simos et al. Greek sign language alphabet recognition using the leap motion device
Xue et al. A Chinese sign language recognition system using leap motion
Nooruddin et al. HGR: Hand-gesture-recognition based text input method for AR/VR wearable devices
CN107918507A (zh) 一种基于立体视觉的虚拟触摸板方法
Enikeev et al. Recognition of sign language using leap motion controller data
Abdallah et al. An overview of gesture recognition
US10095308B2 (en) Gesture based human machine interface using marker
WO2018076848A1 (fr) Gesture control method for a virtual reality head-mounted display
Dhamanskar et al. Human computer interaction using hand gestures and voice
Gangrade et al. Real time sign language recognition using depth sensor
Bakheet A fuzzy framework for real-time gesture spotting and recognition
CN113807280A (zh) 一种基于Kinect的虚拟船舶机舱系统与方法
Samantaray et al. Hand gesture recognition using computer vision
Khan et al. Gesture recognition using Open-CV
Tazhigaliyeva et al. Slirs: Sign language interpreting system for human-robot interaction

Legal Events

Date Code Title Description
121 Ep: the epo has been informed by wipo that ep was designated in this application

Ref document number: 17866126

Country of ref document: EP

Kind code of ref document: A1

NENP Non-entry into the national phase

Ref country code: DE

32PN Ep: public notification in the EP bulletin as the address of the addressee cannot be established

Free format text: NOTING OF LOSS OF RIGHTS PURSUANT TO RULE 112(1) EPC (EPO FORM 1205A DATED 7.8.19)

122 Ep: pct application non-entry in european phase

Ref document number: 17866126

Country of ref document: EP

Kind code of ref document: A1