CN109408037A - Non-contact hybrid control method based on "hand + facial expression + head pose" - Google Patents

Non-contact hybrid control method based on "hand + facial expression + head pose" Download PDF

Info

Publication number
CN109408037A
CN109408037A (application CN201811066599.3A)
Authority
CN
China
Prior art keywords
gesture
mixing
face
interaction
hand
Prior art date
Legal status (The legal status is an assumption and is not a legal conclusion. Google has not performed a legal analysis and makes no representation as to the accuracy of the status listed.)
Pending
Application number
CN201811066599.3A
Other languages
Chinese (zh)
Inventor
殷继彬 (Yin Jibin)
于鲲 (Yu Kun)
Current Assignee (The listed assignees may be inaccurate. Google has not performed a legal analysis and makes no representation or warranty as to the accuracy of the list.)
Kunming University of Science and Technology
Original Assignee
Kunming University of Science and Technology
Priority date (The priority date is an assumption and is not a legal conclusion. Google has not performed a legal analysis and makes no representation as to the accuracy of the date listed.)
Filing date
Publication date
Application filed by Kunming University of Science and Technology
Priority to CN201811066599.3A
Publication of CN109408037A
Legal status: Pending

Links

Classifications

    • G — PHYSICS
    • G06 — COMPUTING; CALCULATING OR COUNTING
    • G06F — ELECTRIC DIGITAL DATA PROCESSING
    • G06F 8/00 — Arrangements for software engineering
    • G06F 8/20 — Software design
    • G — PHYSICS
    • G06 — COMPUTING; CALCULATING OR COUNTING
    • G06F — ELECTRIC DIGITAL DATA PROCESSING
    • G06F 3/00 — Input arrangements for transferring data to be processed into a form capable of being handled by the computer; output arrangements for transferring data from processing unit to output unit, e.g. interface arrangements
    • G06F 3/01 — Input arrangements or combined input and output arrangements for interaction between user and computer
    • G06F 3/011 — Arrangements for interaction with the human body, e.g. for user immersion in virtual reality

Abstract

The present invention relates to a non-contact hybrid control method based on "hand + facial expression + head pose", belonging to the field of interaction gestures in human-computer interaction. The invention comprises the following steps: first, detecting facial key points and recognizing the states of the head pose and hand gestures; then defining custom facial gestures, head gestures, and hand-gesture interaction commands; finally, designing an interactive interface for the hybrid control method and verifying the accuracy and precision of the hybrid control gestures. Using the Face++ platform and Leap Motion as tools, the invention designs a new set of hybrid interaction gestures based on the facial key-point detection of the Face++ platform and the gesture detection of Leap Motion, enriching multimodal human-computer interaction and contributing significantly to the development of human-computer interaction.

Description

Non-contact hybrid control method based on "hand + facial expression + head pose"
Technical field
The present invention relates to a non-contact hybrid control method based on "hand + facial expression + head pose", and in particular to a method that combines facial key-point detection technology under artificial intelligence, Leap Motion gesture-tracking technology, and camera image acquisition based on the JMF (Java Media Framework). By combining face detection with dynamic gesture tracking, a non-contact hybrid control method based on "hand + facial expression + head pose" is designed. The invention belongs to the field of multichannel interaction gestures in human-computer interaction.
Background technique
With the rapid development of artificial intelligence, mankind is gradually entering the AI era. Following this trend, artificial-intelligence platforms such as Baidu, Alibaba, Tencent and Face++ are developing rapidly, and a flourishing landscape has emerged. The present invention is a hybrid interaction control method designed on the basis of the facial key-point detection technology of the Face++ platform.
In recent years, with the progress of artificial intelligence in face recognition, voiceprint recognition, speech recognition, gesture recognition, posture recognition, emotion recognition and related areas, intelligent algorithms and human-computer interaction have shown a trend of merging. Because it matches the way people naturally interact, multi-channel intelligent human-computer interaction based on artificial-intelligence methods (Multi-Modal Human Computer Interaction, MMHCI) is considered one of the major forms of natural human-computer interaction in the future.
Today, with artificial intelligence developing rapidly, the combination of artificial intelligence and human-computer interaction has become an inevitable trend that promises great prospects for future human-computer interaction. Combining facial gestures with head pose also makes it possible to better explore the application value of new interaction channels. Realizing a non-contact hybrid control method based on "hand + facial expression + head pose" is therefore of great significance for the exploration and research of multimodal human-computer interaction technology.
Summary of the invention
The technical problem to be solved by the present invention is to provide a non-contact hybrid control method based on "hand + facial expression + head pose": first, detect facial key points and recognize the states of the head pose and hand gestures; then define custom facial gestures, head gestures, and hand-gesture interaction commands; finally, design an interactive interface for the hybrid control method and verify the accuracy and precision of the hybrid control gestures. The technique reduces the dependence on touch-screen devices and removes the constraint in human-computer interaction that the user must be in close physical contact with the computer.
The technical solution adopted by the present invention is a non-contact hybrid control method based on "hand + facial expression + head pose", comprising the following steps:
(1) First log in to the Face++ official website, register an account, log in to the console, and create a facial key-point detection application;
(2) Configure the camera environment based on the JMF framework so that the picture stream of the camera can be obtained as the input source for "facial expression" and "head pose";
(3) Write code that calls the facial key-point detection application created in the background; the facial key points to be detected are shown in Fig. 2, and the detected values of the eye states, mouth state and head pose are obtained as shown in Fig. 3;
(4) Configure the Leap Motion environment so that Leap Motion can dynamically obtain gestures as the hand-gesture input source;
(5) Write code to realize dynamic gesture acquisition from Leap Motion;
(6) Define custom "hand + facial expression + head pose" hybrid interaction gestures according to the acquired states of the eyes, head and hand;
(7) Define the computer interaction commands corresponding to the custom hybrid interaction gestures;
(8) Map the hybrid gestures to the custom interaction commands and generate the corresponding feedback effects;
(9) Design a hybrid interaction gesture application to recognize the hybrid interaction gestures;
(10) Design experiments to verify the accuracy and precision of the hybrid interaction gestures.
The specific steps of the non-contact hybrid control method based on "hand + facial expression + head pose" are as follows:
Step1: First log in to the Face++ official website, register an account and log in to the console; then click "Application Management" and click "Create API Key" to create a facial key-point detection application, generating an "API Key" and an "API Secret";
Step2: Configure the camera environment based on the JMF framework, then write code to obtain the data stream of the camera and generate Image objects as the input source for "facial expression" and "head pose";
Step3: Write code that calls the facial key-point detection application created in the background; the facial key points to be detected are shown in Fig. 2, and the detected values of the left-eye state (left_eye_status), right-eye state (right_eye_status), mouth state (mouthstatus) and head pose angle (headpose) are obtained as shown in Fig. 3;
Step4: Download the Leap Motion SDK and configure the local system environment so that Leap Motion can dynamically obtain gestures as the hand-gesture input source;
Step5: Instantiate a Controller object and enable the four built-in gesture recognizers of Leap Motion, so that the system can dynamically recognize the various Leap Motion gestures in code;
Step6: Define custom "hand + facial expression + head pose" hybrid interaction gestures according to the acquired states of the eyes, head and hand;
Step7: Define the computer interaction commands corresponding to the custom hybrid interaction gestures, including select, right-click, left-click, swipe down, swipe up, swipe left, swipe right, zoom in and zoom out;
Step8: Map the custom hybrid gestures to the custom interaction commands, and test one by one whether each gesture is successfully matched to its command;
Step9: Design the hybrid interaction gesture code to verify the hybrid interaction gestures: place a target in the main interface and feed back the hybrid gestures through the color, size, position and selected state of the target;
Step10: Design experiments to verify the accuracy of the hybrid gestures.
Specifically, the steps of Step10 are as follows:
Step10.1: Add a Correct parameter and an Error parameter to the hybrid interaction gesture code of Step9; every hybrid gesture recognition, whether correct or wrong, is recorded;
Step10.2: Perform 50 experiments for each gesture, then analyze the numbers of correct and wrong recognitions to obtain the accuracy rate;
Step10.3: Design a precision experiment: design a main interface containing five targets at different positions, and test the precision of the hybrid gestures by changing their sizes and the distances between them;
Step10.4: Analyze the precision experiment results to obtain the precision of the hybrid gestures.
The beneficial effects of the present invention are:
Current human-computer interaction is mainly contact-based, for example keyboard, mouse and handwriting pad. In this mode the hand must directly touch the device that receives the information, which constrains the freedom of the hand and the flexibility of the receiving device, and some usage scenarios are restricted; for example, a doctor should not operate a computer by touch during surgery. The effective application of non-contact human-computer interaction technology in medicine can break through and reduce the limitations of conventional contact-based human-computer interaction and continuously raise the level of medical treatment, so the present invention has significant application value in the medical field, and in particular potential for the sterile operating environment of surgery. The non-contact hybrid control method based on "hand + facial expression + head pose" is easy to recognize and has natural, intuitive and simple characteristics, allowing users to interact with devices in a more natural way, while to a certain extent enriching human-computer interaction modes and providing users with a good experience. The invention enables users to dispense with specific physical devices, making human-machine information exchange more direct and natural.
Detailed description of the invention
Fig. 1 is an overall flowchart of the hybrid control method based on "hand + facial expression + head pose" of the present invention;
Fig. 2 is a diagram of the 106 facial key points detected in the present invention;
Fig. 3 is a diagram of the head pose angles in the present invention.
Specific embodiment
The present invention is further illustrated below with reference to the drawings and specific embodiments.
Embodiment 1: As shown in Figs. 1-3, a non-contact hybrid control method based on "hand + facial expression + head pose" comprises the following steps:
(1) First log in to the Face++ official website, register an account, log in to the console, and create a facial key-point detection application;
(2) Configure the camera environment based on the JMF framework so that the picture stream of the camera can be obtained as the input source for "facial expression" and "head pose";
(3) Write code that calls the facial key-point detection application created in the background; the facial key points to be detected are shown in Fig. 2, and the detected values of the eye states, mouth state and head pose are obtained as shown in Fig. 3;
(4) Configure the Leap Motion environment so that Leap Motion can dynamically obtain gestures as the hand-gesture input source;
(5) Write code to realize dynamic gesture acquisition from Leap Motion;
(6) Define custom "hand + facial expression + head pose" hybrid interaction gestures according to the acquired states of the eyes, head and hand;
(7) Define the computer interaction commands corresponding to the custom hybrid interaction gestures;
(8) Map the hybrid gestures to the custom interaction commands and generate the corresponding feedback effects;
(9) Design a hybrid interaction gesture application to recognize the hybrid interaction gestures;
(10) Design experiments to verify the accuracy and precision of the hybrid interaction gestures.
The specific steps of the non-contact hybrid control method based on "hand + facial expression + head pose" are as follows:
Step1: First log in to the Face++ official website, register an account and log in to the console; then click "Application Management" and click "Create API Key" to create a facial key-point detection application, generating an "API Key" and an "API Secret";
Step2: Configure the camera environment based on the JMF framework, then write code to obtain the data stream of the camera and generate Image objects as the input source for "facial expression" and "head pose";
Step3: Write code that calls the facial key-point detection application created in the background; the facial key points to be detected are shown in Fig. 2, and the detected values of the left-eye state (left_eye_status), right-eye state (right_eye_status), mouth state (mouthstatus) and head pose angle (headpose) are obtained as shown in Fig. 3;
Step4: Download the Leap Motion SDK and configure the local system environment so that Leap Motion can dynamically obtain gestures as the hand-gesture input source;
Step5: Instantiate a Controller object and enable the four built-in gesture recognizers of Leap Motion, so that the system can dynamically recognize the various Leap Motion gestures in code;
Step6: Define custom "hand + facial expression + head pose" hybrid interaction gestures according to the acquired states of the eyes, head and hand, for example controlling the cursor with one finger while using "facial expression" and head pose as interaction inputs;
Step7: Define the computer interaction commands corresponding to the custom hybrid interaction gestures, including select, right-click, left-click, swipe down, swipe up, swipe left, swipe right, zoom in and zoom out;
Step8: Map the custom hybrid gestures to the custom interaction commands, and test one by one whether each gesture is successfully matched to its command;
Step9: Design the hybrid interaction gesture application to verify the hybrid interaction gestures: place a target in the main interface and feed back the hybrid gestures through the color, size, position and selected state of the target;
Step10: Design experiments to verify the accuracy of the hybrid gestures.
Further, the steps of Step10 are as follows:
Step10.1: Add a Correct parameter and an Error parameter to the hybrid interaction gesture code of Step9; every hybrid gesture recognition, whether correct or wrong, is recorded;
Step10.2: Perform 50 experiments for each gesture, then analyze the numbers of correct and wrong recognitions to obtain the accuracy rate;
Step10.3: Design a precision experiment: design a main interface containing five targets at different positions, and test the precision of the hybrid gestures by changing their sizes and the distances between them;
Step10.4: Analyze the precision experiment results to obtain the precision of the hybrid gestures.
The solution of the present invention is described in detail below with reference to a specific example:
Example 1: The specific steps of the zoom-in operation of the non-contact hybrid control method are as follows:
Step1: First log in to the Face++ official website, register an account and log in to the console; then click "Application Management" and click "Create API Key" to create a facial key-point detection application, generating the "API Key": i9V7dr9ZuxTJBwABQiqGLBGVDFqXR0Hi and the "API Secret": evEytGqlCDM_9dQsImM2KydJETuJS2fH;
Step2: Configure the camera environment based on the JMF framework, obtain the data stream of the camera by initializing a CaptureDeviceManager object, generate Image objects, keep the camera continuously working with a timer, and shoot still photos as the input source for "facial expression" and "head pose";
Step3: Write code that calls the facial key-point detection application created in the background; the facial key points to be detected are shown in Fig. 2. Here the value of no_glass_eye_open under the left-eye state left_eye_status is 95.225; the value of no_glass_eye_open under the right-eye state right_eye_status is 99.942; the open parameter value under the mouth state mouthstatus is 99.386; and the head pose angle is as shown in Fig. 3, with "roll_angle": -36.014675 under headpose;
Step4: Download the Leap Motion SDK and configure the local system environment so that Leap Motion can dynamically obtain gestures as the hand-gesture input source;
Step5: Instantiate a Controller object and enable the four built-in gesture recognizers of Leap Motion, so that the system can dynamically recognize the various Leap Motion gestures in code; map the three-dimensional spatial coordinates of the Leap Motion to two-dimensional coordinates and bind one finger as a virtual mouse for selecting targets;
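The 3D-to-2D mapping of Step5 can be sketched as a simple linear normalization in Java (the language implied by the JMF framework and the Leap Motion Controller object). The class name, the interaction ranges (roughly ±120 mm horizontally and 80-320 mm above the device) and the 1920x1080 screen size are illustrative assumptions, not values given in the patent:

```java
// Illustrative sketch: map a Leap Motion fingertip position (millimeters,
// origin at the device) to 2D screen pixels for the virtual mouse of Step5.
// All ranges below are assumed, not taken from the patent.
public class LeapToScreen {
    static final double X_MIN = -120, X_MAX = 120; // assumed horizontal range (mm)
    static final double Y_MIN = 80,  Y_MAX = 320;  // assumed height range (mm)
    static final int SCREEN_W = 1920, SCREEN_H = 1080;

    static double clamp01(double v) { return Math.max(0.0, Math.min(1.0, v)); }

    // Returns {pixelX, pixelY}; Leap y grows upward, screen y grows downward.
    public static int[] map(double leapX, double leapY) {
        double nx = clamp01((leapX - X_MIN) / (X_MAX - X_MIN));
        double ny = clamp01((leapY - Y_MIN) / (Y_MAX - Y_MIN));
        return new int[] { (int) Math.round(nx * (SCREEN_W - 1)),
                           (int) Math.round((1.0 - ny) * (SCREEN_H - 1)) };
    }

    public static void main(String[] args) {
        int[] p = map(0, 200); // hand centered over the device
        System.out.println(p[0] + "," + p[1]); // prints 960,540
    }
}
```

A real implementation would read the fingertip position from the Leap Motion `Frame` delivered by the `Controller` each update; the mapping itself is independent of the SDK.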
Step6: According to Step3, the left-eye open state value is 95.225, the right-eye open state value is 99.942, the mouth open state value is 99.386, and the head angle is -36.014675. It can therefore be judged that the current facial expression is: left eye open, right eye open, mouth open; the head pose angle is negative, so the head is tilted to the left. According to the custom gesture rules this is judged to be a zoom-in operation, and the target is zoomed in;
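The decision rule of Step6 can be sketched as threshold tests on the Face++ detection values. The confidence threshold of 50 and the mirrored zoom-out rule for a positive roll angle are illustrative assumptions (the patent only spells out the zoom-in example); the class and method names are likewise hypothetical:

```java
// Illustrative sketch of the Step6 rule: combine the eye/mouth open confidences
// and the head roll angle into an interaction command. The 50.0 threshold and
// the zoom-out branch are assumptions, not rules stated in the patent.
public class HybridGestureRule {
    static final double OPEN_THRESHOLD = 50.0; // assumed confidence threshold

    public static String decide(double leftEyeOpen, double rightEyeOpen,
                                double mouthOpen, double rollAngle) {
        boolean eyesOpen = leftEyeOpen > OPEN_THRESHOLD && rightEyeOpen > OPEN_THRESHOLD;
        boolean mouthIsOpen = mouthOpen > OPEN_THRESHOLD;
        if (eyesOpen && mouthIsOpen && rollAngle < 0) return "zoom-in";  // head tilted left
        if (eyesOpen && mouthIsOpen && rollAngle > 0) return "zoom-out"; // assumed mirror rule
        return "none";
    }

    public static void main(String[] args) {
        // Values from Step3 of Example 1.
        System.out.println(decide(95.225, 99.942, 99.386, -36.014675)); // prints zoom-in
    }
}
```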
Step7: Repeat Step2-Step6 and test the accuracy rate and precision rate;
Step8: Perform 50 experiments on the zoom-in gesture; the number of correct recognitions is 48 and the number of errors is 2, so the accuracy rate is 96%.
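The Correct/Error bookkeeping of Step10.1-Step10.2 and the accuracy rate of Step8 (48 correct out of 50, i.e. 96%) can be sketched as follows; the class and method names are illustrative assumptions, not code from the patent:

```java
// Minimal sketch of the Correct/Error counters described in Step10.1-10.2.
public class GestureAccuracy {
    private int correct = 0;
    private int error = 0;

    // Record the outcome of one hybrid-gesture recognition attempt.
    public void record(boolean recognizedCorrectly) {
        if (recognizedCorrectly) correct++; else error++;
    }

    public int trials() { return correct + error; }

    // Accuracy rate over all recorded trials (0 when nothing recorded yet).
    public double accuracyRate() {
        int n = trials();
        return n == 0 ? 0.0 : (double) correct / n;
    }

    public static void main(String[] args) {
        GestureAccuracy acc = new GestureAccuracy();
        for (int i = 0; i < 48; i++) acc.record(true);  // 48 correct trials
        for (int i = 0; i < 2; i++) acc.record(false);  // 2 wrong trials
        System.out.println(acc.accuracyRate());         // prints 0.96, as in Step8
    }
}
```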
Using the Face++ platform and Leap Motion as tools, the present invention designs a new set of hybrid interaction gestures based on the facial key-point detection of the Face++ platform and the gesture detection of Leap Motion, enriching multimodal human-computer interaction and contributing significantly to the development of human-computer interaction.
The embodiments of the present invention are explained in detail above with reference to the drawings, but the present invention is not limited to the above embodiments; within the scope of knowledge possessed by those of ordinary skill in the art, various changes can also be made without departing from the concept of the invention.

Claims (3)

1. A non-contact hybrid control method based on "hand + facial expression + head pose", characterized by comprising the following steps:
(1) First log in to the Face++ official website, register an account, log in to the console, and create a facial key-point detection application;
(2) Configure the camera environment based on the JMF framework so that the picture stream of the camera can be obtained as the input source for "facial expression" and "head pose";
(3) Write code that calls the facial key-point detection application created in the background, and obtain the detected values of the eye states, mouth state and head pose;
(4) Configure the Leap Motion environment so that Leap Motion can dynamically obtain gestures as the hand-gesture input source;
(5) Write code to realize dynamic gesture acquisition from Leap Motion;
(6) Define custom "hand + facial expression + head pose" hybrid interaction gestures according to the acquired states of the eyes, head and hand;
(7) Define the computer interaction commands corresponding to the custom hybrid interaction gestures;
(8) Map the hybrid gestures to the custom interaction commands and generate the corresponding feedback effects;
(9) Design a hybrid interaction gesture application to recognize the hybrid interaction gestures;
(10) Design experiments to verify the accuracy and precision of the hybrid interaction gestures.
2. The non-contact hybrid control method based on "hand + facial expression + head pose" according to claim 1, characterized in that the specific steps are as follows:
Step1: First log in to the Face++ official website, register an account and log in to the console; then click "Application Management" and click "Create API Key" to create a facial key-point detection application, generating an "API Key" and an "API Secret";
Step2: Configure the camera environment based on the JMF framework, then write code to obtain the data stream of the camera and generate Image objects as the input source for "facial expression" and "head pose";
Step3: Write code that calls the facial key-point detection application created in the background, and obtain the detected values of the left-eye state left_eye_status, right-eye state right_eye_status, mouth state mouthstatus and head pose angle headpose;
Step4: Download the Leap Motion SDK and configure the local system environment so that Leap Motion can dynamically obtain gestures as the hand-gesture input source;
Step5: Instantiate a Controller object and enable the four built-in gesture recognizers of Leap Motion, so that the system can dynamically recognize the various Leap Motion gestures in code;
Step6: Define custom "hand + facial expression + head pose" hybrid interaction gestures according to the acquired states of the eyes, head and hand;
Step7: Define the computer interaction commands corresponding to the custom hybrid interaction gestures, including select, right-click, left-click, swipe down, swipe up, swipe left, swipe right, zoom in and zoom out;
Step8: Map the custom hybrid gestures to the custom interaction commands, and test one by one whether each gesture is successfully matched to its command;
Step9: Design the hybrid interaction gesture code to verify the hybrid interaction gestures: place a target in the main interface and feed back the hybrid gestures through the color, size, position and selected state of the target;
Step10: Design experiments to verify the accuracy of the hybrid gestures.
3. The non-contact hybrid control method based on "hand + facial expression + head pose" according to claim 2, characterized in that the specific steps of Step10 are as follows:
Step10.1: Add a Correct parameter and an Error parameter to the hybrid interaction gesture code of Step9; every hybrid gesture recognition, whether correct or wrong, is recorded;
Step10.2: Perform 50 experiments for each gesture, then analyze the numbers of correct and wrong recognitions to obtain the accuracy rate;
Step10.3: Design a precision experiment: design a main interface containing five targets at different positions, and test the precision of the hybrid gestures by changing their sizes and the distances between them;
Step10.4: Analyze the precision experiment results to obtain the precision of the hybrid gestures.
CN201811066599.3A 2018-09-13 2018-09-13 Non-contact hybrid control method based on "hand + facial expression + head pose" Pending CN109408037A (en)

Priority Applications (1)

Application Number Priority Date Filing Date Title
CN201811066599.3A CN109408037A (en) 2018-09-13 2018-09-13 Non-contact hybrid control method based on "hand + facial expression + head pose"

Applications Claiming Priority (1)

Application Number Priority Date Filing Date Title
CN201811066599.3A CN109408037A (en) 2018-09-13 2018-09-13 Non-contact hybrid control method based on "hand + facial expression + head pose"

Publications (1)

Publication Number Publication Date
CN109408037A true CN109408037A (en) 2019-03-01

Family

ID=65464802

Family Applications (1)

Application Number Title Priority Date Filing Date
CN201811066599.3A Pending CN109408037A (en) 2018-09-13 2018-09-13 Non-contact hybrid control method based on "hand + facial expression + head pose"

Country Status (1)

Country Link
CN (1) CN109408037A (en)


Citations (2)

* Cited by examiner, † Cited by third party
Publication number Priority date Publication date Assignee Title
CN102117117A * 2010-01-06 2011-07-06 Primax Electronics Ltd. System and method for control through identifying user posture by image extraction device
CN105653037A * 2015-12-31 2016-06-08 Zhang Xiaohua Interactive system and method based on behavior analysis


Cited By (4)

* Cited by examiner, † Cited by third party
Publication number Priority date Publication date Assignee Title
CN110349268A * 2019-06-11 2019-10-18 South China University of Technology Method for reconstructing 3D human body pose, expression and gesture
CN110472589A * 2019-08-19 2019-11-19 Bank of China Limited Behavior authentication method, apparatus and system
CN113391754A * 2021-06-28 2021-09-14 Kunming University of Science and Technology Wrist-rotation menu interaction technique based on the Leap Motion sensor
CN113391754B * 2021-06-28 2024-03-12 Kunming University of Science and Technology Wrist-rotation menu interaction technique based on the Leap Motion sensor

Similar Documents

Publication Publication Date Title
US11543887B2 (en) User interface control of responsive devices
US11029784B2 (en) Methods and apparatuses for applying free space inputs for surface constrained controls
US20210247890A1 (en) Method and apparatus for ego-centric 3d human computer interface
CN109408037A (en) Non-contact hybrid control method based on "hand + facial expression + head pose"
CN110399081A Monitoring device and display interface layout adjustment method and apparatus thereof
US20180253163A1 (en) Change of active user of a stylus pen with a multi-user interactive display
US11579706B2 (en) Method and apparatus for applying free space input for surface constrained control
Jantz et al. A brain-computer interface for extended reality interfaces
Dewez et al. Towards “avatar-friendly” 3D manipulation techniques: Bridging the gap between sense of embodiment and interaction in virtual reality
Xia et al. Iteratively designing gesture vocabularies: A survey and analysis of best practices in the HCI literature
US20160004315A1 (en) System and method of touch-free operation of a picture archiving and communication system
Rateau et al. Mimetic interaction spaces: Controlling distant displays in pervasive environments
Wang et al. AirMouse: Turning a pair of glasses into a mouse in the air
Walsh et al. Assistive pointing device based on a head-mounted camera
Mgbemena Man-machine systems: A review of current trends and applications
CN113885695A (en) Gesture interaction method and system based on artificial reality
CN204270276U Human-computer interaction device based on radio-frequency identification
Cogerino et al. Multi-modal input devices for active and healthy ageing
Malleswari Dynamic virtual assistance of I/O functionalities
Augstein et al. Haptic and Touchless User Input Methods for Simple 3D Interaction Tasks: Interaction Performance and User Experience for People with and Without Impairments
Chen et al. The integration method of multimodal human-computer interaction framework
Aggarwal et al. Gesture-Based Computer Control
Cicek et al. Towards Personalized Head-Tracking Pointing
Mhatre et al. Hand gesture based X-ray image controlling using Convolutional Neural Network
Dai et al. Magic Portal Interaction to Support Precise Embodied Mid-air and Haptic Selection-at-a-distance

Legal Events

Date Code Title Description
PB01 Publication
SE01 Entry into force of request for substantive examination
RJ01 Rejection of invention patent application after publication

Application publication date: 20190301