CN107168538A - 3D campus guide method and system performing affective computing based on body movements - Google Patents
3D campus guide method and system performing affective computing based on body movements

- Application number: CN201710437831.9A
- Publication number: CN107168538A
- Authority: CN (China)
- Legal status: Pending
Classifications
-
- G—PHYSICS
- G06—COMPUTING; CALCULATING OR COUNTING
- G06F—ELECTRIC DIGITAL DATA PROCESSING
- G06F3/00—Input arrangements for transferring data to be processed into a form capable of being handled by the computer; Output arrangements for transferring data from processing unit to output unit, e.g. interface arrangements
- G06F3/01—Input arrangements or combined input and output arrangements for interaction between user and computer
- G06F3/011—Arrangements for interaction with the human body, e.g. for user immersion in virtual reality
-
- G—PHYSICS
- G06—COMPUTING; CALCULATING OR COUNTING
- G06Q—INFORMATION AND COMMUNICATION TECHNOLOGY [ICT] SPECIALLY ADAPTED FOR ADMINISTRATIVE, COMMERCIAL, FINANCIAL, MANAGERIAL OR SUPERVISORY PURPOSES; SYSTEMS OR METHODS SPECIALLY ADAPTED FOR ADMINISTRATIVE, COMMERCIAL, FINANCIAL, MANAGERIAL OR SUPERVISORY PURPOSES, NOT OTHERWISE PROVIDED FOR
- G06Q50/00—Systems or methods specially adapted for specific business sectors, e.g. utilities or tourism
- G06Q50/10—Services
- G06Q50/20—Education
- G06Q50/205—Education administration or guidance
-
- G—PHYSICS
- G06—COMPUTING; CALCULATING OR COUNTING
- G06T—IMAGE DATA PROCESSING OR GENERATION, IN GENERAL
- G06T19/00—Manipulating 3D models or images for computer graphics
- G06T19/003—Navigation within 3D models or images
-
- G—PHYSICS
- G06—COMPUTING; CALCULATING OR COUNTING
- G06V—IMAGE OR VIDEO RECOGNITION OR UNDERSTANDING
- G06V40/00—Recognition of biometric, human-related or animal-related patterns in image or video data
- G06V40/20—Movements or behaviour, e.g. gesture recognition
- G06V40/23—Recognition of whole body movements, e.g. for sport training
-
- G—PHYSICS
- G06—COMPUTING; CALCULATING OR COUNTING
- G06V—IMAGE OR VIDEO RECOGNITION OR UNDERSTANDING
- G06V40/00—Recognition of biometric, human-related or animal-related patterns in image or video data
- G06V40/20—Movements or behaviour, e.g. gesture recognition
- G06V40/28—Recognition of hand or arm movements, e.g. recognition of deaf sign language
-
- G—PHYSICS
- G06—COMPUTING; CALCULATING OR COUNTING
- G06F—ELECTRIC DIGITAL DATA PROCESSING
- G06F2203/00—Indexing scheme relating to G06F3/00 - G06F3/048
- G06F2203/01—Indexing scheme relating to G06F3/01
- G06F2203/011—Emotion or mood input determined on the basis of sensed human body parameters such as pulse, heart rate or beat, temperature of skin, facial expressions, iris, voice pitch, brain activity patterns
Abstract
The present invention provides a 3D campus guide method and system that performs affective computing based on body movements. A Kinect motion-sensing camera recognizes the user's body movements, and a computer receives the movements sent by the camera. If the recognized movement is a movement-control gesture, its action category is judged and the corresponding viewpoint-movement operation of the virtual character is performed in the 3D campus environment navigation application; when the virtual character approaches a designated region or object, the corresponding information introduction is displayed on screen. If the recognized movement is a body-emotion action, its emotion category is judged and feedback content is displayed in the navigation application. Besides executing movement operations in the 3D environment according to action commands, the invention can also judge the emotion implied by a body movement from the posture, incorporating affective computing into the 3D motion-sensing navigation application and giving appropriate feedback to help the user interact better within the application.
Description
Technical field
The present invention relates to the field of human-computer interaction, and in particular to a 3D campus guide method and system that performs affective computing based on body movements.
Background technology
Human emotion is expressed not only through language; facial expressions can also betray an underlying emotional state. Ekman and Friesen proposed six basic human emotion types and their corresponding facial emotion features — surprise, fear, anger, happiness, sadness, and disgust — using facial features to reveal a person's true inner emotional reactions. Other research further holds that speech recognition or physiological signals can likewise serve as cues to internal emotional state (e.g., Schuller et al. 2011). Although whether body movement can reflect inner emotional state has long been disputed, a growing number of studies in recent years take the opposite view, arguing that body language can also correlate with emotion and serve as a key feature for recognizing a person's inner mood (Schouwstra et al. 1995; Coulson 2008; Dael et al.; Bretan et al. 2015; Chen Ting-Wei 2011).
Picard (1997) argued that emotion is not only a tool for communication between people, but can also be another means for humans to interact with computers: if computers are to be intelligent, they must be given emotion, the ability to distinguish emotions, and the ability to respond to them. Therefore, to make human-computer interaction (HCI) more humane and its mechanisms more natural, emotion has in recent years come to be regarded as a key factor in human-computer interaction, and attempts to apply affective-computing research across a wide range of fields are increasing.
At present, most research combining affective computing focuses on recognizing facial emotion features, or on giving appropriate feedback based on physiological signals; applications that recognize body emotion from posture are comparatively rare. In guide systems in particular, although body movements are widely used as a tool for navigating 3D virtual environments, applications that analyze the emotion in body language and add emotional feedback to the interaction are few.
The content of the invention
The object of the present invention is to overcome the deficiencies of the prior art by proposing a 3D campus guide method and system that performs affective computing based on body movements. Besides executing movement operations in the 3D environment according to action commands, the system can use the user's posture during operation to recognize the emotional expression implied by an action, judge the user's emotion from posture, incorporate affective computing into the 3D motion-sensing navigation application, and give appropriate feedback, helping the user engage with the 3D campus guide system and interact within the 3D campus scene.
The technical solution adopted by the present invention to solve its technical problem is as follows:
A 3D campus guide method performing affective computing based on body movements, comprising:
Capturing the skeletal posture of the user's whole body with a Kinect motion-sensing camera, so as to detect and recognize body movements;
Receiving and recognizing, by a computer, the body movements sent by the Kinect motion-sensing camera, and matching them against predefined action behaviors. If the recognized body movement is a movement-control gesture, its action category is judged and the corresponding viewpoint-movement operation of the virtual character is performed in the established 3D campus environment navigation application; when the virtual character approaches a designated region or object, the corresponding information introduction — a brief introduction to the region — is displayed on screen according to content established in advance in a database. If the recognized body movement is a body-emotion action, its emotion category is judged and feedback content is displayed in the established 3D campus environment navigation application. The emotion categories include angry, anxious, and interested; the feedback content includes operating-instruction explanations, a map of the surrounding environment, and more detailed information.
The predefined action behaviors include: extending the right hand forward more than 30 cm from the shoulder; extending the left hand forward more than 30 cm from the shoulder; leaning the body to the right more than 15 degrees; leaning the body to the left more than 15 degrees; jumping (the body rising) more than 10 cm; leaning the body forward with both hands raised high; leaning the body forward with both hands raised or crossed; and stepping back or standing upright with both hands raised high.
If the matched action behavior is one of: extending the right hand forward more than 30 cm from the shoulder, extending the left hand forward more than 30 cm from the shoulder, leaning right more than 15 degrees, leaning left more than 15 degrees, or jumping more than 10 cm, it is judged to be a movement-control gesture. If the matched action behavior is one of: leaning forward with both hands raised high, leaning forward with both hands raised or crossed, or stepping back or standing upright with both hands raised high, it is judged to be a body-emotion action.
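The two-stage judgement described above — first matching a named action behavior against the thresholds, then deciding whether it is a movement-control gesture or a body-emotion action — can be sketched as follows. This is an illustrative reconstruction, not code from the patent; the measurement keys and behavior names are hypothetical, and the skeleton measurements are assumed to be pre-computed from the Kinect joint stream (distances in meters, angles in degrees):

```python
# Hypothetical behavior names grouped by the two classes named in the text.
MOVE_GESTURES = {"right_hand_forward", "left_hand_forward",
                 "lean_right", "lean_left", "jump"}
EMOTION_ACTIONS = {"forward_hands_high", "forward_hands_crossed",
                   "back_hands_high"}

def detect_behavior(m):
    """Map raw skeleton measurements to one named action behavior.

    `m` is a dict of pre-computed measurements (names are illustrative).
    Emotion actions are tested first so that combined postures are not
    misread as single-limb movement gestures.
    """
    if m.get("lean_forward") and m.get("hands_raised_high"):
        return "forward_hands_high"          # body forward, hands high
    if m.get("lean_forward") and (m.get("hands_raised") or m.get("hands_crossed")):
        return "forward_hands_crossed"       # body forward, hands raised/crossed
    if m.get("hands_raised_high") and (m.get("stepping_back") or m.get("upright")):
        return "back_hands_high"             # back/upright, hands high
    if m.get("right_hand_shoulder_dist", 0) > 0.30:
        return "right_hand_forward"          # right hand > 30 cm ahead of shoulder
    if m.get("left_hand_shoulder_dist", 0) > 0.30:
        return "left_hand_forward"           # left hand > 30 cm ahead of shoulder
    if m.get("lean_angle", 0) > 15:
        return "lean_right"                  # body leans right > 15 degrees
    if m.get("lean_angle", 0) < -15:
        return "lean_left"                   # body leans left > 15 degrees
    if m.get("body_rise", 0) > 0.10:
        return "jump"                        # body rises > 10 cm
    return None

def classify(behavior):
    """Second stage: movement-control gesture vs. body-emotion action."""
    if behavior in MOVE_GESTURES:
        return "movement"
    if behavior in EMOTION_ACTIONS:
        return "emotion"
    return None
```

In a real implementation the measurements would be derived from the Kinect skeleton joints each frame; here the dispatch order and thresholds are the point, not the sensor plumbing.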
The action categories include right hand extended forward, left hand extended forward, body leaning right, body leaning left, and jump; the viewpoint-movement operations include the character moving forward, moving backward, turning right, turning left, and jumping. When the action behavior is extending the right hand forward more than 30 cm from the shoulder, the action category is right hand extended forward and the corresponding operation is the character moving forward. When it is extending the left hand forward more than 30 cm from the shoulder, the category is left hand extended forward and the operation is the character moving backward. When the body leans right more than 15 degrees, the category is body leaning right and the character turns right; when the body leans left more than 15 degrees, the category is body leaning left and the character turns left. When the action behavior is jumping more than 10 cm, the category is jump and the character jumps.
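The gesture-to-operation associations above amount to a fixed lookup. A minimal sketch, with behavior and operation names chosen for this illustration (they are not identifiers from the patent):

```python
# Movement-control gesture -> viewpoint-movement operation.
VIEWPOINT_OPS = {
    "right_hand_forward": "move_forward",   # right hand > 30 cm ahead of shoulder
    "left_hand_forward":  "move_backward",  # left hand > 30 cm ahead of shoulder
    "lean_right":         "turn_right",     # body leans right > 15 degrees
    "lean_left":          "turn_left",      # body leans left > 15 degrees
    "jump":               "jump",           # body rises > 10 cm
}

def viewpoint_operation(behavior):
    """Return the character operation for a movement-control gesture, or None."""
    return VIEWPOINT_OPS.get(behavior)
```

A table-driven mapping like this matches the preset/association modules described later: redefining a gesture only means editing the dictionary.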
When the action behavior is leaning forward with both hands raised high, the corresponding emotion category is angry and the feedback is a display of operating-instruction explanations. When it is leaning forward with both hands raised or crossed, the emotion category is anxious and the feedback is a map of the surrounding environment. When it is stepping back or standing upright with both hands raised high, the emotion category is interested and the feedback is a display of more detailed information.
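The emotion associations follow the same pattern: each body-emotion action implies both an emotion category and a feedback display. A sketch under the same illustrative naming assumptions as above:

```python
# Body-emotion action -> (emotion category, feedback display).
EMOTION_FEEDBACK = {
    "forward_hands_high":    ("angry",      "show_operating_instructions"),
    "forward_hands_crossed": ("anxious",    "show_environment_map"),
    "back_hands_high":       ("interested", "show_detailed_information"),
}

def emotion_feedback(action):
    """Return (emotion, feedback) for a body-emotion action, or (None, None)."""
    return EMOTION_FEEDBACK.get(action, (None, None))
```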
The Kinect motion-sensing camera is mounted about 2 meters away from the user so that the whole body can be captured and recognized.
The 3D campus environment navigation application can display the 3D campus environment and show campus information.
A 3D campus guide system performing affective computing based on body movements, comprising:
a body-movement detection and recognition module, for capturing the skeletal posture of the user's whole body with a Kinect motion-sensing camera, so as to detect and recognize body movements;
a body-movement judgement module, for receiving and recognizing, by a computer, the body movements sent by the detection and recognition module and matching them against predefined action behaviors; it judges whether the movement is a movement-control gesture or a body-emotion action, sending the former to the movement-control module and the latter to the affective-computing module for processing;
a movement-control module, which judges the action category and performs the corresponding viewpoint-movement operation of the virtual character in the established 3D campus environment navigation application; when the virtual character approaches a designated region or object, the corresponding information introduction — a brief introduction to the region — is displayed on screen according to content established in advance in a database;
an affective-computing module, which judges the emotion category and displays feedback content in the established 3D campus environment navigation application; the emotion categories include angry, anxious, and interested, and the feedback content includes operating-instruction explanations, a map of the surrounding environment, and more detailed information.
The 3D campus guide system further comprises:
an action-behavior preset module, for defining the action behaviors, which include extending the right hand forward more than 30 cm from the shoulder, extending the left hand forward more than 30 cm from the shoulder, leaning right more than 15 degrees, leaning left more than 15 degrees, jumping more than 10 cm, leaning forward with both hands raised high, leaning forward with both hands raised or crossed, and stepping back or standing upright with both hands raised high;
an action-category preset module, for defining the action categories: right hand extended forward, left hand extended forward, body leaning right, body leaning left, and jump;
a viewpoint-movement operation preset module, for defining the basic operation instructions: the character moves forward, moves backward, turns right, turns left, and jumps;
an emotion-category preset module, for defining the emotion categories: angry, anxious, and interested;
a feedback-content preset module, for defining the feedback content to display: operating-instruction explanations, a map of the surrounding environment, and more detailed information;
a movement-control gesture association module, for associating action behaviors, action categories, and viewpoint-movement operations: extending the right hand forward more than 30 cm from the shoulder corresponds to the category right hand extended forward and the character moving forward; extending the left hand forward more than 30 cm corresponds to left hand extended forward and the character moving backward; leaning right more than 15 degrees corresponds to body leaning right and the character turning right; leaning left more than 15 degrees corresponds to body leaning left and the character turning left; jumping more than 10 cm corresponds to jump and the character jumping;
a body-emotion action association module, for associating action behaviors, emotion categories, and feedback content: leaning forward with both hands raised high corresponds to angry and the display of operating-instruction explanations; leaning forward with both hands raised or crossed corresponds to anxious and the display of the surrounding environment map; stepping back or standing upright with both hands raised high corresponds to interested and the display of more detailed information.
Compared with the prior art, the present invention has the following advantages: the proposed 3D campus guide method and system, besides executing movement operations in the 3D environment according to action commands, can recognize from the user's posture during operation the emotional expression implied by an action, judge the user's emotion from posture, incorporate affective computing into the 3D motion-sensing navigation application, and give appropriate feedback, helping the user engage with the 3D campus guide system and interact within the 3D campus scene.
Brief description of the drawings
Fig. 1 is a flow chart of the method of the invention;
Fig. 2 is a schematic diagram of the interaction flow of the system of the invention;
Fig. 3 shows depth images captured by the motion-sensing camera of the invention, where Fig. 3(a) shows the body leaning left and Fig. 3(b) the right hand extended forward;
Fig. 4 is the basic screen presented by the 3D campus guide system of the invention;
Fig. 5 shows information-content screens presented by the 3D campus guide system of the invention, where Fig. 5(a) is the brief-introduction screen and Fig. 5(b) the detailed-description screen;
Fig. 6 shows information-assistance screens presented by the 3D campus guide system of the invention, where Fig. 6(a) displays the complete floor-map information and Fig. 6(b) the operating-instructions screen.
Embodiment
A 3D campus guide method performing affective computing based on body movements, as shown in Fig. 1, comprises:
Step 101: capturing the skeletal posture of the user's whole body with a Kinect motion-sensing camera, so as to detect and recognize body movements;
Step 102: receiving and recognizing, by a computer, the body movements sent by the Kinect motion-sensing camera, and matching them against predefined action behaviors; if the recognized body movement is a movement-control gesture, go to step 103; if it is a body-emotion action, go to step 104;
Step 103: judging the action category and performing the corresponding viewpoint-movement operation of the virtual character in the established 3D campus environment navigation application; when the virtual character approaches a designated region or object, the corresponding information introduction — a brief introduction to the region — is displayed on screen according to content established in advance in a database;
Step 104: judging the emotion category and displaying feedback content in the established 3D campus environment navigation application; the emotion categories include angry, anxious, and interested, and the feedback content includes operating-instruction explanations, a map of the surrounding environment, and more detailed information.
Correspondingly, a 3D campus guide system performing affective computing based on body movements comprises:
a body-movement detection and recognition module, for capturing the skeletal posture of the user's whole body with a Kinect motion-sensing camera, so as to detect and recognize body movements;
a body-movement judgement module, for receiving and recognizing, by a computer, the body movements sent by the detection and recognition module and matching them against predefined action behaviors; it judges whether the movement is a movement-control gesture or a body-emotion action, sending the former to the movement-control module and the latter to the affective-computing module for processing;
a movement-control module, which judges the action category and performs the corresponding viewpoint-movement operation of the virtual character in the established 3D campus environment navigation application; when the virtual character approaches a designated region or object, the corresponding information introduction — a brief introduction to the region — is displayed on screen according to content established in advance in a database;
an affective-computing module, which judges the emotion category and displays feedback content in the established 3D campus environment navigation application; the emotion categories include angry, anxious, and interested, and the feedback content includes operating-instruction explanations, a map of the surrounding environment, and more detailed information.
Specifically, the whole system consists of three parts: the 3D campus environment navigation application, the motion-sensing recognition part, and the affective-computing part.
In this embodiment, the 3D campus environment navigation application is implemented in software: the system is developed and built with Unity, and the 3D campus navigation application is written in JavaScript and C#. In the navigation application, the user can freely view 3D models built from the real campus environment to learn about the interior of the buildings, and the system displays the environmental information of the corresponding region in real time, so that the user can gain a deeper understanding of the school's facilities and resources.
For the motion-sensing recognition part, this embodiment uses a Kinect motion-sensing camera to recognize body movements. Since judgement is based on the skeletal posture of the user's whole body, the camera is mounted about 2 meters away from the user so that the whole body can be captured and recognized. The recognized body behaviors then serve as the basis for touring the environment, for the judgement of movement operations, and for body-emotion recognition in the guide system.
Besides the body movements used for moving through the environment as described above, the affective-computing part combines affective computing for emotion recognition: it judges the emotion the user shows while interacting with the system and gives corresponding information feedback according to the pre-assigned emotion categories, helping the user and the system interact more naturally and humanely and improving the ease of use of the system. For example, when the system detects that the user shows an interested emotion while viewing information, it displays more detailed information; this avoids showing too much information at once during a tour, which would confuse the user and tire the eyes.
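The interest-gated display policy just described — show only a brief introduction near a region, and upgrade to the detailed content only when an interested emotion is detected — can be sketched as a small decision function. This is an illustrative reconstruction with hypothetical names, not the patent's implementation:

```python
def info_to_display(near_region, current_emotion):
    """Decide what information level to show for the current frame.

    near_region:     whether the virtual character is near a designated region
    current_emotion: the most recently judged emotion category, or None
    """
    if not near_region:
        return None                 # nothing to introduce
    if current_emotion == "interested":
        return "detailed"           # user showed interest: show full content
    return "brief"                  # default: brief region introduction only
```

The design choice here mirrors the text: detail is opt-in via the emotion channel, so the tour itself never floods the screen.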
Further, as shown in Fig. 2, the Kinect motion-sensing camera captures the user's depth information and skeleton for posture detection and recognition, and the user interacts with the 3D campus guide system in this way. The interaction falls broadly into movement control by basic commands, and body-emotion recognition with feedback. Movement control by basic commands means that the user can, through predefined basic operation instructions, control the viewpoint movement of the virtual character by body movements; when the character approaches a designated region or object, the system displays the corresponding information — a brief region introduction — on screen according to content established in advance in a database. Body-emotion recognition with feedback combines affective computing to recognize the emotion of the body: the three emotion categories angry, anxious, and interested were selected, the likely emotion is judged from posture detection, and finally an appropriate reaction and feedback are made.
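The interaction flow of Fig. 2 can be sketched as a per-frame dispatcher: each recognized behavior is routed either to movement control or to the affective-computing branch. The behavior, operation, and feedback names below are illustrative assumptions for this sketch, not identifiers from the patent:

```python
def handle_frame(behavior):
    """Return the system response ("movement"/"emotion"/"none", payload)
    for one behavior recognized from the Kinect skeleton stream."""
    move_ops = {                                  # Table 1 associations
        "right_hand_forward": "move_forward",
        "left_hand_forward":  "move_backward",
        "lean_right":         "turn_right",
        "lean_left":          "turn_left",
        "jump":               "jump",
    }
    emotion_ops = {                               # Table 2 associations
        "forward_hands_high":    "show_operating_instructions",
        "forward_hands_crossed": "show_environment_map",
        "back_hands_high":       "show_detailed_information",
    }
    if behavior in move_ops:
        return ("movement", move_ops[behavior])   # viewpoint control branch
    if behavior in emotion_ops:
        return ("emotion", emotion_ops[behavior]) # affective-computing branch
    return ("none", None)                         # unrecognized posture
```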
The movement-control part and the body-emotion recognition and feedback part are described further below.
(1) Movement control
In the guide system built with Unity, the character moves using a first-person controller to operate the viewpoint. Unlike a third-person controller, where the camera is set up behind the virtual character and follows it, in a first-person controller the user does not see the virtual character itself, only the picture taken from the character's viewpoint.
The character's movement is recognized and controlled from the depth image information captured by the motion-sensing camera (see Fig. 3). Table 1 (gestures and movement-control instructions) lists the basic movement-control operation instructions defined in this embodiment, with which the user interacts with the whole 3D campus navigation application through posture recognition.
Table 1
| Action category | Action behavior | Interactive display |
| --- | --- | --- |
| Right hand extended forward | Right hand extended more than 30 cm from the shoulder | Character moves forward |
| Left hand extended forward | Left hand extended more than 30 cm from the shoulder | Character moves backward |
| Body leaning right | Body leans right more than 15 degrees | Character turns right |
| Body leaning left | Body leans left more than 15 degrees | Character turns left |
| Jump | Body rises more than 10 cm | Character jumps |
(2) Body-emotion recognition and feedback
As shown in Table 2 (emotion-gesture combinations and emotional feedback), this embodiment recognizes the emotion shown by a posture according to its positive or negative valence, further combined with the hand movement, and gives the related response content.
Table 2
| Emotion category | Body-emotion behavior | Feedback display |
| --- | --- | --- |
| Angry | Body leans forward, both hands raised high | Show operating-instruction explanations |
| Anxious | Body leans forward, both hands raised or crossed | Show map of the surrounding environment |
| Interested | Body steps back or stands upright, both hands raised high | Show more detailed information |
For users who have never used a 3D guide system of this kind, operating the system may be less comfortable than for experienced users. Although the 3D guide system of this embodiment explains the relevant gesture operations at the beginning, many studies have found that a user's familiarity with a system still affects the perceived usability. This embodiment therefore develops its interaction mechanism from the user's perspective: when the user becomes frustrated or even angry because of operating difficulties, the system responds in time to the angry emotion with an explanation of the relevant operations.
On the other hand, to give the interaction a diverse and humane response, when the user is lost in the system environment and feels anxious because they do not know their own position, the system responds by displaying the complete map of the current floor and the user's regional location, helping the user quickly regain orientation. Furthermore, when the user finds the currently displayed brief environmental information interesting, the system displays more related information for the interested user to explore.
In the 3D campus guide system of this embodiment, as shown in Fig. 4, the basic screen shows the floor region and a minimap in the upper right corner so that the user knows the current position and environment, and the user can tour the guide-system environment through the basic movement-control action instructions.
When the virtual character controlled by the user approaches a designated region, the system interface shows a simple introduction to the environment; if, after viewing the brief introduction, the user shows a body movement judged as interested, the system displays more detailed information, achieving an interaction mechanism that automatically combines affective computing with presentation according to the judged emotion. Fig. 5(a) shows the brief-introduction screen displayed when entering a region; Fig. 5(b) shows the more detailed explanation displayed for the same region when the user's posture is judged to express an interested emotion. In addition, when the negative emotions of anxiety and anger are detected, the system reacts with the corresponding emotional feedback and renders assistance in real time, as shown in Fig. 6.
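The proximity trigger underlying this behavior — show a region's brief introduction once the virtual character comes within range of a designated region — can be sketched as a simple radius check against entries established in advance. The region names, coordinates, and radii below are purely illustrative placeholders for the database content:

```python
import math

# Hypothetical designated regions: name -> (center in world coordinates, radius).
REGIONS = {
    "library":    ((10.0, 0.0, 5.0), 3.0),
    "laboratory": ((-4.0, 0.0, 8.0), 2.5),
}

def nearby_region(pos, regions=REGIONS):
    """Return the name of the first designated region within range of the
    character's position `pos` (an (x, y, z) tuple), or None."""
    for name, (center, radius) in regions.items():
        if math.dist(pos, center) <= radius:
            return name
    return None
```

In the running system this check would fire each frame against the character's transform, and a hit would fetch the region's brief introduction from the database for display.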
In Fig. 6, Fig. 6(a) shows that when the user produces the limb emotional reaction of anxiety, it is interpreted as losing direction in the environment, and the complete floor-map information is displayed immediately; Fig. 6(b) shows the user producing an angry emotional reaction, which is taken to mean difficulty in operation (possibly having forgotten an action command, or not remembering the action posture clearly), so the explanation of the operation commands is displayed once again on the guide-system screen.
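The interaction loop described above (a brief introduction on approaching an area, then emotion-dependent feedback as in Figs. 5 and 6) can be sketched as a small dispatcher. This is only an illustrative sketch; the class name `GuideScreen` and the string labels are assumptions, not identifiers from the patent.

```python
# Illustrative sketch of the feedback behaviour of Figs. 5(a)-(b) and 6(a)-(b).

class GuideScreen:
    """Minimal stand-in for the guide system's display."""
    def __init__(self):
        self.shown = []          # record of what was displayed, in order

    def show(self, content):
        self.shown.append(content)

def on_enter_area(screen):
    """Fig. 5(a): approaching a designated area shows the brief introduction."""
    screen.show("brief_introduction")

def on_emotion(screen, emotion):
    """Figs. 5(b), 6(a), 6(b): emotion-dependent feedback."""
    if emotion == "interest":
        screen.show("detailed_information")    # expand the same area's content
    elif emotion == "anxiety":
        screen.show("floor_map")               # user presumed to have lost direction
    elif emotion == "anger":
        screen.show("operation_instructions")  # re-explain the action commands
```

For example, a tour in which the user approaches an area and then shows interest would display the brief introduction followed by the detailed information, matching the sequence of Figs. 5(a) and 5(b).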
The above are only preferred embodiments of the present invention and are not intended to limit its scope; any minor modifications, equivalent variations, and refinements made to the above embodiments according to the technical spirit of the present invention still fall within the scope of the technical solution of the present invention.
Claims (9)
1. A 3D campus guide method that performs emotion computing based on limb actions, characterized by comprising:
capturing the whole-body skeletal posture of a user with a Kinect somatosensory camera so as to detect and recognize limb actions;
receiving and recognizing, by a computer, the limb action sent by the Kinect somatosensory camera, and matching it against preset action behaviors; if the limb action is recognized as a movement-control posture, determining its action category and performing the corresponding viewpoint-movement operation of the virtual role in the established 3D campus-environment guide application, and, when the virtual role approaches a designated area or object, displaying the corresponding information introduction on the screen according to content established in advance in a database, the information introduction being a brief basic description of the area; if the limb action is recognized as a limb emotion action, determining its emotion category and displaying feedback content in the established 3D campus-environment guide application; the emotion categories comprising anger, anxiety, and interest, and the feedback content comprising an operation-instruction explanation, a map of the current environment, and more detailed information content.
2. The 3D campus guide method that performs emotion computing based on limb actions according to claim 1, characterized in that the preset action behaviors comprise: extending the right hand forward more than 30 centimeters from the shoulder; extending the left hand forward more than 30 centimeters from the shoulder; leaning the body more than 15 degrees to the right; leaning the body more than 15 degrees to the left; jumping more than 10 centimeters; leaning the body forward with both hands raised high; leaning the body forward with both hands raised or crossed; and stepping back or standing upright with both hands raised high.
3. The 3D campus guide method that performs emotion computing based on limb actions according to claim 2, characterized in that: if the matched action behavior is extending the right hand forward more than 30 centimeters from the shoulder, extending the left hand forward more than 30 centimeters from the shoulder, leaning the body more than 15 degrees to the right, leaning the body more than 15 degrees to the left, or jumping more than 10 centimeters, the limb action is judged to be a movement-control posture; if the matched action behavior is one of leaning the body forward with both hands raised high, leaning the body forward with both hands raised or crossed, or stepping back or standing upright with both hands raised high, the limb action is judged to be a limb emotion action.
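The threshold matching of claim 2 and the two-way split of claim 3 can be sketched as follows. The measurement dictionary is a hypothetical simplification of Kinect skeleton data (centimeters and signed degrees), and the behavior labels are illustrative names, not the patent's identifiers.

```python
# Sketch of claim 2's thresholds and claim 3's two-way classification
# (hypothetical measurement inputs, not the Kinect SDK's actual API).

MOVE_THRESHOLD_CM = 30   # hand extended in front of the shoulder
LEAN_THRESHOLD_DEG = 15  # body lean left/right
JUMP_THRESHOLD_CM = 10   # vertical rise of the body

EMOTION_BEHAVIORS = {
    "lean_forward_hands_high",     # judged as anger in claim 5
    "lean_forward_hands_crossed",  # judged as anxiety
    "upright_or_back_hands_high",  # judged as interest
}

def match_movement_behavior(m):
    """Return the movement-control behavior matched by the thresholds, or None."""
    if m.get("right_hand_forward_cm", 0) > MOVE_THRESHOLD_CM:
        return "right_hand_forward"
    if m.get("left_hand_forward_cm", 0) > MOVE_THRESHOLD_CM:
        return "left_hand_forward"
    if m.get("lean_deg", 0) > LEAN_THRESHOLD_DEG:    # positive = lean right
        return "lean_right"
    if m.get("lean_deg", 0) < -LEAN_THRESHOLD_DEG:   # negative = lean left
        return "lean_left"
    if m.get("rise_cm", 0) > JUMP_THRESHOLD_CM:
        return "jump"
    return None

def classify_limb_action(m, posture_label=None):
    """Claim 3's split: movement-control posture vs. limb emotion action."""
    if match_movement_behavior(m) is not None:
        return "movement_control_posture"
    if posture_label in EMOTION_BEHAVIORS:
        return "limb_emotion_action"
    return "unmatched"
```

Note that the movement-control branch is driven purely by the numeric thresholds, while the emotion branch is keyed on a recognized posture label, mirroring how the claims separate the two behavior families.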
4. The 3D campus guide method that performs emotion computing based on limb actions according to claim 3, characterized in that: the action categories comprise right hand extended forward, left hand extended forward, body leaning right, body leaning left, and jump; the viewpoint-movement operations comprise the role moving forward, the role moving backward, the role turning right, the role turning left, and the role jumping; when the action behavior is extending the right hand forward more than 30 centimeters from the shoulder, the corresponding action category is right hand extended forward and the corresponding viewpoint-movement operation is the role moving forward; when the action behavior is extending the left hand forward more than 30 centimeters from the shoulder, the corresponding action category is left hand extended forward and the corresponding viewpoint-movement operation is the role moving backward; when the action behavior is leaning the body more than 15 degrees to the right, the corresponding action category is body leaning right and the corresponding viewpoint-movement operation is the role turning right; when the action behavior is leaning the body more than 15 degrees to the left, the corresponding action category is body leaning left and the corresponding viewpoint-movement operation is the role turning left; when the action behavior is jumping more than 10 centimeters, the corresponding action category is jump and the corresponding viewpoint-movement operation is the role jumping.
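The one-to-one associations of claim 4 amount to a lookup table from matched behavior to action category and viewpoint-movement operation. The sketch below uses illustrative English labels; the names are assumptions, not identifiers from the patent.

```python
# Claim 4's behavior -> action category -> viewpoint-movement operation table
# (illustrative labels only).
ACTION_MAP = {
    "right_hand_forward": ("right_hand_extended", "role_moves_forward"),
    "left_hand_forward":  ("left_hand_extended",  "role_moves_backward"),
    "lean_right":         ("body_lean_right",     "role_turns_right"),
    "lean_left":          ("body_lean_left",      "role_turns_left"),
    "jump":               ("jump",                "role_jumps"),
}

def viewpoint_operation(behavior):
    """Map a matched movement behavior to its viewpoint-movement operation."""
    _category, operation = ACTION_MAP[behavior]
    return operation
```

Encoding the association as data rather than branching logic makes the five gesture-to-operation pairs of the claim easy to audit against the claim text.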
5. The 3D campus guide method that performs emotion computing based on limb actions according to claim 3, characterized in that: when the action behavior is leaning the body forward with both hands raised high, the corresponding emotion category is anger and the corresponding feedback content is displaying the operation-instruction explanation; when the action behavior is leaning the body forward with both hands raised or crossed, the corresponding emotion category is anxiety and the corresponding feedback content is displaying the map of the current environment; when the action behavior is stepping back or standing upright with both hands raised high, the corresponding emotion category is interest and the corresponding feedback content is displaying the more detailed information content.
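Claim 5's associations can likewise be written as a lookup table from limb emotion action to emotion category and feedback content. As before, the labels are illustrative assumptions, not the patent's identifiers.

```python
# Claim 5's emotion action -> emotion category -> feedback content table
# (illustrative labels only).
EMOTION_MAP = {
    "lean_forward_hands_high":    ("anger",    "operation_instruction_help"),
    "lean_forward_hands_crossed": ("anxiety",  "environment_map"),
    "upright_or_back_hands_high": ("interest", "detailed_information"),
}

def emotion_feedback(behavior):
    """Return (emotion category, feedback content) for a limb emotion action."""
    return EMOTION_MAP[behavior]
```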
6. The 3D campus guide method that performs emotion computing based on limb actions according to claim 1, characterized in that the Kinect somatosensory camera is installed approximately 2 meters away from the user so as to capture and recognize the user's whole body.
7. The 3D campus guide method that performs emotion computing based on limb actions according to claim 1, characterized in that the 3D campus-environment guide application is capable of presenting the 3D campus environment and displaying campus information.
8. A 3D campus guide system that performs emotion computing based on limb actions, characterized by comprising:
a limb-action detection and recognition module, for capturing the whole-body skeletal posture of a user with a Kinect somatosensory camera so as to detect and recognize limb actions;
a limb-action judgment module, for receiving and recognizing, by a computer, the limb action sent by the limb-action detection and recognition module and matching it against preset action behaviors; the module judges whether the limb action is a movement-control posture or a limb emotion action, sending movement-control postures to the movement-control module for processing and limb emotion actions to the emotion-computing module for processing;
a movement-control module, for determining the action category and performing the viewpoint-movement operation of the virtual role in the established 3D campus-environment guide application; when the virtual role approaches a designated area or object, the corresponding information introduction is displayed on the screen according to content established in advance in a database, the information introduction being a brief basic description of the area;
an emotion-computing module, for determining the emotion category and displaying feedback content in the established 3D campus-environment guide application; the emotion categories comprise anger, anxiety, and interest, and the feedback content comprises an operation-instruction explanation, a map of the current environment, and more detailed information content.
9. The 3D campus guide system that performs emotion computing based on limb actions according to claim 8, characterized by further comprising:
an action-behavior preset module, for defining the action behaviors; the action behaviors comprise extending the right hand forward more than 30 centimeters from the shoulder, extending the left hand forward more than 30 centimeters from the shoulder, leaning the body more than 15 degrees to the right, leaning the body more than 15 degrees to the left, jumping more than 10 centimeters, leaning the body forward with both hands raised high, leaning the body forward with both hands raised or crossed, and stepping back or standing upright with both hands raised high;
an action-category preset module, for defining the action categories; the action categories comprise right hand extended forward, left hand extended forward, body leaning right, body leaning left, and jump;
a viewpoint-movement operation preset module, for defining the basic operation instructions; the basic operation instructions comprise the role moving forward, the role moving backward, the role turning right, the role turning left, and the role jumping;
an emotion-category preset module, for defining the emotion categories; the emotion categories comprise anger, anxiety, and interest;
a feedback-content preset module, for defining the feedback content to be displayed; the feedback content comprises the operation-instruction explanation, the map of the current environment, and the more detailed information content;
a movement-control posture association module, for associating the action behaviors, action categories, and viewpoint-movement operations: when the action behavior is extending the right hand forward more than 30 centimeters from the shoulder, the corresponding action category is right hand extended forward and the corresponding viewpoint-movement operation is the role moving forward; when the action behavior is extending the left hand forward more than 30 centimeters from the shoulder, the corresponding action category is left hand extended forward and the corresponding viewpoint-movement operation is the role moving backward; when the action behavior is leaning the body more than 15 degrees to the right, the corresponding action category is body leaning right and the corresponding viewpoint-movement operation is the role turning right; when the action behavior is leaning the body more than 15 degrees to the left, the corresponding action category is body leaning left and the corresponding viewpoint-movement operation is the role turning left; and when the action behavior is jumping more than 10 centimeters, the corresponding action category is jump and the corresponding viewpoint-movement operation is the role jumping;
a limb-emotion action association module, for associating the action behaviors, emotion categories, and feedback content: when the action behavior is leaning the body forward with both hands raised high, the corresponding emotion category is anger and the corresponding feedback content is displaying the operation-instruction explanation; when the action behavior is leaning the body forward with both hands raised or crossed, the corresponding emotion category is anxiety and the corresponding feedback content is displaying the map of the current environment; and when the action behavior is stepping back or standing upright with both hands raised high, the corresponding emotion category is interest and the corresponding feedback content is displaying the more detailed information content.
Priority Applications (1)
Application Number | Priority Date | Filing Date | Title |
---|---|---|---|
CN201710437831.9A CN107168538A (en) | 2017-06-12 | 2017-06-12 | A kind of 3D campuses guide method and system that emotion computing is carried out based on limb action |
Publications (1)
Publication Number | Publication Date |
---|---|
CN107168538A true CN107168538A (en) | 2017-09-15 |
Family
ID=59825051
Family Applications (1)
Application Number | Title | Priority Date | Filing Date |
---|---|---|---|
CN201710437831.9A Pending CN107168538A (en) | 2017-06-12 | 2017-06-12 | A kind of 3D campuses guide method and system that emotion computing is carried out based on limb action |
Country Status (1)
Country | Link |
---|---|
CN (1) | CN107168538A (en) |
Cited By (6)

Publication number | Priority date | Publication date | Assignee | Title
---|---|---|---|---|
CN108363978A (en) * | 2018-02-12 | 2018-08-03 | 华南理工大学 | Using the emotion perception method based on body language of deep learning and UKF
CN108363978B (en) * | 2018-02-12 | 2022-04-22 | 华南理工大学 | Emotion sensing method based on body language by adopting deep learning and UKF
CN109101879A (en) * | 2018-06-29 | 2018-12-28 | 温州大学 | A kind of the posture interactive system and implementation method of VR teaching in VR classroom
CN109034090A (en) * | 2018-08-07 | 2018-12-18 | 南通大学 | A kind of emotion recognition system and method based on limb action
CN110334669A (en) * | 2019-07-10 | 2019-10-15 | 深圳市华腾物联科技有限公司 | A kind of method and apparatus of morphological feature identification
CN110334669B (en) * | 2019-07-10 | 2021-06-08 | 深圳市华腾物联科技有限公司 | Morphological feature recognition method and equipment
Citations (5)
Publication number | Priority date | Publication date | Assignee | Title |
---|---|---|---|---|
CN103092349A (en) * | 2013-01-23 | 2013-05-08 | 宁凯 | Panoramic experience method based on Kinect somatosensory equipment |
CN103116857A (en) * | 2013-02-01 | 2013-05-22 | 武汉百景互动科技有限责任公司 | Virtual sample house wandering system based on body sense control |
CN103218654A (en) * | 2012-01-20 | 2013-07-24 | 沈阳新松机器人自动化股份有限公司 | Robot emotion generating and expressing system |
CN105913482A (en) * | 2016-03-31 | 2016-08-31 | 上海晋荣智能科技有限公司 | Human body bone identification method based on Kinect |
CN106297442A (en) * | 2016-10-27 | 2017-01-04 | 深圳市成真教育科技有限公司 | A kind of body-sensing mutual education realization method and system |
2017-06-12: application CN201710437831.9A filed in China; published as CN107168538A; legal status: Pending.
Similar Documents
Publication | Publication Date | Title |
---|---|---|
CN107168538A (en) | A kind of 3D campuses guide method and system that emotion computing is carried out based on limb action | |
Piana et al. | Real-time automatic emotion recognition from body gestures | |
Ammar et al. | The affective tutoring system | |
CN108845668B (en) | Man-machine interaction system and method | |
Turk | Gesture recognition | |
KR101302638B1 (en) | Method, terminal, and computer readable recording medium for controlling content by detecting gesture of head and gesture of hand | |
CN105980965A (en) | Systems, devices, and methods for touch-free typing | |
CN107562186A (en) | The 3D campuses guide method for carrying out emotion computing is recognized based on notice | |
CN110286763A (en) | A kind of navigation-type experiment interactive device with cognitive function | |
CN104407694A (en) | Man-machine interaction method and device combining human face and gesture control | |
CN102221881A (en) | Man-machine interaction method based on analysis of interest regions by bionic agent and vision tracking | |
CN103336581A (en) | Human eye movement characteristic design-based human-computer interaction method and system | |
CN109933205A (en) | A kind of vehicle-mounted expression in the eyes interactive device | |
CN104460967A (en) | Recognition method of upper limb bone gestures of human body | |
Chen et al. | Real-time multi-modal human–robot collaboration using gestures and speech | |
Wang et al. | MFA: A Smart Glove with Multimodal Intent Sensing Capability. | |
Wang et al. | Gaze-aware hand gesture recognition for intelligent construction | |
CN107272893A (en) | Man-machine interactive system and method based on gesture control non-touch screen | |
Chacón-Quesada et al. | Augmented reality controlled smart wheelchair using dynamic signifiers for affordance representation | |
Rett | Robot-human interface using Laban Movement Analysis inside a Bayesian framework | |
Spanogianopoulos et al. | Human computer interaction using gestures for mobile devices and serious games: A review | |
CN105045390A (en) | Human upper limb skeleton gesture identification method | |
Baatar et al. | Comparing sensor based and vision based techniques for dynamic gesture recognition | |
Yu et al. | A multi-sensor gesture interaction system for human-robot cooperation | |
CN109144237B (en) | Multi-channel man-machine interactive navigation method for robot |
Legal Events
Date | Code | Title | Description |
---|---|---|---|
PB01 | Publication ||
SE01 | Entry into force of request for substantive examination ||
RJ01 | Rejection of invention patent application after publication | Application publication date: 20170915 |