CN103809733A - Man-machine interactive system and method - Google Patents
Man-machine interactive system and method
- Publication number
- CN103809733A (application number CN201210440197.1A)
- Authority
- CN
- China
- Prior art keywords
- user
- gesture
- posture
- interactive operation
- image data
- Prior art date
- Legal status (The legal status is an assumption and is not a legal conclusion. Google has not performed a legal analysis and makes no representation as to the accuracy of the status listed.)
- Granted
Classifications
- G—PHYSICS
- G06—COMPUTING; CALCULATING OR COUNTING
- G06F—ELECTRIC DIGITAL DATA PROCESSING
- G06F3/00—Input arrangements for transferring data to be processed into a form capable of being handled by the computer; Output arrangements for transferring data from processing unit to output unit, e.g. interface arrangements
- G06F3/01—Input arrangements or combined input and output arrangements for interaction between user and computer
- G06F3/011—Arrangements for interaction with the human body, e.g. for user immersion in virtual reality
- G06F3/013—Eye tracking input arrangements
- G06F3/017—Gesture based interaction, e.g. based on a set of recognized hand gestures
- G06F3/03—Arrangements for converting the position or the displacement of a member into a coded form
- G06F3/0304—Detection arrangements using opto-electronic means
- G06F3/033—Pointing devices displaced or positioned by the user, e.g. mice, trackballs, pens or joysticks; Accessories therefor
- G06F3/0346—Pointing devices displaced or positioned by the user, e.g. mice, trackballs, pens or joysticks; Accessories therefor with detection of the device orientation or free movement in a 3D space, e.g. 3D mice, 6-DOF [six degrees of freedom] pointers using gyroscopes, accelerometers or tilt-sensors
- G06F3/041—Digitisers, e.g. for touch screens or touch pads, characterised by the transducing means
- G06F3/042—Digitisers, e.g. for touch screens or touch pads, characterised by the transducing means by opto-electronic means
Abstract
The invention provides a man-machine interaction system and method. The system comprises an image acquisition device for obtaining image data; a man-machine interaction processing device for determining, from multiple types of user actions and postures detected in the image data, the interactive operation the user intends to perform; and a display device for displaying a screen corresponding to the result of the interactive operation. By combining multiple motion-detection modes, the system reduces the ambiguity of recognizing man-machine interactive operations and improves their accuracy without requiring any additional input device.
Description
Technical field
The present invention relates to the fields of computer vision and pattern recognition, and more particularly to a contactless, natural, remote human-computer interaction (HCI) system and method.
Background art
Man-machine interaction based on computer vision can visually acquire user input through various image acquisition and processing methods. It has become a hot topic in next-generation human-computer interaction technology and is finding wide application, especially in entertainment. Under this interaction mode, a user can interact with a computer through body posture, head pose, gaze, or body motion, which frees the user from traditional input devices such as keyboard and mouse and provides an unprecedented interaction experience.
Several computer-vision-based interaction approaches have been proposed. In one existing approach, three-dimensional (3D) objects can be created, modified, and manipulated by combining touch input with 3D gesture input. In another approach, interaction is carried out by detecting human body postures against a virtual user interface.
However, existing human-computer interaction devices and methods exploit only a single type of motion detection; they usually require a touch-based input device and require the user to memorize a large number of prescribed gestures. Because of the limited sensing range for gesture, posture, and depth, preprocessing or various manual operations are usually needed, for example calibrating sensors or predefining the interaction space, which is inconvenient for the user. A man-machine interaction approach is therefore needed that exploits multiple motion-detection modes and does not rely on an additional input device.
Summary of the invention
According to an aspect of the present invention, a man-machine interaction system is provided, comprising: an image acquisition device for obtaining image data; a man-machine interaction processing device for determining, from multiple types of user actions and postures detected in the image data, the interactive operation the user intends to perform; and a display device for displaying a screen corresponding to the result of the interactive operation.
According to an aspect of the present invention, the man-machine interaction processing device comprises: a motion detection module that detects multiple types of user actions and postures from the image data; an interaction determination module that determines, from the detected actions and postures, the interactive operation the user intends to perform and sends a corresponding display instruction to a display control module; and a display control module that, according to the instruction from the interaction determination module, controls the display device to show the corresponding interactive operation on the screen.
According to an aspect of the present invention, the motion detection module comprises: a gaze capture module for detecting the user's gaze direction from the image data; and a posture tracking module for tracking and recognizing the postures and actions of the parts of the user's body in the image data.
According to an aspect of the present invention, the gaze capture module determines the user's gaze direction by detecting the pitch and yaw of the user's head from the image data.
According to an aspect of the present invention, the posture tracking module tracks and detects the joints of the user's hand in the image data to determine hand motion and gestures, and detects the user's skeletal joints to determine the postures of the parts of the user's body.
According to an aspect of the present invention, the interaction determination module determines whether to start an interactive operation according to the gaze direction detected by the gaze capture module and the hand posture recognized by the posture tracking module.
According to an aspect of the present invention, if the user's gaze direction and the pointing direction of the user's hand both point at a display item on the screen for more than a predetermined time, the interaction determination module determines to start an interactive operation on that display item.
According to an aspect of the present invention, if neither the user's gaze direction nor the pointing direction of the user's hand points at the display item, the interaction determination module determines to stop the interactive operation on that display item.
According to an aspect of the present invention, when the user is near the image acquisition device, the posture tracking module tracks and recognizes the user's finger motion to identify gestures; when the user is far from the image acquisition device, the posture tracking module tracks and recognizes the motion of the user's arm.
According to an aspect of the present invention, the man-machine interaction processing device further comprises a custom posture registration module for registering interactive operation commands corresponding to user-defined gesture actions.
According to another aspect of the present invention, a man-machine interaction method is provided, comprising: obtaining image data; determining, from multiple types of user actions and postures detected in the image data, the interactive operation the user intends to perform; and displaying a screen corresponding to the result of the interactive operation.
According to another aspect of the present invention, the step of determining the interactive operation comprises: detecting multiple types of user actions and postures from the image data; determining, from the detected actions and postures, the interactive operation to be performed and sending a corresponding display instruction; and controlling the display device, according to the instruction, to show the corresponding interactive operation on the screen.
According to another aspect of the present invention, the step of detecting multiple types of user actions and postures comprises: detecting the user's gaze direction from the image data; and tracking and recognizing the postures and actions of the parts of the user's body.
According to another aspect of the present invention, the user's gaze direction is determined by detecting the pitch and yaw of the user's head from the image data.
According to another aspect of the present invention, the joints of the user's hand are tracked and detected in the image data to determine hand motion and gestures, and the user's skeletal joints are detected from the image data to determine the postures of the parts of the user's body.
According to another aspect of the present invention, whether to start an interactive operation is determined according to the detected gaze direction and the recognized hand posture.
According to another aspect of the present invention, if the user's gaze direction and the pointing direction of the user's hand both point at a display item on the screen for more than a predetermined time, it is determined to start an interactive operation on that display item.
According to another aspect of the present invention, if neither the user's gaze direction nor the pointing direction of the user's hand points at the display item, it is determined to stop the interactive operation on that display item.
According to another aspect of the present invention, when the user is near the image acquisition device, the user's finger motion is tracked and recognized to identify gestures; when the user is far from the image acquisition device, the motion of the user's arm is recognized.
According to another aspect of the present invention, the step of determining the interactive operation further comprises: determining the interactive operation corresponding to a registered user-defined gesture action.
Brief description of the drawings
The above and other objects and features of the present invention will become apparent from the following description taken in conjunction with the accompanying drawings, which exemplarily illustrate embodiments, in which:
Fig. 1 is a schematic diagram of a user interacting with a man-machine interaction system according to an embodiment of the present invention;
Fig. 2 is a block diagram of the man-machine interaction processing device of the man-machine interaction system according to an embodiment of the present invention;
Fig. 3 is a schematic diagram of postures that start or stop a man-machine interactive operation according to another embodiment of the present invention;
Fig. 4 is a flowchart of a man-machine interaction method according to an embodiment of the present invention;
Fig. 5 is a flowchart of performing a menu operation with the man-machine interaction method according to an embodiment of the present invention;
Fig. 6 is a flowchart of performing an interactive operation on a 3D display object with the man-machine interaction method according to an embodiment of the present invention;
Fig. 7 is a flowchart of performing a handwriting operation with the man-machine interaction method according to an embodiment of the present invention.
Detailed description of embodiments
Embodiments of the present invention will now be described in detail; examples of the embodiments are shown in the accompanying drawings, in which like reference numerals refer to like parts throughout. The embodiments are described below with reference to the drawings in order to explain the present invention.
Fig. 1 is a schematic diagram of a user interacting with a man-machine interaction system according to an embodiment of the present invention.
As shown in Fig. 1, the man-machine interaction system according to an embodiment of the present invention comprises an image acquisition device 100, a man-machine interaction processing device 200, and a display device 300. The image acquisition device 100 obtains image data, which may carry both depth and color features; it may be a device capable of capturing depth images, for example a depth camera.
The man-machine interaction processing device 200 analyzes the image data obtained by the image acquisition device 100 to recognize and interpret the user's postures and actions, and then controls the display device 300 to present the corresponding display according to the result of this interpretation. The display device 300 may be, for example, a television (TV) or a projector.
Here, as shown in Fig. 1, the man-machine interaction processing device 200 can determine the interactive operation the user intends to perform from multiple types of detected user actions and postures. For example, while gazing at a particular object (e.g., OBJ2) among the objects shown by the display device 300 (OBJ1, OBJ2, and OBJ3 in Fig. 1), the user may point a finger at that object to start an interactive operation. That is, the man-machine interaction processing device 200 can detect the user's gaze direction, gestures, and the actions and postures of the parts of the body. The user can then manipulate the displayed object by moving the finger, for example changing its display position; the user may also move a body part (e.g., an arm) or the whole body as input for the interactive operation. It should be understood that although the image acquisition device 100, the man-machine interaction processing device 200, and the display device 300 are shown as separate devices, any of them may be combined into one or two devices; for example, the image acquisition device 100 and the man-machine interaction processing device 200 may be implemented in a single device.
The structure of the man-machine interaction processing device 200 of the man-machine interaction system according to an embodiment of the present invention is described in detail below with reference to Fig. 2.
As shown in Fig. 2, the man-machine interaction processing device 200 according to an embodiment of the present invention comprises a motion detection module 210, an interaction determination module 220, and a display control module 230.
According to an embodiment of the present invention, the motion detection module 210 may comprise a gaze capture module 211 and a posture tracking module 213.
The gaze capture module 211 obtains the user's gaze direction from the image data. The gaze direction can be obtained by detecting the user's head pose, which is mainly characterized by head pitch and head yaw. Accordingly, the pitch angle and the yaw angle of the head can each be estimated from the head region of the depth image, and the corresponding head pose is synthesized from these two angles to obtain the user's gaze direction.
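As a concrete illustration of synthesizing a gaze direction from the two estimated head angles, the following sketch converts pitch and yaw into a unit direction vector. The axis conventions and the function name are assumptions for illustration; the patent does not specify them.

```python
import math

def gaze_direction(pitch_deg: float, yaw_deg: float):
    """Convert estimated head pitch/yaw (degrees) into a unit gaze vector.

    Assumed convention: +z points from the user toward the screen,
    positive pitch looks up (+y), positive yaw looks right (+x).
    """
    pitch = math.radians(pitch_deg)
    yaw = math.radians(yaw_deg)
    return (math.cos(pitch) * math.sin(yaw),   # x: left/right component
            math.sin(pitch),                   # y: up/down component
            math.cos(pitch) * math.cos(yaw))   # z: toward the screen
```

The resulting unit vector can then be intersected with the screen plane in the same way as the hand's pointing direction.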
According to an embodiment of the present invention, the posture tracking module 213 can recognize and track the motion of the user's hand based on skin color features and/or 3D features. Specifically, the posture tracking module 213 may comprise a classifier trained on skin color or 3D features. With a skin color classifier, a probabilistic model (for example, a Gaussian mixture model (GMM)) of the color distribution of hand skin can be used to decide whether a candidate pixel belongs to a hand. For depth features, depth-comparison features can be generated as introduced in "Real-Time Human Pose Recognition in Parts from Single Depth Images", Jamie Shotton et al., CVPR 2011, or local depth patches (small rectangular blocks) can be compared against patches of a known hand model to measure similarity. Combining the color and depth features, a general-purpose classifier (such as a Random Forest or an AdaBoost decision tree) can then perform the classification task of locating the hand in the image data. By detecting the hand frame by frame, the posture tracking module 213 can track and compute the hand's trajectory and speed, locating the hand both in the 2D image and in 3D space. In particular, by comparing the depth data against a 3D hand model, the positions of the hand joints can be tracked. If, however, the hand is far from the image acquisition device 100, i.e., when the hand region in the image data is smaller than a predetermined threshold, the data are considered unreliable, and the motion of the arm is determined instead by tracking the user's skeleton.
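The depth-comparison feature from the cited Shotton et al. paper can be sketched as follows: each feature probes the depth image at two offsets around a pixel, with the offsets scaled by the depth at that pixel so the response is invariant to how far the hand is from the camera. This is a minimal NumPy sketch under stated assumptions (array layout, out-of-bounds value), not the paper's full implementation.

```python
import numpy as np

def depth_comparison_feature(depth, x, u, v, background=1e6):
    """Depth-difference feature f(x) = d(x + u/d(x)) - d(x + v/d(x)).

    depth: 2D array of depth values (rows = y, cols = x);
    x: (col, row) pixel; u, v: pixel offsets scaled by 1/d(x).
    Probes falling outside the image read as `background` (very far).
    """
    d = float(depth[x[1], x[0]])
    h, w = depth.shape

    def probe(offset):
        px = int(x[0] + offset[0] / d)
        py = int(x[1] + offset[1] / d)
        if 0 <= px < w and 0 <= py < h:
            return float(depth[py, px])
        return background

    return probe(u) - probe(v)
```

Many such features, each with its own offset pair (u, v), would be fed to the Random Forest or AdaBoost classifier mentioned above.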
It should be understood that the above is merely one example of deciding whether to start or stop the interactive operation state from the detected user actions and postures; other preset rules may also be used. For example, the interactive operation may be started from the user's gaze direction together with a predetermined gesture. As shown in Fig. 3, when the motion detection module 210 determines from the image data that the user's fingers are open and the gaze direction points at a particular item on the screen of the display device 300, the interaction determination module 220 can determine that the user intends to operate on that item. Next, when the motion detection module 210 determines that the user's fingers close together and the hand starts to move, the interaction determination module 220 can determine that the user intends to drag the item. If the motion detection module 210 detects that the user clenches a fist, the interaction determination module 220 can determine that the user intends to stop the interactive operation.
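The open-hand/closed-hand/fist protocol just described amounts to a small state machine. The sketch below is one hypothetical encoding of it; the state names and hand-pose labels are invented for illustration and would come from the posture tracking module in practice.

```python
class InteractionStateMachine:
    """States: 'idle' -> 'selected' (open hand + gaze on item)
    -> 'dragging' (fingers close together); a fist returns to 'idle'."""

    def __init__(self):
        self.state = "idle"

    def update(self, hand_pose: str, gaze_on_item: bool) -> str:
        # hand_pose is assumed to be one of "open", "closed", "fist",
        # as classified per frame by the posture tracking module.
        if hand_pose == "fist":
            self.state = "idle"          # clenching a fist always stops
        elif self.state == "idle" and hand_pose == "open" and gaze_on_item:
            self.state = "selected"      # gaze + open fingers select the item
        elif self.state == "selected" and hand_pose == "closed":
            self.state = "dragging"      # closed fingers start dragging
        return self.state
```

Driving it with one classified pose per frame keeps the start/drag/stop decisions deterministic and easy to extend with further gestures.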
After entering the interactive operation state, the interaction determination module 220 further determines the interactive operation the user intends to perform from the user's actions and postures. According to an embodiment of the present invention, the interaction determination module 220 can determine a pointer-moving operation from the pointing direction of the user's hand: given the pointing direction determined by the posture tracking module 213, it computes the intersection of that direction with the display screen to obtain the pointer position on the screen. When the user's hand moves, the interaction determination module 220 sends a corresponding command instructing the display control module 230 to control the display device 300 so that the pointer moves on the screen along with the hand.
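Computing "the intersection of the pointing direction with the display screen" is a ray-plane intersection. A minimal sketch, assuming the screen lies in the plane z = 0 and the hand position and pointing direction are given in the same coordinate frame (both assumptions for illustration):

```python
def pointer_position(hand_origin, pointing_dir, screen_z=0.0):
    """Intersect the hand's pointing ray with the plane z = screen_z.

    Returns the (x, y) point on the screen, or None when the ray is
    parallel to the screen or points away from it.
    """
    ox, oy, oz = hand_origin
    dx, dy, dz = pointing_dir
    if dz == 0:
        return None                      # parallel to the screen plane
    t = (screen_z - oz) / dz
    if t <= 0:
        return None                      # screen is behind the hand
    return (ox + t * dx, oy + t * dy)    # pointer coordinates on the screen
```

Re-evaluating this every frame as the hand moves yields the pointer trajectory that the display control module renders.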
According to an embodiment of the present invention, the interaction determination module 220 can also perform a button-press operation according to the hand motion determined by the posture tracking module 213. From the pointing direction of the user's hand, it computes the intersection of that direction with the display screen; if a display item such as a button is located at that position, the interaction determination module 220 can determine that the user presses the button. Alternatively, if the posture tracking module 213 determines that the user's finger or fist moves quickly along its pointing direction, the interaction determination module 220 confirms that the button is pressed.
It should be understood that only a few examples have been given of how the interaction determination module 220 determines the intended interactive operation from the gaze direction determined by the gaze capture module 211 and the postures determined by the posture tracking module 213; the interactive operations of the present invention are not limited to these. More interactive operations can be performed according to the user's posture actions and/or gaze direction: for example, a display object can be dragged or rotated by moving the hand, or clicked or double-clicked by a finger motion.
In addition, according to embodiments of the present invention, the user can customize the interactive operation corresponding to a specific action. To this end, the man-machine interaction processing device 200 may further comprise a custom posture registration module (not shown) for registering interactive operations corresponding to user-defined gesture actions. The custom posture registration module may maintain a database mapping recorded postures and actions to corresponding interactive operation commands. For example, when 2D or 3D objects are displayed, a 2D or 3D display object can be shrunk or enlarged by tracking the motion directions of the two hands. In particular, when registering a new gesture action, the custom posture registration module tests the reproducibility and the ambiguity of the user-defined gesture and returns a reliability score indicating whether the user-defined interactive operation command is valid.
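The registration database and its reliability score might be sketched as below, where a user-defined gesture is reduced to a coarse feature tuple and a new registration is scored low when it collides with one already present — a simplified stand-in for the ambiguity test the text describes. All names and the scoring rule are assumptions.

```python
class CustomGestureRegistry:
    """Maps gesture feature tuples to interactive operation commands."""

    def __init__(self):
        self._commands = {}

    def register(self, features: tuple, command: str) -> float:
        """Return a reliability score in [0, 1]; 0 means the gesture is
        ambiguous (identical to an already registered one) and rejected."""
        if features in self._commands:
            return 0.0
        self._commands[features] = command
        return 1.0

    def lookup(self, features: tuple):
        """Return the registered command for these features, if any."""
        return self._commands.get(features)
```

For example, the two-hand zoom gesture mentioned above could be registered as `registry.register(("two_hands", "moving_apart"), "zoom_in")`; a real module would also score reproducibility across repeated demonstrations.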
After the interaction determination module 220 has determined the interactive operation the user intends to perform, it sends a corresponding instruction to the display control module 230, and the display control module 230 controls the display device 300 to show the corresponding interactive operation on the screen, for example a picture in which the pointer moves, the corresponding display item moves, or a button is pressed.
The detailed flow of the man-machine interaction method according to an embodiment of the present invention is described below with reference to Fig. 4.
As shown in Fig. 4, in step S410 image data are first obtained by the image acquisition device 100.
Next, in step S420, the man-machine interaction processing device 200 analyzes multiple types of user postures and actions in the image data obtained by the image acquisition device 100 to determine whether to enter the interactive operation state and which interactive operation the user intends to perform. Here, for example, the man-machine interaction processing device 200 can detect and recognize the user's gaze direction and the actions and postures of the parts of the body from the image data. According to the present embodiment, the device 200 can decide whether to enter the interactive operation state according to the detected gaze direction and the user's pointing direction. Specifically, when the man-machine interaction processing device 200 determines from the image data that the user's gaze direction and the hand's pointing direction have both pointed at a display item on the screen of the display device 300 for more than a predetermined time, it enters the interactive operation state and determines the interactive operation on the display object from the user's subsequent posture actions.
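The dwell condition in step S420 — gaze and finger both on the same item for more than a predetermined time — can be sketched as a small timer. The one-second threshold and the interface are illustrative assumptions; the patent leaves the predetermined time unspecified.

```python
class DwellTrigger:
    """Fires once gaze and finger agree on one item for `hold_s` seconds."""

    def __init__(self, hold_s: float = 1.0):
        self.hold_s = hold_s
        self._item = None
        self._since = None

    def update(self, gaze_item, finger_item, now: float) -> bool:
        if gaze_item is not None and gaze_item == finger_item:
            if self._item != gaze_item:          # started pointing at a new item
                self._item, self._since = gaze_item, now
            return now - self._since >= self.hold_s
        self._item = self._since = None          # agreement broken: reset
        return False
```

The trigger is fed once per processed frame with the item hit by each ray; any frame where the two modalities disagree resets the timer, which is what suppresses accidental activations.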
Then, in step S430, the display device 300 is controlled according to the determined interactive operation to show or update the corresponding screen. For example, according to the pointing direction of the user's hand it can be determined that the user wants to move the displayed pointer, drag a display item, click a display item, double-click a display item, and so on.
In step S420, if during an interactive operation the man-machine interaction processing device 200 determines that the user's pointing direction and gaze direction have both left the display object, it determines that the user wants to stop the interactive operation on the object and shows the screen in which the operation on the object has stopped. Note that whether the user wants to stop the interactive operation can also be determined in other ways, for example by a specific gesture such as clenching a fist, as described above.
Exemplary flows of various interactive operations performed with the man-machine interaction method according to the present invention are described below with reference to Figs. 5-7.
Fig. 5 is a flowchart of performing a menu operation with the man-machine interaction method according to an embodiment of the present invention.
In the embodiment of Fig. 5, it is assumed that a default menu is displayed on the screen of the display device 300 and contains several items on which the user can perform interactive operations.
In step S510, when the human body posture detected from the captured image data shows that the pointing direction of the user's hand and the gaze direction both point at a specific menu item on the screen, the interactive operation state for the menu is entered.
Next, in step S520, the trajectory and speed of the user's hand can be tracked to determine the hand's actions and gestures, and the interactive operation the user intends to perform is determined from them. For example, mouse-like interactive operations can be performed according to the actions of the user's hand: when the user's index finger is determined to make a clicking action, the menu item in the finger's pointing direction can be selected; when the user's middle finger is determined to make a clicking action, content corresponding to a right-button mouse action can be shown, for example a submenu of items related to the current one. Then, in step S530, the display device is controlled to show or update the menu content corresponding to the determined interactive operation.
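The index-/middle-finger mapping in step S520 is essentially a lookup from the clicking finger to a mouse-like action. The sketch below assumes the finger label is already provided by the posture tracker; the action strings are invented for illustration.

```python
def menu_action(clicking_finger: str, item: str) -> str:
    """Translate a finger click on the pointed-at menu item into an action."""
    if clicking_finger == "index":
        return f"select:{item}"         # analogous to a left mouse click
    if clicking_finger == "middle":
        return f"submenu:{item}"        # analogous to a right mouse click
    return "none"                       # other fingers: no menu action
```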
Fig. 6 is a flowchart of performing an operation on a 3D display object with the man-machine interaction method according to an embodiment of the present invention. Here, the display device 300 is a display device capable of showing 3D content.
First, in step S610, when the human body posture detected from the captured image data shows that the pointing direction of the user's hand and the gaze direction both point at a specific 3D display object on the screen, the interactive operation state for the 3D display object is entered. Next, in step S620, the trajectory and speed of the user's hand can be tracked to determine the hand's actions and gestures, and the interactive operation the user intends to perform is determined from them. For example, the 3D display object at the intersection of the hand's pointing direction and the gaze direction can be picked up and moved along with the hand; the selected 3D display object can also be dragged, zoomed in, or zoomed out according to the hand's actions. Finally, in step S630, the display device is controlled to re-render the 3D display object after the determined interactive operation.
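One way to realize the zoom part of step S620 for the two-hand case is to scale the selected 3D object by the ratio of the current to the initial distance between the hands. This particular mapping is an assumption; the text only says the object can be zoomed according to the hand actions.

```python
import math

def zoom_factor(left_before, right_before, left_after, right_after) -> float:
    """Scale factor from the change in distance between the two hands."""
    d_before = math.dist(left_before, right_before)
    d_after = math.dist(left_after, right_after)
    return d_after / d_before if d_before > 0 else 1.0
```

Multiplying the object's current scale by this factor each frame gives a contactless analogue of pinch-to-zoom.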
Fig. 7 is a flowchart of performing a text input operation with the man-machine interaction method according to an embodiment of the present invention. Here, it is assumed that a predetermined area on the screen of the display device 300 can serve as a text input area.
First, in step S710, when the human body posture detected from the captured image data shows that the pointing direction of the user's hand and the gaze direction both point at the handwriting input area on the screen, the interactive operation state for handwriting input is entered. Next, in step S720, the trajectory and speed of the user's hand are tracked, and the text the user intends to input is determined from the hand's trajectory. The text can be determined by a learning-based recognition method and interpreted as the corresponding interactive operation command. Finally, in step S730, the display device is controlled to show the screen resulting from executing the interactive operation command.
It should be understood that although the above embodiments determine whether to start or end an interactive operation, as well as the user's subsequent interactive operations, from the gaze direction and the pointing direction of the hand, the invention is not limited to this. Whether to start or end an interactive operation, and the subsequent operations, may instead be determined from a combination of other types of motion detection.
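The start/stop determination described above (and elaborated in claims 7 and 8, where both modalities must hold on an item for a predetermined time) can be sketched as a dwell timer. This is a hypothetical sketch; the class name, the default dwell time, and the item-based interface are assumptions for illustration.

```python
import time

class DwellSelector:
    """Start an interaction only after the gaze and the hand's pointing
    direction both stay on the same display item for `dwell` seconds;
    end it as soon as either modality leaves the item."""

    def __init__(self, dwell=1.0, clock=time.monotonic):
        self.dwell = dwell
        self.clock = clock          # injectable for testing
        self._candidate = None      # item both rays currently hit
        self._since = 0.0           # when they first agreed on it
        self.active_item = None     # item under active interaction

    def update(self, gaze_item, hand_item):
        """Feed the display item hit by each modality (or None).
        Returns the item under active interaction, if any."""
        if gaze_item is None or gaze_item != hand_item:
            # Rays diverged or left the screen: reset and end interaction.
            self._candidate = None
            self.active_item = None
            return None
        if self.active_item == gaze_item:
            return self.active_item          # interaction continues
        if self._candidate != gaze_item:
            self._candidate = gaze_item      # new agreement: start timing
            self._since = self.clock()
        elif self.clock() - self._since >= self.dwell:
            self.active_item = gaze_item     # dwell elapsed: start
        return self.active_item
```

Called once per captured frame, this yields the claimed behavior: a brief accidental alignment of gaze and hand does nothing, while sustained alignment starts the interaction and any divergence ends it.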
According to the present invention, a combination of multiple motion detection modes can be used for human-computer interaction, reducing the ambiguity of recognizing interactive operations and improving their accuracy, without requiring an additional input device (for example, a touch-screen input device). For instance, interactive operations such as enlarging and shrinking a display object can be achieved without a touch screen. In this way, the motion detection modes of computer vision are fully exploited, giving the user a better interactive experience.
Although the present invention has been shown and described with reference to certain exemplary embodiments thereof, it will be understood by those skilled in the art that various changes in form and details may be made therein without departing from the spirit and scope of the present invention as defined by the claims and their equivalents.
Claims (11)
1. A man-machine interactive system, comprising:
an image acquisition device for obtaining image data;
a man-machine interaction processing device that determines, from multiple types of the user's actions and postures detected in the image data, the interactive operation the user intends to perform; and
a display device that shows a display screen corresponding to the result of the interactive operation.
2. The man-machine interactive system of claim 1, wherein the man-machine interaction processing device comprises:
a motion detection module that detects multiple types of the user's actions and postures from the image data;
an interaction determination module that determines, from the actions and postures detected by the motion detection module, the interactive operation the user intends to perform, and sends a corresponding display operation instruction to a display control module; and
a display control module that, according to the instruction determined by the interaction determination module, controls the display device to show the corresponding interactive operation on the display screen.
3. The man-machine interactive system of claim 2, wherein the motion detection module comprises:
a gaze capture module for detecting the user's gaze direction from the image data; and
a posture tracking module for tracking and recognizing the postures and actions of the parts of the user's body in the image data.
4. The man-machine interactive system of claim 3, wherein the gaze capture module determines the user's gaze direction by detecting the pitch and yaw of the user's head from the image data.
5. The man-machine interactive system of claim 4, wherein the posture tracking module tracks and detects the joints of the user's hand in the image data to determine the hand's motion and gesture, and detects the user's skeletal joints to determine the postures and actions of the parts of the user's body.
6. The man-machine interactive system of claim 3, wherein the interaction determination module determines whether to start an interactive operation according to the gaze direction detected by the gaze capture module and the hand posture recognized by the posture tracking module.
7. The man-machine interactive system of claim 3, wherein, if it is determined that both the user's gaze direction and the pointing direction of the user's hand point at a display item on the display screen for more than a predetermined time, the interaction determination module determines to start an interactive operation on that display item.
8. The man-machine interactive system of claim 7, wherein, if it is determined that the user's gaze direction and the pointing direction of the user's hand no longer both point at the display item, the interaction determination module determines to stop the interactive operation on that display item.
9. The man-machine interactive system of claim 3, wherein, when the user is near the image acquisition device, the posture tracking module tracks and recognizes the user's finger movements to recognize the user's gestures, and when the user is far from the image acquisition device, the posture tracking module tracks and recognizes the movements of the user's arms.
10. The man-machine interactive system of claim 1, wherein the man-machine interaction processing device further comprises:
a custom gesture registration module for registering interactive operation commands corresponding to user-defined gesture actions.
11. A man-machine interaction method, comprising:
obtaining image data;
determining, from multiple types of the user's actions and postures detected in the image data, the interactive operation the user intends to perform; and
showing a display screen corresponding to the result of the interactive operation.
Priority Applications (4)
Application Number | Priority Date | Filing Date | Title |
---|---|---|---|
CN201210440197.1A CN103809733B (en) | 2012-11-07 | 2012-11-07 | Man-machine interactive system and method |
CN201810619648.5A CN108845668B (en) | 2012-11-07 | 2012-11-07 | Man-machine interaction system and method |
KR1020130050237A KR102110811B1 (en) | 2012-11-07 | 2013-05-03 | System and method for human computer interaction |
US14/071,180 US9684372B2 (en) | 2012-11-07 | 2013-11-04 | System and method for human computer interaction |
Applications Claiming Priority (1)
Application Number | Priority Date | Filing Date | Title |
---|---|---|---|
CN201210440197.1A CN103809733B (en) | 2012-11-07 | 2012-11-07 | Man-machine interactive system and method |
Related Child Applications (1)
Application Number | Title | Priority Date | Filing Date |
---|---|---|---|
CN201810619648.5A Division CN108845668B (en) | 2012-11-07 | 2012-11-07 | Man-machine interaction system and method |
Publications (2)
Publication Number | Publication Date |
---|---|
CN103809733A true CN103809733A (en) | 2014-05-21 |
CN103809733B CN103809733B (en) | 2018-07-20 |
Family
ID=50706630
Family Applications (2)
Application Number | Title | Priority Date | Filing Date |
---|---|---|---|
CN201210440197.1A Active CN103809733B (en) | 2012-11-07 | 2012-11-07 | Man-machine interactive system and method |
CN201810619648.5A Active CN108845668B (en) | 2012-11-07 | 2012-11-07 | Man-machine interaction system and method |
Family Applications After (1)
Application Number | Title | Priority Date | Filing Date |
---|---|---|---|
CN201810619648.5A Active CN108845668B (en) | 2012-11-07 | 2012-11-07 | Man-machine interaction system and method |
Country Status (2)
Country | Link |
---|---|
KR (1) | KR102110811B1 (en) |
CN (2) | CN103809733B (en) |
Cited By (14)
Publication number | Priority date | Publication date | Assignee | Title |
---|---|---|---|---|
CN104391578A (en) * | 2014-12-05 | 2015-03-04 | 重庆蓝岸通讯技术有限公司 | Real-time gesture control method of three-dimensional images |
CN105005779A (en) * | 2015-08-25 | 2015-10-28 | 湖北文理学院 | Face verification anti-counterfeit recognition method and system thereof based on interactive action |
CN105740948A (en) * | 2016-02-04 | 2016-07-06 | 北京光年无限科技有限公司 | Intelligent robot-oriented interaction method and device |
CN105759973A (en) * | 2016-03-09 | 2016-07-13 | 电子科技大学 | Far-near distance man-machine interactive system based on 3D sight estimation and far-near distance man-machine interactive method based on 3D sight estimation |
CN107087219A (en) * | 2017-02-22 | 2017-08-22 | 李海英 | Smart home TV |
CN107678545A (en) * | 2017-09-26 | 2018-02-09 | 深圳市维冠视界科技股份有限公司 | A kind of information interactive terminal and method |
CN107895161A (en) * | 2017-12-22 | 2018-04-10 | 北京奇虎科技有限公司 | Real-time attitude recognition methods and device, computing device based on video data |
CN107944376A (en) * | 2017-11-20 | 2018-04-20 | 北京奇虎科技有限公司 | The recognition methods of video data real-time attitude and device, computing device |
CN109426498A (en) * | 2017-08-24 | 2019-03-05 | 北京迪文科技有限公司 | A kind of man-machine interactive system backstage development approach and device |
CN110442243A (en) * | 2019-08-14 | 2019-11-12 | 深圳市智微智能软件开发有限公司 | A kind of man-machine interaction method and system |
CN111527468A (en) * | 2019-11-18 | 2020-08-11 | 华为技术有限公司 | Air-to-air interaction method, device and equipment |
CN112051746A (en) * | 2020-08-05 | 2020-12-08 | 华为技术有限公司 | Method and device for acquiring service |
CN112099623A (en) * | 2020-08-20 | 2020-12-18 | 昆山火灵网络科技有限公司 | Man-machine interaction system and method |
WO2022247506A1 (en) * | 2021-05-28 | 2022-12-01 | Huawei Technologies Co.,Ltd. | Systems and methods for controlling virtual widgets in gesture-controlled device |
Families Citing this family (7)
Publication number | Priority date | Publication date | Assignee | Title |
---|---|---|---|---|
KR20030093065A (en) * | 2002-05-31 | 2003-12-06 | 주식회사 유니온금속 | Heat Exchanger using Fin Plate having plural burring tubes and Method for manufacturing the same |
CN104740869B (en) * | 2015-03-26 | 2018-04-03 | 北京小小牛创意科技有限公司 | The exchange method and system that a kind of actual situation for merging true environment combines |
JP7091983B2 (en) * | 2018-10-01 | 2022-06-28 | トヨタ自動車株式会社 | Equipment control device |
KR102375947B1 (en) * | 2020-03-19 | 2022-03-18 | 주식회사 메이아이 | Method, system and non-transitory computer-readable recording medium for estimatiing interaction information between person and product based on image |
KR102524016B1 (en) * | 2020-08-21 | 2023-04-21 | 김덕규 | System for interaction with object of content |
CN113849065A (en) * | 2021-09-17 | 2021-12-28 | 支付宝(杭州)信息技术有限公司 | Method and device for triggering client operation instruction by using body-building action |
US20230168736A1 (en) * | 2021-11-29 | 2023-06-01 | Sony Interactive Entertainment LLC | Input prediction for pre-loading of rendering data |
Citations (6)
Publication number | Priority date | Publication date | Assignee | Title |
---|---|---|---|---|
CN1694045A (en) * | 2005-06-02 | 2005-11-09 | 北京中星微电子有限公司 | Non-contact type visual control operation system and method |
US20110093939A1 (en) * | 2009-10-20 | 2011-04-21 | Microsoft Corporation | Resource access based on multiple credentials |
US20110154266A1 (en) * | 2009-12-17 | 2011-06-23 | Microsoft Corporation | Camera navigation for presentations |
CN102270035A (en) * | 2010-06-04 | 2011-12-07 | 三星电子株式会社 | Apparatus and method for selecting and operating object in non-touch mode |
US20110317874A1 (en) * | 2009-02-19 | 2011-12-29 | Sony Computer Entertainment Inc. | Information Processing Device And Information Processing Method |
CN102749990A (en) * | 2011-04-08 | 2012-10-24 | 索尼电脑娱乐公司 | Systems and methods for providing feedback by tracking user gaze and gestures |
Family Cites Families (12)
Publication number | Priority date | Publication date | Assignee | Title |
---|---|---|---|---|
JP2002259989A (en) * | 2001-03-02 | 2002-09-13 | Gifu Prefecture | Pointing gesture detecting method and its device |
KR100520050B1 (en) * | 2003-05-12 | 2005-10-11 | 한국과학기술원 | Head mounted computer interfacing device and method using eye-gaze direction |
JP5207513B2 (en) * | 2007-08-02 | 2013-06-12 | 公立大学法人首都大学東京 | Control device operation gesture recognition device, control device operation gesture recognition system, and control device operation gesture recognition program |
CN101344816B (en) * | 2008-08-15 | 2010-08-11 | 华南理工大学 | Human-machine interaction method and device based on sight tracing and gesture discriminating |
KR101596890B1 (en) * | 2009-07-29 | 2016-03-07 | 삼성전자주식회사 | Apparatus and method for navigation digital object using gaze information of user |
US8659658B2 (en) * | 2010-02-09 | 2014-02-25 | Microsoft Corporation | Physical interaction zone for gesture-based user interfaces |
JP2012098771A (en) * | 2010-10-29 | 2012-05-24 | Sony Corp | Image forming apparatus and image forming method, and program |
US20130154913A1 (en) * | 2010-12-16 | 2013-06-20 | Siemens Corporation | Systems and methods for a gaze and gesture interface |
US9285874B2 (en) * | 2011-02-09 | 2016-03-15 | Apple Inc. | Gaze detection in a 3D mapping environment |
WO2012144666A1 (en) * | 2011-04-19 | 2012-10-26 | Lg Electronics Inc. | Display device and control method therof |
CN202142050U (en) * | 2011-06-29 | 2012-02-08 | 由田新技股份有限公司 | Interactive customer reception system |
US9201500B2 (en) * | 2012-09-28 | 2015-12-01 | Intel Corporation | Multi-modal touch screen emulator |
2012
- 2012-11-07 CN CN201210440197.1A patent/CN103809733B/en active Active
- 2012-11-07 CN CN201810619648.5A patent/CN108845668B/en active Active

2013
- 2013-05-03 KR KR1020130050237A patent/KR102110811B1/en active IP Right Grant
Patent Citations (6)
Publication number | Priority date | Publication date | Assignee | Title |
---|---|---|---|---|
CN1694045A (en) * | 2005-06-02 | 2005-11-09 | 北京中星微电子有限公司 | Non-contact type visual control operation system and method |
US20110317874A1 (en) * | 2009-02-19 | 2011-12-29 | Sony Computer Entertainment Inc. | Information Processing Device And Information Processing Method |
US20110093939A1 (en) * | 2009-10-20 | 2011-04-21 | Microsoft Corporation | Resource access based on multiple credentials |
US20110154266A1 (en) * | 2009-12-17 | 2011-06-23 | Microsoft Corporation | Camera navigation for presentations |
CN102270035A (en) * | 2010-06-04 | 2011-12-07 | 三星电子株式会社 | Apparatus and method for selecting and operating object in non-touch mode |
CN102749990A (en) * | 2011-04-08 | 2012-10-24 | 索尼电脑娱乐公司 | Systems and methods for providing feedback by tracking user gaze and gestures |
Cited By (24)
Publication number | Priority date | Publication date | Assignee | Title |
---|---|---|---|---|
CN104391578A (en) * | 2014-12-05 | 2015-03-04 | 重庆蓝岸通讯技术有限公司 | Real-time gesture control method of three-dimensional images |
CN104391578B (en) * | 2014-12-05 | 2018-08-17 | 重庆蓝岸通讯技术有限公司 | A kind of real-time gesture control method of 3-dimensional image |
CN105005779A (en) * | 2015-08-25 | 2015-10-28 | 湖北文理学院 | Face verification anti-counterfeit recognition method and system thereof based on interactive action |
CN105740948A (en) * | 2016-02-04 | 2016-07-06 | 北京光年无限科技有限公司 | Intelligent robot-oriented interaction method and device |
CN105740948B (en) * | 2016-02-04 | 2019-05-21 | 北京光年无限科技有限公司 | A kind of exchange method and device towards intelligent robot |
CN105759973A (en) * | 2016-03-09 | 2016-07-13 | 电子科技大学 | Far-near distance man-machine interactive system based on 3D sight estimation and far-near distance man-machine interactive method based on 3D sight estimation |
CN107743257B (en) * | 2017-02-22 | 2018-09-28 | 合肥龙图腾信息技术有限公司 | Human posture's identification device |
CN107087219A (en) * | 2017-02-22 | 2017-08-22 | 李海英 | Smart home TV |
CN108076365B (en) * | 2017-02-22 | 2019-12-31 | 解波 | Human body posture recognition device |
CN107743257A (en) * | 2017-02-22 | 2018-02-27 | 李海英 | Human posture's identification device |
CN108076365A (en) * | 2017-02-22 | 2018-05-25 | 李海英 | Human posture's identification device |
CN109426498A (en) * | 2017-08-24 | 2019-03-05 | 北京迪文科技有限公司 | A kind of man-machine interactive system backstage development approach and device |
CN109426498B (en) * | 2017-08-24 | 2023-11-17 | 北京迪文科技有限公司 | Background development method and device for man-machine interaction system |
CN107678545A (en) * | 2017-09-26 | 2018-02-09 | 深圳市维冠视界科技股份有限公司 | A kind of information interactive terminal and method |
CN107944376A (en) * | 2017-11-20 | 2018-04-20 | 北京奇虎科技有限公司 | The recognition methods of video data real-time attitude and device, computing device |
CN107895161A (en) * | 2017-12-22 | 2018-04-10 | 北京奇虎科技有限公司 | Real-time attitude recognition methods and device, computing device based on video data |
CN107895161B (en) * | 2017-12-22 | 2020-12-11 | 北京奇虎科技有限公司 | Real-time attitude identification method and device based on video data and computing equipment |
CN110442243A (en) * | 2019-08-14 | 2019-11-12 | 深圳市智微智能软件开发有限公司 | A kind of man-machine interaction method and system |
CN111527468A (en) * | 2019-11-18 | 2020-08-11 | 华为技术有限公司 | Air-to-air interaction method, device and equipment |
WO2021097600A1 (en) * | 2019-11-18 | 2021-05-27 | 华为技术有限公司 | Inter-air interaction method and apparatus, and device |
CN112051746A (en) * | 2020-08-05 | 2020-12-08 | 华为技术有限公司 | Method and device for acquiring service |
CN112099623A (en) * | 2020-08-20 | 2020-12-18 | 昆山火灵网络科技有限公司 | Man-machine interaction system and method |
WO2022247506A1 (en) * | 2021-05-28 | 2022-12-01 | Huawei Technologies Co.,Ltd. | Systems and methods for controlling virtual widgets in gesture-controlled device |
US11693482B2 (en) | 2021-05-28 | 2023-07-04 | Huawei Technologies Co., Ltd. | Systems and methods for controlling virtual widgets in a gesture-controlled device |
Also Published As
Publication number | Publication date |
---|---|
CN108845668B (en) | 2022-06-03 |
CN103809733B (en) | 2018-07-20 |
KR20140059109A (en) | 2014-05-15 |
KR102110811B1 (en) | 2020-05-15 |
CN108845668A (en) | 2018-11-20 |
Similar Documents
Publication | Publication Date | Title |
---|---|---|
CN103809733B (en) | Man-machine interactive system and method | |
US20220261112A1 (en) | Systems, devices, and methods for touch-free typing | |
US8959013B2 (en) | Virtual keyboard for a non-tactile three dimensional user interface | |
KR101757080B1 (en) | Method and system for human-to-computer gesture based simultaneous interactions using singular points of interest on a hand | |
KR101652535B1 (en) | Gesture-based control system for vehicle interfaces | |
US9684372B2 (en) | System and method for human computer interaction | |
US8166421B2 (en) | Three-dimensional user interface | |
EP2049976B1 (en) | Virtual controller for visual displays | |
KR101809636B1 (en) | Remote control of computer devices | |
US20170024017A1 (en) | Gesture processing | |
CN110941328A (en) | Interactive display method and device based on gesture recognition | |
JP4323180B2 (en) | Interface method, apparatus, and program using self-image display | |
US9063573B2 (en) | Method and system for touch-free control of devices | |
CN107450714A (en) | Man-machine interaction support test system based on augmented reality and image recognition | |
US20130120250A1 (en) | Gesture recognition system and method | |
Geer | Will gesture recognition technology point the way? | |
Molina et al. | A natural and synthetic corpus for benchmarking of hand gesture recognition systems | |
US20130229348A1 (en) | Driving method of virtual mouse | |
Sairam et al. | Virtual Mouse using Machine Learning and GUI Automation | |
US20090110237A1 (en) | Method for positioning a non-structural object in a series of continuing images | |
Choondal et al. | Design and implementation of a natural user interface using hand gesture recognition method | |
Caputo | Gestural interaction in virtual environments: User studies and applications | |
Luong et al. | Human computer interface using the recognized finger parts of hand depth silhouette via random forests | |
Raees et al. | Thumb inclination-based manipulation and exploration, a machine learning based interaction technique for virtual environments | |
Feng et al. | FM: Flexible mapping from one gesture to multiple semantics |
Legal Events
Date | Code | Title | Description |
---|---|---|---|
C06 | Publication | ||
PB01 | Publication | ||
C10 | Entry into substantive examination | ||
SE01 | Entry into force of request for substantive examination | ||
GR01 | Patent grant | ||
GR01 | Patent grant |