CN104281253A - Vision-based man-machine interaction method and system - Google Patents
- Publication number
- CN104281253A (application CN201310296237.4A)
- Authority
- CN
- China
- Prior art keywords
- image
- man
- action
- mouse
- module
- Prior art date
- Legal status (The legal status is an assumption and is not a legal conclusion. Google has not performed a legal analysis and makes no representation as to the accuracy of the status listed.)
- Pending
Classifications
- G—PHYSICS
- G06—COMPUTING; CALCULATING OR COUNTING
- G06F—ELECTRIC DIGITAL DATA PROCESSING
- G06F3/00—Input arrangements for transferring data to be processed into a form capable of being handled by the computer; Output arrangements for transferring data from processing unit to output unit, e.g. interface arrangements
- G06F3/01—Input arrangements or combined input and output arrangements for interaction between user and computer
- G06F3/017—Gesture based interaction, e.g. based on a set of recognized hand gestures
Landscapes
- Engineering & Computer Science (AREA)
- General Engineering & Computer Science (AREA)
- Theoretical Computer Science (AREA)
- Human Computer Interaction (AREA)
- Physics & Mathematics (AREA)
- General Physics & Mathematics (AREA)
- Image Processing (AREA)
Abstract
The invention relates to a vision-based man-machine interaction method and system combining vision techniques with human-computer interaction techniques. A camera captures the operator's actions and acquires a target image; a host computer analyzes the image with the system's image processing and analysis techniques to detect the action of the target object; the detected action is then mapped to the corresponding operation, so that the interactive system is controlled. The invention achieves an interconnected effect between the operator and the screen and can be applied to interaction during games and presentations.
Description
1. Technical Field
The present invention relates to a man-machine interaction platform, and in particular to one that uses vision techniques to enable real-time interaction between a person's actions and a computer, game console, or the like.
2. Background Art
With the rapid development of basic technologies such as computer graphics, machine vision, virtual reality, augmented reality, and cross-disciplinary techniques, the field of computer applications has expanded. The emergence and development of vision and human-computer interaction techniques have brought new modes of interaction between people and computers, and many valuable applications have been built on them, such as data gloves, hand recognition systems, and vision-based planar-projection gesture recognition interactive systems.
The invention provides a man-machine interaction platform with good interactivity and strong real-time operation. It overcomes the poor versatility, limited real-time performance, and low recognition rates of existing gesture-based interaction platforms, gives the operator a sense of being immersed in the scene, and makes the projected picture produce different effects in response to the operator's actions.
3. Summary of the Invention
In view of the defects of existing interaction platforms, the object of the invention is to provide a man-machine interaction platform system with good interactivity and strong real-time operation.
The system comprises hardware devices such as a host computer, a camera, and a screen, together with a video acquisition module, an image processing module, a target identification module, and a mouse emulation module running on the host.
The system operates as follows: the camera captures video data and passes it to the host; the host analyzes the video images and detects the action of the target object; the detected action is mapped to the corresponding mouse operation; the host is connected to a projector, so the person's movements are reflected on the projected picture, realizing man-machine interaction. The host is one of the core components of the system: it connects to the other external devices, and the system software runs on it.
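The processing flow just described can be sketched as a chain of the system's modules. Everything below is an illustrative assumption rather than the patent's implementation: the function names are invented, and synthetic NumPy frames stand in for camera input.

```python
import numpy as np

def acquire_frame(t):
    # Stand-in for the video acquisition module: returns a synthetic
    # grayscale frame with a bright "hand" blob whose position depends on t.
    frame = np.zeros((120, 160), dtype=np.uint8)
    frame[40:60, 30 + t:50 + t] = 255
    return frame

def detect_target(prev, curr):
    # Stand-in for image processing / target identification:
    # threshold the inter-frame difference to find the moving region.
    diff = np.abs(curr.astype(int) - prev.astype(int)) > 30
    ys, xs = np.nonzero(diff)
    if xs.size == 0:
        return None
    return int(xs.mean()), int(ys.mean())  # centroid of the motion region

def to_mouse(pos, frame_shape, screen=(1920, 1080)):
    # Stand-in for the mouse emulation module: map camera coordinates to
    # screen coordinates (a real system would then move the cursor).
    x, y = pos
    h, w = frame_shape
    return x * screen[0] // w, y * screen[1] // h

prev, curr = acquire_frame(0), acquire_frame(5)
pos = detect_target(prev, curr)
print(to_mouse(pos, prev.shape))  # camera-space centroid scaled to the screen
```

The design point the patent relies on is visible here: each module only needs the previous module's output, so capture, detection, and mouse mapping can be developed and replaced independently.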
The camera is the system's image acquisition device: an ordinary camera refitted into an infrared camera by mounting an infrared filter in front of the lens. Images are passed to the host through the video acquisition module and serve as the host's image data source.
The video acquisition module performs data collection. It uses the VFW (Video for Windows) technique to capture video, taking one frame at a time from the captured video for encoding and storage.
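The per-frame encode-and-store step can be sketched as follows. This is a minimal stand-in, not the VFW implementation: VFW is a Windows C API, so stdlib `zlib` compression merely plays the role of the codec here, and a synthetic byte buffer replaces the camera frame.

```python
import zlib

def encode_and_store(frame_bytes, storage):
    # One iteration of the module's loop: encode the grabbed frame and
    # store the result. zlib is only a placeholder for the real codec.
    storage.append(zlib.compress(frame_bytes))

storage = []
frame = bytes(160 * 120)          # one synthetic all-black grayscale frame
encode_and_store(frame, storage)
print(len(storage), len(storage[0]) < len(frame))
```

Processing one frame per loop iteration, as the module does, keeps memory bounded regardless of how long the capture session runs.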
The image processing module and target identification module preprocess the image, then use an improved frame difference method to obtain the shape of the hand, and apply contour tracking and the Canny algorithm to obtain an edge image of the hand.
The mouse emulation module assigns the position of the detected hand to the mouse; once the mouse position has been assigned, remote mouse operations can be simulated through visual recognition. The invention is thus a system with a remote interaction capability that makes man-machine interaction more natural and convenient.
4. Brief Description of the Drawings
Fig. 1 is a structural schematic diagram of the vision-based man-machine interaction system.
Fig. 2 shows the modified camera.
Fig. 3 shows an image from video acquisition.
Fig. 4 shows hand shapes obtained by the improved frame difference method: (a) the shape obtained with the hand on the left, (b) with the hand below, (c) with the hand above.
Fig. 5 is a schematic diagram of hand coordinate acquisition.
Fig. 6 shows the simulated left-button mouse operation.
Fig. 7 shows the simulated right-button mouse operation.
5. Detailed Description of the Embodiments
As shown in Fig. 1, the system hardware comprises a host, a camera, a screen, a projector, and so on. The camera captures video data and passes it to the host; the host analyzes the video images, detects the action of the target object, and simulates the corresponding mouse operation; the host is connected to the projector, so the person's movements are reflected on the projection, realizing man-machine interaction.
In the embodiment shown in Fig. 2, an infrared filter is mounted on the camera before video images are captured, so that infrared light can be picked up directly under low-light or dark conditions and clear images can still be collected.
In the embodiment shown in Fig. 3, the video acquisition module takes one frame from the captured video, displays it in the video window, and then encodes and stores it. When video capture ends, the window is closed and the capture program exits; otherwise the module proceeds to acquire the next frame until video acquisition is complete.
As shown in Fig. 4 and Fig. 5, after the images collected as in Fig. 3 have been preprocessed, the improved frame difference method and background subtraction are used together to segment the moving target object and obtain the shape of the hand. Corner detection is then applied to obtain coordinates: many corner points are found along the edge of the hand, and the mean of the collected corner coordinates is taken as a single point coordinate, which serves as the coordinate of the hand.
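The corner-averaging step can be illustrated directly. The corner list below is hypothetical; in practice it would come from a corner detector (such as Harris) run on the hand edge.

```python
import numpy as np

# Hypothetical (x, y) corner points detected along the edge of a hand.
corners = np.array([[30, 40], [50, 40], [30, 60], [50, 60], [40, 35]])

# The method replaces the many corner coordinates with their mean,
# yielding a single representative coordinate for the hand.
hand_xy = corners.mean(axis=0)
print(hand_xy.tolist())
```

Averaging many edge corners makes the resulting hand coordinate less sensitive to any single spurious corner than picking one point would be.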
As shown in Fig. 6 and Fig. 7, after the processing above, the contour of the hand is extracted and its front-most position is located; that front-end position is then assigned to the virtual mouse, and left-button or right-button, single-click or double-click mouse operations are simulated, completing the man-machine interaction.
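A sketch of mapping the front-most contour point to a simulated click. Two assumptions are made here, since the description leaves them open: "front-most" is read as the topmost contour point, and `simulate_click` merely packages the event (a real Windows implementation would pass these coordinates to the OS cursor and click APIs).

```python
import numpy as np

def front_point(contour):
    # Take the contour point with the smallest y as the "front" of the
    # hand; topmost is an assumption, as the direction is unspecified.
    return contour[np.argmin(contour[:, 1])]

def simulate_click(point, button="left"):
    # Stand-in for the mouse emulation step: package the virtual-mouse
    # event that an OS-level call would actually dispatch.
    return {"x": int(point[0]), "y": int(point[1]), "button": button}

contour = np.array([[12, 30], [15, 22], [20, 28], [18, 25]])  # (x, y) points
event = simulate_click(front_point(contour), "left")
print(event)
```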
Claims (9)
1. A vision-based man-machine interaction method, mainly comprising human-computer interaction functions realized with vision techniques and human-computer interaction techniques, characterized in that the man-machine interaction is realized by recognizing actions of a target object that carry operational semantics, rather than by operating a mouse.
2. The method according to claim 1, characterized in that the action recognition of the target object comprises four steps: video acquisition, image processing, target identification, and mouse emulation.
3. The method according to claim 2, characterized in that the video acquisition step displays the video in a video window (taking one frame at a time), then encodes and stores the video for the image processing step.
4. The method according to claim 2, characterized in that the image processing step processes the images collected by the acquisition step, including detecting in the images object actions that carry operational semantics.
5. The method according to claim 2, characterized in that the target identification step recognizes the action of the target object by applying an improved frame difference method and background subtraction to the collected images.
6. The method according to claim 2, characterized in that the mouse emulation step uses contour tracking and corner detection to obtain an accurate coordinate of the hand, assigns the position of the hand to the mouse, and thereby completes operations of the WINDOWS system.
7. The method according to claim 5, characterized in that the improved frame difference method, building on the difference between each frame and the background, further fuses the difference between two adjacent frames with the background difference.
8. A vision-based man-machine interaction system, characterized by comprising a video acquisition module, an image processing module, a target identification module, and a mouse emulation module.
9. The system according to claim 8, characterized in that the camera in the video acquisition module is an infrared camera fitted with an infrared filter.
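The improved frame difference method of claim 7 can be sketched in NumPy. The AND operator used below to fuse the adjacent-frame difference with the background difference is an assumption; the claims do not specify the fusion rule.

```python
import numpy as np

def improved_frame_diff(prev, curr, background, thresh=30):
    # Difference of two adjacent frames: captures motion, but only at the
    # leading and trailing edges of the moving object.
    frame_diff = np.abs(curr.astype(int) - prev.astype(int)) > thresh
    # Difference against the background: captures the whole object, but is
    # sensitive to any background change.
    bg_diff = np.abs(curr.astype(int) - background.astype(int)) > thresh
    # Fuse the two masks; AND keeps only pixels that are both moving and
    # foreground (the fusion operator is an assumption, see above).
    return frame_diff & bg_diff

bg = np.zeros((6, 6), dtype=np.uint8)
prev = bg.copy(); prev[1:3, 1:3] = 200      # object at its old position
curr = bg.copy(); curr[2:4, 2:4] = 200      # object after moving
print(improved_frame_diff(prev, curr, bg).sum())
```

Combining the two differences this way suppresses both the "ghost" at the object's old position (rejected by the background difference) and static background clutter (rejected by the frame difference).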
Priority Applications (1)
Application Number | Priority Date | Filing Date | Title |
---|---|---|---|
CN201310296237.4A | 2013-07-10 | 2013-07-10 | Vision-based man-machine interaction method and system |
Publications (1)
Publication Number | Publication Date |
---|---|
CN104281253A | 2015-01-14 |
Family
ID=52256221
Family Applications (1)
Application Number | Title | Priority Date | Filing Date |
---|---|---|---|
CN201310296237.4A (Pending) | Vision-based man-machine interaction method and system | 2013-07-10 | 2013-07-10 |
Country Status (1)
Country | Link |
---|---|
CN (1) | CN104281253A (en) |
Citations (4)
Publication number | Priority date | Publication date | Assignee | Title |
---|---|---|---|---|
CN101923433A * | 2010-08-17 | 2010-12-22 | Beihang University | Man-computer interaction mode based on hand shadow identification |
CN102662471A * | 2012-04-09 | 2012-09-12 | Shenyang Aerospace University | Computer vision mouse |
CN202548758U * | 2012-04-11 | 2012-11-21 | Luo Qing | Interactive projection system |
CN102854983A * | 2012-09-10 | 2013-01-02 | The 28th Research Institute of China Electronics Technology Group Corporation | Man-machine interaction method based on gesture recognition |
- 2013-07-10: Application CN201310296237.4A filed in China (CN); published as CN104281253A, status Pending.
Legal Events

Date | Code | Title | Description
---|---|---|---
| C06 | Publication | |
| PB01 | Publication | |
| C10 | Entry into substantive examination | |
| SE01 | Entry into force of request for substantive examination | |
| WD01 | Invention patent application deemed withdrawn after publication | Application publication date: 2015-01-14 |