CN203070205U - Input equipment based on gesture recognition - Google Patents

Input equipment based on gesture recognition

Info

Publication number
CN203070205U
CN203070205U · CN 201220299370
Authority
CN
China
Prior art keywords
user
module
input equipment
stream data
Prior art date
Legal status (The legal status is an assumption and is not a legal conclusion. Google has not performed a legal analysis and makes no representation as to the accuracy of the status listed.)
Expired - Fee Related
Application number
CN 201220299370
Other languages
Chinese (zh)
Inventor
刘广松
Current Assignee (The listed assignees may be inaccurate. Google has not performed a legal analysis and makes no representation or warranty as to the accuracy of the list.)
Suzhou Touchair Technology Co., Ltd.
Original Assignee
Dry Line Consulting (Beijing) Technology Co., Ltd.
Priority date (The priority date is an assumption and is not a legal conclusion. Google has not performed a legal analysis and makes no representation as to the accuracy of the date listed.)
Filing date
Publication date
Application filed by Dry Line Consulting (Beijing) Technology Co., Ltd.
Priority to CN 201220299370
Application granted
Publication of CN203070205U
Anticipated expiration
Current legal status: Expired - Fee Related

Landscapes

  • User Interface Of Digital Computer (AREA)

Abstract

The utility model provides an input device based on gesture recognition. The device comprises a camera module, an image processing module, and a universal interface module. The camera module, whose field of view faces the area in front of the user, captures in real time an image stream containing depth-of-field information for the user's forward field of view. The image processing module receives the image stream from the camera module, extracts the depth information from it, tracks the region occupied by the hand in the image stream according to the features of the user's hand, and combines the result with the depth information to obtain, in real time, the spatial three-dimensional position of the user's hand and the motion states of the fingers. The universal interface module outputs this position and finger-state information to a host computer. The device can serve as a standard input device in the same way as a mouse or keyboard, and its body can be integrated into a wearable device or into a functional module on a wearable device, giving it good portability.

Description

Input device based on gesture recognition
Technical Field
The utility model relates to the field of computer technology, and more particularly to an input device based on gesture recognition.
Background Art
With the progress of society and the arrival of the information-explosion era, people increasingly rely on all kinds of consumer electronic devices (such as mobile terminals and personal digital assistants (PDAs)) to obtain information: making phone calls to communicate with others, browsing web pages for news, checking e-mail, and so on. The human-computer interaction methods in widespread use today include traditional hardware devices such as the keyboard and mouse, as well as the touch screens that have gradually come into vogue in recent years.
People are not satisfied with existing modes of human-computer interaction; they expect a new generation of human-computer interaction that is as natural, accurate, and fast as interaction between people. In the 1990s, research on human-computer interaction entered the multi-modal stage, known as natural human-computer interaction (Human-Computer Nature Interaction, HCNI, or Human-Machine Nature Interaction, HMNI).
In recent years natural human-computer interaction has received wide attention, and research on gesture recognition for natural human-machine interaction has made significant progress. With the development of flat touch-screen interfaces, represented by devices such as Apple's iPhone and iPad, users can interact directly with on-screen content through touch gestures such as clicking, dragging, and pinching to zoom in and out.
With Microsoft's Kinect as a representative of somatosensory (body-sensing) technology, users can control on-screen content naturally through gestures performed in space.
However, existing gesture-recognition devices are applicable only to particular devices and particular interactive interfaces. They are not standard input/output devices, cannot interoperate with other equipment, cannot be applied in a wide variety of usage situations, and are not easy to carry independently.
Summary of the Utility Model
The utility model provides an input device based on gesture recognition that can serve as a standard input device, like a mouse or keyboard, can interoperate with other equipment, and can be applied in a wide variety of usage situations rather than only to particular devices and particular interactive interfaces.
The technical solution of the utility model is as follows:
An input device based on gesture recognition comprises a camera module, an image processing module, and a universal interface module, wherein:
the camera module, whose field of view faces the area in front of the user, captures in real time an image stream containing depth-of-field information for the user's forward field of view;
the image processing module receives the image stream from the camera module, extracts the depth information from it, tracks the region occupied by the hand in the image stream according to the features of the user's hand, and combines the result with the depth information to obtain in real time the spatial three-dimensional position of the user's hand and the motion states of the fingers;
the universal interface module outputs the spatial three-dimensional position of the user's hand and the motion states of the fingers to a host computer.
The camera module captures the depth-bearing image stream of the user's forward field of view in real time at a rate of at least 30 frames per second.
The image processing module parses the image stream according to the features of the user's hand, combining a skin-color detection algorithm with a template-recognition algorithm, in order to track the region occupied by the hand in the image stream.
The universal interface module outputs the spatial three-dimensional position of the user's hand and the motion states of the fingers to the host computer through a wired or wireless data interface.
The wired data interface is a Universal Serial Bus (USB) data interface.
The wireless data interface is a Bluetooth, infrared, or WiFi data interface.
The camera module, image processing module, and universal interface module may be integrated to form an independent wearable device.
Alternatively, the camera module, image processing module, and universal interface module may be integrated to form an independent functional unit of a wearable device.
The image processing module uses the template-recognition algorithm to identify the user's hand in real time as a skeleton model in which each finger is characterized by a straight line and each joint of the hand by a point.
As can be seen from the above technical scheme, the embodiments of the utility model provide a camera module, whose field of view faces the area in front of the user and which captures in real time an image stream containing depth-of-field information; an image processing module, which receives the image stream, extracts the depth information, tracks the region occupied by the hand according to the features of the user's hand, and obtains in real time the spatial three-dimensional position of the hand and the motion states of the fingers; and a universal interface module, which outputs this information to a host computer. The utility model thus provides a general-purpose gesture-recognition interactive device: equipment developed from this solution can serve as a standard input device, like a mouse or keyboard, can interoperate with other equipment, and can be applied in a wide variety of usage situations rather than only to particular devices and particular interactive interfaces.
Moreover, the utility model can be integrated into a wearable device, or integrated as a functional module on a wearable device, and is therefore highly portable.
In addition, the user can wear the utility model in several positions, such as on the head, the chest, or a shoulder, so it is suitable for many scenarios.
Brief Description of the Drawings
Fig. 1 is a diagram of the connection between the gesture-recognition input device of the utility model and a host computer.
Fig. 2 is a schematic diagram of a hand skeleton system.
Detailed Description
To make the purpose, technical scheme, and advantages of the utility model clearer, the utility model is described in further detail below with reference to the accompanying drawings.
The utility model provides a general-purpose gesture-recognition device that can be widely used with existing human-computer interfaces, allowing people to interact with machines naturally through gestures.
In one embodiment, the device comprises a camera module, an image processing module, and a universal interface module.
Physically, the camera module, image processing module, and universal interface module can be integrated into a single wearable device.
Alternatively, they can be integrated as a functional module on some wearable device.
The user can wear this wearable device or functional module in several positions, such as on the head, the chest, or a shoulder. The field of view of the camera module faces forward, so that the user's hand enters the camera's field of view when raised.
For example, the camera module captures in real time, at a rate of at least 30 frames per second, an image stream of the forward field of view containing depth-of-field information; the subsequent image processing module analyses this stream to obtain the depth information of the scene in the field of view and, further, the position and motion information of the hand and fingers.
The image processing module receives the image stream from the camera module, obtains the depth information of the scene in front of the camera by means of a software algorithm, detects the appearance of a hand in the camera's field of view according to the features of the human hand, and tracks the position and motion information of the hand and fingers in real time.
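The depth-based hand detection just described can be sketched in a few lines. The following Python sketch is illustrative only (it is not code from the patent): it assumes depth frames arrive as arrays of millimetre values and that a raised hand occupies a near depth band in front of the camera; the band limits and frame contents are invented for the example.

```python
import numpy as np

def segment_hand(depth_frame, near=300, far=900):
    """Binary mask of pixels in the depth band where a raised hand is
    expected (values in millimetres); depth 0 marks invalid pixels."""
    valid = depth_frame > 0
    return valid & (depth_frame >= near) & (depth_frame <= far)

def hand_centroid(mask):
    """Centroid (row, col) of the mask, or None if no hand pixels."""
    ys, xs = np.nonzero(mask)
    if len(xs) == 0:
        return None
    return (ys.mean(), xs.mean())

# A synthetic 480x640 depth frame with a hand-sized blob at ~600 mm.
frame = np.full((480, 640), 2000, dtype=np.uint16)  # background at 2 m
frame[200:280, 300:360] = 600                       # blob in the near band
mask = segment_hand(frame)
print(hand_centroid(mask))  # centre of the blob
```

In a real pipeline the mask would then be refined by the hand-feature checks the patent describes, rather than taken as-is.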
Fig. 2 is a schematic diagram of a hand skeleton system. In practice, the software algorithm can include a skeleton-tracking system for the hand: the algorithm identifies the hand in real time as a skeleton model (as shown in Fig. 2) in which each finger is characterized by a straight line and each joint of the hand by a point, so that the three-dimensional spatial position of the hand and the motion states of the fingers can be obtained in real time.
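The skeleton model just described, with fingers as straight segments and joints as points, can be represented by a minimal data structure. The classes and the length-based `extended` heuristic below are illustrative assumptions, not structures specified in the patent:

```python
from dataclasses import dataclass

@dataclass
class Joint:
    """A hand joint, characterized as a 3-D point (x, y, z in mm)."""
    x: float
    y: float
    z: float

@dataclass
class Finger:
    """A finger, characterized as a straight segment between two joints."""
    base: Joint
    tip: Joint

    def extended(self, min_len=40.0):
        """Crude 'extended vs. curled' state from segment length (mm)."""
        dx = self.tip.x - self.base.x
        dy = self.tip.y - self.base.y
        dz = self.tip.z - self.base.z
        return (dx * dx + dy * dy + dz * dz) ** 0.5 >= min_len

index = Finger(Joint(0, 0, 600), Joint(0, 70, 590))
print(index.extended())
```

A full tracker would fit such segments to the segmented hand region each frame and update the joint positions.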
The universal interface module can use any of several common wired (e.g. USB) or wireless (e.g. Bluetooth, WiFi) data interfaces. Once connected to the host, it sends the three-dimensional position of the hand and the motion states of the fingers obtained by the image processing module to the host in real time, in a specific data format.
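The patent does not fix the "specific data format" mentioned above. A hypothetical report layout nevertheless shows the idea: a fixed-size packet carrying the hand's 3-D position and one flag bit per finger, suitable for sending over USB or Bluetooth. The layout is entirely an assumption for illustration.

```python
import struct

# Hypothetical report: little-endian x, y, z hand position in mm
# (int16 each) plus one byte of finger-state flags, one bit per
# finger (1 = extended). Not a format specified in the patent.
REPORT = struct.Struct("<hhhB")

def pack_report(x, y, z, finger_states):
    """Pack one hand-tracking sample into a 7-byte report."""
    flags = 0
    for i, extended in enumerate(finger_states):
        if extended:
            flags |= 1 << i
    return REPORT.pack(x, y, z, flags)

packet = pack_report(120, -45, 600, [True, True, False, False, False])
print(len(packet), packet.hex())
```

A fixed binary layout like this is what lets a host-side driver parse the stream without negotiation, much as a mouse's HID report descriptor does.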
A driver for the device (or module) under the corresponding host operating system is installed on the host side. The driver parses the data transmitted to the host in real time and converts it into interactive operation commands, in the same way that a mouse connected to a computer over USB can control the on-screen pointer for interactive operation once its driver has been installed.
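On the host side, a driver of the kind described would map the reported hand position to pointer coordinates much as a mouse driver maps counts to pointer movement. Below is a minimal sketch under assumed parameters (a ±200 mm working box in front of the camera and a 1920×1080 screen); none of these values come from the patent.

```python
def to_cursor(x_mm, y_mm, screen_w=1920, screen_h=1080, range_mm=400.0):
    """Map a hand position inside a +/-(range_mm/2) working box centred
    on the camera axis to clamped screen-pixel coordinates."""
    nx = (x_mm + range_mm / 2) / range_mm      # 0..1 across the box
    ny = (y_mm + range_mm / 2) / range_mm
    px = max(0, min(screen_w - 1, round(nx * (screen_w - 1))))
    py = max(0, min(screen_h - 1, round(ny * (screen_h - 1))))
    return px, py

print(to_cursor(0, 0))  # centre of the working box -> screen centre
```

A real driver would also smooth the trajectory and translate finger-state changes into click events, which this sketch omits.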
Fig. 1 is a diagram of the connection between the gesture-recognition input device of the utility model and a host computer.
As shown in Fig. 1, the input device based on gesture recognition comprises a camera module, an image processing module, and a universal interface module.
The camera module, whose field of view faces the area in front of the user, captures in real time an image stream containing depth-of-field information for the user's forward field of view.
The image processing module receives the image stream from the camera module, extracts the depth information from it, tracks the region occupied by the hand in the image stream according to the features of the user's hand, and combines the result with the depth information to obtain in real time the spatial three-dimensional position of the user's hand and the motion states of the fingers.
The universal interface module outputs the spatial three-dimensional position of the user's hand and the motion states of the fingers to the host computer.
In one embodiment, the camera module captures the depth-bearing image stream of the user's forward field of view in real time at a rate of at least 30 frames per second.
For example, the camera module can adopt the depth camera sensing device developed by the Israeli company PrimeSense. This device obtains the depth information of the forward field of view in real time: it projects an infrared dot pattern from a sensor and then observes the forward field of view, with the dot pattern superimposed on it, through a conventional CMOS image sensor fitted with an infrared filter. Because the dot pattern deforms according to the distance and shape of the objects reflecting the light, a chip integrated in the device receives the output of the CMOS image sensor in real time and computes the depth of each pixel by analysing the deformation of the dot pattern.
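The per-pixel depth computation described above can be illustrated with the standard reference-plane disparity model used for structured-light sensors: the lateral shift of a projected dot, relative to its position at a known reference depth, determines the depth by triangulation. The baseline, focal length, and reference depth below are illustrative constants, not PrimeSense's actual parameters.

```python
def depth_from_shift(shift_px, baseline_mm=75.0, focal_px=580.0,
                     ref_depth_mm=1000.0):
    """Triangulate depth (mm) from the lateral shift (pixels) of a
    projected IR dot relative to its position at a reference depth,
    using the relation 1/Z = 1/Z_ref + shift / (f * b)."""
    inv_z = 1.0 / ref_depth_mm + shift_px / (focal_px * baseline_mm)
    return 1.0 / inv_z

print(round(depth_from_shift(0.0)))  # zero shift -> the reference plane
```

With this sign convention a positive shift means the surface is nearer than the reference plane, which is why the sensor only needs to match dots against a single stored reference image.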
In one embodiment, the image processing module parses the image stream according to the features of the user's hand, combining a skin-color detection algorithm with a template-recognition algorithm, in order to track the region occupied by the hand in the image stream.
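The skin-color detection half of that combined algorithm can be sketched with the widely used fixed thresholds in YCrCb space (Cr in [133, 173], Cb in [77, 127]). These thresholds are the commonly cited ones from the literature, not values given in the patent, and a real system would tune them and add the template-matching stage.

```python
import numpy as np

def skin_mask_ycrcb(bgr):
    """Fixed-threshold skin detection: convert BGR to the Cr/Cb
    chrominance planes and keep pixels inside the classic skin box."""
    bgr = bgr.astype(np.float32)
    b, g, r = bgr[..., 0], bgr[..., 1], bgr[..., 2]
    cr = 0.5 * r - 0.4187 * g - 0.0813 * b + 128
    cb = -0.1687 * r - 0.3313 * g + 0.5 * b + 128
    return (cr >= 133) & (cr <= 173) & (cb >= 77) & (cb <= 127)

# One skin-coloured pixel and one blue pixel (BGR channel order).
img = np.array([[[80, 120, 200], [200, 80, 40]]], dtype=np.uint8)
print(skin_mask_ycrcb(img).tolist())
```

The resulting mask would then be intersected with the depth-band mask and passed to the template recognizer to reject skin-coloured non-hand regions.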
Preferably, the image processing module uses the template-recognition algorithm to identify the user's hand in real time as a skeleton model in which each finger is characterized by a straight line and each joint of the hand by a point.
Specifically, when the camera module adopts the PrimeSense depth camera sensing device, the image processing module receives the depth-bearing image data frame by frame from that device, analyses the received image data with the combined skin-color detection and template-recognition method to track the region occupied by the hand in the image, and then, combining the depth information, obtains in real time the spatial three-dimensional position of the hand and the motion states of the fingers.
In one embodiment, the universal interface module outputs the spatial three-dimensional position of the user's hand and the motion states of the fingers to the host computer through a wired or wireless data interface. The wired data interface can be a Universal Serial Bus (USB) data interface; the wireless data interface can be a Bluetooth, infrared, or WiFi data interface.
Preferably, the camera module, image processing module, and universal interface module can be integrated to form an independent wearable device.
Alternatively, the camera module, image processing module, and universal interface module can be integrated to form an independent functional unit of a wearable device.
In one embodiment, the camera module and the image processing module can together be regarded as a gesture sensing unit; this particular combination is only one concrete example of such a unit. The gesture sensing unit senses and tracks, in real time, the position of the user's hand within its field of view and at least two states of the hand.
In summary, in the embodiments of the utility model the camera module, whose field of view faces the area in front of the user, captures in real time an image stream containing depth-of-field information; the image processing module receives the image stream, extracts the depth information, tracks the region occupied by the hand according to the features of the user's hand, and obtains in real time the spatial three-dimensional position of the hand and the motion states of the fingers; and the universal interface module outputs this information to a host computer. The utility model thus provides a general-purpose gesture-recognition interactive device: equipment developed from this solution can serve as a standard input device, like a mouse or keyboard, can interoperate with other equipment, and can be applied in a wide variety of usage situations rather than only to particular devices and particular interactive interfaces.
Moreover, the utility model can be integrated into a wearable device, or integrated as a functional module on a wearable device, and is therefore highly portable.
In addition, the user can wear the utility model in several positions, such as on the head, the chest, or a shoulder, so it is suitable for many scenarios.
The above are only preferred embodiments of the utility model and are not intended to limit its scope of protection. Any modification, equivalent replacement, improvement, and the like made within the spirit and principles of the utility model shall fall within its scope of protection.

Claims (5)

1. An input device based on gesture recognition, characterized in that it comprises a camera module, an image processing module, and a universal interface module, wherein:
the camera module, whose field of view faces the area in front of the user, captures in real time an image stream containing depth-of-field information for the user's forward field of view;
the image processing module receives the image stream from the camera module, extracts the depth information from it, tracks the region occupied by the hand in the image stream according to the features of the user's hand, and combines the result with the depth information to obtain in real time the spatial three-dimensional position of the user's hand and the motion states of the fingers;
the universal interface module outputs the spatial three-dimensional position of the user's hand and the motion states of the fingers to a host computer;
the camera module, image processing module, and universal interface module are integrated to form an independent wearable device or an independent functional unit of a wearable device.
2. The input device based on gesture recognition according to claim 1, characterized in that the camera module captures the depth-bearing image stream of the user's forward field of view in real time at a rate of at least 30 frames per second.
3. The input device based on gesture recognition according to claim 1, characterized in that the universal interface module outputs the spatial three-dimensional position of the user's hand and the motion states of the fingers to the host computer through a wired or wireless data interface.
4. The input device based on gesture recognition according to claim 3, characterized in that the wired data interface is a Universal Serial Bus (USB) data interface.
5. The input device based on gesture recognition according to claim 3, characterized in that the wireless data interface is a Bluetooth, infrared, or WiFi data interface.
CN 201220299370 2012-06-21 2012-06-21 Input equipment based on gesture recognition Expired - Fee Related CN203070205U (en)

Priority Applications (1)

Application Number Priority Date Filing Date Title
CN 201220299370 CN203070205U (en) 2012-06-21 2012-06-21 Input equipment based on gesture recognition

Applications Claiming Priority (1)

Application Number Priority Date Filing Date Title
CN 201220299370 CN203070205U (en) 2012-06-21 2012-06-21 Input equipment based on gesture recognition

Publications (1)

Publication Number Publication Date
CN203070205U true CN203070205U (en) 2013-07-17

Family

ID=48768971

Family Applications (1)

Application Number Title Priority Date Filing Date
CN 201220299370 Expired - Fee Related CN203070205U (en) 2012-06-21 2012-06-21 Input equipment based on gesture recognition

Country Status (1)

Country Link
CN (1) CN203070205U (en)

Cited By (8)

* Cited by examiner, † Cited by third party
Publication number Priority date Publication date Assignee Title
WO2015077978A1 (en) * 2013-11-29 2015-06-04 Intel Corporation Controlling a camera with face detection
CN105705993A (en) * 2013-11-29 2016-06-22 英特尔公司 Controlling a camera with face detection
US9628699B2 (en) 2013-11-29 2017-04-18 Intel Corporation Controlling a camera with face detection
TWI586167B (en) * 2013-11-29 2017-06-01 英特爾股份有限公司 Controlling a camera with face detection
CN105705993B (en) * 2013-11-29 2019-09-06 英特尔公司 Video camera is controlled using face detection
CN105890647A (en) * 2016-04-11 2016-08-24 青岛理工大学 Testing system integrated with wearable devices and operation method thereof
CN105890647B (en) * 2016-04-11 2017-12-05 青岛理工大学 A kind of test system and its operation method of integrated wearable device
WO2018028152A1 (en) * 2016-08-12 2018-02-15 信利光电股份有限公司 Image acquisition device and virtual reality device

Similar Documents

Publication Publication Date Title
KR101844390B1 (en) Systems and techniques for user interface control
KR101284797B1 (en) Apparatus for user interface based on wearable computing environment and method thereof
US20130241927A1 (en) Computer device in form of wearable glasses and user interface thereof
US20130265300A1 (en) Computer device in form of wearable glasses and user interface thereof
CN101510121A (en) Interface roaming operation method and apparatus based on gesture identification
CN103135753A (en) Gesture input method and system
WO2019024577A1 (en) Natural human-computer interaction system based on multi-sensing data fusion
EP3486747A1 (en) Gesture input method for wearable device, and wearable device
TW201423612A (en) Device and method for recognizing a gesture
CN104298340A (en) Control method and electronic equipment
CN103092437A (en) Portable touch interactive system based on image processing technology
CN106293099A (en) Gesture identification method and system
Jaemin et al. A robust gesture recognition based on depth data
CN104516649A (en) Intelligent cell phone operating technology based on motion-sensing technology
CN203070205U (en) Input equipment based on gesture recognition
Sreejith et al. Real-time hands-free immersive image navigation system using Microsoft Kinect 2.0 and Leap Motion Controller
KR20100048747A (en) User interface mobile device using face interaction
Kakkoth et al. Survey on real time hand gesture recognition
CN102662471B (en) Computer vision mouse
CN202749066U (en) Non-contact object-showing interactive system
KR20120037739A (en) User interface device and method based on hand gesture recognition
Xu et al. Bare hand gesture recognition with a single color camera
Rustagi et al. Virtual Control Using Hand-Tracking
CN104536568B (en) Detect the dynamic control system of user's head and its control method
CN115047966A (en) Interaction method, electronic equipment and interaction system

Legal Events

Date Code Title Description
C14 Grant of patent or utility model
GR01 Patent grant
ASS Succession or assignment of patent right

Owner name: SUZHOU CHUDA INFORMATION TECHNOLOGY CO., LTD.

Free format text: FORMER OWNER: TOUCHAIR (BEIJING) TECHNOLOGY CO., LTD.

Effective date: 20140213

COR Change of bibliographic data

Free format text: CORRECT: ADDRESS; FROM: 100085 HAIDIAN, BEIJING TO: 215021 SUZHOU, JIANGSU PROVINCE

TR01 Transfer of patent right

Effective date of registration: 20140213

Address after: 215021 A1503, international science and Technology Park, 1355 Jinji Lake Avenue, Suzhou Industrial Park, Jiangsu, China

Patentee after: SUZHOU TOUCHAIR TECHNOLOGY Co.,Ltd.

Address before: 100085. Office building 2, building 2, No. 1, Nongda South Road, Beijing, Haidian District, B-201

Patentee before: TOUCHAIR TECHNOLOGY Co.,Ltd.

DD01 Delivery of document by public notice

Addressee: He Xiaopan

Document name: Approval notice of fee reduction

DD01 Delivery of document by public notice

Addressee: the person in charge of patents, Suzhou Touch Information Technology Co., Ltd.

Document name: payment instructions

DD01 Delivery of document by public notice

Addressee: the person in charge of patents, Suzhou Touch Information Technology Co., Ltd.

Document name: Notice of termination of patent right

DD01 Delivery of document by public notice
CF01 Termination of patent right due to non-payment of annual fee

Granted publication date: 20130717

Termination date: 20210621