CN202110488U - Gesture control system based on computer vision - Google Patents

Gesture control system based on computer vision

Info

Publication number
CN202110488U
Authority
CN
China
Prior art keywords
module
gesture
control system
identification module
image
Prior art date
Legal status (The legal status is an assumption and is not a legal conclusion. Google has not performed a legal analysis and makes no representation as to the accuracy of the status listed.)
Expired - Fee Related
Application number
CN2011201491958U
Other languages
Chinese (zh)
Inventor
郭晓光 (Guo Xiaoguang)
Current Assignee (The listed assignees may be inaccurate. Google has not performed a legal analysis and makes no representation or warranty as to the accuracy of the list.)
HENAN ANRUI DIGITAL TECHNOLOGY CO LTD
Original Assignee
HENAN ANRUI DIGITAL TECHNOLOGY CO LTD
Priority date (The priority date is an assumption and is not a legal conclusion. Google has not performed a legal analysis and makes no representation as to the accuracy of the date listed.)
Filing date
Publication date
Application filed by HENAN ANRUI DIGITAL TECHNOLOGY CO LTD filed Critical HENAN ANRUI DIGITAL TECHNOLOGY CO LTD
Priority to CN2011201491958U priority Critical patent/CN202110488U/en
Application granted granted Critical
Publication of CN202110488U publication Critical patent/CN202110488U/en
Anticipated expiration legal-status Critical
Expired - Fee Related legal-status Critical Current

Landscapes

  • User Interface Of Digital Computer (AREA)

Abstract

Provided is a gesture control system based on computer vision. The gesture control system comprises an image acquisition device, an image processing device and interactive operation equipment. The image processing device is composed of a hand positioning module, a hotspot registration module and a gesture identification module. The image acquisition device is composed of two high-speed cameras, an infrared detector, an image processing chip and a support, wherein the output terminals of the infrared detector and the two high-speed cameras are respectively connected with the hand positioning module. The output terminal of the hand positioning module is connected with the hotspot registration module and the gesture identification module, respectively. The output terminal of one high-speed camera is also connected with the gesture identification module. The output terminals of the gesture identification module and the hotspot registration module are connected to the interactive operation equipment, respectively. In this way, a new input mode is provided, and a computer can be operated by gestures. In addition, gestures can be located and identified without relying on markers, making the system convenient to set up and easy to operate.

Description

Gesture control system based on computer vision
Technical field
The utility model relates to a vision-based computer input device, and specifically to a gesture control system based on computer vision.
Background art
In the field of input devices, traditional input equipment such as the mouse and keyboard can hardly provide a fully immersive and engaging operating experience. In everyday interpersonal communication, people routinely accomplish exchanges not only through spoken and written language but also through gestures. The human hand carries a large amount of interactive information that matches human cognitive habits, and because the hand moves continuously it also carries three-dimensional position information. Hand position and hand gesture can therefore be combined and studied as an inseparable whole; discovering and exploiting this brand-new input mode, namely gesture control, is precisely the core of the system.
Content of the utility model
The technical problem to be solved by the utility model is to provide a gesture control system based on computer vision that allows a computer to be operated by hand gestures, is convenient to set up and is simple to operate.
The technical solution adopted by the utility model to solve the above technical problem is as follows: a gesture control system based on computer vision, composed of an image acquisition device, an image processing device and interactive operation equipment. The image processing device consists of a hand positioning module, a hotspot registration module and a gesture identification module. The image acquisition device consists of two high-speed cameras, an infrared detector, an image processing chip and a support. The output terminals of the infrared detector and the two high-speed cameras are connected to the hand positioning module respectively; the output terminal of the hand positioning module is connected to the hotspot registration module and the gesture identification module respectively; the output terminal of one of the high-speed cameras in the image acquisition device is also connected to the gesture identification module; and the output terminals of the gesture identification module and the hotspot registration module are connected to the interactive operation equipment respectively.
The two high-speed cameras and the infrared detector are arranged in a triangle on the support, and the two high-speed cameras are arranged side by side horizontally.
The infrared detector consists of an infrared transmitter and an infrared receiver.
The interactive operation equipment is a computer with a display.
The beneficial effects of the utility model are as follows: the present technical solution provides a brand-new input mode that allows a computer to be operated by hand gestures. Gestures can be located and identified without relying on markers, so the system is convenient to set up and simple to operate. The system is designed around standard communication protocols and provides flexible interfaces, so its services can be invoked flexibly and the internal modules interoperate well with one another.
Description of drawings
Fig. 1 is a structural schematic diagram of the utility model;
Fig. 2 is a structural schematic diagram of the image acquisition device;
Fig. 3 is a schematic diagram of the circuit connections in the image acquisition device.
Reference numerals in the figures: 1, image acquisition device; 2, interactive operation equipment; 3, image processing device; 4, hand positioning module; 5, hotspot registration module; 6, gesture identification module; 7, high-speed camera; 8, infrared detector; 9, support; 10, image processing chip; 11, infrared transmitter; 12, infrared receiver.
Embodiment
As shown in the figures, a gesture control system based on computer vision is composed of an image acquisition device 1, an image processing device 3 and interactive operation equipment 2. The image processing device 3 consists of a hand positioning module 4, a hotspot registration module 5 and a gesture identification module 6. The image acquisition device 1 consists of two high-speed cameras 7, an infrared detector 8, an image processing chip 10 and a support 9. The output terminals of the infrared detector 8 and the two high-speed cameras 7 are connected to the hand positioning module 4 respectively; the output terminal of the hand positioning module 4 is connected to the hotspot registration module 5 and the gesture identification module 6 respectively; the output terminal of one of the high-speed cameras 7 in the image acquisition device 1 is also connected to the gesture identification module 6; and the output terminals of the gesture identification module 6 and the hotspot registration module 5 are connected to the interactive operation equipment 2 respectively.
The two high-speed cameras 7 and the infrared detector 8 are arranged in a triangle on the support 9, and the two high-speed cameras 7 are arranged side by side horizontally. The two high-speed cameras 7 are controlled by the image processing chip 10 and send the captured image information to it, so that left and right image pairs are obtained.
The infrared detector 8 consists of an infrared transmitter 11 and an infrared receiver 12. The infrared transmitter 11 is controlled by the image processing chip 10 and is responsible for emitting infrared signals; the infrared receiver 12 is controlled by the image processing chip 10, receives the infrared signals and forwards them to the image processing chip 10, so that the infrared detection image is obtained.
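For illustration, the following minimal capture sketch obtains the left/right image pair described above using OpenCV, assuming the two cameras are exposed to the host as ordinary video devices (the device indices 0 and 1 are placeholders); in the patented device this step is performed by the image processing chip 10.

```python
import cv2

# Assumed device indices for the left and right high-speed cameras.
LEFT_CAM, RIGHT_CAM = 0, 1

def grab_stereo_pair():
    """Grab one left/right frame pair (sketch only, no hardware sync)."""
    cap_l, cap_r = cv2.VideoCapture(LEFT_CAM), cv2.VideoCapture(RIGHT_CAM)
    try:
        ok_l, frame_l = cap_l.read()
        ok_r, frame_r = cap_r.read()
        if not (ok_l and ok_r):
            raise RuntimeError("camera read failed")
        return frame_l, frame_r
    finally:
        cap_l.release()
        cap_r.release()
```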
The interactive operation equipment 2 is a computer with a display.
In the utility model, the image acquisition device 1 captures image information of the operator's hand. The hand positioning module 4 tracks the hand target in the collected images using the CamShift algorithm, obtains the pixel coordinates of the hand centroid in the left and right images, and then calculates the three-dimensional coordinates of the centroid from the imaging model and the infrared detection data. After the image captured by one high-speed camera of the image acquisition device 1 is sent to the gesture identification module 6, the collected RGB image is first converted into an HSV image; then, based on two pieces of information, the area and the hand centroid position calculated by the hand positioning module 4, the hand contour is extracted, the convex hull of the contour point set is found, and the gesture is identified from the ratio of the original contour area to the convex hull area. The hotspot registration module 5 converts coordinates from the coordinate system of the image acquisition device 1 to the operating system coordinate system of the interactive operation equipment 2, so that the corresponding position in the operating system can be operated according to the gesture.
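A minimal sketch of the contour/convex-hull ratio test described above, written with OpenCV; the HSV skin-tone thresholds, the choice of the largest blob as the hand and the 0.8 decision threshold are illustrative assumptions, not values given in the patent.

```python
import cv2
import numpy as np

def classify_gesture(frame_bgr, hsv_lo=(0, 48, 80), hsv_hi=(20, 255, 255)):
    """Return a gesture label from the contour-area / hull-area ratio.
    Thresholds are placeholder assumptions, not patent values."""
    hsv = cv2.cvtColor(frame_bgr, cv2.COLOR_BGR2HSV)            # RGB/BGR -> HSV
    mask = cv2.inRange(hsv, np.array(hsv_lo), np.array(hsv_hi))
    contours, _ = cv2.findContours(mask, cv2.RETR_EXTERNAL,     # OpenCV 4.x signature
                                   cv2.CHAIN_APPROX_SIMPLE)
    if not contours:
        return None
    hand = max(contours, key=cv2.contourArea)                   # assume largest blob is the hand
    hull = cv2.convexHull(hand)
    ratio = cv2.contourArea(hand) / max(cv2.contourArea(hull), 1e-6)
    # Deep concavities between fingers lower the ratio (open hand);
    # a ratio near 1 suggests a closed fist.
    return "open_hand" if ratio < 0.8 else "fist"
```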
Based on an analysis of the functional requirements of the system to be designed, the designed system mainly comprises six parts: the operator, the image acquisition device, the hand positioning module, the hotspot registration module, the gesture identification module and the interactive operation equipment. The six parts interact with one another and form a closed loop. Specifically:
(1) Operator: the operator in the real environment is the input end of the whole system. In the start-up stage, the operator only needs to place a hand (a natural hand carrying no markers) within the effective imaging range of the dedicated image acquisition device. Depending on the specific Windows application, in each frame of operation the system changes the hotspot in the Windows environment according to the hand shape and hand position in front of the dedicated image acquisition device, thereby achieving the effect of operating the Windows environment without touching it.
(2) Image acquisition device: its main function is to serve as the image acquisition source and capture the scene in front of the cameras in real time. Because hand positioning uses both computer vision theory and infrared detection, this module must supply not a single picture but a pair of left and right images of the same hand together with an infrared detection depth map. Two dedicated image acquisition devices of the same model and an infrared detector can be fixed on a dedicated support, with their imaging regions adjusted to overlap. The system analyses the images received from the left and right cameras and the infrared detector to obtain the left/right image pair and the infrared depth map, providing the data basis for the hand positioning module and the gesture identification module.
(3) Hand positioning module: this module is one of the core modules of the system. Its main function is, in each frame, to analyse the left/right image pair and the infrared detection image captured by the image acquisition module. It first tracks the hand target in the left and right images of the video stream, then uses the hand centroids in the left and right images as matching points and applies the principles of computer vision together with the infrared depth map to calculate the three-dimensional coordinates of the hand centroid. Another important task at this stage is the calibration of the binocular image acquisition devices. The purpose of calibration is to obtain the essential parameters of each image acquisition device, including its intrinsic parameters, extrinsic parameters and the lens distortion coefficients. If the lens distortion is severe, the left and right images must also be rectified before stereo processing. The accuracy of the calibration directly affects the accuracy of hand positioning.
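A sketch of the calibration and rectification step, under the assumption that a standard calibration target (for example a chessboard) is used to collect matched corner points; the patent only requires that intrinsic parameters, extrinsic parameters and distortion coefficients be obtained.

```python
import cv2

def calibrate_stereo(obj_points, img_points_l, img_points_r, image_size):
    """Calibrate each camera, then estimate the rotation R and translation T
    between them and build the rectification transform (sketch only)."""
    _, K_l, d_l, _, _ = cv2.calibrateCamera(obj_points, img_points_l,
                                            image_size, None, None)
    _, K_r, d_r, _, _ = cv2.calibrateCamera(obj_points, img_points_r,
                                            image_size, None, None)
    # Keep the per-camera intrinsics fixed and solve only for R, T, E, F.
    _, K_l, d_l, K_r, d_r, R, T, E, F = cv2.stereoCalibrate(
        obj_points, img_points_l, img_points_r, K_l, d_l, K_r, d_r,
        image_size, flags=cv2.CALIB_FIX_INTRINSIC)
    # Rectification aligns epipolar lines so stereo matching becomes a row search.
    R1, R2, P1, P2, Q, _, _ = cv2.stereoRectify(K_l, d_l, K_r, d_r,
                                                image_size, R, T)
    return K_l, d_l, K_r, d_r, R, T, Q
```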
According to the principle of binocular computer vision, the CamShift algorithm is combined with infrared depth detection to accurately calculate the three-dimensional coordinates of the target relative to the optical centre of the left (or right) lens. The captured pictures are processed so as to recover the three-dimensional information corresponding to the original scene. Binocular vision is an important part of computer vision: two cameras fixed at different positions continuously capture the same scene, and the disparity of the same spatial point between the two images is then used to calculate the three-dimensional coordinates of that point. Finally, the infrared detection data are combined with a grey-scale-mapping adaptive enhancement algorithm to compute a suitable depth range, which reduces external interference and restricts processing to the main effective region.
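A sketch of how the left/right centroids and the rectified stereo geometry yield the three-dimensional centroid, assuming a rectified rig with focal length f (in pixels) and baseline B (in metres); the CamShift call mirrors the tracking step named above, with the back-projection histogram supplied by the caller.

```python
import cv2
import numpy as np

def hand_centroid(frame_bgr, track_window, hist):
    """Track the hand with CamShift and return the window centre (sketch)."""
    hsv = cv2.cvtColor(frame_bgr, cv2.COLOR_BGR2HSV)
    back_proj = cv2.calcBackProject([hsv], [0], hist, [0, 180], 1)
    crit = (cv2.TERM_CRITERIA_EPS | cv2.TERM_CRITERIA_COUNT, 10, 1)
    rot_rect, track_window = cv2.CamShift(back_proj, track_window, crit)
    (cx, cy), _, _ = rot_rect
    return (cx, cy), track_window

def centroid_to_3d(u_left, u_right, v, f_px, baseline_m, cx, cy):
    """Triangulate a matched centroid on a rectified pair: Z = f * B / disparity."""
    disparity = max(u_left - u_right, 1e-6)
    Z = f_px * baseline_m / disparity
    X = (u_left - cx) * Z / f_px
    Y = (v - cy) * Z / f_px
    return np.array([X, Y, Z])
```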
(4) Hotspot registration module: the hand centroid coordinates calculated by the hand positioning module are relative to the optical centre of the left (or right) image acquisition device lens. To interact with the Windows environment, the three-dimensional hand centroid must correspond to a point in the Windows environment; that is, a transformation is needed from the left (or right) image acquisition device coordinate system to the Windows environment coordinate system. This step lays a solid foundation for the interaction between the operator's real hand and virtual objects in the virtual world.
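The patent does not specify the form of this transformation, so the sketch below assumes a simple linear mapping of a calibrated working volume in camera coordinates onto the screen; the volume bounds and the screen resolution are placeholders.

```python
import numpy as np

# Assumed working volume in camera coordinates (metres) and target screen size;
# the patent only requires *some* transform from camera to OS coordinates.
WORK_MIN = np.array([-0.30, -0.20, 0.40])   # left/bottom/near corner
WORK_MAX = np.array([ 0.30,  0.20, 1.00])   # right/top/far corner
SCREEN_W, SCREEN_H = 1920, 1080

def camera_to_screen(p_cam):
    """Map a 3D hand centroid in camera coordinates to the pixel coordinates
    of the operating-system hotspot by normalising within the working volume."""
    t = (np.asarray(p_cam) - WORK_MIN) / (WORK_MAX - WORK_MIN)
    t = np.clip(t, 0.0, 1.0)
    x = int(t[0] * (SCREEN_W - 1))
    y = int((1.0 - t[1]) * (SCREEN_H - 1))   # screen y grows downwards
    return x, y
```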
(5) Gesture identification module: this module is also one of the core modules of the system. Its main function is to identify the gesture in front of the lens. The hand positioning module needs two image acquisition devices because it must obtain the hand centroid from the left/right image pair, whereas the gesture identification module only needs one image acquisition device and applies image recognition theory to the images grabbed by that camera to identify specific gestures. In addition, the module supports an extensible gesture library, so that, depending on the specific application, new gestures can be defined and added to the library to satisfy different application requirements. The system uses the contour detection function cvCanny() provided by the OpenCV image library (an implementation of the Canny algorithm) to detect all edges in the binary image and combines them into different contours according to their border regions. The resulting contours are finally compared with the gesture library to identify the gesture.
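A sketch of the edge-detection and library-comparison step using the modern OpenCV Python bindings (cv2.Canny corresponds to the cvCanny() function named above); cv2.matchShapes is an assumed similarity measure, since the patent only states that the extracted contours are compared with the gesture library.

```python
import cv2

def match_against_library(frame_gray, gesture_library, max_distance=0.15):
    """Detect edges, extract contours and compare the largest contour with
    stored template contours; the distance threshold is a placeholder."""
    edges = cv2.Canny(frame_gray, 50, 150)            # binary Canny edge map
    contours, _ = cv2.findContours(edges, cv2.RETR_EXTERNAL,
                                   cv2.CHAIN_APPROX_SIMPLE)
    if not contours:
        return None
    hand = max(contours, key=cv2.contourArea)
    best_name, best_dist = None, float("inf")
    for name, template in gesture_library.items():    # e.g. {"fist": contour, ...}
        d = cv2.matchShapes(hand, template, cv2.CONTOURS_MATCH_I1, 0.0)
        if d < best_dist:
            best_name, best_dist = name, d
    return best_name if best_dist <= max_distance else None
```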
(6) Interactive operation equipment: taking the Windows operating system as an example, to complete the interaction with Windows, a number of hotspots also need to be defined in the Windows environment, together with operating rules, so that all interactions with Windows, including grabbing, moving and releasing operations, are carried out through the hotspots. The hotspot registration module and the gesture identification module supply the position information and gesture information of the hand in the Windows environment, and these two pieces of information are sufficient to drive a virtual hand to perform the corresponding interactive operations. In the end, the hand in the real environment corresponds to a hotspot in the Windows environment: when the real hand moves or changes gesture, the hotspot makes the corresponding change of position and gesture. In this way, combined with the corresponding operating rules, the real hand can drive the hotspot to complete interactive operations with Windows such as grabbing and moving.
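A sketch of how a recognized gesture and a registered screen position could drive the Windows hotspot through the Win32 cursor and mouse-event calls; the gesture names and the "fist means grab, open hand means release" rule are illustrative assumptions rather than rules defined in the patent.

```python
import ctypes

MOUSEEVENTF_LEFTDOWN = 0x0002   # Win32 mouse_event flags
MOUSEEVENTF_LEFTUP   = 0x0004

def drive_hotspot(gesture, screen_xy):
    """Move the Windows hotspot (cursor) to the registered position and map
    assumed gesture labels onto grab/release actions."""
    x, y = screen_xy
    ctypes.windll.user32.SetCursorPos(int(x), int(y))            # move hotspot
    if gesture == "fist":                                        # grab
        ctypes.windll.user32.mouse_event(MOUSEEVENTF_LEFTDOWN, 0, 0, 0, 0)
    elif gesture == "open_hand":                                 # release
        ctypes.windll.user32.mouse_event(MOUSEEVENTF_LEFTUP, 0, 0, 0, 0)
```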
Each functional module of the utility model can be implemented by providing a corresponding program in the computer.
The gesture control system of the utility model takes account of system security under the Windows environment: effective measures are taken for every device making up the system, potential security problems are found and handled by building threat models, and, following the Windows security model, secure coding practices are used to safeguard the stability and security of the system. In its design, construction and selection of components, the system follows the principles of advancement and practicality, keeping technical pace with the construction of other systems, extending the life cycle of the system as far as possible, and ensuring compatibility with existing equipment and with equipment that may be adopted in the foreseeable future. Advanced systems and technologies are adopted in both the structural and functional design, so that richer functions can be added on the premise of safe and reliable operation. The design and component selection pay full attention to practicality, reduce the overall investment, seek a unity of advancement and economy, achieve a good overall performance/price ratio, proceed from the actual needs of management, and integrate and upgrade the existing environment and equipment. Taking the improvement of overall operating efficiency as its starting point and following the principle of simplicity, the system is designed for the actual requirements and the existing environment; it is simple to install, convenient to set up and easy to operate.

Claims (4)

1. A gesture control system based on computer vision, characterized in that it is composed of an image acquisition device (1), an image processing device (3) and interactive operation equipment (2); the image processing device (3) consists of a hand positioning module (4), a hotspot registration module (5) and a gesture identification module (6); the image acquisition device (1) consists of two high-speed cameras (7), an infrared detector (8), an image processing chip (10) and a support (9); the output terminals of the infrared detector (8) and the two high-speed cameras (7) are connected to the hand positioning module (4) respectively; the output terminal of the hand positioning module (4) is connected to the hotspot registration module (5) and the gesture identification module (6) respectively; the output terminal of one high-speed camera (7) in the image acquisition device (1) is also connected to the gesture identification module (6); and the output terminals of the gesture identification module (6) and the hotspot registration module (5) are connected to the interactive operation equipment (2) respectively.
2. The gesture control system based on computer vision as claimed in claim 1, characterized in that the two high-speed cameras (7) and the infrared detector (8) are arranged in a triangle on the support (9), and the two high-speed cameras (7) are arranged side by side horizontally.
3. The gesture control system based on computer vision as claimed in claim 1, characterized in that the infrared detector (8) consists of an infrared transmitter (11) and an infrared receiver (12).
4. The gesture control system based on computer vision as claimed in claim 1, characterized in that the interactive operation equipment (2) is a computer with a display.
CN2011201491958U 2011-05-12 2011-05-12 Gesture control system based on computer vision Expired - Fee Related CN202110488U (en)

Priority Applications (1)

Application Number Priority Date Filing Date Title
CN2011201491958U CN202110488U (en) 2011-05-12 2011-05-12 Gesture control system based on computer vision

Applications Claiming Priority (1)

Application Number Priority Date Filing Date Title
CN2011201491958U CN202110488U (en) 2011-05-12 2011-05-12 Gesture control system based on computer vision

Publications (1)

Publication Number Publication Date
CN202110488U true CN202110488U (en) 2012-01-11

Family

ID=45435934

Family Applications (1)

Application Number Title Priority Date Filing Date
CN2011201491958U Expired - Fee Related CN202110488U (en) 2011-05-12 2011-05-12 Gesture control system based on computer vision

Country Status (1)

Country Link
CN (1) CN202110488U (en)

Cited By (5)

* Cited by examiner, † Cited by third party
Publication number Priority date Publication date Assignee Title
CN103809738A (en) * 2012-11-13 2014-05-21 联想(北京)有限公司 Information acquisition method and electronic device
CN103809738B (en) * 2012-11-13 2017-03-29 联想(北京)有限公司 A kind of information collecting method and electronic equipment
WO2016004864A1 (en) * 2014-07-10 2016-01-14 宁波舜宇光电信息有限公司 Gesture recognition module testing apparatus, and testing method thereof
CN108052901A (en) * 2017-12-13 2018-05-18 中国科学院沈阳自动化研究所 A kind of gesture identification Intelligent unattended machine remote control method based on binocular
CN108052901B (en) * 2017-12-13 2021-05-25 中国科学院沈阳自动化研究所 Binocular-based gesture recognition intelligent unmanned aerial vehicle remote control method

Similar Documents

Publication Publication Date Title
KR102541812B1 (en) Augmented reality within a field of view that includes a mirror image
US9256986B2 (en) Automated guidance when taking a photograph, using virtual objects overlaid on an image
JP5912059B2 (en) Information processing apparatus, information processing method, and information processing system
CN102638653B (en) Automatic face tracing method on basis of Kinect
CN106371281A (en) Multi-module 360-degree space scanning and positioning 3D camera based on structured light
CN103258316B (en) Method and device for picture processing
Langlotz et al. Online creation of panoramic augmented reality annotations on mobile phones
TW201915943A (en) Method, apparatus and system for automatically labeling target object within image
CN104656893B (en) The long-distance interactive control system and method in a kind of information physical space
US20220392264A1 (en) Method, apparatus and device for recognizing three-dimensional gesture based on mark points
CN104881526B (en) Article wearing method based on 3D and glasses try-on method
EP2880863A1 (en) Context-driven adjustment of camera parameters
JP2012212343A (en) Display control device, display control method, and program
WO2014062663A1 (en) System and method for combining data from multiple depth cameras
CN104700385B (en) The binocular visual positioning device realized based on FPGA
CN203012636U (en) Man-machine interactive system based on laser projection positioning
KR20130034125A (en) Augmented reality function glass type monitor
CN203027358U (en) Adaptive sight line tracking system
CN104460951A (en) Human-computer interaction method
CN110164060B (en) Gesture control method for doll machine, storage medium and doll machine
WO2015093130A1 (en) Information processing device, information processing method, and program
CN107610157A (en) A kind of unmanned plane target method for tracing and system
CN202110488U (en) Gesture control system based on computer vision
CN110096152A (en) Space-location method, device, equipment and the storage medium of physical feeling
CN105335451A (en) Processing method and apparatus for display data in finder frame, shooting method and terminal

Legal Events

Date Code Title Description
C14 Grant of patent or utility model
GR01 Patent grant
C17 Cessation of patent right
CF01 Termination of patent right due to non-payment of annual fee

Granted publication date: 20120111

Termination date: 20130512