CN103208002B - Gesture recognition control method and system based on hand contour features - Google Patents

Gesture recognition control method and system based on hand contour features

Info

Publication number
CN103208002B
CN103208002B CN201310123587.0A CN201310123587A CN103208002B
Authority
CN
China
Prior art keywords
gesture
frame
video
outline
image
Prior art date
Legal status (The legal status is an assumption and is not a legal conclusion. Google has not performed a legal analysis and makes no representation as to the accuracy of the status listed.)
Expired - Fee Related
Application number
CN201310123587.0A
Other languages
Chinese (zh)
Other versions
CN103208002A (en)
Inventor
徐增敏
蒋英春
段雪峰
关健
Current Assignee (The listed assignees may be inaccurate. Google has not performed a legal analysis and makes no representation or warranty as to the accuracy of the list.)
Guilin University of Electronic Technology
Original Assignee
Guilin University of Electronic Technology
Priority date (The priority date is an assumption and is not a legal conclusion. Google has not performed a legal analysis and makes no representation as to the accuracy of the date listed.)
Filing date
Publication date
Application filed by Guilin University of Electronic Technology filed Critical Guilin University of Electronic Technology
Priority to CN201310123587.0A priority Critical patent/CN103208002B/en
Publication of CN103208002A publication Critical patent/CN103208002A/en
Application granted granted Critical
Publication of CN103208002B publication Critical patent/CN103208002B/en
Expired - Fee Related legal-status Critical Current
Anticipated expiration legal-status Critical


Landscapes

  • Image Analysis (AREA)

Abstract

The present invention discloses a gesture recognition control method and system based on hand contour features, consisting mainly of an initial setup step, an image preprocessing step, a gesture recognition step, a command parsing step, and a step for controlling the target application. The invention performs gesture recognition by extracting outer contours from video frames, using the gesture feature area and gesture feature perimeter to identify gestures, and using the convex hull defect area to distinguish a clenched fist from an open palm, thereby effectively improving the accuracy and efficiency of hand recognition. Only conventional computations such as perimeter and area are used during recognition, rather than complex image feature sample matching, so no additional sample library is needed, and the method can readily serve other application developments, such as gesture-controlled photo viewers, music players, web browsers, and games.

Description

Gesture recognition control method and system based on hand contour features
Technical field
The present invention relates to the field of human-computer interaction, and in particular to a gesture recognition control method and system based on hand contour features.
Background technology
Controlling a computer with gestures is gradually becoming a major direction and trend in human-computer interaction. At present, controlling application software inside a computer (such as video players and PowerPoint presentations) usually requires auxiliary hardware such as a keyboard, mouse, or presenter pen, which does not free the user's hands. For example, when people use PowerPoint-style presentation software to assist a talk, the speaker controls the slides through a keyboard, mouse, or presenter pen, leaving no free hand for interacting with the audience.
Existing gesture interaction systems adopt feature-library matching, and therefore suffer from large pattern-recognition sample libraries, long matching times, and heavy computation; their architectures can only be designed around specific interaction demands and tasks, so implementations differ in realization and design, each setting its own constraints, characteristic parameters, and per-stage algorithms. Moreover, because the tasks running inside devices such as computers, smartphones, and video terminals differ, as do their application scenarios, gestures are diverse and ambiguous, and complex backgrounds and lighting reflections reduce recognition accuracy. A gesture interaction system therefore has to be developed so that the gestures actually entered by the operator trigger the corresponding reaction or control for a particular task and application scenario, which leads to long recognition times and poor portability, degrading the human-computer interaction experience.
Summary of the invention
The technical problem to be solved by the present invention is that existing gesture recognition control methods suffer from large pattern-recognition sample libraries, long matching times, and poor portability; the invention provides a gesture recognition control method and system based on hand contour features.
For solving the problem, the present invention is achieved by the following technical solutions:
A gesture recognition control method based on hand contour features comprises the following steps:
Step 1: set the initial parameters of the system, including the video frame threshold, the gesture feature area threshold, the gesture feature perimeter threshold, the convex hull defect area threshold, the object distance (i.e., the distance between the camera and the operator), and the control commands corresponding to the four gesture directions (up, down, left, right).
Step 2: capture a video stream of the operator with the camera and send it to the computer, where each video frame undergoes image preprocessing.
Step 3: the computer finds all outer contours of the video frame, puts the contours of the current frame into a doubly linked contour list, then traverses the list and excludes noise contours.
Step 4: if the boundary of an outer contour is close to the boundary of the whole video frame, the computer judges that this contour is not a gesture, returns to the list-traversal loop of step 3, and continues checking the other contours of the current frame; otherwise it proceeds to step 5 to continue gesture recognition on this contour.
Step 5: compute the area of the outer contour; if the ratio of the contour area to the video rectangle area is less than the ratio of the preset gesture feature area threshold to the square of the object distance, the computer judges that this contour is not a gesture and returns to the loop of step 3; otherwise it proceeds to step 6.
Step 6: compute the perimeter of the contour's bounding rectangle; if the ratio of this perimeter to the video rectangle perimeter is greater than the ratio of the preset gesture feature perimeter threshold to the object distance, the computer judges that this contour is not a gesture and returns to the loop of step 3; otherwise it proceeds to step 7.
Step 7: compute the convex hull defect area of the contour. If it is less than the preset convex hull defect area threshold, the computer judges the contour to be a clenched-fist gesture and computes its center coordinates (steps 2-7 are repeated on successive frames until a fist is recognized). If the defect area is greater than the threshold, the contour is judged to be an open-palm gesture and its center coordinates are computed.
Step 8: subtract the fist center coordinates from the palm center coordinates to obtain the gesture offset vector, resolve the operator's gesture semantics (up, down, left, right) from the quadrant in which the offset vector lies, translate them into the corresponding control command, and number the parsed commands to generate a command queue.
Step 9: the computer takes the control commands out of the command queue one by one and sends the virtual keyboard message corresponding to each command to the target application, which finally responds to the command.
A gesture recognition control system based on hand contour features comprises an initial setup module, an image preprocessing module, a gesture recognition module, a command parsing module, and a target-application control module.
The initial setup module sets the initial parameters of the system: the video frame threshold, gesture feature area threshold, gesture feature perimeter threshold, convex hull defect area threshold, object distance (i.e., the distance between the camera and the operator), and the control commands corresponding to the four gesture directions.
The image preprocessing module performs image preprocessing on each frame of the video stream of the operator captured by the camera.
The gesture recognition module searches the preprocessed frame for outer contours and judges whether a clenched-fist gesture appears; if not, it continues with the next frame. Once a fist is recognized, its center coordinates are passed to the command parsing module, the same preprocessing is applied to the following frames, and the module then searches them for an open-palm gesture, again continuing frame by frame until one appears; when a palm is recognized, its center coordinates are also passed to the command parsing module.
The command parsing module subtracts the fist center coordinates from the palm center coordinates to obtain the gesture offset vector, resolves the gesture semantics (up, down, left, right) from the quadrant of the vector, translates them into the corresponding control command, and numbers the parsed commands to generate a command queue.
The target-application control module takes the control commands out of the queue one by one and sends the corresponding virtual keyboard message to the target application, which then responds to the command.
Compared with the prior art, the present invention has the following features:
1) Gesture recognition is performed by extracting outer contours from video frames, using the gesture feature area and perimeter to identify gestures and the convex hull defect area to distinguish a clenched fist from an open palm, effectively improving the accuracy and efficiency of hand recognition.
2) Only conventional computations such as perimeter and area are used during recognition, rather than complex image feature sample matching, so no additional sample library is needed and the method can readily serve other application developments, such as gesture-controlled photo viewers, music players, web browsers, and games.
Accompanying drawing explanation
Fig. 1 is a flowchart of the gesture recognition control method based on hand contour features;
Fig. 2 is a schematic diagram of the convex hull and convex hull defects of a gesture contour;
Fig. 3 is a flowchart of the image preprocessing;
Fig. 4 shows one quadrant-to-command control rule;
Fig. 5 is a block diagram of the gesture recognition control system based on hand contour features.
Embodiment
Taking a slide presentation (PPT) as the target application, the present invention is further elaborated below.
A gesture recognition control method based on hand contour features, as shown in Fig. 1, comprises the following steps:
Step 1: the startup step, in which the system initial parameters are set. For example, a camera is connected to the computer, the parameters in the settings panel are filled in, and once the slide file to be opened is specified and the connection succeeds, the video frames obtained from the camera are fed into the image preprocessing step. In a concrete implementation an MFC interface can be programmed in VC++, with one start button each for the camera and the slide show to check whether the connection succeeds. The settings panel should include scene settings, fist settings, palm settings, and file settings: the scene parameters comprise the video frame threshold, binarization threshold, object distance, image range, and the control commands corresponding to the four gesture directions; the fist parameters comprise the contour area and the perimeter of the contour's bounding rectangle; the palm parameters comprise the contour area, the bounding-rectangle perimeter, and the convex hull defect area; the file parameters comprise the slide file name and the post-fullscreen file name.
The properties of the gesture contour affect the efficiency and accuracy of gesture recognition. The present invention introduces the concepts of the convex hull and convexity defects of the outer contour. The convex hull is the closed region obtained by connecting the outermost vertices of the hand in turn; in Fig. 2 it is what the dark line around the periphery of the hand traces out. A convexity defect is a gap between two fingers inside the convex hull; the regions marked A to H in Fig. 2 are the convexity defects of the hull. These convexity defects provide a way to characterize the hand and its state, and the present invention accordingly proposes to characterize the palm by the convexity defect area, improving the accuracy and efficiency of hand recognition. The detection part of the recognition comprises face-region exclusion, border-region exclusion, aspect ratio, perimeter and area checks, and convexity defect detection, which greatly improves recognition precision.
Step 2: after the startup step, the start-tracking button begins capturing the video stream of the operator with the camera; the stream is sent to the computer, where each video frame undergoes image preprocessing.
In the present embodiment, the video stream is captured with OpenCV's cvCreateCameraCapture function, and each frame is then obtained with OpenCV's cvQueryFrame function and fed into the image preprocessing step.
The image preprocessing step prepares the image for gesture detection: the video frame is converted from RGB space to YCbCr space, the Cb component is extracted and binarized, and morphological operations are applied in the order dilate-erode-dilate-erode, yielding for each frame a preprocessed hand image that is fed into the gesture recognition step. In the present embodiment, the video image is first collected by the camera attached to the computer and converted from RGB to YCbCr format, separating the luminance and chrominance information of the color space; since the grayscale image obtained this way has strong robustness to illumination, the influence of lighting is effectively reduced. See Fig. 3.
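As an illustration only, the RGB-to-YCbCr conversion described above can be sketched with the standard full-range BT.601 coefficients; the function name and sample pixel values here are ours, not the patent's.

```python
def rgb_to_ycbcr(r, g, b):
    """Standard full-range (JPEG/BT.601) RGB -> YCbCr conversion."""
    y  =       0.299    * r + 0.587    * g + 0.114    * b
    cb = 128 - 0.168736 * r - 0.331264 * g + 0.5      * b
    cr = 128 + 0.5      * r - 0.418688 * g - 0.081312 * b
    return y, cb, cr

# A skin-like RGB value; the chroma channels carry the skin-tone cue.
y, cb, cr = rgb_to_ycbcr(200, 140, 120)
print(round(y), round(cb), round(cr))  # 156 108 160
```

Separating luminance (Y) from chrominance (Cb, Cr) is what gives the illumination robustness the patent relies on: lighting changes mostly move Y while leaving the chroma of skin relatively stable.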
A binary image is then obtained with the global threshold method. Global-threshold binarization determines the threshold solely from the gray value f(i, j) of pixel (i, j): if the gray values of the image f(i, j) are confined to [a, b] and the threshold is t (a ≤ t ≤ b), the general mathematical expression of the binarization is:
g(x, y) = 1 if f(x, y) ≥ t;  g(x, y) = 0 if f(x, y) < t
The resulting g(x, y) is the binary image; different thresholds t yield different binary images g(x, y). Thresholding divides all pixels into two groups, 0 and 255: by choosing a suitable threshold, a multi-gray-level image becomes an image with only two gray levels, the object pixels of interest becoming foreground and the rest background. In an implementation, OpenCV's cvSplit function can first be used to split out the Cr channel, and the thresholding is then done with OpenCV's cvThreshold function; to reach a good segmentation result, the present embodiment limits the threshold t to [135, 150].
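The thresholding rule can be sketched in a few lines of pure Python (the embodiment uses OpenCV's cvThreshold in practice; the function name `binarize` and the toy pixel values below are ours):

```python
def binarize(gray, t):
    """Global threshold: g(x, y) = 1 where f(x, y) >= t, else 0."""
    return [[1 if px >= t else 0 for px in row] for row in gray]

# Toy 3x3 chroma-channel image; the embodiment limits t to [135, 150].
gray = [[130, 140, 150],
        [136, 100, 145],
        [255, 134, 135]]
print(binarize(gray, 140))  # [[0, 1, 1], [0, 0, 1], [1, 0, 0]]
```

In practice the foreground value would be 255 rather than 1, matching the 0/255 grouping the text describes; the 0/1 form mirrors the formula.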
Because the image obtained by global thresholding still contains noise and the finger regions of the binarized gesture image are not cleanly separated, subsequent feature values would deviate and affect the final recognition result. The segmented image is therefore first dilated, to remove the undesirable small holes caused by segmentation, and then eroded. In an implementation, OpenCV's cvMorphologyEx function can combine the two dilate-erode passes into one morphological operation.
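A minimal sketch of the dilate-erode-dilate-erode pass, assuming a 3x3 structuring element on a small binary grid (the embodiment uses cvMorphologyEx; the helper names `dilate`, `erode`, and `preprocess` are ours):

```python
def dilate(img):
    """3x3 binary dilation (neighborhoods clipped at the borders)."""
    h, w = len(img), len(img[0])
    return [[1 if any(img[ny][nx]
                      for ny in range(max(0, y - 1), min(h, y + 2))
                      for nx in range(max(0, x - 1), min(w, x + 2)))
             else 0 for x in range(w)] for y in range(h)]

def erode(img):
    """3x3 binary erosion (neighborhoods clipped at the borders)."""
    h, w = len(img), len(img[0])
    return [[1 if all(img[ny][nx]
                      for ny in range(max(0, y - 1), min(h, y + 2))
                      for nx in range(max(0, x - 1), min(w, x + 2)))
             else 0 for x in range(w)] for y in range(h)]

def preprocess(img):
    """Two dilate-erode passes: the patent's dilate-erode-dilate-erode."""
    for _ in range(2):
        img = erode(dilate(img))
    return img

# A blob with a one-pixel hole of the kind segmentation leaves behind:
blob = [[0, 0, 0, 0, 0, 0, 0],
        [0, 0, 0, 0, 0, 0, 0],
        [0, 0, 1, 1, 1, 0, 0],
        [0, 0, 1, 0, 1, 0, 0],
        [0, 0, 1, 1, 1, 0, 0],
        [0, 0, 0, 0, 0, 0, 0],
        [0, 0, 0, 0, 0, 0, 0]]
closed = preprocess(blob)
print(closed[3][3])  # 1 -- the hole is filled
```

Dilation first fills the small holes, and the following erosion shrinks the blob back toward its original outline, so the net effect is a cleaned-up hand silhouette.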
Step 3: the computer finds all outer contours of the video frame, puts the contours of the current frame into a doubly linked contour list, then traverses the list and excludes noise contours. Any known image-processing method can be used to extract the outer contour of each object from the frame; in the present embodiment, all outer contours of the frame are found with OpenCV's cvFindContours function.
Step 4: if the boundary of an outer contour is close to the boundary of the whole video frame, the computer judges that this contour is not a gesture, returns to the list-traversal loop of step 3, and continues checking the other contours of the current frame; otherwise it proceeds to step 5 to continue gesture recognition on this contour.
In the present embodiment, the contour-versus-border test is given by formula (1):
{con_i | (O_c − O_v) > (BT, BT) and (P_v − P_c) > (BT, BT)}    (1)
In formula (1), O_c, P_c, O_v, P_v are two-dimensional coordinates with nonnegative integer components, compared componentwise: O_c and P_c are the upper-left and lower-right corner coordinates of the rectangle bounding the gesture contour con_i, O_v and P_v are the upper-left and lower-right corner coordinates of the whole video frame, and BT is the video frame threshold. For example, with O_c(10, 20), P_c(30, 30), O_v(0, 0), P_v(1000, 1000), and BT = 20, we have O_c − O_v = (10, 20) < (20, 20), so formula (1) is not satisfied and the contour con_i is not a gesture.
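A minimal sketch of the border test, writing both margins (upper-left and lower-right) explicitly; the function name `inside_frame` is ours:

```python
BT = 20  # video frame threshold, as in the worked example

def inside_frame(o_c, p_c, o_v, p_v, bt=BT):
    """Formula (1): keep contour con_i only when its bounding box sits
    more than bt pixels inside the video frame on every side."""
    return (o_c[0] - o_v[0] > bt and o_c[1] - o_v[1] > bt and
            p_v[0] - p_c[0] > bt and p_v[1] - p_c[1] > bt)

# The worked example: Oc(10, 20), Pc(30, 30), Ov(0, 0), Pv(1000, 1000)
print(inside_frame((10, 20), (30, 30), (0, 0), (1000, 1000)))   # False
# A box well inside the frame passes:
print(inside_frame((50, 60), (300, 300), (0, 0), (1000, 1000)))  # True
```

Contours hugging the frame edge are typically partial objects entering or leaving the view, which is why they are excluded before any gesture features are computed.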
Step 5: compute the area of the outer contour; if the ratio of the contour area to the video rectangle area is less than the ratio of the preset gesture feature area threshold to the square of the object distance, the computer judges that this contour is not a gesture, returns to the list-traversal loop of step 3, and continues checking the other contours of the current frame; otherwise it proceeds to step 6 to continue gesture recognition on this contour.
In the present embodiment, the test that the total contour area matches the gesture feature is given by formula (2):
{con_i | S_c / S_v ≥ ST / (D × D)}    (2)
In formula (2), S_c and S_v are nonnegative integers; 0 < ST < 1; D is a nonnegative real number representing the distance between the camera and the operator, controlled to within 5 meters. S_c is the number of pixels in the region of the gesture contour con_i, an area that can be computed with OpenCV's cvContourArea function; S_v is the number of pixels in the video region; ST is the gesture feature area threshold. For example, with S_c = 100 pixels, S_v = 500 × 500 = 250000 pixels, D = 2 meters, and ST = 0.00375, substitution shows formula (2) is not satisfied, so the contour con_i is not a gesture.
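The area test can be sketched directly from formula (2); the function name is ours, and the second call uses a hypothetical hand-sized contour for contrast:

```python
def area_matches_gesture(s_c, s_v, st, d):
    """Formula (2): the area ratio S_c/S_v must reach ST / D^2."""
    return s_c / s_v >= st / (d * d)

# The worked example: Sc=100 px, Sv=500*500 px, D=2 m, ST=0.00375
print(area_matches_gesture(100, 250000, 0.00375, 2.0))  # False
# A hand-sized contour at the same distance passes:
print(area_matches_gesture(500, 250000, 0.00375, 2.0))  # True
```

Dividing the threshold by D² compensates for the apparent area of the hand shrinking with the square of the distance to the camera.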
Step 6: compute the perimeter of the contour's bounding rectangle; if the ratio of this perimeter to the video rectangle perimeter is greater than the ratio of the preset gesture feature perimeter threshold to the object distance, the computer judges that this contour is not a gesture, returns to the list-traversal loop of step 3, and continues checking the other contours of the current frame; otherwise it proceeds to step 7 to continue gesture recognition on this contour.
In the present embodiment, the test that the contour perimeter matches the gesture feature is given by formula (3):
{con_i | L_c / L_v ≤ LT / D}    (3)
In formula (3), L_c and L_v are nonnegative integers; 0 < LT < 1; D is a nonnegative real number representing the distance between the camera and the operator. L_c is the perimeter of the rectangle bounding the gesture contour con_i, which can be computed with OpenCV's cvArcLength function; L_v is the perimeter of the rectangle bounding the video region; LT is the gesture feature perimeter threshold.
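A sketch of the perimeter test of formula (3); the frame size, distance, and contour perimeters below are hypothetical, with LT = 0.1 taken from the table of defaults:

```python
def perimeter_matches_gesture(l_c, l_v, lt, d):
    """Formula (3): the perimeter ratio L_c/L_v must not exceed LT / D."""
    return l_c / l_v <= lt / d

# Hypothetical numbers: a 640x480 frame (Lv = 2*(640+480) = 2240 px)
# at D = 2 m with the default LT = 0.1
print(perimeter_matches_gesture(100, 2240, 0.1, 2.0))  # True
print(perimeter_matches_gesture(500, 2240, 0.1, 2.0))  # False
```

Where the area test rejects contours that are too small, this test rejects contours whose bounding rectangle is too large for a hand at the configured distance, scaling the bound by 1/D rather than 1/D² because perimeter is linear in apparent size.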
Step 7: compute the convex hull defect area of the contour. If it is less than the preset convex hull defect area threshold, the computer judges the contour to be a clenched-fist gesture and computes its center coordinates (steps 2-7 are repeated on successive frames until a fist is recognized). If the defect area is greater than the threshold, the contour is judged to be an open-palm gesture and its center coordinates are computed.
In the present embodiment, the clenched-fist contour feature library is determined by formulas (1)-(3), with initial values in Table 1; the open-palm contour feature library is determined by formulas (1)-(4), with initial values in Table 2.
{con_i | Σ_{i=1}^{n} SDE_i < DET}    (4)
In formula (4), DET is the defect area threshold and SDE_i is the area of the i-th defect triangle, base × height / 2, where each defect carries the two-dimensional coordinates of the endpoints of its base and height. If formula (4) is satisfied, the gesture is judged a clenched fist; otherwise an open palm. In an implementation, the series of defect triangles of contour con_i can be obtained with OpenCV's cvConvexityDefects; since a contour contains many defect triangles, Σ SDE_i denotes the sum of the defect triangle areas. Concretely, the convex hull hull is first found with cvConvexHull2, the defect series defects is found on the contour with hull and the cvConvexityDefects function, and the defect triangle series is then traversed, accumulating the convexity defect areas into the total.
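The fist/palm decision of formula (4) can be sketched as follows; the defect triangles here are hypothetical (base, height) pairs in pixels, standing in for the output of cvConvexityDefects:

```python
def defect_area(base, height):
    """Area of one defect triangle: base * height / 2."""
    return base * height / 2.0

def classify(defects, det):
    """Formula (4): fist if the summed defect areas stay below DET,
    otherwise palm."""
    total = sum(defect_area(b, h) for b, h in defects)
    return "fist" if total < det else "palm"

DET = 8000  # table-2 default at object distance D = 0.5 m

# Hypothetical defect triangles (base, height) in pixels:
print(classify([(30, 20), (25, 18)], DET))            # fist (total 525)
print(classify([(80, 60), (90, 70), (85, 65)], DET))  # palm (total 8312.5)
```

A clenched fist has almost no gaps between fingers, so its defect triangles are tiny; spreading the fingers opens deep gaps and the summed area jumps past DET.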
Table 1: clenched-fist contour feature library
Contour feature      | Default threshold | Explanation
Image range BT       | 20 (pixels)       | margin between contour and video frame
Feature perimeter LT | 0.1               | perimeter scale factor
Feature area ST      | 0.00375           | area scale factor
Table 2: open-palm contour feature library
Contour feature          | Default threshold | Explanation
Image range BT           | 20 (pixels)       | margin between contour and video frame
Feature perimeter LT     | 0.1               | perimeter scale factor
Feature area ST          | 0.00375           | area scale factor
Convexity defect area DET| 8000              | accumulated sum of convexity defect areas
Note: the defect area threshold DET is related to the object distance D (the distance between the camera and the operator): at D = 0.5 m, DET = 8000; at D = 2 m, DET = 2000.
In addition, because the present invention demands high real-time responsiveness, the corresponding performance indices of the system were tested. The test mainly uses the GetTickCount function, which returns the number of milliseconds elapsed since operating-system startup as a DWORD and can therefore be used to measure interaction delay. The interaction delay is the time from the system acquiring an image to the slide show executing the corresponding operation; since the delay bottleneck of the gesture control system is recognition latency, the interaction delays of the fist and palm were each measured 10 times with a DOS test program, as shown in Table 3:
Table 3: man-machine interaction time
The table shows that the man-machine interaction time of the present invention is very short, completing on average within 1 ms, proving that the real-time performance of the system is good: it achieves a good human-computer interaction effect while dispensing with a gesture sample library.
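The delay probe can be sketched in a portable way; `time.perf_counter` stands in for the Windows-specific GetTickCount, and the summation stub stands in for the recognize-and-dispatch step being timed:

```python
import time

def interaction_delay_ms(step, *args):
    """One delay probe, analogous to the GetTickCount-based test:
    run the step and report its wall-clock duration in milliseconds."""
    t0 = time.perf_counter()
    result = step(*args)
    return result, (time.perf_counter() - t0) * 1000.0

# A stub standing in for recognise-frame-and-dispatch-command:
result, ms = interaction_delay_ms(sum, range(1000))
print(result)     # 499500
print(ms >= 0.0)  # True
```

GetTickCount's resolution is on the order of 10-16 ms, so sub-millisecond averages like those reported would in practice require repeating the operation many times inside one timed window.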
Step 8: subtract the fist center coordinates from the palm center coordinates to obtain the gesture offset vector, resolve the operator's gesture semantics (up, down, left, right) from the quadrant in which the offset vector lies, translate them into the corresponding control command, and number the parsed commands to generate a command queue.
In the present embodiment, functions PlamCenter(x, y) and FistCenter(x, y) written with OpenCV obtain the palm center coordinates and fist center coordinates respectively; the gesture offset vector v(x, y) is then computed by formula (5), and the quadrant in which this vector lies denotes the corresponding command.
v(x, y) = PlamCenter(x, y) − FistCenter(x, y)    (5)
In formula (5), PlamCenter(x, y) is the palm center coordinate and FistCenter(x, y) the fist center coordinate.
The command parsing step resolves the operator's gesture semantics from the gesture offset vector and translates them into a control command. In the present embodiment the control rule is as shown in Fig. 4; the quadrant of the offset vector v(x, y) is decided by comparing the magnitudes of its x and y coordinates: if |x| > |y| and x > 0, the vector is taken to lie in quadrant I; if |x| < |y| and y > 0, in quadrant II; if |x| > |y| and x < 0, in quadrant III; if |x| < |y| and y < 0, in quadrant IV.
In the coordinate axes of Fig. 4, quadrants I, II, III, and IV denote the slide commands next page, start, previous page, and end, numbered 1, 2, 3, and 4 respectively. In the embodiment, in terms of the angle α between the vector v and the positive x-axis: v lies in quadrant I when 0 < α < π/4 or 7π/4 < α < 2π; in quadrant II when π/4 < α < 3π/4; in quadrant III when 3π/4 < α < 5π/4; and in quadrant IV when 5π/4 < α < 7π/4 (where π = 180 degrees).
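The offset-vector and quadrant rules can be sketched together; the coordinates below are toy values, with y treated as increasing upward as in the Fig. 4 axes (the function names other than the patent's PlamCenter/FistCenter roles are ours):

```python
def offset_vector(palm, fist):
    """Formula (5): v = PlamCenter - FistCenter."""
    return (palm[0] - fist[0], palm[1] - fist[1])

def quadrant(v):
    """Fig. 4 rule: |x| > |y| selects the horizontal quadrants
    (I if x > 0, III if x < 0); otherwise the vertical ones
    (II if y > 0, IV if y < 0)."""
    x, y = v
    if abs(x) > abs(y):
        return 1 if x > 0 else 3
    return 2 if y > 0 else 4

COMMANDS = {1: "next page", 2: "start", 3: "previous page", 4: "end"}

v = offset_vector((300, 240), (200, 230))  # palm to the right of the fist
print(quadrant(v), COMMANDS[quadrant(v)])  # 1 next page
```

Comparing |x| with |y| is equivalent to the diagonal angle cuts at π/4, 3π/4, 5π/4, and 7π/4, so the two formulations of the rule agree.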
Step 9: the computer takes the control commands out of the command queue one by one and sends the virtual keyboard message corresponding to each command to the target application, which finally responds to the command.
The slide-control step takes the command numbers out of the slide command queue one by one and determines whether the program has obtained the slide-show window handle; if not, the slide show is reopened; if so, the virtual keyboard message is sent to the slide-show window, and the slide show finally responds to the control command generated by the command parsing step, realizing gesture control of the slides; the process then returns to the image preprocessing step to await the next command. In the present embodiment, the slide control information corresponds to concrete slide keyboard-control messages, set as in Table 4:
Table 4: command numbers for slide control
Command number | Key message | Explanation
0              | none        | no message sent
1              | right arrow | next page
2              | F5          | start slide show, full-screen display
3              | left arrow  | previous page
4              | Esc         | end and exit slide show
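A sketch of the queue-draining dispatch; `send_key` is a stand-in for the real virtual-keyboard message (a Win32 call in the embodiment), the virtual-key names are illustrative, and the start/end readings follow the quadrant commands of Fig. 4:

```python
# Command-number to key mapping, hypothetical key names:
KEY_FOR_COMMAND = {
    0: None,         # send no message
    1: "VK_RIGHT",   # next page
    2: "VK_F5",      # start slide show, full screen
    3: "VK_LEFT",    # previous page
    4: "VK_ESCAPE",  # end and exit slide show
}

def send_key(key):
    """Stand-in for posting a virtual keyboard message to the slide window."""
    return "sent " + key

def dispatch(command_queue):
    """Drain the numbered command queue, emitting one key per command."""
    sent = []
    for number in command_queue:
        key = KEY_FOR_COMMAND.get(number)
        if key is not None:
            sent.append(send_key(key))
    return sent

print(dispatch([2, 1, 1, 0, 3, 4]))
```

Because the gesture layer only ever emits key messages the presentation software already understands, the same dispatch loop can target any application with keyboard shortcuts, which is the portability the invention claims.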
In the present embodiment, gesture recognition judges the slide-show command from the user's fist and palm actions. A slide file is first opened and the embodiment software is clicked to start dynamic gesture detection. When the user faces the camera and extends the right hand with the five fingers closed into a clenched fist, slide control is taken to start, with the fist center treated as the coordinate origin. When the five fingers of the right hand all spread into an open palm, a control command is taken to be parsed: if the palm center is above the fist center, the slide show starts and is displayed full screen; if the palm center is to the right of the fist center, the slide show advances one page; if to the left, it goes back one page; if below, playback ends and the slide show exits. In a concrete implementation, the slide command numbering can be changed to suit the user's habits, altering which of the four gesture directions maps to which slide operation.
A gesture recognition control system based on hand contour features for the above method, as shown in Fig. 5, consists mainly of an initial setup module, an image preprocessing module, a gesture recognition module, a command parsing module, and a target-application control module.
Initial setup module, for arranging the initial parameter of system, above-mentioned initial parameter comprises the semantic control command up and down corresponding to four direction of video frame threshold value, gesture feature area threshold, gesture feature perimeter threshold, convex closure defect area threshold value, object distance (distance of camera and effector) and gesture.
Image pre-processing module, each frame frame of video of camera being taken the image/video stream of people carries out Image semantic classification.In the present embodiment, first described image pre-processing module converts frame of video to YCbCr space from rgb space; Then extract Cb component and carry out binaryzation; Finally carry out morphological operations successively according to expansion-corrosion-expansion-corrosion, obtain the pretreated image of every frame frame of video.
The gesture recognition module finds the outer contours in the preprocessed video frame and judges whether a fist gesture appears; if not, it continues detecting the next video frame. If a fist gesture is recognized, the module inputs the center coordinate position of the fist gesture to the command parsing module, applies the same preprocessing to the next video frame, and searches it for a palm gesture; if no palm gesture appears, detection continues on subsequent frames; once a palm gesture is recognized, its center coordinate position is input to the command parsing module. In this embodiment the gesture recognition module finds all outer contours of a video frame with the cvFindContours function of OpenCV.
The command parsing module subtracts the center coordinate position of the fist gesture from that of the palm gesture to obtain a gesture offset vector, parses the operator's gesture semantics for the four directions (up, down, left and right) according to the quadrant in which the offset vector lies, resolves them into the corresponding control commands, and numbers the parsed commands to generate a command queue.
The controlled-application module takes the control commands out of the command queue one by one and sends the virtual keyboard message corresponding to each command to the controlled application, which then responds to the command.
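The queue-draining dispatch step can be sketched as below. The command numbering and key mapping are hypothetical; the real system would emit OS-level virtual keyboard messages (e.g. through the Win32 API), which are stubbed out here as a `send_key` callable.

```python
from collections import deque

# Hypothetical mapping from numbered commands to virtual key names.
COMMAND_KEYS = {1: "F5", 2: "Right", 3: "Left", 4: "Escape"}

def dispatch(queue, send_key):
    """Drain the numbered command queue in order, sending each command's
    virtual key to the controlled application through `send_key`."""
    sent = []
    while queue:
        key = COMMAND_KEYS[queue.popleft()]
        send_key(key)
        sent.append(key)
    return sent
```

Because the slideshow application already binds these keys (next page, previous page, and so on), no extra sample library or application-side changes are needed, matching the abstract's claim.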
The present invention is not limited to the above embodiment: besides slideshow control, it can also be applied to video playback control, music playback control and other applications.

Claims (6)

1. A gesture recognition control method based on hand contour features, characterized by comprising the following steps:
Step 1: set the initial parameters of the system, which comprise the video frame threshold, the gesture feature area threshold, the gesture feature perimeter threshold, the convex hull defect area threshold, the object distance and the control commands corresponding to the four gesture directions (up, down, left and right);
Step 2: capture an image/video stream of the person with a camera and send it to a computer, where the video frames of the stream undergo image preprocessing;
Step 3: the computer finds all outer contours of the video frame, puts the outer contours of the current frame into a doubly linked contour list, then traverses the list to exclude noise contours;
Step 4: if the boundary of an outer contour is close to the boundary of the whole video frame, the computer judges that this contour is not a gesture, returns to the list traversal loop of step 3 and continues examining the other contours in the doubly linked contour list of the current frame; otherwise it enters step 5 and continues gesture recognition on this contour;
Step 5: calculate the area of the outer contour; if the ratio of the contour area to the area of the video frame rectangle is less than the ratio of the preset gesture feature area threshold to the square of the object distance, the computer judges that this contour is not a gesture, returns to the list traversal loop of step 3 and continues examining the other contours in the doubly linked contour list of the current frame; otherwise it enters step 6 and continues gesture recognition on this contour;
Step 6: calculate the perimeter of the contour's maximum bounding rectangle; if the ratio of that perimeter to the perimeter of the video frame rectangle is greater than the ratio of the preset gesture feature perimeter threshold to the object distance, the computer judges that this contour is not a gesture, returns to the list traversal loop of step 3 and continues examining the other contours in the doubly linked contour list of the current frame; otherwise it enters step 7 and continues gesture recognition on this contour;
Step 7: calculate the convex hull defect area of the contour; if it is less than the preset convex hull defect area threshold, the computer judges this contour to be a fist gesture, calculates the center coordinate position of the fist gesture, and repeats steps 2-7 to continue detecting the next video frames until a palm gesture is recognized; if the convex hull defect area is greater than the preset threshold, the computer judges this contour to be a palm gesture and calculates the center coordinate position of the palm gesture;
Step 8: subtract the center coordinate position of the fist gesture from that of the palm gesture to obtain a gesture offset vector, parse the operator's gesture semantics for the four directions (up, down, left and right) according to the quadrant of the offset vector, resolve them into the corresponding control commands, and number the parsed commands to generate a command queue;
Step 9: the computer takes the control commands out of the command queue one by one and sends the virtual keyboard message corresponding to each command to the controlled application on the computer, and the controlled application finally responds to the command.
2. The gesture recognition control method based on hand contour features according to claim 1, characterized in that the image preprocessing of each video frame in step 2 proceeds as follows: first convert the video frame from RGB space to YCbCr space; then extract the Cb component and binarize it; finally apply morphological operations in the order dilation-erosion-dilation-erosion to obtain the preprocessed image of each frame.
3. The gesture recognition control method based on hand contour features according to claim 1, characterized in that all outer contours of the video frame are found with the cvFindContours function of OpenCV.
4. A gesture recognition control system based on hand contour features designed according to the gesture recognition control method of claim 1, characterized by comprising an initial setup module, an image preprocessing module, a gesture recognition module, a command parsing module and a controlled-application module;
the initial setup module sets the initial parameters of the system, which comprise the video frame threshold, the gesture feature area threshold, the gesture feature perimeter threshold, the convex hull defect area threshold, the object distance and the control commands corresponding to the four gesture directions (up, down, left and right);
the image preprocessing module preprocesses each video frame of the image/video stream of the person captured by the camera;
the gesture recognition module finds the outer contours in the preprocessed video frame and judges whether a fist gesture appears; if not, it continues detecting the next video frame; if a fist gesture is recognized, the module inputs the center coordinate position of the fist gesture to the command parsing module, applies the same preprocessing to the next video frame, and searches it for a palm gesture; if no palm gesture appears, detection continues on subsequent frames; once a palm gesture is recognized, its center coordinate position is input to the command parsing module;
the command parsing module subtracts the center coordinate position of the fist gesture from that of the palm gesture to obtain a gesture offset vector, parses the operator's gesture semantics for the four directions (up, down, left and right) according to the quadrant in which the offset vector lies, resolves them into the corresponding control commands, and numbers the parsed commands to generate a command queue;
the controlled-application module takes the control commands out of the command queue one by one and sends the virtual keyboard message corresponding to each command to the controlled application, which then responds to the command.
5. The gesture recognition control system based on hand contour features according to claim 4, characterized in that the image preprocessing module first converts the video frame from RGB space to YCbCr space, then extracts the Cb component and binarizes it, and finally applies morphological operations in the order dilation-erosion-dilation-erosion to obtain the preprocessed image of each frame.
6. The gesture recognition control system based on hand contour features according to claim 4, characterized in that the gesture recognition module finds all outer contours of the video frame with the cvFindContours function of OpenCV.
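The contour screening of steps 4-6 of claim 1 can be summarized in a short sketch. The threshold values and the object-distance scaling constants below are illustrative assumptions, not values disclosed by the patent; only the comparison structure follows the claim.

```python
def passes_screening(area, rect_perim, frame_w, frame_h,
                     area_thresh=0.02, perim_thresh=2.0, object_distance=1.0):
    """Keep a contour only if its area is large enough relative to the frame
    (step 5, scaled by the object distance squared) and its bounding-rectangle
    perimeter is not too large relative to the frame (step 6, scaled by the
    object distance)."""
    frame_area = frame_w * frame_h
    frame_perim = 2 * (frame_w + frame_h)
    if area / frame_area < area_thresh / object_distance ** 2:
        return False          # too small to be a hand (step 5)
    if rect_perim / frame_perim > perim_thresh / object_distance:
        return False          # bounding rectangle too large (step 6)
    return True
```

Scaling the thresholds by the object distance compensates for the hand appearing smaller in the frame as the operator moves away from the camera.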
CN201310123587.0A 2013-04-10 2013-04-10 Based on gesture identification control method and the system of hand contour feature Expired - Fee Related CN103208002B (en)

Priority Applications (1)

Application Number Priority Date Filing Date Title
CN201310123587.0A CN103208002B (en) 2013-04-10 2013-04-10 Based on gesture identification control method and the system of hand contour feature

Publications (2)

Publication Number Publication Date
CN103208002A CN103208002A (en) 2013-07-17
CN103208002B true CN103208002B (en) 2016-04-27

Family

ID=48755219

Family Applications (1)

Application Number Title Priority Date Filing Date
CN201310123587.0A Expired - Fee Related CN103208002B (en) 2013-04-10 2013-04-10 Based on gesture identification control method and the system of hand contour feature

Country Status (1)

Country Link
CN (1) CN103208002B (en)

Families Citing this family (25)

* Cited by examiner, † Cited by third party
Publication number Priority date Publication date Assignee Title
CN103440034A (en) * 2013-08-19 2013-12-11 中国科学院深圳先进技术研究院 Method and device for achieving man-machine interaction based on bare hands and monocular camera
CN103500335A (en) * 2013-09-09 2014-01-08 华南理工大学 Photo shooting and browsing method and photo shooting and browsing device based on gesture recognition
CN103699225B (en) * 2013-12-17 2017-02-15 深圳市威富多媒体有限公司 Method for interacting with mobile terminal through hand shape and device for implementing same
CN104866084B (en) 2014-02-25 2021-04-30 中兴通讯股份有限公司 Gesture recognition method, device and system
CN103941869B (en) * 2014-04-21 2017-07-14 云南电网公司普洱供电局 A kind of body-sensing posture identification method based on action element
CN103984928B (en) * 2014-05-20 2017-08-11 桂林电子科技大学 Finger gesture recognition methods based on depth image
CN106791446A (en) * 2014-05-30 2017-05-31 张琴 Mobile terminal camera and smart mobile phone, panel computer or net book
EP3218842A4 (en) * 2014-11-13 2018-08-22 Intel Corporation Facial spoofing detection in image based biometrics
CN104484035A (en) * 2014-12-04 2015-04-01 北京百度网讯科技有限公司 Control method, device and system based on somatosensory equipment
CN105005429B (en) * 2015-06-30 2018-12-11 广东欧珀移动通信有限公司 A kind of method and terminal of terminal display picture
CN105872691A (en) * 2015-12-14 2016-08-17 乐视致新电子科技(天津)有限公司 Method and device for controlling browser
CN106503650B (en) * 2016-10-21 2019-09-24 上海未来伙伴机器人有限公司 A kind of recognition methods and system of images of gestures
CN106778141B (en) * 2017-01-13 2019-09-20 北京元心科技有限公司 Unlocking method and device based on gesture recognition and mobile terminal
CN107092349A (en) * 2017-03-20 2017-08-25 重庆邮电大学 A kind of sign Language Recognition and method based on RealSense
CN107506633B (en) * 2017-07-31 2019-10-15 Oppo广东移动通信有限公司 Unlocking method, device and mobile device based on structure light
CN107678551B (en) * 2017-10-19 2021-12-28 京东方科技集团股份有限公司 Gesture recognition method and device and electronic equipment
CN108229318A (en) * 2017-11-28 2018-06-29 北京市商汤科技开发有限公司 The training method and device of gesture identification and gesture identification network, equipment, medium
CN110007748B (en) * 2018-01-05 2021-02-19 Oppo广东移动通信有限公司 Terminal control method, processing device, storage medium and terminal
CN108385800B (en) * 2018-03-02 2020-09-29 九牧厨卫股份有限公司 Intelligent closestool and related control method and device
CN108491820B (en) 2018-04-02 2022-04-12 京东方科技集团股份有限公司 Method, device and equipment for identifying limb representation information in image and storage medium
CN111192314B (en) * 2019-12-25 2024-02-20 新绎健康科技有限公司 Method and system for determining inner and outer contour ratio of finger in GDV energy image
CN111589098A (en) * 2020-04-21 2020-08-28 哈尔滨拓博科技有限公司 Follow-up gesture control method and system for doll with stepping crown block
CN112089596A (en) * 2020-05-22 2020-12-18 未来穿戴技术有限公司 Friend adding method of neck massager, neck massager and readable storage medium
CN111915509B (en) * 2020-07-03 2023-12-29 北京博电互联能源科技有限公司 Protection pressing plate state identification method based on shadow removal optimization of image processing
CN112767304B (en) * 2020-12-04 2023-02-28 浙江大学山东工业技术研究院 Vision-based sunflower module position and direction detection method

Citations (2)

* Cited by examiner, † Cited by third party
Publication number Priority date Publication date Assignee Title
US7821541B2 (en) * 2002-04-05 2010-10-26 Bruno Delean Remote control apparatus using gesture recognition
CN102402680A (en) * 2010-09-13 2012-04-04 株式会社理光 Hand and indication point positioning method and gesture confirming method in man-machine interactive system

Family Cites Families (1)

* Cited by examiner, † Cited by third party
Publication number Priority date Publication date Assignee Title
US9696808B2 (en) * 2006-07-13 2017-07-04 Northrop Grumman Systems Corporation Hand-gesture recognition method

Non-Patent Citations (4)

* Cited by examiner, † Cited by third party
Title
"A Low-Cost Visual Motion Data Glove as an Input Device to Interpret Human Hand Gestures";Han youngmo;《Transaction on Consumer Electronics》;20100531;第56卷(第2期);501-509 *
"Face detection in complicated backgrounds and different illumination conditions by using YCbCr color space and neural network";Lin Chiunhsiun;《Pattern Recognition Letters》;20071201;第28卷(第16期);2190-2200 *
"基于特征提取的手势识别技术研究";程小鹏;《中国优秀硕士学位论文全文数据库 信息科技辑》;20121015;全文 *
"基于视觉的多特征手势识别";翁汉良等;《计算机工程与科学》;20120215;第34卷(第2期);125-127 *

Also Published As

Publication number Publication date
CN103208002A (en) 2013-07-17

Similar Documents

Publication Publication Date Title
CN103208002B (en) Based on gesture identification control method and the system of hand contour feature
CN109359538B (en) Training method of convolutional neural network, gesture recognition method, device and equipment
CN109558832B (en) Human body posture detection method, device, equipment and storage medium
CN102200834B (en) Television control-oriented finger-mouse interaction method
CN111680594A (en) Augmented reality interaction method based on gesture recognition
Beyeler OpenCV with Python blueprints
Huang et al. RGB-D salient object detection by a CNN with multiple layers fusion
Caramiaux et al. Beyond recognition: using gesture variation for continuous interaction
Rathi et al. Sign language recognition using resnet50 deep neural network architecture
Tsagaris et al. Colour space comparison for skin detection in finger gesture recognition
Golovanov et al. Combining hand detection and gesture recognition algorithms for minimizing computational cost
WO2024094086A1 (en) Image processing method and apparatus, device, medium and product
Cambuim et al. An efficient static gesture recognizer embedded system based on ELM pattern recognition algorithm
CN102855025B (en) Optical multi-touch contact detection method based on visual attention model
Singh Recognizing hand gestures for human computer interaction
CN111651038A (en) Gesture recognition control method based on ToF and control system thereof
Hoque et al. Computer vision based gesture recognition for desktop object manipulation
CN108255298B (en) Infrared gesture recognition method and device in projection interaction system
Hong et al. Advances in Multimedia Information Processing–PCM 2018: 19th Pacific-Rim Conference on Multimedia, Hefei, China, September 21-22, 2018, Proceedings, Part III
CN115016641A (en) Conference control method, device, conference system and medium based on gesture recognition
KR20190132885A (en) Apparatus, method and computer program for detecting hand from video
WO2020224244A1 (en) Method and apparatus for obtaining depth-of-field image
Mou et al. Attention based dual branches fingertip detection network and virtual key system
Wang et al. Virtual piano system based on monocular camera
CN114967927B (en) Intelligent gesture interaction method based on image processing

Legal Events

Date Code Title Description
C06 Publication
PB01 Publication
C10 Entry into substantive examination
SE01 Entry into force of request for substantive examination
C14 Grant of patent or utility model
GR01 Patent grant
CF01 Termination of patent right due to non-payment of annual fee

Granted publication date: 20160427

CF01 Termination of patent right due to non-payment of annual fee