CN106200971A - Man-machine interactive system device based on gesture identification and operational approach - Google Patents


Info

Publication number
CN106200971A
CN106200971A (application CN201610553998.7A)
Authority
CN
China
Prior art keywords
finger tip
gesture
image
man
point
Prior art date
Legal status (The legal status is an assumption and is not a legal conclusion. Google has not performed a legal analysis and makes no representation as to the accuracy of the status listed.)
Pending
Application number
CN201610553998.7A
Other languages
Chinese (zh)
Inventor
梁鹏
郑振兴
林泽芳
吴玉婷
余经烈
Current Assignee (The listed assignees may be inaccurate. Google has not performed a legal analysis and makes no representation or warranty as to the accuracy of the list.)
Guangdong Polytechnic Normal University
Original Assignee
Guangdong Polytechnic Normal University
Priority date (The priority date is an assumption and is not a legal conclusion. Google has not performed a legal analysis and makes no representation as to the accuracy of the date listed.)
Filing date
Publication date
Application filed by Guangdong Polytechnic Normal University filed Critical Guangdong Polytechnic Normal University
Priority to CN201610553998.7A
Publication of CN106200971A
Legal status: Pending


Classifications

    • GPHYSICS
    • G06COMPUTING; CALCULATING OR COUNTING
    • G06FELECTRIC DIGITAL DATA PROCESSING
    • G06F3/00Input arrangements for transferring data to be processed into a form capable of being handled by the computer; Output arrangements for transferring data from processing unit to output unit, e.g. interface arrangements
    • G06F3/01Input arrangements or combined input and output arrangements for interaction between user and computer
    • G06F3/017Gesture based interaction, e.g. based on a set of recognized hand gestures
    • GPHYSICS
    • G06COMPUTING; CALCULATING OR COUNTING
    • G06VIMAGE OR VIDEO RECOGNITION OR UNDERSTANDING
    • G06V10/00Arrangements for image or video recognition or understanding
    • G06V10/20Image preprocessing
    • GPHYSICS
    • G06COMPUTING; CALCULATING OR COUNTING
    • G06VIMAGE OR VIDEO RECOGNITION OR UNDERSTANDING
    • G06V10/00Arrangements for image or video recognition or understanding
    • G06V10/20Image preprocessing
    • G06V10/255Detecting or recognising potential candidate objects based on visual cues, e.g. shapes
    • GPHYSICS
    • G06COMPUTING; CALCULATING OR COUNTING
    • G06VIMAGE OR VIDEO RECOGNITION OR UNDERSTANDING
    • G06V10/00Arrangements for image or video recognition or understanding
    • G06V10/20Image preprocessing
    • G06V10/26Segmentation of patterns in the image field; Cutting or merging of image elements to establish the pattern region, e.g. clustering-based techniques; Detection of occlusion
    • G06V10/267Segmentation of patterns in the image field; Cutting or merging of image elements to establish the pattern region, e.g. clustering-based techniques; Detection of occlusion by performing operations on regions, e.g. growing, shrinking or watersheds
    • GPHYSICS
    • G06COMPUTING; CALCULATING OR COUNTING
    • G06VIMAGE OR VIDEO RECOGNITION OR UNDERSTANDING
    • G06V10/00Arrangements for image or video recognition or understanding
    • G06V10/20Image preprocessing
    • G06V10/30Noise filtering
    • GPHYSICS
    • G06COMPUTING; CALCULATING OR COUNTING
    • G06VIMAGE OR VIDEO RECOGNITION OR UNDERSTANDING
    • G06V10/00Arrangements for image or video recognition or understanding
    • G06V10/40Extraction of image or video features
    • G06V10/44Local feature extraction by analysis of parts of the pattern, e.g. by detecting edges, contours, loops, corners, strokes or intersections; Connectivity analysis, e.g. of connected components
    • GPHYSICS
    • G06COMPUTING; CALCULATING OR COUNTING
    • G06VIMAGE OR VIDEO RECOGNITION OR UNDERSTANDING
    • G06V40/00Recognition of biometric, human-related or animal-related patterns in image or video data
    • G06V40/20Movements or behaviour, e.g. gesture recognition
    • G06V40/28Recognition of hand or arm movements, e.g. recognition of deaf sign language

Abstract

The present invention discloses a human-computer interaction system based on gesture recognition, together with a method of operating it. The system comprises a monocular camera, a PC running a human-computer interaction program, a PLC board, and a physical application device; the operating method comprises a gesture input module, a fingertip locating module, a gesture tracking module, and a keyboard-and-mouse mapping module. By adopting a gesture-recognition-based interaction scheme, the invention overcomes the limitations of existing interaction approaches that require large amounts of sensing hardware, making human-computer interaction more natural and convenient.

Description

Man-machine interactive system device based on gesture identification and operational approach
Technical field
The present invention relates to a human-computer interaction system and a method of operating it, and in particular to a gesture-recognition-based human-computer interaction system and operating method.
Background art
With the rapid development of science and technology and the growing popularity of computer vision, people demand ever more natural human-computer communication, and traditional channels such as the mouse, keyboard and microphone increasingly fail to meet their needs. Using the human hand as the means of communication between person and computer is more natural, concise, rich and direct than other channels; a computer capable of gesture recognition therefore makes the exchange between human and machine far more natural and convenient.
Current gesture recognition systems mostly adopt one of the following two approaches:
(1) Data gloves or markers: this approach reduces the complexity of the detection and recognition algorithms, but a wearable mode of operation clearly falls short of the needs of natural human-computer interaction;
(2) 3D depth cameras: 3D scanning devices are relatively bulky, hardware costs are high, and the required computing power is considerable, making them hard to integrate into mainstream intelligent terminals.
In addition, traditional algorithms for locating the valley points between fingers have a defect: when the hand is held horizontally, a particular valley point cannot be located accurately, which limits gesture recognition.
Patent application CN105138136A discloses a "gesture recognition device, gesture recognition method and gesture recognition system". The system includes a gesture recognition device comprising at least one sensor placed to correspond to finger positions, an input-mode judging unit and an input-content generating unit. The application achieves virtual human-computer interaction and simulates virtual content quickly and accurately, but its device requires sensors for detection, so it needs much hardware and is relatively costly.
It is " a kind of based on 2D video sequence that application publication number is that the application for a patent for invention of CN104992171A discloses one Gesture identification and man-machine interaction method and system ", this system relates generally to a kind of man-machine interaction method, by building the connection of staff Close characteristic model attitude and the gesture of staff under 2D cam movement prospect are identified.This application for a patent for invention realize exist Target staff selection under complex background, and realize the tracking of the degree of accuracy to staff, high stability, but extraction associating need to be traveled through Characteristic model, many with the sample matches of Sample Storehouse, module and process, relatively complicated time-consumingly, and the most specifically be pin-pointed to refer to Point.
Patent application CN105045398A discloses "a gesture-recognition-based virtual reality interaction device" comprising a 3D camera interface, a helmet-type virtual reality display, a signal processing component and a mobile device interface. It can capture a test image sequence of the user's hand containing depth information and, through processing and recognition, achieve virtual reality interaction; however, the user must wear a helmet, which hardly meets the needs of natural human-computer interaction.
Summary of the invention
It is an object of the present invention to overcome the deficiencies of the prior art by providing a gesture-recognition-based human-computer interaction system in which gestures are tracked in video captured by a monocular camera, the user's gestures are recognized and processed by a human-computer interaction program, and a physical application device is ultimately controlled.
A further object of the present invention is to provide an operating method for the above gesture-recognition-based human-computer interaction system. The method proposes solutions for computing the valley points between fingers and uses the Camshift algorithm to simulate the keyboard and mouse, solving the limitations of existing interaction approaches that require large amounts of sensing hardware, making interaction more natural and convenient, and better addressing the difficulties caused by the ambiguity and diversity of gestures.
The present invention solves the above technical problems with the following technical scheme:
A gesture-recognition-based human-computer interaction system and operating method, characterized in that:
The system comprises a monocular camera, a PC running a human-computer interaction program, a PLC board and a physical application device. The monocular camera captures the user's gesture images; the PC obtains the captured images through a USB interface, and the interaction program on it recognizes, analyzes and processes them; the PLC board obtains the processed gesture information from the PC via a serial port or USB; and the physical application device receives the information over its circuit and acts on it.
The system operating method comprises a gesture input module, a fingertip locating module, a gesture tracking module and a keyboard-and-mouse mapping module, wherein:
the gesture input module uses the monocular camera to capture the user's gestures and feeds the captured gesture images into the fingertip locating module;
the fingertip locating module locates the fingertip positions in the gesture images and from them identifies the gesture type and the gesture position, serving as input to the gesture tracking module and the keyboard-and-mouse mapping module;
the gesture tracking module tracks the motion trajectory of the gesture and obtains the gesture's displacement, serving as input to the keyboard-and-mouse mapping module;
the keyboard-and-mouse mapping module translates the gesture type and gesture displacement into the corresponding keyboard and mouse operations and controls the computer accordingly.
The fingertip locating module in the gesture-recognition-based human-computer interaction operating method of the present invention comprises the following steps:
(1) Camera image loading: the camera is invoked and an image is loaded;
(2) Image preprocessing: the image undergoes color space conversion, skin-color thresholding, denoising, binarization and an opening operation;
(3) Contour and fingertip finding: the valley points between fingers and the fingertip points are found on the contour;
(4) Valley-point location and fingertip filtering: the valley points are computed from the fingertip locations; the valley and fingertip locations mutually constrain each other.
In step (2) of the fingertip locating module, the image preprocessing comprises color space conversion, skin-color thresholding, denoising, binarization and an opening operation, wherein:
Color space conversion: the RGB image is transformed into the HSV color model;
Skin-color thresholding: OpenCV's Otsu adaptive threshold segmentation is used;
Denoising: noise around the recognition target in the image is removed;
Binarization: the foreground of the image is separated from the background;
Opening operation: scattered points left after binarization are eliminated and missing points filled in.
In step (3) of the fingertip locating module, contour and fingertip finding comprises contour search, fingertip point search, valley-point location and fingertip filtering, wherein:
Contour search: multiple contours are obtained via four- or eight-connectivity, and a unique contour is then selected;
Fingertip point search: the number and positions of the current fingertips are identified;
Valley-point location: the valley points between fingers are located by computing minimum points;
Fingertip filtering: superfluous fingertip points are screened out and removed.
In step (4) of the fingertip locating module, for computing the valley points between fingers, two solutions are proposed: using the minimum principle, and using the triangle principle that the sum of two sides exceeds the third side.
The gesture tracking module of the gesture-recognition-based human-computer interaction operating method of the present invention comprises the following steps:
(1) Tracking region selection: a region of interest is chosen from the fingertip locations;
(2) Camshift hand tracking: tracking is achieved mainly through the color information of the moving object in the video;
(3) Centroid and area extraction: a frame-difference method computes the fingertip coordinates and the motion vector of the tracked centroid, and the change in tracked area and the number of fingertip and valley points are used to make simple judgements.
Further, the Camshift hand tracking combines fingertip-coordinate location with the Camshift algorithm, improving Camshift into an unsupervised target-tracking method.
Compared with the prior art, the present invention has the following beneficial effects:
1. Images captured by a 2D camera are recognized, and keyboard-and-mouse simulation is achieved through the three main modules of fingertip location, gesture tracking and mapping, overcoming the prior art's drawbacks of requiring much hardware at high cost;
2. Two solutions are proposed for computing the valley points between fingers, solving the existing algorithms' inability to accurately locate a particular valley point and improving the accuracy of gesture recognition.
Accompanying drawing explanation
Fig. 1 is a structural schematic of a specific embodiment of the gesture-recognition-based human-computer interaction system of the present invention.
Fig. 2 is a block diagram of the modules of the human-computer interaction system in a specific embodiment of the present invention.
Fig. 3 is a workflow chart of the image preprocessing step of the fingertip locating module in a specific embodiment of the present invention.
Fig. 4 is a before-and-after comparison of a gesture image processed by the opening operation in a specific embodiment of the present invention.
Fig. 5 is a workflow chart of the contour and fingertip finding step of the fingertip locating module in a specific embodiment of the present invention.
Fig. 6 compares the principles of four-connected and eight-connected regions.
Fig. 7 shows a convex hull set, i.e. a fingertip point set.
Fig. 8 is a schematic of the valley points between fingers.
Fig. 9 illustrates the principle that the distance from a triangle's apex to the two endpoints of the opposite side exceeds the distance from any point on that side to those endpoints.
Fig. 10 is a schematic of tracking region selection.
Detailed description of the invention
The present invention is described in further detail below with reference to embodiments and drawings, but embodiments of the present invention are not limited thereto.
Referring to Fig. 1, the gesture-recognition-based human-computer interaction system of the present invention is composed of the principal components shown, wherein:
the user 1 expresses operating commands through gestures; the monocular camera 2 captures the user's gesture images and sends them to the PC 3 through a USB interface; on receiving the images, the human-computer interaction program on the PC 3 recognizes, analyzes and processes the gesture images, and the PC sends the processed gesture information to the PLC board 4 via a serial port or USB interface; the PLC board 4 relays the information received from the PC over its circuit to the physical application device 5; the physical application device 5 receives the information and acts on it. In this embodiment the physical application device 5 is exemplified by an air conditioner, but is not limited thereto.
Referring to Fig. 2, the operating method comprises a gesture input module, a fingertip locating module, a gesture tracking module and a keyboard-and-mouse mapping module, wherein:
(1) Gesture input module: the monocular camera captures the user's gesture images and transmits them to the PC, where they are fed into the fingertip locating module;
(2) Fingertip locating module: the images captured by the camera undergo a series of preprocessing steps, after which contours and fingertip points are located, and then the valley points between fingers are located in turn. The fingertip locating module consists of the following parts: camera image loading, image preprocessing, contour and fingertip point finding, and valley-point location:
(2.1) Camera image loading: the camera is invoked and getImage is performed.
(2.2) Image preprocessing: see Fig. 3. Preprocessing comprises color space conversion, skin-color thresholding, denoising, binarization and an opening operation, described in detail below:
(2.2.1) Color space conversion: pictures are generally stored in the RGB color model, but the three RGB components are often highly correlated, and using them directly tends not to achieve the anticipated effect, so the RGB image must be converted to the HSV color model. The H, S and V values are obtained by formulas (2), (3) and (4) respectively:
H = 60·(G − B)/(MAX − MIN) if MAX = R; H = 120 + 60·(B − R)/(MAX − MIN) if MAX = G; H = 240 + 60·(R − G)/(MAX − MIN) if MAX = B (2)
S = (MAX − MIN)/MAX (3)
V = MAX (4)
In the above formulas, MAX and MIN are the maximum and minimum of the R, G, B components respectively, and H, S, V are the H-value, S-value and V-value of the HSV image.
After conversion to the HSV color space, threshold segmentation over the H-value in the range 0–180 yields the required binary image.
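As a minimal sketch of this conversion step, the standard RGB-to-HSV mapping can be exercised with Python's stdlib `colorsys` module, scaled to OpenCV's conventions (H in 0–180, S and V in 0–255). The `is_skin` test and its thresholds are illustrative assumptions, not values taken from the patent:

```python
import colorsys

def rgb_to_hsv_opencv(r, g, b):
    """Convert 8-bit RGB to OpenCV-style HSV (H in 0..180, S, V in 0..255)."""
    h, s, v = colorsys.rgb_to_hsv(r / 255.0, g / 255.0, b / 255.0)
    return round(h * 180), round(s * 255), round(v * 255)

def is_skin(h, s, v, h_max=20, s_min=30, v_min=60):
    """Crude skin-tone test: hue near red/orange with enough saturation
    and brightness (purely illustrative thresholds)."""
    return (h <= h_max or h >= 170) and s >= s_min and v >= v_min

# A flesh-toned pixel maps to a low hue and passes; saturated blue does not.
print(rgb_to_hsv_opencv(220, 160, 130))  # low hue, moderate saturation
print(is_skin(*rgb_to_hsv_opencv(0, 0, 255)))
```

In a real pipeline the same per-pixel rule, applied over the whole frame, produces the binary skin mask that the subsequent steps operate on.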
(2.2.2) Skin-color thresholding: OpenCV's Otsu adaptive threshold segmentation is used. The program flow is: compute the normalized histogram; compute the image gray-level mean avgValue; compute the histogram's zeroth-order cumulative moment w[i] and first-order cumulative moment u[i]; and find the maximum between-class variance:
variance[i] = (avgValue·w[i] − u[i])² / (w[i]·(1 − w[i]))
The gray value corresponding to this maximum variance is the threshold sought.
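The Otsu program flow just described can be sketched in pure Python over a 256-bin histogram (the histogram here is synthetic, and the function name is my own, not the patent's):

```python
def otsu_threshold(hist):
    """Otsu's method on a 256-bin grayscale histogram: return the gray level
    maximizing variance[i] = (avgValue*w[i] - u[i])^2 / (w[i]*(1 - w[i]))."""
    total = sum(hist)
    probs = [h / total for h in hist]
    avg_value = sum(i * p for i, p in enumerate(probs))  # global gray mean
    best_t, best_var = 0, -1.0
    w = u = 0.0  # zeroth- and first-order cumulative moments
    for i in range(256):
        w += probs[i]
        u += i * probs[i]
        if w <= 0.0 or w >= 1.0:
            continue  # variance undefined when one class is empty
        var = (avg_value * w - u) ** 2 / (w * (1.0 - w))
        if var > best_var:
            best_var, best_t = var, i
    return best_t

# Two well-separated gray populations: the threshold lands between them.
hist = [0] * 256
hist[40] = 500   # dark background peak
hist[200] = 300  # bright hand peak
print(otsu_threshold(hist))
```

With two clean peaks, any split between them maximizes the between-class variance, so the returned level separates background from hand.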
(2.2.3) Denoising: real digital images are affected during digitization and transmission by the imaging equipment and by external environmental noise, so the image must be denoised. This embodiment uses a blob-area threshold method for image filtering, removing the noise around the target object as follows:
the connected-component extraction algorithm of binary mathematical morphology computes each blob's area; blobs whose area falls below the threshold are treated as noise, and all pixels of such a blob are set to the gray value 255, i.e. the noise is removed.
(2.2.4) Binarization: the image is binarized to separate the foreground from the background. Binarization sets the gray value of each pixel to 0 or 255, so that the whole image shows a clear black-and-white effect. It is one of the most common and important means in computer image processing: it greatly reduces the amount of data in the image and thus highlights the target's contour. In OpenCV, the key function cvThreshold() realizes image binarization.
(2.2.5) Opening operation: to eliminate the scattered points left after binarization while filling the missing points inside the hand, and thus obtain a better image, the opening operation of mathematical morphology is used, i.e. erosion followed by dilation. Let f(x, y) be the input image and b(x, y) the structuring element; the erosion and dilation of f by b are defined respectively as:
(f ⊖ b)(s, t) = min{ f(s + x, t + y) − b(x, y) | (s + x, t + y) ∈ Df, (x, y) ∈ Db } (5)
(f ⊕ b)(s, t) = max{ f(s − x, t − y) + b(x, y) | (s − x, t − y) ∈ Df, (x, y) ∈ Db } (6)
where s, t are the coordinates of the input image f, x, y the coordinates of the structuring element b, Df the domain of f, and Db the domain of b.
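For binary (0/1) images, the grayscale definitions above reduce to a set form, which the following sketch implements directly; the 3×3 square structuring element and the toy image are my own assumptions:

```python
def erode(img, se):
    """Binary erosion: a pixel stays 1 only if every structuring-element
    offset lands on a 1 inside the image."""
    h, w = len(img), len(img[0])
    return [[int(all(0 <= y + dy < h and 0 <= x + dx < w and img[y + dy][x + dx]
                     for dy, dx in se))
             for x in range(w)] for y in range(h)]

def dilate(img, se):
    """Binary dilation: a pixel becomes 1 if any reflected offset hits a 1."""
    h, w = len(img), len(img[0])
    return [[int(any(0 <= y - dy < h and 0 <= x - dx < w and img[y - dy][x - dx]
                     for dy, dx in se))
             for x in range(w)] for y in range(h)]

def opening(img, se):
    """Opening = erosion then dilation; removes specks smaller than se
    while leaving larger regions at their original extent."""
    return dilate(erode(img, se), se)

SE3 = [(dy, dx) for dy in (-1, 0, 1) for dx in (-1, 0, 1)]  # 3x3 square

# A 4x4 solid block survives opening; an isolated single pixel is removed.
img = [[0] * 8 for _ in range(8)]
for y in range(1, 5):
    for x in range(1, 5):
        img[y][x] = 1
img[6][6] = 1  # noise speck
out = opening(img, SE3)
```

The speck at (6, 6) is erased by the erosion pass, while the dilation pass restores the block to its full 4×4 extent, matching the before/after effect Fig. 4 illustrates.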
See Fig. 4 for the before-and-after comparison of a gesture image processed by the opening operation.
(2.3) Contour and fingertip point finding: contour search is performed on the preprocessed image, and a series of steps based on the contour then finds the fingertip points. Referring to Fig. 5, the process comprises contour search, fingertip point search, valley-point location and fingertip filtering, described in detail below:
(2.3.1) Contour search: a contour generally corresponds to a series of points, i.e. a curve in the image, found by tracing the boundary edge points in order. Since the pixel values within each region are identical, contour search can be carried out over four-connected or eight-connected regions. Four- and eight-connectivity label the connected parts of a binary image; the labeling call takes the form [L, num] = bwlabel(BW, n), where BW is the input image; n takes the value 4 or 8, selecting four- or eight-connected regions; num is the number of connected regions found; and L is the output label matrix, whose elements are integers: the background is labeled 0, the first connected region 1, the second connected region 2, and so on.
Referring to Fig. 6, which compares the two principles: the 0 in the figure marks the position of the central pixel. A four-connected region comprises the four points above, below, left and right of 0; eight-connectivity additionally includes the upper-left, upper-right, lower-left and lower-right corner positions, so an eight-connected region contains the corresponding four-connected region.
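The labeling behaviour described for [L, num] = bwlabel(BW, n) can be sketched with a breadth-first flood fill (a stand-in for illustration, not the patent's implementation):

```python
from collections import deque

def bwlabel(bw, n=4):
    """Label connected regions of a binary image in the spirit of
    [L, num] = bwlabel(BW, n), with n = 4 or 8 selecting the connectivity."""
    offs = [(-1, 0), (1, 0), (0, -1), (0, 1)]
    if n == 8:
        offs += [(-1, -1), (-1, 1), (1, -1), (1, 1)]
    h, w = len(bw), len(bw[0])
    labels = [[0] * w for _ in range(h)]
    num = 0
    for y in range(h):
        for x in range(w):
            if bw[y][x] and not labels[y][x]:
                num += 1                      # start a new region
                q = deque([(y, x)])
                labels[y][x] = num
                while q:                      # flood-fill the region
                    cy, cx = q.popleft()
                    for dy, dx in offs:
                        ny, nx = cy + dy, cx + dx
                        if (0 <= ny < h and 0 <= nx < w
                                and bw[ny][nx] and not labels[ny][nx]):
                            labels[ny][nx] = num
                            q.append((ny, nx))
    return labels, num

# Two diagonally touching pixels: two regions under four-connectivity,
# one region under eight-connectivity (the corner neighbours count).
bw = [[1, 0, 0],
      [0, 1, 0],
      [0, 0, 0]]
print(bwlabel(bw, 4)[1], bwlabel(bw, 8)[1])
```

The diagonal example is exactly the distinction Fig. 6 draws: eight-connectivity merges what four-connectivity keeps apart.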
After the contour search ends, multiple contours may be obtained; the largest contour is selected as the unique gesture contour and used for the subsequent fingertip search.
(2.3.2) Fingertip point search: from a point-set perspective, a fingertip is a convex hull vertex of the palm contour. The convex hull is the minimal convex polygon such that every point of the set lies on its edges or inside it; see Fig. 7, where the polygon enclosed by the line segments is the convex hull of the point set {p0, p1, …, p12}, with p0, p1, …, p12 its nodes or vertices. Here the fingertips are located by convex hull search: identifying the number and positions of convex hull vertices on the gesture identifies the number and positions of the current fingertips.
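Convex hull computation itself is a standard routine; a sketch using Andrew's monotone chain (not necessarily the algorithm the patent relies on, which presumably uses OpenCV's built-in hull function):

```python
def convex_hull(points):
    """Andrew's monotone chain convex hull; on a hand contour the hull
    vertices are the fingertip candidates."""
    pts = sorted(set(points))
    if len(pts) <= 2:
        return pts

    def cross(o, a, b):
        # z-component of (a - o) x (b - o); <= 0 means no left turn
        return (a[0] - o[0]) * (b[1] - o[1]) - (a[1] - o[1]) * (b[0] - o[0])

    lower, upper = [], []
    for p in pts:                      # build lower hull left to right
        while len(lower) >= 2 and cross(lower[-2], lower[-1], p) <= 0:
            lower.pop()
        lower.append(p)
    for p in reversed(pts):            # build upper hull right to left
        while len(upper) >= 2 and cross(upper[-2], upper[-1], p) <= 0:
            upper.pop()
        upper.append(p)
    return lower[:-1] + upper[:-1]     # endpoints shared, drop duplicates

# A square with an interior point: the hull keeps only the 4 corners,
# just as only the fingertip extremes survive on a palm contour.
pts = [(0, 0), (4, 0), (4, 4), (0, 4), (2, 2)]
print(sorted(convex_hull(pts)))
```

Counting and reading off the returned vertices corresponds to the "number and positions of the current fingertips" described above.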
(2.3.3) Valley-point location: the valley points between fingers are computed from the fingertip locations; the valley and fingertip locations mutually constrain each other. See the valley-point schematic of Fig. 8, where A–E mark the corresponding valley points. The valleys are located by computing minimum points: formula (7) compares, for each contour point between two adjacent fingertip points, its coordinate against those of its predecessor and successor, so as to find the minimum point between the two fingertips:
f(x0) ≥ f(x1) and f(x0) ≥ f(x2) (7)
where f(x0) is the vertical pixel coordinate of the current point, and f(x1) and f(x2) are the coordinates of its predecessor and successor respectively; in the code the predecessor and successor are taken 3 pixels away, to prevent interference from the arrangement of the pixel lines. This algorithm has a defect: when the hand is held horizontally, a particular valley point cannot be located accurately.
The second solution uses the principle that the distance from a triangle's apex to the two endpoints of the opposite side exceeds the distance from any point on that side to those endpoints. See Fig. 9: AB + AC > BD + DC. Based on this principle, formula (8) finds, among the contour points between adjacent fingertip points (i.e. convex hull points), the point whose sum of Euclidean distances to the two fingertips is maximal; this method effectively solves the problem that the valley points cannot be detected when the hand is held horizontally.
ACB = √((Ax − Cx)² + (Ay − Cy)²) + √((Bx − Cx)² + (By − Cy)²) (8)
where ACB is the sum of the Euclidean distances from the candidate point to the two adjacent fingertips, (Cx, Cy) are the coordinates of the candidate pixel C, and (Ax, Ay), (Bx, By) are the coordinates of the adjacent fingertip points A and B respectively. The candidate point C starts at A; whenever ACB exceeds MAX it is assigned to MAX, and so on until C reaches B; the point finally held in MAX is the required valley point.
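The maximum-distance-sum search of formula (8) amounts to the following one-liner over the contour arc between two fingertips (the arc and fingertip coordinates are invented for illustration):

```python
from math import dist

def valley_point(contour_arc, a, b):
    """Among the contour points between adjacent fingertips a and b, pick
    the one maximizing |AC| + |CB| (formula (8)): the deepest valley."""
    return max(contour_arc, key=lambda c: dist(a, c) + dist(b, c))

# Contour arc between fingertips at (0, 0) and (8, 0), dipping to (4, -5).
# Because the sum of distances grows as the point moves away from the
# segment AB, this works regardless of the hand's orientation.
arc = [(1, -2), (2, -3), (4, -5), (6, -3), (7, -2)]
print(valley_point(arc, (0, 0), (8, 0)))
```

Note that unlike the vertical-minimum rule of formula (7), the criterion here depends only on distances, which is why it still works when the hand is horizontal.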
Not all the points obtained by this method are valley points between fingers; sometimes the two sides of the palm or the wrist are included. These points can be removed with the vector-angle formula (11):
px = Ax − Cx, py = Ay − Cy, qx = Bx − Cx, qy = By − Cy (10)
cos θ = (px·qx + py·qy) / (√(px² + py²) · √(qx² + qy²)) (11)
where px, py and qx, qy are the components of the vectors pointing from C to A and from C to B respectively. cos θ follows directly from the vector-angle formula, so a candidate point is removed whenever its cos θ is less than 0.
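Formulas (10)–(11) and the cos θ < 0 removal rule can be sketched as follows (the sample coordinates are invented; in practice the fingertip pair A, B and candidate C come from the preceding steps):

```python
from math import hypot

def cos_angle(a, b, c):
    """cos of the angle at c between the vectors c->a and c->b,
    per formulas (10) and (11)."""
    px, py = a[0] - c[0], a[1] - c[1]
    qx, qy = b[0] - c[0], b[1] - c[1]
    return (px * qx + py * qy) / (hypot(px, py) * hypot(qx, qy))

def keep_valley(a, b, c):
    """Keep a valley candidate only when its angle is not obtuse
    (cos theta >= 0); obtuse candidates are discarded."""
    return cos_angle(a, b, c) >= 0

# Deep valley between fingertips at (0,0) and (8,0): acute angle, kept.
print(keep_valley((0, 0), (8, 0), (4, -5)))
# Candidate collinear with the fingertips: cos = -1, removed.
print(keep_valley((0, 0), (8, 0), (4, 0)))
```

A genuine finger valley sits well below the line joining its two fingertips, so the angle at C stays acute; flat palm-side or wrist candidates fail the test.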
(2.3.4) Fingertip filtering: obtaining the fingertip points leaves the most important part still to do. The finger pad yields more than one candidate fingertip point, so the main work is screening; superfluous fingertip points must first be removed. The function deldot() deletes fingertip points that lie too close together; once the close points are handled, candidates that are not real fingertips are rejected. Here the valley points are needed: exploiting the property that a valley point lies between two fingertips, a candidate is accepted only when it simultaneously satisfies two conditions — there is a valley point between it and its neighboring candidates, and its distance from the valley points exceeds a user-defined threshold — after which the fingertip points and valley points can be drawn.
(3) Gesture tracking module: tracks the motion trajectory of the user's gesture and obtains the gesture's displacement as the input information for the keyboard-and-mouse mapping module. An improved Camshift algorithm performs the hand tracking, so no tracking region needs to be chosen with the mouse before tracking. The gesture tracking module consists of the following parts: tracking region selection, Camshift algorithm improvement, and centroid and area extraction:
(3.1) Tracking region selection: a region of interest is chosen from the fingertip locations.
(3.2) Camshift hand-tracking improvement: the Camshift ("Continuously Adaptive Mean-Shift") algorithm is a motion tracking algorithm. In ordinary tracking it relies mainly on the color information of the moving object in the video to achieve tracking. It is an improvement of the Meanshift algorithm.
The Mean-Shift algorithm processes only the points in a local data region, then moves the region and repeats. Unlike Mean-Shift, the Camshift search window adjusts its size automatically. Given an easily segmented distribution (such as a hand feature kept close to the camera), the algorithm adapts the window size to the size of the hand as it opens or clenches into a fist. In tracking applications, it takes the new size computed from the previous frame as the tracking region for the next frame.
In both the Mean-Shift and Camshift tracking algorithms, the region of interest must be selected in advance, generally by triggering a mouse event to choose the tracking range; but that is a supervised step and does not meet this project's expectation. Camshift is therefore improved so that the region of interest can be chosen without supervision, mainly by obtaining the region from the fingertip locations: once fingertip location is available, the fingertip x and y coordinates are combined into one point taken as the upper-left corner, and the width and height take the minimum of the longest horizontal and vertical distances. See Fig. 10: points A and B are the coordinate values of the upper-left and lower-right corners of the fingertip region; the upper-left corner is retained, and the difference of the two points gives the span via MIN(|Ax − Bx|, |Ay − By|), where (Ax, Ay) and (Bx, By) are the coordinates of points A and B respectively.
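The unsupervised window choice can be sketched as follows, taking the fingertip bounding box's upper-left corner as point A and its lower-right as point B (the sample fingertip coordinates are invented):

```python
def roi_from_fingertips(tips):
    """Unsupervised tracking-window choice: the upper-left corner of the
    fingertip bounding box plus a square span MIN(|Ax-Bx|, |Ay-By|)."""
    xs = [p[0] for p in tips]
    ys = [p[1] for p in tips]
    ax, ay = min(xs), min(ys)          # upper-left corner A
    bx, by = max(xs), max(ys)          # lower-right corner B
    span = min(abs(ax - bx), abs(ay - by))
    return ax, ay, span, span          # (x, y, width, height)

# Five detected fingertips; the window hugs them without any mouse input.
tips = [(30, 10), (50, 14), (70, 12), (90, 18), (42, 60)]
print(roi_from_fingertips(tips))
```

The returned rectangle then seeds Camshift in place of the mouse-selected region, which is what makes the tracker unsupervised.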
(3.3) Centroid and area extraction: this is chiefly the algorithm design for simulating the keyboard and mouse. The present invention uses a frame-difference method to compute the fingertip coordinates and the motion vector of the tracked centroid, and tracks the change in area and the number of fingertip and valley points, from which simple judgements are made.
(4) Mapping module: integrates the fingertip localization module and the gesture tracking module, acting as the hub between operations under the camera and the simulated keyboard and mouse operations. It includes keyboard simulation and mouse simulation:
(4.1) Keyboard simulation: the parameters used are the tracking centroid and the tracking area. The frame-difference method first computes the centroid motion vector and the area change vector. For the simulated direction keys, the speed of movement is computed over a predefined time interval, and a direction key is triggered when the speed exceeds a threshold. Since vertical movement requires more effort than horizontal movement, the vertical direction keys are given higher priority by default. The space bar has the lowest priority: when judging its key press, the changed area is compared with that of the previous frame, and if the new area is less than 0.6 times the previous area, the space bar is judged to be activated.
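A minimal sketch of this key-mapping decision follows. The 0.6 area ratio is stated in the patent; the speed threshold and time step are illustrative assumptions, as no numeric values are given:

```python
def map_key(prev_centroid, cur_centroid, prev_area, cur_area,
            dt=1.0, speed_thresh=5.0):
    """Map the frame-difference motion/area vectors to a simulated key.

    Priority follows the description: vertical keys first, then horizontal,
    then the space bar (lowest). dt and speed_thresh are illustrative.
    """
    vx = (cur_centroid[0] - prev_centroid[0]) / dt
    vy = (cur_centroid[1] - prev_centroid[1]) / dt
    if abs(vy) > speed_thresh:              # vertical keys: highest priority
        return 'DOWN' if vy > 0 else 'UP'   # image y grows downward
    if abs(vx) > speed_thresh:
        return 'RIGHT' if vx > 0 else 'LEFT'
    if cur_area < 0.6 * prev_area:          # hand closes: area below 0.6x previous
        return 'SPACE'
    return None

print(map_key((50, 50), (50, 90), 100.0, 100.0))  # fast downward move -> 'DOWN'
print(map_key((50, 50), (51, 50), 100.0, 55.0))   # area shrank past 0.6x -> 'SPACE'
```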
(4.2) Mouse simulation: the parameters used are the fingertip point and the number of valley points between the fingers and the palm. As with keyboard simulation, the frame-difference method first computes the motion vector of the fingertip and the number of valley points between the fingertips and the palm. The speed of movement is then computed over the predefined time interval; when the speed exceeds the threshold, the movement vector is judged and assigned to the current mouse coordinates. The number of valley points determines whether the mouse is moved and whether a mouse click is performed.
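A sketch of the mouse decision, under stated assumptions: the patent says only that the valley-point count decides between moving and clicking, so the concrete mapping below (no valleys visible means move, one or more means click) is an assumption for illustration.

```python
def map_mouse(prev_tip, cur_tip, valley_count, dt=1.0, speed_thresh=5.0):
    """Map fingertip motion and the finger-valley count to a mouse action.

    valley_count: number of valley points between the fingers and the palm.
    The 0-valleys -> move, >=1 -> click mapping is an illustrative assumption.
    """
    vx = (cur_tip[0] - prev_tip[0]) / dt
    vy = (cur_tip[1] - prev_tip[1]) / dt
    speed = (vx * vx + vy * vy) ** 0.5
    if valley_count >= 1:
        return ('CLICK', cur_tip)
    if speed > speed_thresh:
        return ('MOVE', cur_tip)       # assign the fingertip to the cursor
    return ('IDLE', cur_tip)

print(map_mouse((10, 10), (40, 10), valley_count=0))  # -> ('MOVE', (40, 10))
print(map_mouse((10, 10), (11, 10), valley_count=2))  # -> ('CLICK', (11, 10))
```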
The above are preferred embodiments of the present invention, but the embodiments of the present invention are not limited by the foregoing. Any change, modification, substitution, combination, or simplification made without departing from the spirit and principle of the present invention shall be an equivalent substitute mode and shall be included within the protection scope of the present invention.

Claims (6)

1. A human-computer interaction system device and operating method based on gesture recognition, characterized in that the simulation of the keyboard and mouse, and thereby human-computer interaction, is realized by four modules, wherein:
the system device comprises a monocular camera, a PC installed with a human-computer interaction program, a PLC board, and physical application equipment, wherein the monocular camera captures the user's gesture images; the PC obtains the gesture images captured by the monocular camera through a USB interface, and the human-computer interaction program therein recognizes, analyzes, and processes the images; the PLC board obtains the gesture information processed by the PC through a serial port or USB; and the physical application equipment receives the information through a circuit and acts on it as feedback.
The operating method comprises a gesture input module, a fingertip localization module, a gesture tracking module, and a keyboard-mouse mapping module, wherein:
the gesture input module captures the user's gestures with the monocular camera and inputs the captured gesture images to the fingertip localization module;
the fingertip localization module locates the fingertip positions in the gesture images and accordingly identifies the type of the user's gesture and the position of the gesture, as input to the gesture tracking module and the keyboard-mouse mapping module;
the gesture tracking module tracks the motion trajectory of the user's gesture and obtains the movement amount of the gesture, as input to the keyboard-mouse mapping module;
the keyboard-mouse mapping module recognizes the type and movement amount of the user's gesture as the corresponding keyboard and mouse operations, and performs the corresponding computer control.
2. The human-computer interaction system device and operating method based on gesture recognition according to claim 1, characterized in that the fingertip localization module comprises the following steps:
(1) camera image loading: the camera is invoked and the image is loaded;
(2) image preprocessing: the image undergoes color space conversion, skin-color threshold processing, image denoising, image binarization, and morphological opening;
(3) contour and fingertip finding: the valley points between the fingers and the fingertip points are found on the contour;
(4) valley-point localization and fingertip filtering: the fingertip localization and the localization of the valley points between the fingers mutually constrain each other in the calculation.
3. The human-computer interaction system device and operating method based on gesture recognition according to claim 1, characterized in that the gesture tracking module combines fingertip coordinate localization with the Camshift algorithm, improving the Camshift algorithm into an unsupervised target tracking method, wherein the module comprises the following steps:
(1) tracking-region selection: the region of interest is chosen by fingertip localization;
(2) Camshift hand-tracking algorithm processing: tracking is achieved mainly through the color information of the moving object in the video;
(3) tracking-centroid and tracking-area extraction: the frame-difference method is used to compute the fingertip coordinates, the motion vector of the tracking centroid, the change in tracking area, and the number of valley points between the fingers and the palm, from which a simple judgment is made.
4. The human-computer interaction system device and operating method based on gesture recognition according to any one of claims 1-2, characterized in that, in the fingertip localization module, two solutions are proposed for calculating the valley points between the fingers: using the minimum principle, and using the triangle principle that the sum of two sides is greater than the third side.
5. The human-computer interaction system operating method based on gesture recognition according to claim 2, characterized in that, in step (2) of the fingertip localization module, the image preprocessing includes color space conversion, skin-color threshold processing, image denoising, image binarization, and morphological opening, wherein:
color space conversion: the RGB image is converted into the HSV color model;
skin-color threshold processing: the Otsu adaptive threshold segmentation of OpenCV is used;
image denoising: the noise around the recognition target in the image is removed;
image binarization: the foreground of the image is separated from the background;
morphological opening: scattered points disconnected in the image after binarization are eliminated, and missing points are filled.
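The Otsu adaptive threshold named above can be sketched in pure Python. This re-implementation is for illustration only; the patent uses OpenCV's built-in version (`cv2.threshold` with `THRESH_OTSU`):

```python
def otsu_threshold(pixels):
    """Otsu's method: pick the threshold maximizing between-class variance.

    Pure-Python illustration of the skin-color thresholding step applied
    to a flat list of 8-bit grayscale pixel values.
    """
    hist = [0] * 256
    for p in pixels:
        hist[p] += 1
    total = len(pixels)
    sum_all = sum(i * hist[i] for i in range(256))
    sum_bg = 0.0
    w_bg = 0
    best_t, best_var = 0, -1.0
    for t in range(256):
        w_bg += hist[t]                 # background class: intensities <= t
        if w_bg == 0:
            continue
        w_fg = total - w_bg             # foreground class: intensities > t
        if w_fg == 0:
            break
        sum_bg += t * hist[t]
        mean_bg = sum_bg / w_bg
        mean_fg = (sum_all - sum_bg) / w_fg
        var_between = w_bg * w_fg * (mean_bg - mean_fg) ** 2
        if var_between > best_var:
            best_var, best_t = var_between, t
    return best_t

# A bimodal "image": dark background plus a bright skin-colored region.
pixels = [10] * 50 + [200] * 50
t = otsu_threshold(pixels)
binary = [1 if p > t else 0 for p in pixels]   # binarization after thresholding
print(t, sum(binary))  # threshold falls between the two modes
```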
6. The human-computer interaction system operating method based on gesture recognition according to claim 2, characterized in that, in step (3) of the fingertip localization module, the contour and fingertip finding includes contour search, fingertip point search, valley-point localization, and fingertip filtering, wherein:
contour search: after multiple contours are obtained by four-connectivity or eight-connectivity, the unique contour is screened out;
fingertip point search: the number and positions of the current fingertips are identified;
valley-point localization: the minimum valley points between the fingers are calculated and located;
fingertip filtering: redundant fingertip points are screened out and removed.
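One simple way to realize the fingertip-point and valley-point search of claim 6 is to scan the contour's distance from the palm centroid: local maxima are fingertip candidates, local minima are valley points between the fingers. This pure-Python sketch on a synthetic star-shaped contour is an illustration, not the patent's exact method:

```python
import math

def tips_and_valleys(contour):
    """Classify closed-contour points by distance from the centroid:
    local maxima -> fingertip candidates, local minima -> finger valleys."""
    n = len(contour)
    cx = sum(p[0] for p in contour) / n
    cy = sum(p[1] for p in contour) / n
    dist = [math.hypot(x - cx, y - cy) for x, y in contour]
    tips, valleys = [], []
    for i in range(n):
        prev_d, next_d = dist[i - 1], dist[(i + 1) % n]
        if dist[i] > prev_d and dist[i] > next_d:
            tips.append(contour[i])
        elif dist[i] < prev_d and dist[i] < next_d:
            valleys.append(contour[i])
    return tips, valleys

# Synthetic hand-like contour: 5 "fingers" as alternating far/near points.
contour = []
for k in range(10):
    angle = 2 * math.pi * k / 10
    r = 2.0 if k % 2 == 0 else 1.0   # even points far (tips), odd near (valleys)
    contour.append((r * math.cos(angle), r * math.sin(angle)))

tips, valleys = tips_and_valleys(contour)
print(len(tips), len(valleys))  # 5 fingertip candidates, 5 valley points
```

In a real pipeline, the fingertip-filtering step of claim 6 would then remove candidates that fail constraints such as the triangle principle mentioned in claim 4.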
CN201610553998.7A 2016-07-07 2016-07-07 Man-machine interactive system device based on gesture identification and operational approach Pending CN106200971A (en)

Priority Applications (1)

Application Number Priority Date Filing Date Title
CN201610553998.7A CN106200971A (en) 2016-07-07 2016-07-07 Man-machine interactive system device based on gesture identification and operational approach


Publications (1)

Publication Number Publication Date
CN106200971A true CN106200971A (en) 2016-12-07

Family

ID=57474393

Family Applications (1)

Application Number Title Priority Date Filing Date
CN201610553998.7A Pending CN106200971A (en) 2016-07-07 2016-07-07 Man-machine interactive system device based on gesture identification and operational approach

Country Status (1)

Country Link
CN (1) CN106200971A (en)


Cited By (11)

* Cited by examiner, † Cited by third party
Publication number Priority date Publication date Assignee Title
CN107272893A (en) * 2017-06-05 2017-10-20 上海大学 Man-machine interactive system and method based on gesture control non-touch screen
CN109303987A (en) * 2017-07-26 2019-02-05 霍尼韦尔国际公司 For the enhancing what comes into a driver's of fire fighter sensed using head-up display and gesture
CN109961454A (en) * 2017-12-22 2019-07-02 北京中科华正电气有限公司 Human-computer interaction device and processing method in a kind of embedded intelligence machine
CN110026902A (en) * 2017-12-27 2019-07-19 株式会社迪思科 Cutting apparatus
CN108777177A (en) * 2018-06-26 2018-11-09 李良杰 Demand selects and conveys system
CN108983980A (en) * 2018-07-27 2018-12-11 河南科技大学 A kind of mobile robot basic exercise gestural control method
CN109710066A (en) * 2018-12-19 2019-05-03 平安普惠企业管理有限公司 Exchange method, device, storage medium and electronic equipment based on gesture identification
CN111158491A (en) * 2019-12-31 2020-05-15 苏州莱孚斯特电子科技有限公司 Gesture recognition man-machine interaction method applied to vehicle-mounted HUD
CN111158457A (en) * 2019-12-31 2020-05-15 苏州莱孚斯特电子科技有限公司 Vehicle-mounted HUD (head Up display) human-computer interaction system based on gesture recognition
CN112114675A (en) * 2020-09-29 2020-12-22 陕西科技大学 Method for using non-contact elevator keyboard based on gesture control
CN112114675B (en) * 2020-09-29 2023-05-26 陕西科技大学 Gesture control-based non-contact elevator keyboard using method

Similar Documents

Publication Publication Date Title
CN106200971A (en) Man-machine interactive system device based on gesture identification and operational approach
Zhou et al. A novel finger and hand pose estimation technique for real-time hand gesture recognition
Mukherjee et al. Fingertip detection and tracking for recognition of air-writing in videos
CN107038424B (en) Gesture recognition method
Crowley et al. Vision for man machine interaction
JP6079832B2 (en) Human computer interaction system, hand-to-hand pointing point positioning method, and finger gesture determination method
CN103941866B (en) Three-dimensional gesture recognizing method based on Kinect depth image
Ibraheem et al. Survey on various gesture recognition technologies and techniques
Hasan et al. Hand gesture modeling and recognition using geometric features: a review
CN107168527A (en) The first visual angle gesture identification and exchange method based on region convolutional neural networks
CN108256421A (en) A kind of dynamic gesture sequence real-time identification method, system and device
Wu et al. Robust fingertip detection in a complex environment
CN105335711B (en) Fingertip Detection under a kind of complex environment
CN108197534A (en) A kind of head part's attitude detecting method, electronic equipment and storage medium
CN103336967B (en) A kind of hand motion trail detection and device
CN109325408A (en) A kind of gesture judging method and storage medium
CN108073851A (en) A kind of method, apparatus and electronic equipment for capturing gesture identification
Li et al. Hand gesture tracking and recognition based human-computer interaction system and its applications
CN108268125A (en) A kind of motion gesture detection and tracking based on computer vision
Chu et al. A Kinect-based handwritten digit recognition for TV remote controller
Elakkiya et al. Intelligent system for human computer interface using hand gesture recognition
CN108108648A (en) A kind of new gesture recognition system device and method
CN108255285A (en) It is a kind of based on the motion gesture detection method that detection is put between the palm
Li Vision based gesture recognition system with high accuracy
Alhamazani et al. Using depth cameras for recognition and segmentation of hand gestures

Legal Events

Date Code Title Description
C06 Publication
PB01 Publication
WD01 Invention patent application deemed withdrawn after publication

Application publication date: 20161207
