CN102402289B - Mouse recognition method for gesture based on machine vision - Google Patents


Publication number: CN102402289B (application CN201110374316.3A)
Authority: CN (China)
Prior art keywords: gesture, mouse, target, target gesture, frame
Prior art date: 2011-11-22
Legal status: Expired - Fee Related
Application number: CN201110374316.3A
Other languages: Chinese (zh)
Other versions: CN102402289A (en)
Inventors: 徐向民 (Xiangmin Xu), 孙骁 (Xiao Sun)
Current Assignee: South China University of Technology (SCUT)
Original Assignee: South China University of Technology (SCUT)
Priority date / filing date: 2011-11-22
Application filed by South China University of Technology (SCUT)
Priority to CN201110374316.3A
Publication of CN102402289A: 2012-04-04
Application granted; publication of CN102402289B: 2014-09-10
Legal status: Expired - Fee Related

Landscapes

  • Image Analysis (AREA)

Abstract

The invention discloses a machine-vision-based gesture mouse recognition method comprising the following steps: (1) establishing an active shape model of the gesture; (2) training the gesture off-line to obtain a gesture feature classifier; (3) acquiring images; (4) extracting local binary pattern (LBP) features from the images and searching them for the target gesture with the classifier from step (2), then proceeding to step (5); (5) locating the fingertips; (6) mapping to mouse events. The invention provides a natural and intuitive way of human-computer interaction: the user needs no auxiliary equipment and completes mouse operations through natural hand and finger movement; the method is little affected by lighting and background, and control can be switched freely when multiple users operate the computer.

Description

Mouse recognition method for gesture based on machine vision
Technical field
The present invention relates to machine vision and human-computer interaction technology, and in particular to a gesture mouse recognition method based on machine vision.
Background art
With the development of computer technology, interaction between people and computers has become an important part of daily life. The mouse is the main device for interacting with graphical interfaces, but during interaction the user must hold a physical mouse and move it on a flat surface, which limits the naturalness and friendliness of the interaction. Research into interaction modes that match natural human habits has therefore become a development trend in the field of human-computer interaction.
Gesture-based human-computer interaction determines the operation the user wants to perform by recognizing changes in the user's gestures. Such methods fall into two classes: those based on data gloves and those based on machine vision. Data-glove methods require the user to wear a glove-like sensor through which the computer obtains the position and motion of the hand; they are cumbersome and inflexible. Existing machine-vision methods mostly segment the gesture region by skin color extraction, which is strongly affected by illumination and usually requires the captured video to be confined to a small region around the hand, limiting the hand's range of motion.
Summary of the invention
In order to overcome the deficiencies of the prior art, the object of the present invention is to provide a natural and intuitive gesture mouse recognition method based on machine vision.
The object of the present invention is achieved through the following technical solution:
A gesture mouse recognition method based on machine vision comprises the following steps:
(1) Establish the active shape model of the gesture: first perform B-spline modeling of the contour of the open five-finger gesture to obtain a basic gesture model; then, on the basis of the basic gesture model, establish the B-spline active shape model of the gesture, whose state space is:

χ = (x, y, θ, s, λ, θ₁, l₁, θ₂, l₂, θ₃, l₃, θ₄, l₄, θ₅, θ₆)

where x and y are the horizontal and vertical coordinates of the gesture position; θ is the in-plane rotation angle of the gesture; s is the scale of the gesture; θ₁ and l₁ are the little finger's rotation angle about its root and its length; θ₂ and l₂ are the ring finger's rotation angle about its root and its length; θ₃ and l₃ are the middle finger's rotation angle about its root and its length; θ₄ and l₄ are the index finger's rotation angle about its root and its length; θ₆ is the thumb's rotation angle about its root; θ₅ is the thumb's rotation angle about its middle joint;
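The patent does not prescribe a concrete data layout for this state space. Purely as an illustration, the 16-dimensional vector could be carried in a structure such as the following Python sketch; all field names are hypothetical, and the meaning of λ is not spelled out in the text:

```python
from dataclasses import dataclass

@dataclass
class GestureState:
    x: float       # horizontal coordinate of the gesture position
    y: float       # vertical coordinate of the gesture position
    theta: float   # in-plane rotation angle of the whole gesture
    s: float       # overall scale of the gesture
    lam: float     # B-spline model parameter "lambda" (not described in the text)
    theta1: float  # little finger: rotation angle about its root
    l1: float      # little finger: length
    theta2: float  # ring finger: rotation angle about its root
    l2: float      # ring finger: length
    theta3: float  # middle finger: rotation angle about its root
    l3: float      # middle finger: length
    theta4: float  # index finger: rotation angle about its root
    l4: float      # index finger: length
    theta5: float  # thumb: rotation angle about its middle joint
    theta6: float  # thumb: rotation angle about its root
```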
(2) Train the gesture off-line to obtain a gesture feature classifier;
(3) Acquire images;
(4) Extract local binary pattern (LBP) features from the image and search it for the target gesture with the gesture feature classifier obtained in step (2); when the target gesture is found, proceed to step (5);
(5) Fingertip location:
(5-1) Fit the active shape model of step (1) to the target gesture found by the search to obtain the initial state space χ₀ of the target gesture, and set the initial posterior probability density p₀(χ₀|Z₀) = 1;
(5-2) Observe the target gesture and iteratively update its state with a particle filter tracking algorithm. The iterative update proceeds as follows:
Define the observation of frame t−1 as Z_{t-1}, and the posterior probability that the state space of the target gesture at frame t−1 is χ_{t-1} as p_{t-1}(χ_{t-1}|Z_{t-1}).
Given the optimal contour state χ̂_{t-1} of the target gesture at frame t−1, let the transition probability to the contour state of the target gesture at frame t be p_t(χ_t|χ_{t-1}), and let the likelihood of the contour state of the target gesture at frame t be p_t(Z_t|χ_t).
The posterior probability that the state space of the target gesture at frame t is χ_t is then:

p_t(χ_t|Z_t) = p_t(Z_t|χ_t) · ∫ p_t(χ_t|χ_{t-1}) · p_{t-1}(χ_{t-1}|Z_{t-1}) dχ_{t-1} / p_t(Z_t)

The optimal contour state χ̂_t of the target gesture at frame t is estimated from the posterior probability density p_t(χ_t|Z_t);
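For illustration, this update can be realized with a standard sequential-importance-resampling scheme. In the following Python sketch, the transition model `propagate` and the observation likelihood `likelihood` are placeholders for the gesture-specific models described above, and taking the posterior mean as the optimal state is one possible choice the text leaves open:

```python
import numpy as np

def particle_filter_step(particles, weights, observation, propagate, likelihood):
    """One iteration: approximate p_t(chi_t|Z_t) from p_{t-1}(chi_{t-1}|Z_{t-1})."""
    n = len(particles)
    # Resample particle indices according to the previous posterior weights.
    idx = np.random.choice(n, size=n, p=weights)
    # Predict: draw chi_t from the transition model p_t(chi_t | chi_{t-1}).
    particles = np.array([propagate(particles[i]) for i in idx])
    # Weight: evaluate the observation likelihood p_t(Z_t | chi_t).
    weights = np.array([likelihood(observation, chi) for chi in particles])
    weights = weights / weights.sum()
    # Estimate the optimal contour state from the weighted posterior,
    # here as the posterior mean.
    chi_hat = (weights[:, None] * particles).sum(axis=0)
    return particles, weights, chi_hat
```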
(5-3) At frame t, mark the coordinate A of the index fingertip and the coordinate B of the thumb fingertip on the optimal contour of the target gesture;
(6) Mouse mapping:
Construct the right triangle ABC: draw a vertical line through point A and a horizontal line through point B; their intersection is C.
Let α be the angle between segments AB and CB, and β the angle between segments BA and CA. α falling below a threshold α_T is mapped to a left-button click; β falling below a threshold β_T is mapped to a right-button click.
The midpoint M of segment AB is mapped to the relative coordinate of the mouse. If in the previous frame image the index fingertip is at A*, the thumb fingertip at B*, and the midpoint of segment A*B* is M*, then the directed segment M*M is mapped to the mouse's relative movement vector on the screen.
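A small Python sketch of this mapping, assuming A and B are pixel coordinates of the index and thumb fingertips and that the thresholds alpha_T and beta_T are given:

```python
import math

def map_mouse(A, B, A_prev, B_prev, alpha_T, beta_T):
    ax, ay = A
    bx, by = B
    # Right triangle ABC: C is the intersection of the vertical line
    # through A and the horizontal line through B, i.e. C = (ax, by).
    # alpha is the angle of AB at B (against the horizontal leg CB);
    # beta, the angle at A (against the vertical leg CA), is its complement.
    alpha = math.atan2(abs(ay - by), abs(ax - bx))
    beta = math.pi / 2 - alpha
    left_click = alpha < alpha_T
    right_click = beta < beta_T
    # The midpoint M of AB gives the cursor position; its displacement from
    # the previous frame's midpoint M* is the relative movement vector.
    M = ((ax + bx) / 2, (ay + by) / 2)
    M_prev = ((A_prev[0] + B_prev[0]) / 2, (A_prev[1] + B_prev[1]) / 2)
    move = (M[0] - M_prev[0], M[1] - M_prev[1])
    return left_click, right_click, M, move
```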
The off-line training of the gesture in step (2), which yields the gesture feature classifier, is specifically:
Use 600 target gesture images as positive samples and 1200 non-target gestures as negative samples, and train with the opencv_traincascade program provided in the open-source image processing library OpenCV.
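For illustration, a typical invocation of the opencv_traincascade tool consistent with these figures might look as follows. The file names are hypothetical, and the slightly reduced -numPos value follows the tool's usual convention of keeping spare positive samples in reserve:

```python
import subprocess

subprocess.run([
    "opencv_traincascade",
    "-data", "cascade_out",    # output directory for the trained stages
    "-vec", "positives.vec",   # vec file built from the 600 target gestures
    "-bg", "negatives.txt",    # list of the 1200 non-target images
    "-numPos", "540",          # somewhat fewer than 600, per tool convention
    "-numNeg", "1200",
    "-numStages", "20",        # 20-stage cascade, as used in the search step
    "-featureType", "LBP",     # local binary pattern features
    "-w", "30", "-h", "30",    # 30 x 30 detection window
], check=True)
```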
The search for the target gesture in the image with the gesture feature classifier obtained in step (2), as recited in step (4), is specifically:
Take all 30 × 30 subwindows of the entire image and pass each subwindow through the 20 stages of the gesture feature classifier, eliminating non-target-gesture subwindows stage by stage; a subwindow that passes all 20 stages is determined to be the target gesture. If no target gesture is found at this scale, enlarge the subwindow by a factor of 1.3 and detect again with the gesture feature classifier.
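This staged, multi-scale search is the same scheme OpenCV's CascadeClassifier implements, so an equivalent detection call can be sketched as follows; the model file name is hypothetical:

```python
import cv2

# 20-stage LBP cascade produced by the off-line training step.
cascade = cv2.CascadeClassifier("gesture_lbp_cascade.xml")

def search_target_gesture(gray_frame):
    # 30 x 30 base window, rescaled by a factor of 1.3 between scales,
    # matching the subwindow scheme described in the text.
    hits = cascade.detectMultiScale(gray_frame, scaleFactor=1.3, minSize=(30, 30))
    return hits  # (x, y, w, h) windows accepted by all 20 stages
```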
During the search for the target gesture described in step (4), if the target gesture was found in the previous frame image, the search range of the current frame image is narrowed to the region around the target gesture region of the previous frame image, specifically:
Let point O be the center of the target gesture region found in the previous frame image, with distance d₁ from O to the left border of the target gesture region, d₂ to the right border, d₃ to the top border, and d₄ to the bottom border. The new search region is a rectangular search box centered on O, with distance 2·d₁ from O to its left border, 2·d₂ to its right border, 2·d₃ to its top border, and 2·d₄ to its bottom border.
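A sketch of this narrowing in Python, assuming the previous detection is an axis-aligned box (x, y, w, h); clamping to the frame borders is an added safeguard the text does not mention:

```python
def expanded_search_box(x, y, w, h, frame_w, frame_h):
    ox, oy = x + w / 2, y + h / 2      # center point O of the previous hit
    d1, d2 = ox - x, (x + w) - ox      # distances to left/right borders
    d3, d4 = oy - y, (y + h) - oy      # distances to top/bottom borders
    # Doubled rectangle around O, clamped to the image.
    left = max(0, int(ox - 2 * d1))
    right = min(frame_w, int(ox + 2 * d2))
    top = max(0, int(oy - 2 * d3))
    bottom = min(frame_h, int(oy + 2 * d4))
    return left, top, right, bottom
```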
The observation of the target gesture in step (5-2) is performed by an adaptive skin color segmentation method comprising the following steps:
(5-2-1) Establish a TSL-space skin color model;
(5-2-2) Perform skin color filtering;
(5-2-3) Apply a dilation operation to the binary image obtained after skin color filtering.
Establishing the TSL-space skin color model in step (5-2-1) is specifically:
Convert from the RGB color space to the TSL color space by the following formulas:

T = (1 / 2π) · tan⁻¹(r′ / g′) + 0.5
S = √( (9/5) · (r′² + g′²) )
L = 0.299·R + 0.587·G + 0.114·B

where r′ = r − 1/3, g′ = g − 1/3, r = R / (R + G + B), g = G / (R + G + B);
R, G, B are the components of the RGB color model, and T, S, L are the components of the TSL color model.
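A per-pixel NumPy sketch of this conversion; the small epsilon guarding the division and the use of arctan2 to extend tan⁻¹(r′/g′) across quadrants are implementation choices the formulas leave open:

```python
import numpy as np

def rgb_to_tsl(img):
    """img: H x W x 3 float array in RGB order; returns (T, S, L) arrays."""
    R, G, B = img[..., 0], img[..., 1], img[..., 2]
    total = R + G + B + 1e-6            # guard against division by zero
    r_ = R / total - 1.0 / 3            # r' = r - 1/3
    g_ = G / total - 1.0 / 3            # g' = g - 1/3
    T = np.arctan2(r_, g_) / (2 * np.pi) + 0.5
    S = np.sqrt(9.0 / 5 * (r_ ** 2 + g_ ** 2))
    L = 0.299 * R + 0.587 * G + 0.114 * B
    return T, S, L
```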
The skin color filtering in step (5-2-2) is specifically:
Sample the face and hand regions of 500 images containing skin color regions, and estimate the mean vector E and covariance matrix Σ of the two-dimensional Gaussian distribution of the T and S components under the TSL model;
Examine each pixel: if the Mahalanobis distance between the vector C = (T, S) formed by the pixel's T and S components and the mean vector E is below a threshold Threshold, the pixel is considered to belong to a skin color region. The Mahalanobis distance is d = (C − E)ᵀ Σ⁻¹ (C − E).
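A vectorized sketch of this filter, assuming E and Σ have already been estimated from the samples; note that d as defined above is the squared Mahalanobis distance, and the code follows the patent's definition:

```python
import numpy as np

def skin_mask(T, S, E, cov, threshold):
    """Per-pixel skin test on the (T, S) components against mean E, covariance cov."""
    C = np.stack([T, S], axis=-1) - E     # per-pixel (T, S) minus the mean E
    inv = np.linalg.inv(cov)              # inverse covariance matrix Sigma^{-1}
    # d = (C - E)^T Sigma^{-1} (C - E), evaluated at every pixel.
    d = np.einsum("...i,ij,...j->...", C, inv, C)
    return d < threshold                  # binary skin image
```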
The threshold Threshold is determined by the following process:
Estimate the distance between the vector C = (T, S) and the mean vector E from the mean vector E and covariance matrix Σ to obtain an initial value;
Compute the confidence of each candidate threshold Threshold by the following formula:

Confidence = 2 · PosSkin / MaskArea − NegSkin / BgArea

where PosSkin is the number of skin pixels inside the skin color template region, MaskArea is the total area of the skin color template region, NegSkin is the number of skin pixels inside the background template region, and BgArea is the area of the background template region;
The threshold Threshold with the maximum confidence is taken as the current best TSL skin color model parameter, and the TSL-space skin color model is updated accordingly.
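A sketch of this selection over a set of candidate thresholds, assuming mask_region and bg_region are boolean arrays marking the skin color template region and the background template region, and d is the per-pixel distance image from the filter above:

```python
import numpy as np

def best_threshold(d, mask_region, bg_region, candidates):
    best, best_conf = None, -np.inf
    for thr in candidates:
        skin = d < thr
        pos = np.count_nonzero(skin & mask_region)   # PosSkin
        neg = np.count_nonzero(skin & bg_region)     # NegSkin
        # Confidence = 2 * PosSkin / MaskArea - NegSkin / BgArea
        conf = 2.0 * pos / mask_region.sum() - neg / bg_region.sum()
        if conf > best_conf:
            best, best_conf = thr, conf
    return best
```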
Compared with the prior art, the present invention has the following advantages and technical effects:
1. The present invention provides a natural and intuitive human-computer interaction mode; the user carries no auxiliary equipment and completes mouse operations through natural hand and finger movement.
2. Combining LBP features with particle filter tracking under the active shape model (ASM), the present invention performs gesture region localization and contour tracking with little sensitivity to illumination and background.
3. The present invention removes the restrictions a conventional mouse imposes on the user's freedom of movement: the user can control the computer from a medium distance of up to 2 m in front of the camera.
4. When multiple users operate the computer, control can be switched freely among them.
Brief description of the drawings
Fig. 1 is the flow chart of the machine-vision-based gesture mouse recognition method of the present invention.
Fig. 2 is a schematic diagram of the open five-finger gesture.
Fig. 3 is a schematic diagram of the target gesture.
Fig. 4 is a schematic diagram of the target gesture search process.
Fig. 5 is a schematic diagram of the template setup in the skin color model.
Fig. 6 is a schematic diagram of the right triangle ABC constructed in the mouse mapping process.
Fig. 7 is a schematic diagram of a left-button click.
Fig. 8 is a schematic diagram of a right-button click.
Fig. 9 is a schematic diagram of mouse movement.
Detailed description of the embodiments
The present invention is described in further detail below in conjunction with an embodiment and the accompanying drawings, but embodiments of the present invention are not limited thereto.
Embodiment
The gesture mouse recognition method of the present invention runs on a gesture mouse system. The gesture mouse system of this embodiment consists of an image acquisition module, an image processing module, and a mouse event response module. The image acquisition module comprises a camera and is responsible for capturing images of the user in real time and transferring them to the image processing module.
The image processing module and the mouse event response module run on the computer. The image processing module runs the image processing algorithms that analyze the user images in real time, converting the motion of the user's hand and fingers into corresponding control instructions. The mouse event response module accepts the control instructions sent by the image processing module and transforms them into mouse response events.
As shown in Fig. 1, the machine-vision-based gesture mouse recognition method of this embodiment comprises the following steps:
(1) Establish the active shape model of the gesture: first perform B-spline modeling of the contour of the open five-finger gesture (shown in Fig. 2) to obtain a basic gesture model; then, on the basis of the basic gesture model, establish the B-spline active shape model of the gesture, whose state space is:

χ = (x, y, θ, s, λ, θ₁, l₁, θ₂, l₂, θ₃, l₃, θ₄, l₄, θ₅, θ₆)

where x and y are the horizontal and vertical coordinates of the gesture position; θ is the in-plane rotation angle of the gesture; s is the scale of the gesture; θ₁ and l₁ are the little finger's rotation angle about its root and its length; θ₂ and l₂ are the ring finger's rotation angle about its root and its length; θ₃ and l₃ are the middle finger's rotation angle about its root and its length; θ₄ and l₄ are the index finger's rotation angle about its root and its length; θ₆ is the thumb's rotation angle about its root; θ₅ is the thumb's rotation angle about its middle joint;
(2) Train the hand region off-line to obtain a gesture feature classifier: use 600 target gesture images as positive samples and 1200 non-target gestures as negative samples, and train with the opencv_traincascade program provided in the open-source image processing library OpenCV.
(3) Acquire images;
(4) Extract the local binary pattern (LBP) features of the image and search it for the target gesture shown in Fig. 3 with the gesture feature classifier obtained in step (2); when the target gesture is found, proceed to step (5).
The search for the target gesture in the image with the gesture feature classifier obtained in step (2) is specifically:
Take all 30 × 30 subwindows of the entire image and pass each subwindow through the 20 stages of the gesture feature classifier, eliminating non-target-gesture subwindows stage by stage; only a subwindow that passes all 20 stages is determined to be the target gesture. If no target gesture is found at this scale, enlarge the subwindow by a factor of 1.3 and detect again with the gesture feature classifier.
To improve search efficiency, if the target gesture was found in the previous frame image, the search range of the current frame image is narrowed to the region around the target gesture region of the previous frame image. As shown in Fig. 4, suppose point O is the center of the target gesture region found in the previous frame image (the inner box in Fig. 4), with distance d₁ from O to the left border of that region, d₂ to the right border, d₃ to the top border, and d₄ to the bottom border. The new search region is a rectangular search box centered on O (the outer box in Fig. 4), with distance 2·d₁ from O to its left border, 2·d₂ to its right border, 2·d₃ to its top border, and 2·d₄ to its bottom border.
(5) Fingertip location:
(5-1) Fit the active shape model of step (1) to the target gesture found by the search to obtain the initial state space χ₀ of the target gesture, and set the initial posterior probability density p₀(χ₀|Z₀) = 1;
(5-2) Observe the target gesture with an adaptive skin color segmentation method, and iteratively update the state of the target gesture with a particle filter tracking algorithm.
The adaptive skin color segmentation method comprises the following steps:
(5-2-1) Establish the TSL-space skin color model:
First convert from the RGB color space to the TSL color space by the following formulas:

T = (1 / 2π) · tan⁻¹(r′ / g′) + 0.5
S = √( (9/5) · (r′² + g′²) )
L = 0.299·R + 0.587·G + 0.114·B

where r′ = r − 1/3, g′ = g − 1/3, r = R / (R + G + B), g = G / (R + G + B);
R, G, B are the components of the RGB color model, and T, S, L are the components of the TSL color model;
(5-2-2) Perform skin color filtering: sample the face and hand regions of 500 images containing skin color regions, and estimate the mean vector E and covariance matrix Σ of the two-dimensional Gaussian distribution of the T and S components under the TSL model;
Examine each pixel: if the Mahalanobis distance between the vector C = (T, S) formed by the pixel's T and S components and the mean vector E is below the threshold Threshold, the pixel is considered to belong to a skin color region; the Mahalanobis distance is d = (C − E)ᵀ Σ⁻¹ (C − E);
The threshold Threshold is determined by the following process:
Estimate the distance between the vector C = (T, S) and the mean vector E from the mean vector E and covariance matrix Σ to obtain an initial value;
Compute the confidence of each candidate threshold Threshold by the following formula:

Confidence = 2 · PosSkin / MaskArea − NegSkin / BgArea

The template setup in the skin color model is shown in Fig. 5. In the formula above, PosSkin is the number of skin pixels inside the skin color template region (the gesture region in Fig. 5), MaskArea is the total area of the skin color template region, NegSkin is the number of skin pixels inside the background template region (the black background in Fig. 5), and BgArea is the area of the background template region.
The threshold Threshold with the maximum confidence is taken as the current best TSL skin color model parameter, and the TSL-space skin color model is updated accordingly;
(5-2-3) Apply a dilation operation to the binary image obtained after skin color filtering, to reduce the holes left by the filtering.
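For illustration, the dilation can be done with OpenCV as follows; the 3 × 3 kernel and single iteration are assumptions, since the text does not specify them:

```python
import cv2
import numpy as np

def fill_skin_holes(mask):
    # mask: binary skin image from the filtering step (nonzero = skin).
    kernel = np.ones((3, 3), np.uint8)   # assumed structuring element
    return cv2.dilate(mask.astype(np.uint8), kernel, iterations=1)
```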
The iterative update proceeds as follows:
Define the observation of frame t−1 as Z_{t-1}, and the posterior probability that the state space of the target gesture at frame t−1 is χ_{t-1} as p_{t-1}(χ_{t-1}|Z_{t-1});
Given the optimal contour state χ̂_{t-1} of the target gesture at frame t−1, let the transition probability to the contour state of the target gesture at frame t be p_t(χ_t|χ_{t-1}), and let the likelihood of the contour state of the target gesture at frame t be p_t(Z_t|χ_t);
The posterior probability that the state space of the target gesture at frame t is χ_t is then:

p_t(χ_t|Z_t) = p_t(Z_t|χ_t) · ∫ p_t(χ_t|χ_{t-1}) · p_{t-1}(χ_{t-1}|Z_{t-1}) dχ_{t-1} / p_t(Z_t)

The optimal contour state χ̂_t of the target gesture at frame t is estimated from the posterior probability density p_t(χ_t|Z_t).
(5-3) At frame t, mark the coordinate A of the index fingertip and the coordinate B of the thumb fingertip on the optimal contour of the target gesture;
(6) Mouse mapping:
As shown in Fig. 6, construct the right triangle ABC: draw a vertical line through point A and a horizontal line through point B; their intersection is C. Let α be the angle between segments AB and CB, and β the angle between segments BA and CA. α falling below the threshold α_T is mapped to a left-button click (Fig. 7 shows a left-button click); β falling below the threshold β_T is mapped to a right-button click (Fig. 8 shows a right-button click).
The midpoint M of segment AB is mapped to the relative coordinate of the mouse. If in the previous frame image the index fingertip is at A* and the thumb fingertip at B*, and the midpoint of segment A*B* is M*, then the directed segment M*M is mapped to the mouse's relative movement vector on the screen (Fig. 9 shows mouse movement).
The above-described embodiment is a preferred embodiment of the present invention, but embodiments of the present invention are not limited to it; any change, modification, substitution, combination, or simplification made without departing from the spirit and principle of the present invention shall be an equivalent replacement and is included within the protection scope of the present invention.

Claims (8)

1. A gesture mouse recognition method based on machine vision, characterized by comprising the following steps:
(1) establishing the active shape model of the gesture: first performing B-spline modeling of the contour of the open five-finger gesture to obtain a basic gesture model; then, on the basis of the basic gesture model, establishing the B-spline active shape model of the gesture, whose state space is:

χ = (x, y, θ, s, λ, θ₁, l₁, θ₂, l₂, θ₃, l₃, θ₄, l₄, θ₅, θ₆)

wherein x and y are the horizontal and vertical coordinates of the gesture position; θ is the in-plane rotation angle of the gesture; s is the scale of the gesture; θ₁ and l₁ are the little finger's rotation angle about its root and its length; θ₂ and l₂ are the ring finger's rotation angle about its root and its length; θ₃ and l₃ are the middle finger's rotation angle about its root and its length; θ₄ and l₄ are the index finger's rotation angle about its root and its length; θ₆ is the thumb's rotation angle about its root; θ₅ is the thumb's rotation angle about its middle joint;
(2) training the gesture off-line to obtain a gesture feature classifier;
(3) acquiring images;
(4) extracting the local binary pattern (LBP) features of the image and searching it for the target gesture with the gesture feature classifier obtained in step (2); when the target gesture is found, proceeding to step (5);
(5) fingertip location:
(5-1) fitting the active shape model of the gesture in step (1) to the target gesture found by the search to obtain the initial state space χ₀ of the target gesture, and setting the initial posterior probability density p₀(χ₀|Z₀) = 1;
(5-2) observing the target gesture and iteratively updating the state of the target gesture with a particle filter tracking algorithm, the iterative update being specifically as follows:
defining the observation of frame t−1 as Z_{t-1}, and the posterior probability that the state space of the target gesture at frame t−1 is χ_{t-1} as p_{t-1}(χ_{t-1}|Z_{t-1});
given the optimal contour state χ̂_{t-1} of the target gesture at frame t−1, letting the transition probability to the contour state of the target gesture at frame t be p_t(χ_t|χ_{t-1}), and the likelihood of the contour state of the target gesture at frame t be p_t(Z_t|χ_t);
the posterior probability that the state space of the target gesture at frame t is χ_t being:

p_t(χ_t|Z_t) = p_t(Z_t|χ_t) · ∫ p_t(χ_t|χ_{t-1}) · p_{t-1}(χ_{t-1}|Z_{t-1}) dχ_{t-1} / p_t(Z_t)

and estimating the optimal contour state χ̂_t of the target gesture at frame t from the posterior probability density p_t(χ_t|Z_t);
(5-3) at frame t, marking the coordinate A of the index fingertip and the coordinate B of the thumb fingertip on the optimal contour of the target gesture;
(6) mouse mapping:
constructing the right triangle ABC: drawing a vertical line through point A and a horizontal line through point B, the intersection being C;
letting α be the angle between segments AB and CB and β the angle between segments BA and CA, mapping α falling below a threshold α_T to a left-button click and β falling below a threshold β_T to a right-button click;
mapping the midpoint M of segment AB to the relative coordinate of the mouse; if in the previous frame image the index fingertip is at A* and the thumb fingertip at B*, and the midpoint of segment A*B* is M*, mapping the directed segment M*M to the mouse's relative movement vector on the screen.
2. The gesture mouse recognition method based on machine vision according to claim 1, characterized in that the off-line training of the gesture in step (2) to obtain the gesture feature classifier is specifically:
using 600 target gesture images as positive samples and 1200 non-target gestures as negative samples, and training with the opencv_traincascade program provided in the open-source image processing library OpenCV.
3. The gesture mouse recognition method based on machine vision according to claim 1, characterized in that the searching for the target gesture in the image with the gesture feature classifier obtained in step (2), as recited in step (4), is specifically:
taking all 30 × 30 subwindows of the entire image, passing each subwindow through the 20 stages of the gesture feature classifier, eliminating non-target-gesture subwindows stage by stage, and determining a subwindow that passes all 20 stages to be the target gesture; if no target gesture is found at this scale, enlarging the subwindow by a factor of 1.3 and detecting again with the gesture feature classifier.
4. The gesture mouse recognition method based on machine vision according to claim 3, characterized in that, during the search for the target gesture, if the target gesture was found in the previous frame image, the search range of the current frame image is narrowed to the region around the target gesture region of the previous frame image, specifically:
letting point O be the center of the target gesture region found in the previous frame image, with distance d₁ from O to the left border of the target gesture region, d₂ to the right border, d₃ to the top border, and d₄ to the bottom border, the new search region being a rectangular search box centered on O, with distance 2·d₁ from O to its left border, 2·d₂ to its right border, 2·d₃ to its top border, and 2·d₄ to its bottom border.
5. The gesture mouse recognition method based on machine vision according to claim 1, characterized in that the observing of the target gesture in step (5-2) is performed by an adaptive skin color segmentation method comprising the following steps:
(5-2-1) establishing a TSL-space skin color model;
(5-2-2) performing skin color filtering;
(5-2-3) applying a dilation operation to the binary image obtained after skin color filtering.
6. The gesture mouse recognition method based on machine vision according to claim 5, characterized in that establishing the TSL-space skin color model in step (5-2-1) is specifically:
converting from the RGB color space to the TSL color space by the following formulas:

T = (1 / 2π) · tan⁻¹(r′ / g′) + 0.5
S = √( (9/5) · (r′² + g′²) )
L = 0.299·R + 0.587·G + 0.114·B

where r′ = r − 1/3, g′ = g − 1/3, r = R / (R + G + B), g = G / (R + G + B);
R, G, B are the components of the RGB color model, and T, S, L are the components of the TSL color model.
7. The gesture mouse recognition method based on machine vision according to claim 6, characterized in that the skin color filtering in step (5-2-2) is specifically:
sampling the face and hand regions of 500 images containing skin color regions, and estimating the mean vector E and covariance matrix Σ of the two-dimensional Gaussian distribution of the T and S components under the TSL model;
examining each pixel: if the Mahalanobis distance between the vector C = (T, S) formed by the pixel's T and S components and the mean vector E is below a threshold Threshold, the pixel is considered to belong to a skin color region; the Mahalanobis distance is d = (C − E)ᵀ Σ⁻¹ (C − E).
8. The gesture mouse recognition method based on machine vision according to claim 7, characterized in that the threshold Threshold is determined by the following process:
estimating the distance between the vector C = (T, S) and the mean vector E from the mean vector E and covariance matrix Σ to obtain an initial value;
computing the confidence of each candidate threshold Threshold by the following formula:

Confidence = 2 · PosSkin / MaskArea − NegSkin / BgArea

wherein PosSkin is the number of skin pixels inside the skin color template region, MaskArea is the total area of the skin color template region, NegSkin is the number of skin pixels inside the background template region, and BgArea is the area of the background template region;
taking the threshold Threshold with the maximum confidence as the current best TSL skin color model parameter, and updating the TSL-space skin color model.
CN201110374316.3A, priority date 2011-11-22, filed 2011-11-22: Mouse recognition method for gesture based on machine vision. Expired - Fee Related, granted as CN102402289B (en).

Priority Applications (1)

Application Number | Priority Date | Filing Date | Title
CN201110374316.3A | 2011-11-22 | 2011-11-22 | Mouse recognition method for gesture based on machine vision


Publications (2)

Publication Number | Publication Date
CN102402289A (en) | 2012-04-04
CN102402289B (en) | 2014-09-10

Family

ID=45884573

Family Applications (1)

Application Number | Title | Priority Date | Filing Date | Status
CN201110374316.3A | Mouse recognition method for gesture based on machine vision | 2011-11-22 | 2011-11-22 | Expired - Fee Related, granted as CN102402289B (en)

Country Status (1)

Country Link
CN (1) CN102402289B (en)

Families Citing this family (13)

* Cited by examiner, † Cited by third party
Publication number Priority date Publication date Assignee Title
CN102662471B (en) * 2012-04-09 2015-02-18 沈阳航空航天大学 Computer vision mouse
CN103809738B (en) * 2012-11-13 2017-03-29 联想(北京)有限公司 A kind of information collecting method and electronic equipment
CN103914149B (en) * 2014-04-01 2017-02-08 复旦大学 Gesture interaction method and gesture interaction system for interactive television
CN105261038B (en) * 2015-09-30 2018-02-27 华南理工大学 Finger tip tracking based on two-way light stream and perception Hash
CN106406518B (en) * 2016-08-26 2019-01-18 清华大学 Gesture control device and gesture identification method
CN106778670A (en) * 2016-12-30 2017-05-31 上海集成电路研发中心有限公司 Gesture identifying device and recognition methods
CN107085438B (en) * 2017-04-28 2020-02-07 中国船舶重工集团公司第七0九研究所 Unmanned aerial vehicle path correction method and system based on quasi-uniform spline curve
CN107390867B (en) * 2017-07-12 2019-12-10 武汉大学 Man-machine interaction system based on android watch
EP3435202A1 (en) * 2017-07-25 2019-01-30 Siemens Healthcare GmbH Allocation of a tool to a gesture
CN107608510A (en) * 2017-09-13 2018-01-19 华中师范大学 Method for building up, device and the electronic equipment in gesture model storehouse
CN109101872B (en) * 2018-06-20 2023-04-18 济南大学 Method for generating 3D gesture mouse
CN110567441B (en) * 2019-07-29 2021-09-28 广东星舆科技有限公司 Particle filter-based positioning method, positioning device, mapping and positioning method
CN115578627B (en) * 2022-09-21 2023-05-09 凌度(广东)智能科技发展有限公司 Monocular image boundary recognition method, monocular image boundary recognition device, medium and curtain wall robot

Citations (3)

* Cited by examiner, † Cited by third party
Publication number Priority date Publication date Assignee Title
CN101344816A (en) * 2008-08-15 2009-01-14 华南理工大学 Human-machine interaction method and device based on sight tracing and gesture discriminating
CN101763515A (en) * 2009-09-23 2010-06-30 中国科学院自动化研究所 Real-time gesture interaction method based on computer vision
CN101901052A (en) * 2010-05-24 2010-12-01 华南理工大学 Target control method based on mutual reference of both hands

Family Cites Families (1)

* Cited by examiner, † Cited by third party
Publication number Priority date Publication date Assignee Title
US20040001113A1 (en) * 2002-06-28 2004-01-01 John Zipperer Method and apparatus for spline-based trajectory classification, gesture detection and localization


Also Published As

Publication number Publication date
CN102402289A (en) 2012-04-04


Legal Events

Date Code Title Description
C06 Publication
PB01 Publication
C10 Entry into substantive examination
SE01 Entry into force of request for substantive examination
C14 Grant of patent or utility model
GR01 Patent grant
CF01 Termination of patent right due to non-payment of annual fee

Granted publication date: 2014-09-10
Termination date: 2021-11-22