CN101593022A - A fast human-computer interaction method based on fingertip tracking - Google Patents

A fast human-computer interaction method based on fingertip tracking

Info

Publication number
CN101593022A
CN101593022A CNA2009100406993A CN200910040699A CN101593022A
Authority
CN
China
Prior art keywords
image
fingertip
hand
tracking
Prior art date
Legal status (The legal status is an assumption and is not a legal conclusion. Google has not performed a legal analysis and makes no representation as to the accuracy of the status listed.)
Granted
Application number
CNA2009100406993A
Other languages
Chinese (zh)
Other versions
CN101593022B (en)
Inventor
徐向民 (Xu Xiangmin)
Current Assignee (The listed assignees may be inaccurate. Google has not performed a legal analysis and makes no representation or warranty as to the accuracy of the list.)
South China University of Technology SCUT
Original Assignee
South China University of Technology SCUT
Priority date (The priority date is an assumption and is not a legal conclusion. Google has not performed a legal analysis and makes no representation as to the accuracy of the date listed.)
Filing date
Publication date
Application filed by South China University of Technology SCUT filed Critical South China University of Technology SCUT
Priority to CN2009100406993A priority Critical patent/CN101593022B/en
Publication of CN101593022A publication Critical patent/CN101593022A/en
Application granted granted Critical
Publication of CN101593022B publication Critical patent/CN101593022B/en
Expired - Fee Related legal-status Critical Current
Anticipated expiration legal-status Critical

Abstract

The invention discloses a fast human-computer interaction method based on fingertip tracking. The method first performs image preprocessing: a wide-angle (60 to 120 degree) high-definition camera captures a high-resolution view of a room-sized area, and geometric distortion in the captured images is corrected. Hand-image extraction follows: a skin-color filter, a motion filter and a color segmenter are applied to the corrected image, the results are fused, and the hand image is segmented out of the corrected image. Finally, the fingertip is located: a histogram-based coarse localization finds an approximate fingertip position, a search window is constructed centered on that position, and template matching pinpoints the fingertip precisely. The invention improves the efficiency of scene-image perception and enables finger detection and localization over a large range; the image distance between the user's eyes is used to decide whether fingertip localization can proceed, so no prior knowledge of the environment is required and the method is notably robust to operating distance.

Description

A fast human-computer interaction method based on fingertip tracking
Technical field
The present invention relates to a human-computer interaction method based on fingertip tracking, belongs to the fields of computer vision and video tracking, and is applicable to the human-computer interaction component of virtual reality systems.
Background technology
Human-computer interaction technology is one of the key technologies of virtual reality systems: it connects people with computers, and the real world with the virtual world. Gesture is one of the most intuitive modes of interpersonal communication, and vision-based hand tracking and gesture recognition are indispensable key technologies for realizing a new generation of human-computer interaction.
In human-computer interaction, hand tracking is mainly realized with data gloves or with visual recognition.
With a data glove, the user wears a glove-like sensor through which the computer obtains rich information such as the position of the hand and the flexion of the fingers. The free-hand target-control system built by B. Thomas et al. in 1993 relied on a data glove as its input medium, but this required the experimenter to wear specialized equipment, which is inconvenient to use and hard to popularize. With the development of computer hardware, computer vision has gradually been applied to hand localization and tracking. For vision-based hand tracking, the first factor to consider is the environment, particularly the background. To reduce the influence of the background, the background is often constrained, for example to be entirely black or white. Another approach is to mark the fingers, but this inconveniences the user, so attention has shifted to tracking the natural, unmarked hand.
Vision-based hand-tracking methods are increasingly widespread, especially in the field of handwriting recognition. Driven by the rapid development of personal smart devices such as smartphones, most hand tracking in daily use today is confined to a small range: the captured video contains nothing but the hand. This restricts the range of movement of the user and of the hand, so the computer can only be operated close to the camera system; it lacks flexibility and is ill-suited to multi-person settings where people converse while operating.
Summary of the invention
The objective of the present invention is to provide a computer-vision-based human-computer interaction method that allows finger control while the user moves over a large area.
The objective of the present invention is achieved through the following technical solution:
A fast human-computer interaction method based on fingertip tracking comprises the following steps:
(1) Image preprocessing: a wide-angle (60 to 120 degree) high-definition camera captures a high-resolution view of a room-sized area, and geometric distortion in the captured images is corrected;
(2) Hand-image extraction: a skin-color filter, a motion filter and a color segmenter are applied to the corrected image obtained in step (1), the results are fused, and the hand image is segmented out of the corrected image obtained in step (1), comprising the steps:
a. Skin-color filtering: the TSL color model is used to select image regions close to skin color, producing a binary image in which skin regions are 1 and all other regions are 0; a dilation operation is then applied to reduce the holes caused by the skin-color filtering;
b. Motion filtering: an inter-frame difference algorithm separates moving regions from the static background;
c. Color segmentation: the binary image obtained after the dilation in step a is color-segmented, yielding a binary image containing the skin regions of the face and hand, from which the complete hand image is extracted;
d. Image fusion on the basis of steps a, b and c: the complete hand image is ANDed with the motion-filtering result and the result is dilated, giving a binary image that mainly contains the hand region. For this binary image: if it is judged to belong to a new user, faces are searched top-to-bottom and left-to-right, the inter-eye image distance of each face is computed, and the first person whose inter-eye image distance satisfies the distance condition acquires control; if it is judged to belong to a known user, that user's face is tracked and the inter-eye image distance is recomputed, and if the distance condition is met, the hand is tracked within a predefined search region to the lower right of the face, the above steps are carried out in the corresponding image region, and non-hand content is further eliminated, yielding a binary image containing only the hand; if the inter-eye image distance fails the requirement that the fingers be clearly distinguishable in the image, control is relinquished, fingertip localization is skipped, and a new user is searched for;
(3) Fingertip localization: the hand-only image obtained in step (2) is processed to locate the fingertip. First, a histogram-based coarse localization is performed, with the user's fingertip assumed to point upward: the hand contour is extracted from the binary image obtained in step (2) by edge detection, the contour points are projected onto the horizontal axis and, scanning top-to-bottom and left-to-right, the location where the projection value changes sharply is taken as the coarse fingertip position; a search window is constructed centered on this position. Then template matching yields the precise fingertip location.
Correcting geometric distortion in the captured images means using a cubic-polynomial warping technique and bilinear interpolation to eliminate the geometric distortion of images captured by the wide-angle camera.
The cubic-polynomial warping technique and bilinear interpolation work as follows: a self-defined reference image and its distorted image are selected, a system of equations is formed and solved by the least-squares method, and the concrete transformation between the ideal image and the distorted image is thereby determined.
The AND operation is a pointwise binary logical AND between the binary image obtained after color segmentation and the binary image obtained by motion filtering.
The dilation operation is the mathematical-morphology operation that applies a defined template subimage to the original image to smooth it or reduce its holes.
The search window of step (3) is a rectangular window centered on the coarse fingertip position; its side length, set by statistics, is twice the maximum error between the coarse position and the true precise position.
The template matching of step (3) matches several predefined fingertip templates against the obtained fingertip image and finds the best match position of the best-matching template; that matched position is the precise fingertip position.
In step d, satisfying the inter-eye image distance condition means the image distance is large enough to ensure both that the user faces the camera, as the usage rules require, and that the fingers are distinguishable in the image.
Compared with the prior art, the present invention has the following advantages:
(1) A wide-angle (60 to 120°) lens and a high-resolution camera let the user move within a larger area (the camera's angular field of view), and the distortion-correction algorithm improves scene-image perception efficiency, enabling finger detection and localization over a large range;
(2) Multi-user situations are managed by region-based priority, which also reduces the amount of computation and greatly improves speed and efficiency;
(3) The natural hand is tracked and localized in real time, without any fingertip marker, which is far more practical;
(4) Combining skin-color filtering with motion detection for hand localization and tracking not only improves the accuracy of hand localization but also increases adaptability to the environment (particularly the background), so the method suits ordinary indoor use and works against fairly complex backgrounds;
(5) The inter-eye image distance is used to decide whether fingertip localization can proceed (an inter-eye distance that is too small indicates that the finger resolution in the image is too low for accurate localization), so no prior knowledge of the environment is needed; the resulting robustness to operating distance surpasses the prior art.
Description of drawings
Fig. 1 is a schematic structural diagram of the fast human-computer interaction system based on fingertip tracking, showing the system architecture of the first embodiment of the present invention.
Fig. 2 is a flow block diagram of the fast human-computer interaction method based on fingertip tracking, showing the steps of its concrete implementation.
Embodiment
The invention is further described below in conjunction with an embodiment; note, however, that the embodiment does not limit the scope claimed by the present invention.
As shown in Fig. 1, the fast human-computer interaction system based on fingertip tracking comprises a wide-angle high-definition camera 101, a DSP (digital signal processor) device 102 and a computer 106. The DSP device 102 comprises an image-acquisition section 103, a signal-conversion section 104 and an image-preprocessing section 105. The wide-angle high-definition camera 101 is signal-connected to the image-acquisition section 103 of the DSP device 102; the image-acquisition section 103 is signal-connected in sequence to the signal-conversion section 104 and the image-preprocessing section 105; and the image-preprocessing section 105 is signal-connected to the computer 106. The wide-angle high-definition camera 101 performs detection, the image-acquisition section 103 of the DSP device 102 acquires the images, the signal-conversion section 104 converts the analog video input into a digital image signal, the image-preprocessing section 105 preprocesses the images, and the computer 106 then completes hand extraction and fingertip recognition and localization.
The wide-angle camera 101 captures large-area high-definition images. The DSP device 102 converts the analog image signal into a digital image signal and performs the geometric-distortion correction. The computer 106 extracts the user's hand through skin-color filtering, motion filtering and color segmentation, then recognizes the fingertip position, and finally converts the fingertip's image coordinates into actual coordinates to produce the control output.
The wide-angle high-definition camera 101 may be, for example, a Microsoft LifeCam NX-6000; the DSP device 102 may be, for example, a TMS320-series processor from TI.
As shown in Fig. 2, the fast human-computer interaction method based on fingertip tracking comprises the following steps:
(1) Image preprocessing: a wide-angle (60 to 120 degree) high-definition camera captures a high-resolution view of a room-sized area, and geometric distortion in the captured images is corrected. Images captured by a wide-angle camera exhibit fairly severe geometric distortion, so distortion correction is mandatory before subsequent processing. To keep the amount of computation low while obtaining the best possible correction, a cubic-polynomial warping technique with bilinear interpolation can be used to eliminate the geometric distortion of the images captured by the wide-angle camera. The technique is as follows:
Let (u, v) be the pixel coordinates of the ideal image g, and (x, y) the corresponding pixel coordinates of the distorted image f. The cubic-polynomial coordinate transformation is then:
x = F_x(u, v) = Σ_{i=0..3} Σ_{j=0..3−i} a_ij · u^i · v^j
y = F_y(u, v) = Σ_{i=0..3} Σ_{j=0..3−i} b_ij · u^i · v^j        (Formula 1)
where a_ij and b_ij (i, j = 0, 1, 2, 3) are the polynomial coefficients to be determined.
The polynomial coefficients a_ij and b_ij in Formula 1 depend only on the camera parameters. By selecting a self-defined reference image and its distorted image, forming a system of equations and solving it by the least-squares method, the values of a_ij and b_ij are obtained, which determines the concrete transformation between the ideal image and the distorted image. Because the x and y computed by Formula 1 are not necessarily integers, g(u, v) = f(x, y) cannot be used directly; a gray-level interpolation is required. Bilinear interpolation is therefore adopted, i.e., Formula 2:
g(u, v) = (1 − α)(1 − β) f(x₀, y₀) + α(1 − β) f(x₀ + 1, y₀) + (1 − α) β f(x₀, y₀ + 1) + α β f(x₀ + 1, y₀ + 1)        (Formula 2)
where α = x − x₀ and β = y − y₀; f and g are the distorted and ideal images respectively; x and y are obtained by substituting u and v into Formula 1; and x₀ and y₀ are the largest integers not exceeding x and y. Applying Formula 2 to every pixel (u, v) yields the ideal image g, which completes the distortion correction.
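As a concrete illustration, the bilinear interpolation of Formula 2 can be sketched as follows (a minimal sketch with NumPy; the function name, the treatment of x as the row index, and the clamping at the image border are our own choices, not taken from the patent):

```python
import numpy as np

def bilinear_sample(f, x, y):
    """Sample the distorted image f at non-integer coordinates (x, y).

    Implements Formula 2:
      g(u,v) = (1-a)(1-b) f(x0,y0) + a(1-b) f(x0+1,y0)
             + (1-a) b  f(x0,y0+1) + a b  f(x0+1,y0+1),
    with a = x - x0 and b = y - y0.
    """
    x0, y0 = int(np.floor(x)), int(np.floor(y))
    a, b = x - x0, y - y0
    # Clamp so the 2x2 neighbourhood stays inside the image (border handling
    # is an assumption; the patent does not specify it).
    x1 = min(x0 + 1, f.shape[0] - 1)
    y1 = min(y0 + 1, f.shape[1] - 1)
    return ((1 - a) * (1 - b) * f[x0, y0] + a * (1 - b) * f[x1, y0]
            + (1 - a) * b * f[x0, y1] + a * b * f[x1, y1])
```

In a full correction pass, (x, y) would come from evaluating the cubic polynomials of Formula 1 at each ideal-image pixel (u, v).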
(2) Hand-image extraction: a skin-color filter, a motion filter and a color segmenter are applied to the corrected image obtained in step (1), the results are fused, and the hand image is segmented out of the corrected image obtained in step (1). The concrete steps are as follows:
a. Skin-color filtering: the TSL color model is used to select image regions close to skin color. Skin-color filtering with the TSL color model yields more accurate skin regions than filtering with the RGB, HIS, YIQ or CIELUV models. The conversion between the TSL color model and the RGB model is given by Formula 3. The TSL color space processes luminance and chrominance separately; the RGB model is the color model of the original image, and converting to the TSL model helps skin colors cluster together.
T = (1 / 2π) · tan⁻¹(r′ / g′) + 0.5
S = sqrt( (9/5) · (r′² + g′²) )
L = 0.299·R + 0.587·G + 0.114·B        (Formula 3)
where r′ = r − 1/3, g′ = g − 1/3,
r = R / (R + G + B), g = G / (R + G + B),
R, G, B are the components of the RGB color model, and T, S, L are the components of the TSL color model. Since color images generally use the RGB model, Formula 3 must be applied before the TSL model can be used.
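The RGB-to-TSL conversion of Formula 3 can be sketched for a single pixel as below (a sketch under stated assumptions: the patent writes T with tan⁻¹(r′/g′); arctan2 is used here so that quadrants and g′ = 0 are handled, which is a common reading of the TSL definition rather than something the patent specifies):

```python
import numpy as np

def rgb_to_tsl(R, G, B):
    """Convert one RGB pixel to the TSL model (Formula 3)."""
    s = float(R + G + B)
    r, g = R / s, G / s
    rp, gp = r - 1.0 / 3.0, g - 1.0 / 3.0
    if rp == 0.0 and gp == 0.0:
        T = 0.0                      # hue undefined on the grey axis; 0 by convention
    else:
        T = np.arctan2(rp, gp) / (2.0 * np.pi) + 0.5
    S = np.sqrt(9.0 / 5.0 * (rp ** 2 + gp ** 2))
    L = 0.299 * R + 0.587 * G + 0.114 * B
    return T, S, L
```

The T and S components of each pixel then feed the skin-color statistics and the Mahalanobis-distance test described below.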
By sampling the face and hand regions of 500 images containing skin, the probability-distribution parameters (mean vector E and covariance matrix Σ) of skin color (T and S) under the TSL model are estimated as a two-dimensional Gaussian distribution. The Mahalanobis distance is then used to discriminate skin: each pixel is tested, and if the Mahalanobis distance between C = (T, S), the vector formed by the pixel's T and S components, and the mean vector E is below a threshold, the pixel is considered to belong to a skin region. Specifically:
Mahalanobis distance d = (C − E)ᵀ Σ⁻¹ (C − E)
If d < Threshold, the pixel belongs to a skin region;
if d ≥ Threshold, the pixel does not belong to a skin region.
Threshold is obtained by first estimating, from the skin-region statistics (mean vector E and covariance matrix Σ), the distance of normal skin colors (T, S) from the mean E to get an initial value, and then adjusting it experimentally; the threshold is determined to be 0.99.
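The discrimination rule above amounts to a few lines of linear algebra (a minimal sketch; the example mean and covariance in the usage note are illustrative values, not the patent's trained statistics):

```python
import numpy as np

def is_skin(T, S, mean, cov, threshold=0.99):
    """Skin test: a pixel with chromaticity C = (T, S) is skin iff the
    Mahalanobis distance d = (C - E)^T Sigma^{-1} (C - E) is below the
    threshold (0.99 in the patent)."""
    C = np.array([T, S], dtype=float)
    diff = C - mean
    d = float(diff @ np.linalg.inv(cov) @ diff)
    return d < threshold
```

For example, with an assumed skin mean E = (0.6, 0.2) and covariance 0.01·I, a pixel at the mean passes the test while a far-away chromaticity such as (0.9, 0.9) does not.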
The binary image obtained after skin-color filtering (skin regions 1, other regions 0) is dilated (using a 3 × 3 template subimage in which every element is 1) to reduce the holes caused by skin-color filtering.
b. While step a is being performed, motion filtering is carried out in parallel to separate moving regions from the static background. Specifically, an inter-frame difference algorithm detects moving regions. To prevent sporadic changes caused by the camera system or similar factors from being judged as motion, the condition is strengthened: a pixel is regarded as a motion pixel only if it changes in at least 3 of 5 consecutive frames. The original image is binarized according to the motion state of each pixel; the resulting binary image is denoted B, where for each pixel B(i, j):
[Formula image in the original: B(i, j) = 1 if the pixel at (i, j) is a motion pixel, and B(i, j) = 0 otherwise.]
The binary image B obtained after differencing is eroded (using a 3 × 3 template subimage in which every element is 1) to separate the moving regions from the background more clearly.
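The strengthened 3-of-5 motion rule can be sketched as below (a sketch under stated assumptions: the per-pixel change test and the `diff_thresh` value are our own choices, since the patent gives the binarization rule only as a formula image):

```python
import numpy as np

def motion_mask(frames, diff_thresh=15, min_hits=3):
    """Inter-frame difference over a window of 5 consecutive frames.

    A pixel counts as moving (B = 1) only if it changes between
    consecutive frames at least `min_hits` times within the window,
    which suppresses sporadic changes from the camera system.
    """
    frames = np.asarray(frames, dtype=np.int32)
    assert frames.shape[0] == 5, "expects a window of 5 frames"
    changed = np.abs(np.diff(frames, axis=0)) > diff_thresh   # 4 frame-to-frame tests
    hits = changed.sum(axis=0)
    return (hits >= min_hits).astype(np.uint8)                # binary image B
```

The returned mask would then be eroded with the 3 × 3 structuring element, as the description states.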
c. On the basis of step a, color segmentation is performed and the complete hand image is extracted:
Because of lighting effects, the skin-color filter may misjudge pixels or regions that actually belong to skin as non-skin. To exclude the influence of such misjudgments as far as possible, a color segmenter is added: a color-similarity measure is defined and the original image is divided into several regions, so that the whole hand forms one connected component. The color-similarity measure in RGB space, for neighboring pixels a and b, is:
ρ(a, b) = 0.3·|R_a − R_b| + 0.59·|G_a − G_b| + 0.11·|B_a − B_b|
where R_a, G_a, B_a are the RGB components of a, and R_b, G_b, B_b are the RGB components of b.
If ρ(a, b) < threshold, a and b belong to the same color region; otherwise they belong to different color regions. The threshold is an RGB vector distance that estimates how far apart pixels of different regions lie; an initial value is obtained and then adjusted experimentally. The present invention sets the threshold to 12.
Since the misjudged skin pixels and the skin regions found by skin-color filtering should together form a connected, complete skin region, a region-growing color-segmentation method is adopted: every point of the skin region obtained by skin-color filtering serves as a seed point, region growing is performed, and the result is a binary image C containing the complete skin regions.
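The region-growing step with the ρ measure can be sketched as follows (a minimal sketch; the 4-connectivity and breadth-first growth order are implementation choices the patent does not fix):

```python
from collections import deque

def rho(a, b):
    # Colour-similarity measure: rho(a,b) = 0.3|Ra-Rb| + 0.59|Ga-Gb| + 0.11|Ba-Bb|
    return 0.3 * abs(a[0] - b[0]) + 0.59 * abs(a[1] - b[1]) + 0.11 * abs(a[2] - b[2])

def region_grow(rgb, seeds, threshold=12):
    """Grow connected skin regions from the skin-filter seed mask.

    rgb is an H x W grid of (R, G, B) tuples; seeds is an H x W 0/1 grid
    of skin-filter results. Returns the binary image C of complete skin
    regions.
    """
    h, w = len(rgb), len(rgb[0])
    out = [[0] * w for _ in range(h)]
    q = deque()
    for i in range(h):
        for j in range(w):
            if seeds[i][j]:
                out[i][j] = 1
                q.append((i, j))
    while q:
        i, j = q.popleft()
        for di, dj in ((1, 0), (-1, 0), (0, 1), (0, -1)):
            ni, nj = i + di, j + dj
            if 0 <= ni < h and 0 <= nj < w and not out[ni][nj] \
                    and rho(rgb[i][j], rgb[ni][nj]) < threshold:
                out[ni][nj] = 1
                q.append((ni, nj))
    return out
```

A seed pixel thus absorbs its neighbors as long as each local color step stays below the threshold of 12, reclaiming skin pixels the filter missed.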
d. On the basis of steps a, b and c, image fusion is performed. The image C obtained after color segmentation is a binary image containing the skin regions of the face, hands and so on. To account for the motion of the hand, image C is ANDed with the final motion-filtering image and the result is dilated (using a 3 × 3 template subimage in which every element is 1), yielding a binary image D that mainly contains the hand region (compared with C, the non-moving regions have been filtered out, though D may still contain the face).
Finally, in binary image C: if the image is judged to belong to a new user, faces are searched top-to-bottom and left-to-right, the inter-eye image distance of each face is computed, and the first person whose inter-eye image distance satisfies the distance condition acquires control; the user holding control is the tracked user, and the system proceeds to fingertip-localization step (3). Satisfying the inter-eye distance condition means the image distance is large enough both to ensure that the user faces the camera, as the usage rules require, and to ensure that the fingers have enough resolution in the image for fingertip localization; for example, the inter-eye image distance may be required to exceed 10 pixels. Conversely, failing the condition means the image distance is too small (below the preset value): either the face is not facing the camera, violating the usage rules, or the person is too far from the camera, making tracking impossible. If the image is judged to belong to a known user, that user's face is tracked and the inter-eye image distance is recomputed; if the distance condition is met, the hand is tracked within a predefined search region to the lower right of the face, the processing is carried out in the corresponding region of image D, non-hand content in D is further eliminated, and the resulting binary image is denoted H. If the inter-eye image distance fails the distance condition, control is relinquished, fingertip localization is skipped, and the search for a new user (face) continues in image C; if no face in the whole of image C meets the inter-eye distance condition, the method returns to step (1) and restarts.
(3) Fingertip localization: the hand-only image H obtained in step (2) is processed to locate the fingertip.
First, a histogram-based coarse localization is performed. The contour of binary image H is extracted (the contour image is denoted H1), and a grid sampling with 2 × 2-pixel cells is applied (a cell containing any contour point maps to a single point in the sampled image) to preserve the continuity of the contour; the sampled image is denoted H2. Since the coarse fingertip position is generally the contour vertex in one of four directions, and a finger can be approximated as a rectangle topped by a semicircle, 4 candidate points (the vertices in the 4 directions) are found in H2. For each candidate, the 2nd, 3rd and 4th contour points counter-clockwise and clockwise from it are selected, forming 3 pixel pairs. Because finger width is approximately constant, the variance of the distances of the 3 neighboring pixel pairs is computed for each candidate, and the candidate with the minimum variance is the best candidate. From this candidate's position in H2, the corresponding point in H1 (or H) is found and taken as the coarse fingertip position.
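The minimum-variance tie-break among candidate vertices can be sketched as below (a sketch; the contour is assumed to be an ordered list of (x, y) points, and the wrap-around indexing is our own choice):

```python
import numpy as np

def best_fingertip_candidate(contour, candidates):
    """For each candidate vertex, pair the 2nd, 3rd and 4th contour points
    taken clockwise and counter-clockwise from it; since finger width is
    roughly constant, the candidate whose three pair distances have the
    minimum variance is taken as the fingertip."""
    n = len(contour)
    best, best_var = None, float("inf")
    for idx in candidates:
        dists = []
        for k in (2, 3, 4):
            (x1, y1) = contour[(idx + k) % n]
            (x2, y2) = contour[(idx - k) % n]
            dists.append(np.hypot(x1 - x2, y1 - y2))
        v = float(np.var(dists))
        if v < best_var:
            best, best_var = idx, v
    return best
```

On a finger-like contour, the pairs flanking the true tip all span the constant finger width, so their distance variance is near zero, while pairs around a non-tip vertex span varying widths.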
In the contour image H1, a search window is constructed centered on the coarse fingertip position; every point in this window is a possible precise fingertip location, and the precise localization is performed within this window. The window should cover all possible precise positions and can generally be set to 9 × 9 pixels. Template matching is then applied to every point in the search window to find the precise position. Template matching, a method commonly used in finger detection, matches several predefined fingertip templates against the obtained fingertip image and finds the best match position of the best-matching template, i.e., the position minimizing a distance measure; that matched position is the precise fingertip location. Common distance measures include the Euclidean distance and the correlation distance; the present invention adopts the absolute-value distance. The template-matching method can be described by the following formula:
(i_m, j_m) = argmin_{i, j, k} Σ_{m=0..M} Σ_{n=0..N} ‖ p(i + m, j + n) − t_k(m, n) ‖
where p is the subimage to be matched in the search window, t_k is the k-th template, and the template size is M × N; (i, j) denotes a point in the search window, (i_m, j_m) is the coordinate of the finally detected precise fingertip position, m and n are the summation variables in the formula, and p(i + m, j + n) is the image value at coordinate (i + m, j + n). Considering that a finger generally does not point downward, 5 templates of 25 × 25 pixels are used, covering finger directions of 0°, 45°, 90°, 135° and 180°. Finally, the fingertip position in the image is mapped to a control coordinate such as a screen-pointer coordinate and output as the final coordinate.
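The argmin above is an exhaustive absolute-difference search, which can be sketched as follows (a minimal sketch; real fingertip templates would replace the toy arrays in the usage example):

```python
import numpy as np

def match_fingertip(window, templates):
    """Exhaustive template matching over the search window: returns the
    (i_m, j_m) minimising sum_{m,n} |p(i+m, j+n) - t_k(m, n)| over all
    positions (i, j) and all templates t_k."""
    best, best_cost = None, float("inf")
    H, W = window.shape
    for t in templates:
        M, N = t.shape
        for i in range(H - M + 1):
            for j in range(W - N + 1):
                cost = np.abs(window[i:i + M, j:j + N] - t).sum()
                if cost < best_cost:
                    best_cost, best = cost, (i, j)
    return best
```

With the patent's parameters, `window` would be the 9 × 9 neighbourhood of the coarse position within H1 and `templates` the five 25 × 25 direction templates (in which case the window must first be padded or enlarged to fit the templates).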
This method can track the user's fingertip in an ordinary small indoor environment. The algorithms it adopts are fairly simple, easy to implement, and of low computational complexity. By detecting the inter-eye distance and scaling the output range accordingly, the method gives the user a comparable operating experience and output control accuracy throughout an interval of operating distances, making it more practical than prior-art methods that impose requirements on the operating distance.

Claims (8)

1. A fast human-computer interaction method based on fingertip tracking, characterized by comprising the following steps:
(1) Image preprocessing: a wide-angle (60 to 120 degree) high-definition camera captures a high-resolution view of a room-sized area, and geometric distortion in the captured images is corrected;
(2) Hand-image extraction: a skin-color filter, a motion filter and a color segmenter are applied to the corrected image obtained in step (1), the results are fused, and the hand image is segmented out of the corrected image obtained in step (1), comprising the steps:
a. skin-color filtering: the TSL color model is used to select image regions close to skin color, producing a binary image in which skin regions are 1 and all other regions are 0; a dilation operation is then applied to reduce the holes caused by the skin-color filtering;
b. motion filtering: an inter-frame difference algorithm separates moving regions from the static background;
c. color segmentation: the binary image obtained after the dilation in step a is color-segmented, yielding a binary image containing the skin regions of the face and hand, from which the complete hand image is extracted;
d. image fusion on the basis of steps a, b and c: the complete hand image is ANDed with the motion-filtering result and the result is dilated, giving a binary image mainly containing the hand region; for this binary image, if it is judged to belong to a new user, faces are searched top-to-bottom and left-to-right, the inter-eye image distance of each face is computed, and the first person whose inter-eye image distance satisfies the distance condition acquires control; if it is judged to belong to a known user, that user's face is tracked and the inter-eye image distance is recomputed, and if the distance condition is met, the hand is tracked within a predefined search region to the lower right of the face, the above steps are carried out in the corresponding image region, and non-hand content is further eliminated, yielding a binary image containing only the hand; if the inter-eye image distance fails the requirement that the fingers be clearly distinguishable in the image, control is relinquished, fingertip localization is skipped, and a new user is searched for;
(3) Fingertip localization: the hand-only image obtained in step (2) is processed to locate the fingertip; first, a histogram-based coarse localization is performed, with the user's fingertip assumed to point upward: the hand contour is extracted from the binary image obtained in step (2) by edge detection, the contour points are projected onto the horizontal axis and, scanning top-to-bottom and left-to-right, the location where the projection value changes sharply is taken as the coarse fingertip position; a search window is constructed centered on this position; then template matching yields the precise fingertip location.
2. The method for quick-speed human-computer interaction based on finger tip tracking according to claim 1, characterized in that correcting the geometric distortion of the acquired image means using a cubic-polynomial warping technique together with bilinear interpolation to eliminate the geometric distortion of the images captured by the wide-angle camera.
3. The method for quick-speed human-computer interaction based on finger tip tracking according to claim 2, characterized in that the cubic-polynomial warping technique and bilinear interpolation proceed as follows: a self-defined reference image and its distorted counterpart are selected, a system of simultaneous equations is set up and solved by the least-squares method, and the resulting coefficients determine the concrete transformation between the ideal image and the distorted image.
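In a correction of this kind, the fitted cubic polynomial maps each ideal pixel to fractional coordinates in the distorted image, and bilinear interpolation reads the intensity at that fractional position. The claims do not fix an implementation; the following is a minimal sketch of the interpolation step only, with border clamping as an assumption of this sketch.

```python
def bilinear_sample(img, x, y):
    """Bilinear interpolation at fractional position (x, y) of a
    grayscale image given as a list of rows.

    This is the resampling step of claims 2-3: the polynomial warp
    supplies (x, y); this function blends the four surrounding pixels.
    Coordinates are clamped to the image so border reads are safe.
    """
    h, w = len(img), len(img[0])
    x0 = max(0, min(w - 2, int(x)))
    y0 = max(0, min(h - 2, int(y)))
    fx, fy = x - x0, y - y0  # fractional offsets within the cell
    top = img[y0][x0] * (1 - fx) + img[y0][x0 + 1] * fx
    bot = img[y0 + 1][x0] * (1 - fx) + img[y0 + 1][x0 + 1] * fx
    return top * (1 - fy) + bot * fy
```

The full correction would evaluate the fitted polynomial x' = Σ a_ij x^i y^j, y' = Σ b_ij x^i y^j (i + j ≤ 3) at every output pixel and call `bilinear_sample` on the distorted frame; the coefficients a_ij, b_ij come from the least-squares fit the claim describes.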
4. The method for quick-speed human-computer interaction based on finger tip tracking according to claim 1, characterized in that the AND operation means a pointwise binary logical AND between the binary image obtained after color segmentation and the binary image obtained by motion filtering.
5. The method for quick-speed human-computer interaction based on finger tip tracking according to claim 1, characterized in that the dilation operation means a mathematical-morphology operation that applies a defined template subimage to the original image in order to smooth it and reduce holes.
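Claims 4 and 5 together describe the fusion step of claim 1: a pointwise AND of the two masks followed by dilation. The sketch below assumes a square structuring element; the claims only speak of a "defined template subimage", so its shape and size are this sketch's assumptions.

```python
def pointwise_and(a, b):
    """Pointwise logical AND of two equally sized binary images (claim 4)."""
    return [[int(pa and pb) for pa, pb in zip(ra, rb)]
            for ra, rb in zip(a, b)]

def dilate(img, radius=1):
    """Binary dilation with a (2*radius+1)-square structuring element
    (claim 5): every pixel within `radius` of a foreground pixel is set,
    which smooths the mask and fills small holes."""
    h, w = len(img), len(img[0])
    out = [[0] * w for _ in range(h)]
    for y in range(h):
        for x in range(w):
            if img[y][x]:
                for dy in range(-radius, radius + 1):
                    for dx in range(-radius, radius + 1):
                        ny, nx = y + dy, x + dx
                        if 0 <= ny < h and 0 <= nx < w:
                            out[ny][nx] = 1
    return out
```

Applied in the order of claim 1 step D, `dilate(pointwise_and(skin_mask, motion_mask))` yields the binary image that mainly contains the hand region.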
6. The method for quick-speed human-computer interaction based on finger tip tracking according to claim 1, characterized in that the search window of step (3) is a rectangular window centered on the coarse fingertip position, whose side length is set by statistics to twice the maximum error between the coarse position and the actual exact position.
7. The method for quick-speed human-computer interaction based on finger tip tracking according to claim 1, characterized in that the template matching of step (3) means matching the obtained fingertip image against several predefined pointing-fingertip templates and finding the best match position of the best-matching template; this matched position is the exact position of the fingertip.
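Claim 7 does not name a similarity measure, so the sketch below assumes sum-of-absolute-differences (SAD) and exhaustive sliding inside the claim-6 window; in practice one would run it once per fingertip template and keep the global best.

```python
def match_template(window, template):
    """Exhaustive template matching inside the search window (claims 6-7).

    Slides `template` over `window` (both lists of rows of intensities),
    scores each offset by the sum of absolute differences, and returns
    the lowest-scoring offset, i.e. the precise fingertip position
    relative to the window origin, plus its score.
    """
    wh, ww = len(window), len(window[0])
    th, tw = len(template), len(template[0])
    best, best_pos = None, None
    for y in range(wh - th + 1):
        for x in range(ww - tw + 1):
            sad = sum(abs(window[y + dy][x + dx] - template[dy][dx])
                      for dy in range(th) for dx in range(tw))
            if best is None or sad < best:
                best, best_pos = sad, (x, y)
    return best_pos, best
```

Because claim 6 keeps the window only twice the coarse-localization error wide, this exhaustive search stays cheap even though SAD is computed at every offset.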
8. The method for quick-speed human-computer interaction based on finger tip tracking according to claim 1, characterized in that satisfying the eye-distance condition in step D means that the on-image distance between the eyes is large enough both to ensure that the user faces the camera in accordance with the usage rules and to guarantee that the finger is distinguishable in the image.
CN2009100406993A 2009-06-30 2009-06-30 Method for quick-speed human-computer interaction based on finger tip tracking Expired - Fee Related CN101593022B (en)

Priority Applications (1)

Application Number Priority Date Filing Date Title
CN2009100406993A CN101593022B (en) 2009-06-30 2009-06-30 Method for quick-speed human-computer interaction based on finger tip tracking

Applications Claiming Priority (1)

Application Number Priority Date Filing Date Title
CN2009100406993A CN101593022B (en) 2009-06-30 2009-06-30 Method for quick-speed human-computer interaction based on finger tip tracking

Publications (2)

Publication Number Publication Date
CN101593022A true CN101593022A (en) 2009-12-02
CN101593022B CN101593022B (en) 2011-04-27

Family

ID=41407708

Family Applications (1)

Application Number Title Priority Date Filing Date
CN2009100406993A Expired - Fee Related CN101593022B (en) 2009-06-30 2009-06-30 Method for quick-speed human-computer interaction based on finger tip tracking

Country Status (1)

Country Link
CN (1) CN101593022B (en)

Cited By (30)

* Cited by examiner, † Cited by third party
Publication number Priority date Publication date Assignee Title
CN102179814A (en) * 2011-03-23 2011-09-14 浙江大学 Method for controlling robot by using user hand commands
CN102221879A (en) * 2010-04-15 2011-10-19 韩国电子通信研究院 User interface device and method for recognizing user interaction using same
CN102520790A (en) * 2011-11-23 2012-06-27 中兴通讯股份有限公司 Character input method based on image sensing module, device and terminal
CN102592115A (en) * 2011-12-26 2012-07-18 Tcl集团股份有限公司 Hand positioning method and system
CN102622601A (en) * 2012-03-12 2012-08-01 李博男 Fingertip detection method
CN102662460A (en) * 2012-03-05 2012-09-12 清华大学 Non-contact control device of mobile terminal and control method thereof
CN102799875A (en) * 2012-07-25 2012-11-28 华南理工大学 Tracing method of arbitrary hand-shaped human hand
CN102799855A (en) * 2012-06-14 2012-11-28 华南理工大学 Video-streaming-based hand positioning method
CN102868811A (en) * 2012-09-04 2013-01-09 青岛大学 Mobile phone screen control method based on real-time video processing
CN103208005A (en) * 2012-01-13 2013-07-17 富士通株式会社 Object recognition method and object recognition device
WO2013104316A1 (en) * 2012-01-09 2013-07-18 西安智意能电子科技有限公司 Method and device for filter-processing imaging information of emission light source
CN103389799A (en) * 2013-07-24 2013-11-13 清华大学深圳研究生院 Method for tracking motion trail of fingertip
CN103529957A (en) * 2013-11-05 2014-01-22 上海电机学院 Position recognizing device and method
CN103593654A (en) * 2013-11-13 2014-02-19 智慧城市系统服务(中国)有限公司 Method and device for face location
CN103914684A (en) * 2012-12-28 2014-07-09 现代自动车株式会社 Method and system for recorgnizing hand gesture using selective illumination
CN103996019A (en) * 2014-02-24 2014-08-20 香港应用科技研究院有限公司 System and method used for detecting and tracking a plurality of portions on an object
CN104050454A (en) * 2014-06-24 2014-09-17 深圳先进技术研究院 Movement gesture track obtaining method and system
CN104217197A (en) * 2014-08-27 2014-12-17 华南理工大学 Touch reading method and device based on visual gestures
CN104520799A (en) * 2012-05-04 2015-04-15 索尼电脑娱乐美国公司 User input processing with eye tracking
CN104700073A (en) * 2015-02-10 2015-06-10 广东光阵光电科技有限公司 Method and device for identifying planar fingerprints
CN104700075A (en) * 2015-02-10 2015-06-10 广东光阵光电科技有限公司 Method and device for designing fingerprint scanning recognizing device
CN104796750A (en) * 2015-04-20 2015-07-22 京东方科技集团股份有限公司 Remote controller and remote-control display system
CN104951748A (en) * 2015-04-28 2015-09-30 广东光阵光电科技有限公司 False fingerprint recognition method and device
CN105306813A (en) * 2014-07-22 2016-02-03 佳能株式会社 IMAGE PROCESSING APPARATUS and IMAGE PROCESSING METHOD
CN105912126A (en) * 2016-04-26 2016-08-31 华南理工大学 Method for adaptively adjusting gain, mapped to interface, of gesture movement
CN106709479A (en) * 2017-02-24 2017-05-24 深圳英飞拓科技股份有限公司 Video image-based face detection method and system
CN108012140A (en) * 2017-12-06 2018-05-08 长沙远达华信息科技有限公司 Virtual reality system based on 3D interactions
CN109923501A (en) * 2016-11-01 2019-06-21 香港科技大学 Aerial finger direct detection for equipment interaction
CN110188640A (en) * 2019-05-20 2019-08-30 北京百度网讯科技有限公司 Face identification method, device, server and computer-readable medium
CN113220125A (en) * 2021-05-19 2021-08-06 网易有道信息技术(北京)有限公司 Finger interaction method and device, electronic equipment and computer storage medium

Cited By (48)

* Cited by examiner, † Cited by third party
Publication number Priority date Publication date Assignee Title
CN102221879B (en) * 2010-04-15 2016-01-20 韩国电子通信研究院 User interface facilities and use it to identify the method for user interactions
CN102221879A (en) * 2010-04-15 2011-10-19 韩国电子通信研究院 User interface device and method for recognizing user interaction using same
CN102179814A (en) * 2011-03-23 2011-09-14 浙江大学 Method for controlling robot by using user hand commands
CN102520790A (en) * 2011-11-23 2012-06-27 中兴通讯股份有限公司 Character input method based on image sensing module, device and terminal
WO2013075466A1 (en) * 2011-11-23 2013-05-30 中兴通讯股份有限公司 Character input method, device and terminal based on image sensing module
CN102592115A (en) * 2011-12-26 2012-07-18 Tcl集团股份有限公司 Hand positioning method and system
WO2013104316A1 (en) * 2012-01-09 2013-07-18 西安智意能电子科技有限公司 Method and device for filter-processing imaging information of emission light source
CN103208005A (en) * 2012-01-13 2013-07-17 富士通株式会社 Object recognition method and object recognition device
CN102662460A (en) * 2012-03-05 2012-09-12 清华大学 Non-contact control device of mobile terminal and control method thereof
CN102662460B (en) * 2012-03-05 2015-04-15 清华大学 Non-contact control device of mobile terminal and control method thereof
CN102622601A (en) * 2012-03-12 2012-08-01 李博男 Fingertip detection method
CN104520799A (en) * 2012-05-04 2015-04-15 索尼电脑娱乐美国公司 User input processing with eye tracking
CN102799855B (en) * 2012-06-14 2016-01-20 华南理工大学 Based on the hand positioning method of video flowing
CN102799855A (en) * 2012-06-14 2012-11-28 华南理工大学 Video-streaming-based hand positioning method
CN102799875B (en) * 2012-07-25 2016-01-20 华南理工大学 Any hand shape hand tracking method
CN102799875A (en) * 2012-07-25 2012-11-28 华南理工大学 Tracing method of arbitrary hand-shaped human hand
CN102868811B (en) * 2012-09-04 2015-05-06 青岛大学 Mobile phone screen control method based on real-time video processing
CN102868811A (en) * 2012-09-04 2013-01-09 青岛大学 Mobile phone screen control method based on real-time video processing
CN103914684B (en) * 2012-12-28 2019-06-28 现代自动车株式会社 The method and system of gesture is identified using selective illumination
CN103914684A (en) * 2012-12-28 2014-07-09 现代自动车株式会社 Method and system for recorgnizing hand gesture using selective illumination
CN103389799B (en) * 2013-07-24 2016-01-20 清华大学深圳研究生院 A kind of opponent's fingertip motions track carries out the method for following the tracks of
CN103389799A (en) * 2013-07-24 2013-11-13 清华大学深圳研究生院 Method for tracking motion trail of fingertip
CN103529957A (en) * 2013-11-05 2014-01-22 上海电机学院 Position recognizing device and method
CN103593654A (en) * 2013-11-13 2014-02-19 智慧城市系统服务(中国)有限公司 Method and device for face location
CN103593654B (en) * 2013-11-13 2015-11-04 智慧城市系统服务(中国)有限公司 A kind of method and apparatus of Face detection
CN103996019B (en) * 2014-02-24 2017-03-29 香港应用科技研究院有限公司 For the system and method at multiple positions on one object of detect and track
CN103996019A (en) * 2014-02-24 2014-08-20 香港应用科技研究院有限公司 System and method used for detecting and tracking a plurality of portions on an object
CN104050454B (en) * 2014-06-24 2017-12-19 深圳先进技术研究院 A kind of motion gesture track acquisition methods and system
CN104050454A (en) * 2014-06-24 2014-09-17 深圳先进技术研究院 Movement gesture track obtaining method and system
CN105306813B (en) * 2014-07-22 2018-09-11 佳能株式会社 Image processing apparatus and image processing method
CN105306813A (en) * 2014-07-22 2016-02-03 佳能株式会社 IMAGE PROCESSING APPARATUS and IMAGE PROCESSING METHOD
CN104217197B (en) * 2014-08-27 2018-04-13 华南理工大学 A kind of reading method and device of view-based access control model gesture
CN104217197A (en) * 2014-08-27 2014-12-17 华南理工大学 Touch reading method and device based on visual gestures
CN104700075B (en) * 2015-02-10 2018-06-12 广东光阵光电科技有限公司 A kind of design method and its device of finger scan identification device
CN104700073A (en) * 2015-02-10 2015-06-10 广东光阵光电科技有限公司 Method and device for identifying planar fingerprints
CN104700075A (en) * 2015-02-10 2015-06-10 广东光阵光电科技有限公司 Method and device for designing fingerprint scanning recognizing device
CN104796750A (en) * 2015-04-20 2015-07-22 京东方科技集团股份有限公司 Remote controller and remote-control display system
CN104951748B (en) * 2015-04-28 2018-12-07 广东光阵光电科技有限公司 A kind of vacation fingerprint identification method and device
CN104951748A (en) * 2015-04-28 2015-09-30 广东光阵光电科技有限公司 False fingerprint recognition method and device
CN105912126A (en) * 2016-04-26 2016-08-31 华南理工大学 Method for adaptively adjusting gain, mapped to interface, of gesture movement
CN105912126B (en) * 2016-04-26 2019-05-14 华南理工大学 A kind of gesture motion is mapped to the adaptive adjusting gain method at interface
CN109923501A (en) * 2016-11-01 2019-06-21 香港科技大学 Aerial finger direct detection for equipment interaction
CN106709479A (en) * 2017-02-24 2017-05-24 深圳英飞拓科技股份有限公司 Video image-based face detection method and system
CN106709479B (en) * 2017-02-24 2019-11-05 深圳英飞拓科技股份有限公司 Method for detecting human face and system based on video image
CN108012140A (en) * 2017-12-06 2018-05-08 长沙远达华信息科技有限公司 Virtual reality system based on 3D interactions
CN110188640A (en) * 2019-05-20 2019-08-30 北京百度网讯科技有限公司 Face identification method, device, server and computer-readable medium
CN110188640B (en) * 2019-05-20 2022-02-25 北京百度网讯科技有限公司 Face recognition method, face recognition device, server and computer readable medium
CN113220125A (en) * 2021-05-19 2021-08-06 网易有道信息技术(北京)有限公司 Finger interaction method and device, electronic equipment and computer storage medium

Also Published As

Publication number Publication date
CN101593022B (en) 2011-04-27

Similar Documents

Publication Publication Date Title
CN101593022B (en) Method for quick-speed human-computer interaction based on finger tip tracking
CN106055091B (en) A kind of hand gestures estimation method based on depth information and correcting mode
Jain et al. Real-time upper-body human pose estimation using a depth camera
US8787663B2 (en) Tracking body parts by combined color image and depth processing
Chang et al. Tracking Multiple People Under Occlusion Using Multiple Cameras.
CN103514441B (en) Facial feature point locating tracking method based on mobile platform
CN103607554A (en) Fully-automatic face seamless synthesis-based video synthesis method
WO2016034059A1 (en) Target object tracking method based on color-structure features
Hu et al. Hand pointing estimation for human computer interaction based on two orthogonal-views
CN102591533B (en) Multipoint touch screen system realizing method and device based on computer vision technology
CN104167006B (en) Gesture tracking method of any hand shape
CN103530599A (en) Method and system for distinguishing real face and picture face
CN104821010A (en) Binocular-vision-based real-time extraction method and system for three-dimensional hand information
TW201514830A (en) Interactive operation method of electronic apparatus
AU2020300067B2 (en) Layered motion representation and extraction in monocular still camera videos
CN113608663B (en) Fingertip tracking method based on deep learning and K-curvature method
CN111160291A (en) Human eye detection method based on depth information and CNN
Tian et al. Absolute head pose estimation from overhead wide-angle cameras
CN105261038B (en) Finger tip tracking based on two-way light stream and perception Hash
Gu et al. Hand gesture interface based on improved adaptive hand area detection and contour signature
CN111860142A (en) Projection enhancement oriented gesture interaction method based on machine vision
Wang et al. A new hand gesture recognition algorithm based on joint color-depth superpixel earth mover's distance
CN111582036A (en) Cross-view-angle person identification method based on shape and posture under wearable device
Zhang et al. Hand tracking algorithm based on superpixels feature
CN108694348B (en) Tracking registration method and device based on natural features

Legal Events

Date Code Title Description
C06 Publication
PB01 Publication
C10 Entry into substantive examination
SE01 Entry into force of request for substantive examination
C14 Grant of patent or utility model
GR01 Patent grant
CF01 Termination of patent right due to non-payment of annual fee

Granted publication date: 20110427

Termination date: 20210630

CF01 Termination of patent right due to non-payment of annual fee