CN102831439A - Gesture tracking method and gesture tracking system - Google Patents

Gesture tracking method and gesture tracking system

Info

Publication number
CN102831439A
CN102831439A
Authority
CN
China
Prior art keywords
gesture
tracking
tracker
target
prediction
Prior art date
Legal status
Granted
Application number
CN2012102903371A
Other languages
Chinese (zh)
Other versions
CN102831439B (en)
Inventor
宋展
赵颜果
聂磊
杨卫
郑锋
Current Assignee
Shenzhen Institute of Advanced Technology of CAS
Original Assignee
Shenzhen Institute of Advanced Technology of CAS
Priority date
Filing date
Publication date
Application filed by Shenzhen Institute of Advanced Technology of CAS filed Critical Shenzhen Institute of Advanced Technology of CAS
Priority to CN201210290337.1A priority Critical patent/CN102831439B/en
Publication of CN102831439A publication Critical patent/CN102831439A/en
Application granted granted Critical
Publication of CN102831439B publication Critical patent/CN102831439B/en
Active legal-status Critical Current
Anticipated expiration legal-status Critical

Abstract

The invention provides a gesture tracking method. The method comprises: designing a gesture appearance model that includes image description modes for tracking prediction and for prediction verification; acquiring the initial state of the target — its position and size — by gesture detection; initializing a tracker for the target according to the initial state, which includes initializing the appearance model (i.e. the image description templates used for tracking prediction and prediction verification) and initializing the gesture type, state and visibility recorded by the tracker; performing tracking; making a final estimate of the state and visibility of the target according to the tracker information; and judging the visibility of the target: if the target is permanently lost, gesture detection is restarted to acquire a new tracking target, otherwise tracking continues. The invention further provides a gesture tracking system. The method and system are simple, fast and stable.

Description

Gesture tracking method and system
Technical field
The present invention relates to the fields of target tracking and human-computer interaction, and in particular to a vision- and image-based gesture tracking method and system suitable for televisions and embedded platforms.
Background technology
Gesture-based human-computer interaction has attracted wide attention in recent years as an important interaction method. For example, an ordinary camera captures the user's motion pictures; a pattern-recognition algorithm detects and tracks the hand features in the images; the motion information of the hand is converted into motion of the television screen cursor and fed back to the smart-television terminal, triggering corresponding operation commands such as switching TV programs, moving the TV cursor, and simple game interaction. Gesture recognition relies only on the camera already equipped on the smart terminal plus recognition software installed on it, so it has great advantages in hardware cost and operation mode, and the technology is gradually becoming a standard module of smart televisions. A key problem involved is how to track the hand features accurately and smoothly, so that the display mouse or TV screen cursor moves exactly with the hand; this process is referred to as gesture tracking.
However, existing vision-based gesture tracking has the following common problems: 1) poor stability — tracking is affected by factors such as ambient lighting and complex backgrounds, and the changes of hand shape in the image caused by changes of hand angle during motion easily lead to loss of the tracking target and interruption of operation; 2) low computational efficiency — methods based on hand features such as skin color and shape are easily disturbed by external factors, while methods based on high-complexity online machine learning and training greatly increase the complexity of the tracking algorithm, and the huge amount of computation makes it difficult to run stably and smoothly on embedded platforms with low computing power, such as smart-television platforms.
Therefore, how to develop a simple, fast and stable gesture target tracking algorithm that can be applied on embedded platforms with low computing power has become an urgent problem. For every gesture interaction system, the accuracy and stability of tracking directly determine the fluency of the user's operation and the user experience, so tracking is one of the key issues of gesture-based human-computer interaction systems.
Summary of the invention
In view of the above problems, the present invention proposes a simple, fast and stable gesture tracking method. The method comprises: designing a gesture appearance model, which includes the image description modes used for tracking prediction and for prediction verification; obtaining the initial state of the target — its position and size — by gesture detection; initializing the tracker of the target according to the initial state, which includes initializing the appearance model (i.e. the image description templates for tracking prediction and prediction verification) and initializing the gesture type, state (position and size) and visibility recorded by the tracker; making a final estimate of the state and visibility of the target through tracking processing according to the tracker information; and judging the visibility of the target: if the target is permanently lost, detection must be restarted to obtain a new tracking target; otherwise, tracking continues.
Preferably, the method further comprises: setting a tracking restricted area R according to the target state in the previous frame, used for tracking the target in the current frame.
Preferably, the operations in the tracking processing — prediction, verification and local detection — are confined to the tracking restricted area R.
Preferably, the method further comprises: detecting, within the tracking restricted area R, gestures other than the tracked gesture, so that the gesture appearance model can be updated when the gesture changes abruptly.
Preferably, when the local detection result shows that the gesture class has changed, the original gesture model is discarded and the tracker information and appearance model are reinitialized with the detection result.
Preferably, in the step of predicting the target state, a color histogram combined with cam-shift is adopted, and the target state in the current frame is predicted according to the target state of the previous frame or of several preceding frames.
Preferably, in the step of verifying the prediction result, two description modes are adopted: the block LBP histogram and the edge gradient orientation histogram.
Preferably, the method further comprises: updating the tracker information according to the result of the tracking processing, including updating the appearance model and updating the gesture type, state and visibility recorded by the tracker.
Preferably, when the target is temporarily lost, the tracking process is not stopped immediately; instead, a larger tracking restricted area is set according to the state of the previous frame, and tracking processing continues in this restricted area for several subsequent frames.
The invention also proposes a gesture tracking system, wherein the module for tracking processing comprises the following sub-modules: a gesture appearance model, a tracker initialization module, a tracking prediction module, a prediction verification module, a local detection module and a model update module. The gesture appearance model includes the image description modes used for tracking prediction and prediction verification. The tracker initialization module uses the gesture detection module to detect predefined gestures and, when a certain class of gesture is detected, initializes the tracker, including initializing the appearance model and the gesture type, state and visibility recorded in the tracker. The tracking prediction module predicts the target state in the current frame from the state in the previous frame or several preceding frames, in combination with the description in the gesture appearance model. The prediction verification module extracts the features used for prediction verification from the target image corresponding to the predicted current-frame state, compares them with the corresponding verification features in the gesture appearance model, and determines whether the prediction result is valid. The model update module updates the gesture type, state and visibility recorded by the tracker and the gesture appearance model according to the result of the tracking processing.
Preferably, the system also comprises a local detection module, which determines the tracking restricted area according to the target state of the previous frame and detects gestures other than the tracked gesture within it.
Based on the above, the present invention proposes a stable and efficient gesture target tracking method that can run stably and smoothly on embedded platforms such as smart televisions. At the technical level: 1) a tracking restricted area is set to narrow the tracking range, which reduces the amount of image processing and effectively suppresses the large-scene background interference that global tracking would bring; 2) the tracking prediction result is verified by a description method fusing multiple features, which effectively suppresses false matches; 3) when the gesture changes abruptly, local detection updates the tracking model in time; 4) after tracking loss, tracking continues on the basis of the most recent state, which reduces tracking interruptions caused by temporary target loss, making the whole operation more efficient and smooth.
Description of drawings
Fig. 1 is a structural diagram of an embodiment of the gesture tracking system of the present invention.
Fig. 2 is the overall flowchart of gesture tracking in the present invention.
Fig. 3 is the flowchart of the processing of a single frame in the tracking module of the present invention.
Fig. 4 is a schematic diagram of the tracking restricted area during tracking in the present invention.
Fig. 5 shows examples of the four gestures used in an implementation of the present invention.
Embodiment
As shown in Fig. 1, a gesture tracking system 10 of the present invention is applied to a smart-television platform system 1 or the like.
In this embodiment, the platform system 1 on which the gesture tracking system 10 resides also comprises at least an image acquisition module 20 and a gesture detection module 30. The image acquisition module 20 is usually a camera, used to capture the user's gestures. In other embodiments, the image acquisition module 20 may also be arranged inside the gesture tracking system 10. The gesture detection module 30 detects predefined gestures and obtains the initial gesture state.
The gesture tracking system 10 comprises: a gesture appearance model 11, a tracker initialization module 12, a tracking prediction module 13, a prediction verification module 14, a local detection module 15 and a model update module 16.
The gesture appearance model 11 includes the image description modes used for tracking prediction and prediction verification.
In this embodiment, the appearance model of the target is expressed by combining multiple features; that is, two sets of feature description modes Ωp and Ωv are selected. The feature template built from Ωp is used for similarity measurement during tracking prediction, and the feature template built from Ωv is used to further check the tracking prediction result and prevent false detections.
The tracker initialization module 12 uses the pre-trained gesture detection module 30 to detect predefined gestures in a preset area (or in the entire image); once a certain class of gesture is stably detected, the tracker parameters are initialized from the detection result.
In this embodiment, the tracker information records not only the hand-shape class of the tracked gesture, the state (position and size) of the tracked gesture and the visibility of the tracked gesture, but also the parameter information of the appearance model of the tracked target. The concrete initialization of the tracking target is described in the "tracking target initialization" section of the embodiments below.
The tracking prediction module 13 adopts a color histogram combined with the cam-shift method and, in conjunction with the model description of the tracked target, predicts the target state in the current frame from the state in the previous frame or several preceding frames; tracking prediction is confined to the tracking restricted area. The concrete prediction scheme is described in the "tracking prediction" section of the embodiments below.
The prediction verification module 14 extracts the features used for prediction verification from the target image corresponding to the predicted current-frame state and compares them with the corresponding verification features in the gesture appearance model; if the similarity is within a certain range, tracking is considered successful, otherwise tracking is considered to have failed. The concrete scheme is described in the "prediction verification" section of the embodiments below.
The local detection module 15 determines a tracking restricted area according to the target state of the previous frame (the basis is the continuity of hand motion: under normal operation no sudden large change of position occurs, so the detection area can be narrowed and the computational efficiency improved), and detects gestures other than the tracked gesture within it. This serves on the one hand to judge whether the hand shape has switched, and on the other hand to improve the accuracy of the gesture class during tracking. The setting of the tracking restricted area and the concrete local detection scheme are described in the corresponding sections of the embodiments below.
In this embodiment, setting the tracking restricted area through the local detection module 15 narrows the tracking range during tracking, which reduces the amount of image processing and effectively suppresses the large-scene background interference that global tracking would bring.
The model update module 16 updates the gesture type, state and visibility recorded by the tracker in the tracker initialization module 12 and updates the gesture appearance model 11 according to the result of the tracking processing. Its implementation is described in the "target model update" section of the embodiments below.
In this embodiment, after tracking loss, tracking continues on the basis of the most recent state, which reduces tracking interruptions caused by temporary target loss and makes the whole operation more efficient and smooth.
As shown in Fig. 2, the flowchart of the gesture tracking proposed by the present invention for a video stream shows how the process alternates between tracking and the detection used for initialization.
In step S201, the image acquisition module 20 obtains a video image.
In step S202, the gesture detection module 30 detects specific gestures within the detection area.
In step S203, the gesture detection module 30 judges whether a specific gesture has been detected. If no gesture is detected, the process returns to step S201 to continue acquiring video images; if a specific gesture is detected, step S204 is executed and the process enters the steps performed by the gesture tracking system 10.
In step S204, the tracker initialization module 12 initializes the tracker information and the gesture appearance model.
In this embodiment, this initialization comprises extracting the image features of the target image and initializing the appearance model — the template used for tracking prediction and the image features used for prediction verification — as well as the gesture class, state (size and position) and visibility in the tracker.
In step S205, the image acquisition module 20 continues to obtain video images.
In step S206, the gesture tracking system 10 performs tracking according to the current tracker information and the appearance model of the tracked gesture. The flow of the tracking algorithm is detailed in Fig. 3.
In this embodiment, the gesture appearance model 11 includes the set Ωp of image description modes used for tracking matching and the set Ωv of feature description modes used for verifying the prediction result.
In step S207, the gesture tracking system 10 judges whether the target has permanently disappeared. If so, the process returns to step S201; otherwise it returns to step S205.
In the present invention, the visibility of the target is divided into three states: "visible", "temporarily lost" and "permanently lost" (see the explanation of visibility below). In this embodiment, permanent disappearance means that the target has been in the temporarily-lost state for a certain time, or has not been seen in a certain number of subsequent frames.
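The three-state visibility logic above can be sketched as a small state machine. This is an illustrative sketch, not the patent's implementation: the threshold `MAX_LOST_FRAMES` is an assumed value, since the patent only speaks of "a certain time or number of frames".

```python
MAX_LOST_FRAMES = 5  # assumed: frames of transient loss before giving up

class VisibilityTracker:
    """Tracks the target visibility state across frames."""
    def __init__(self):
        self.state = "visible"
        self.lost_frames = 0

    def update(self, frame_verified: bool) -> str:
        if frame_verified:              # prediction verified this frame
            self.state = "visible"
            self.lost_frames = 0
        else:                           # verification failed this frame
            self.lost_frames += 1
            if self.lost_frames >= MAX_LOST_FRAMES:
                self.state = "permanently lost"  # re-detection needed
            else:
                self.state = "temporarily lost"  # keep tracking nearby
        return self.state
```

A single verified frame resets the counter, matching the flow in Fig. 2 where only a run of consecutive losses sends the process back to detection.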
As shown in Fig. 3, the flowchart of the processing of a single frame in the gesture tracking proposed by the present invention.
In step S301, the gesture appearance model and the tracker information are first initialized according to the detection result.
In step S302, the image acquisition module 20 obtains a video image.
In step S303, whether tracking succeeded in the previous frame is judged from the tracker information. If it succeeded, step S304 is executed; if not, step S309 is executed.
In step S304, the tracking restricted area is set according to the previous-frame state.
In step S305, the tracking prediction module 13 predicts the gesture state of the current frame within the tracking restricted area.
In step S306, the prediction verification module 14 verifies the prediction result and evaluates whether it is valid. If valid, the appearance model is updated gradually.
In step S307, the model update module 16 updates the tracker information and the appearance model of the tracked target according to the tracking and detection results, and identifies the gesture state of the current frame.
In step S308, the gesture tracking system 10 judges the current visibility of the target, which provides the basis for the "permanently lost" decision in Fig. 2.
In step S309, the gesture tracking system 10 sets a larger tracking restricted area according to the last successfully tracked state.
In step S310, the local detection module 15 detects gestures other than the tracked gesture within the tracking restricted area, and provides the detection result to step S307 as a basis for updating.
In summary: the model of the tracked target is first initialized; for each subsequent video image a tracking restricted area is determined from the target position in the previous frame; within this area the current state of the tracked target is predicted based on the tracking model, and at the same time a sliding window is used to detect other predefined gestures that may occur; the tracking and detection results are then used to correct and update the target model. After the tracking and detection of each frame, the "visibility" of the target is judged: if prediction verification fails and no new gesture is detected in the local area, the target enters the "temporarily lost" state; when the target enters "temporarily lost", the last successfully tracked state is carried forward as the current state, and tracking continues on that basis for several subsequent frames; if the target remains "temporarily lost" for several consecutive frames, it enters the "permanently lost" state; if the target is "permanently lost", the process leaves the tracking module and re-enters the tracking initialization stage, where the offline-trained gesture detection module is used to detect the predefined gestures. The judgement of target visibility is described in the "target visibility" section of the embodiments below.
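The per-frame decision flow summarized above can be sketched as follows. This is a schematic outline only: `predict`, `verify` and `detect_local` are placeholders standing in for the cam-shift prediction, the multi-feature verification and the sliding-window local detection, and `max_lost` is an assumed threshold.

```python
def track_frame(tracker, frame, predict, verify, detect_local, max_lost=5):
    """One iteration of the per-frame flow: predict near the last state,
    verify, fall back to local detection, and update visibility."""
    region = tracker["last_state"]          # search near the last state
    state = predict(frame, region)          # tracking prediction
    if state is not None and verify(frame, state):
        tracker.update(last_state=state, lost=0, visibility="visible")
    else:
        other = detect_local(frame, region) # look for a new/changed gesture
        if other is not None:               # re-initialize from detection
            tracker.update(last_state=other, lost=0, visibility="visible")
        else:                               # neither tracked nor detected
            tracker["lost"] += 1
            tracker["visibility"] = ("permanently lost"
                                     if tracker["lost"] >= max_lost
                                     else "temporarily lost")
    return tracker["visibility"]
```

Note that a successful local detection resets the tracker immediately, reflecting the patent's handling of abrupt gesture-class changes.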
The detailed technical scheme of the modelling, workflow and functional modules of the gesture tracking system is described below:
(1) Design of the gesture appearance model
The gesture appearance model is the basis of target tracking. It records the characterization of the target's attributes; the attribute feature data serve on the one hand as the standard of similarity measurement during tracking, and on the other hand as the benchmark when verifying the prediction result. Commonly used description modes for the target image in gesture tracking include:
(a) descriptions based on geometric features, such as region features, contour, curvature, convexity/concavity, etc.;
(b) descriptions based on histograms, such as the color histogram, texture histogram and gradient orientation histogram;
(c) descriptions based on the skin-color membership image;
(d) descriptions based on pixel/super-pixel contrast, such as point-pair features, Haar/Haar-like features, etc.
In general, the description modes used for prediction verification differ from those used for prediction. Let the set of description modes used for prediction be Ωp and the set used for verification be Ωv. In the implementation of the system of the present invention, Ωp comprises the color histogram of the H and S channels in HSV space, and Ωv comprises the block LBP histogram representation and the block gradient orientation histogram representation.
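The Ωp descriptor — a Hue/Saturation color histogram — can be sketched as below. This is a minimal illustration: the bin counts (16×16) and value ranges are assumed, as the patent does not specify them, and pixels are represented as plain (h, s, v) tuples rather than an image array.

```python
def hs_histogram(pixels, h_bins=16, s_bins=16, h_max=180, s_max=256):
    """Normalized 2-D Hue/Saturation histogram over (h, s, v) pixel
    tuples. Hue is assumed in [0, h_max), saturation in [0, s_max)."""
    hist = [[0] * s_bins for _ in range(h_bins)]
    n = 0
    for h, s, _v in pixels:
        hi = min(h * h_bins // h_max, h_bins - 1)   # hue bin index
        si = min(s * s_bins // s_max, s_bins - 1)   # saturation bin index
        hist[hi][si] += 1
        n += 1
    # Normalize so the histogram is a probability distribution.
    return [[c / n for c in row] for row in hist] if n else hist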
(2) Tracking target initialization
The initialization of the tracking target is realized through gesture detection: when a target is detected in a certain predefined area of the image, or in the entire image, features are extracted from the target image to describe the target attributes, and these serve as the basis for prediction matching and prediction verification in the subsequent tracking stage.
The gesture detection in this stage can be carried out in the entire image or in a certain local area of the image. To reduce the detection range and improve detection speed — and also considering that the user generally stands directly in front of the camera to operate — the present invention adopts detection in a specific region, for example the middle 1/4 of the image. The benefits of such a specific region are:
(a) It conforms to natural operating habits. When operating a smart television, the user generally stands directly in front of the screen (camera); to operate, the user usually first raises the hand to a certain comfortable position P and then starts a gesture, so the tracking starting position in the user's mind is P, not some position along the path of raising the hand. Detecting in a specific region therefore helps achieve correct initialization and conforms to normal operating habits.
(b) It reduces the false-detection rate. The specific detection region greatly reduces the search area, effectively suppressing interference from complex and dynamic backgrounds; it facilitates the operation of the intended user while suppressing interference from non-intended users and from unintentional gestures.
(c) It improves the quality of subsequent tracking. If initialization occurs while the hand is being raised, the motion blur caused by the rapid motion may degrade the accuracy of the initialized target model and affect subsequent tracking quality; detecting in the specific region effectively suppresses this situation.
(d) Detecting in a small area markedly improves the efficiency and accuracy of detection.
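The "middle 1/4 of the image" detection region can be computed as below. This is an illustrative sketch: it interprets the fraction as a fraction of image area (so each side is scaled by its square root), which is one plausible reading of the patent's example.

```python
def central_detection_region(width, height, fraction=0.25):
    """Centered detection window covering `fraction` of the image area.
    Returns (x, y, w, h) in pixels."""
    scale = fraction ** 0.5              # side length scales with sqrt(area)
    w, h = int(width * scale), int(height * scale)
    x, y = (width - w) // 2, (height - h) // 2
    return x, y, w, h
```

For a 640×480 frame this yields a centered 320×240 window, i.e. half the width and half the height.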
The present invention focuses on tracking a single gesture. When the system is not performing a tracking task (for example just after start-up, or after a tracking task stops), gesture detection is carried out until a new tracking target is found. The initialization stage may detect several predefined gestures or only some specific gesture, depending on the needs of the application system. For example, when dynamic gestures are recognized only from the motion trajectory, detecting a single gesture suffices; this improves detection efficiency without affecting the application. If dynamic gesture recognition also depends on the hand shape during tracking, the gesture class at initialization affects the recognition result, and several gestures may need to be detected. In the implementation of the present invention, for example, only the closed palm shown in Fig. 5 is detected.
Regarding the method used for initialization detection, skin-color information, motion information, texture information of the gesture and so on can be combined. Common methods include:
(a) locating candidate gesture regions by segmentation, then recognizing the hand shape by analyzing the geometry of the candidate regions;
(b) detecting appearance attributes through appearance features — e.g. LBP histograms, Haar features, point-pair features — combined with the sliding-window method.
In one implementation of the present invention, Haar features are extracted offline from sample data, and an Ada-Boost classifier is trained per gesture to distinguish gesture from non-gesture; in the target initialization stage, this classifier combined with the sliding-window method is used to detect the corresponding gesture.
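The sliding-window scan used with the per-gesture classifiers can be sketched generically as below. The window size and step are assumed values, and `classify` is a placeholder for the trained Haar-feature Ada-Boost classifier (returning a positive score for gesture windows).

```python
def sliding_window_detect(image_w, image_h, classify, win=(64, 64), step=16):
    """Scan the image with a fixed-size window; keep windows the
    classifier scores positive. Returns a list of (x, y, w, h) hits."""
    hits = []
    w, h = win
    for y in range(0, image_h - h + 1, step):
        for x in range(0, image_w - w + 1, step):
            if classify(x, y, w, h) > 0:
                hits.append((x, y, w, h))
    return hits
```

A practical detector would additionally scan at multiple scales and merge overlapping hits; those steps are omitted here for brevity.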
(3) Setting of the tracking restricted area
The tracking restricted area is set according to the continuity of the target motion: from the state of the target at the previous moment, the area where the target may appear in the current frame is estimated, and the search for the best match with the model is confined to this area. In practice, under normal conditions the target position always falls within this tracking restricted area. This approach not only greatly reduces the search area and improves tracking efficiency, but also — because matching at unnecessary positions is avoided — helps suppress drift and false matches in target tracking.
The setting of this area also implicitly reminds the user not to move the gesture too fast, to avoid tracking failure caused by motion blur in the camera image.
In the implementation of the present invention, we tested a color histogram + cam-shift tracking scheme and an LBP histogram + particle filter tracking scheme, and verified that adding the tracking restricted area suppresses false matches at skin-colored areas such as the face, neck and arms.
As shown in Fig. 4, the inner box marks the target gesture state (position and size of the hand) obtained by tracking in the current frame, and the outer box marks the tracking restricted area determined from that state; the prediction of the target state in the adjacent next frame is carried out only within this tracking restricted area.
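The restricted area of Fig. 4 can be derived from the previous state as below. This is a sketch under one assumption: the expansion factor of 1.5 around the box center is illustrative, as the patent does not fix how much larger the outer box is than the target box.

```python
def restricted_region(state, img_w, img_h, expand=1.5):
    """Inflate the previous target box (x, y, w, h) by `expand` about
    its center and clamp the result to the frame bounds."""
    x, y, w, h = state
    cx, cy = x + w / 2, y + h / 2        # box center stays fixed
    nw, nh = w * expand, h * expand
    nx = max(0, int(cx - nw / 2))
    ny = max(0, int(cy - nh / 2))
    return nx, ny, min(int(nw), img_w - nx), min(int(nh), img_h - ny)
```

For the temporary-loss case, the same function with a larger `expand` value gives the "larger tracking restricted area" of step S309.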
(4) Tracking prediction
Tracking prediction is the process of estimating the current state of the target from the model of the tracked target and the state of the target in the previous frame or several preceding frames. Several practical fast prediction methods are listed here:
(a) express the distribution of the target pixel values with a color histogram, compute the back-projection image P of the source image from this color histogram, and perform cam-shift tracking on P;
(b) compute the skin-color membership map P from a skin-color model, where the value of a pixel in P represents the probability that the point is a skin-color point, and perform cam-shift tracking on P;
(c) use the source image / block LBP histogram / block gradient orientation histogram / Haar features, etc., as the image description, combined with particle filtering;
(d) choose random points on the image, or the grid points formed by uniform subdivision, or detected feature points such as Harris corners or SIFT/SURF keypoints; track these points by optical flow, and obtain the target state by jointly analyzing the tracking results.
The present invention uses the tracking scheme of a color histogram combined with the cam-shift prediction mechanism: for each new video image, the back-projection image of the target image corresponding to the tracking restricted area is computed from the color histogram in the model, and the best match is searched in this back-projection image with the cam-shift scheme.
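The back-projection step at the core of this scheme replaces each pixel by the model-histogram value of its bin, yielding the probability image that cam-shift climbs. The sketch below mirrors the assumed 16×16 H/S histogram layout used earlier; bin counts and ranges remain illustrative.

```python
def back_projection(pixels, hist, h_bins=16, s_bins=16, h_max=180, s_max=256):
    """Map each (h, s, v) pixel to the model histogram value of its
    (H, S) bin, producing a per-pixel target-probability list."""
    out = []
    for h, s, _v in pixels:
        hi = min(h * h_bins // h_max, h_bins - 1)
        si = min(s * s_bins // s_max, s_bins - 1)
        out.append(hist[hi][si])       # probability this pixel is target
    return out
```

Cam-shift then iteratively shifts the search window to the centroid of this probability image and adapts the window size, which is what makes the combination robust to moderate scale change.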
(5) Verification of the prediction result
Tracking prediction algorithms basically search, among all candidate states within some region, for the candidate that best matches the model; in other words, a series of candidate states is generated from this region by some method, and the best-matching candidate S is chosen from them. However, this best match is not necessarily the true target state, so it must be verified; this is the prediction verification referred to in the present invention.
According to the set of describing modes Ωv used for verification in the target model, feature descriptions are extracted from the target image corresponding to state S and compared with the corresponding reference descriptions in the model. If the similarity lies within a certain range, the tracking is considered successful; otherwise it is considered failed. This scheme is mainly based on the assumption that the true target state should agree with the reference image on multiple attributes. Through the prediction verification stage, the tracking prediction may be found invalid; when verification of the prediction result fails, the target is considered to enter the "transient loss" state.
The prediction scheme adopted in the present invention is color histogram + CamShift. Two image describing modes are adopted for prediction verification: the block-wise LBP histogram and the contour HOG histogram. Only when the current state agrees with the model under both describing modes is the tracking considered successful; otherwise it is considered failed.
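A minimal sketch of this two-mode gating, assuming histogram-intersection similarity and an illustrative threshold of 0.7 per mode (the patent does not specify the similarity measure or threshold values):

```python
import numpy as np

def hist_intersection(h1, h2):
    """Similarity of two normalized histograms: 1.0 when identical, 0.0 when disjoint."""
    h1 = h1 / (h1.sum() + 1e-9)
    h2 = h2 / (h2.sum() + 1e-9)
    return float(np.minimum(h1, h2).sum())

def verify_prediction(cand, model, thresholds):
    """Accept the predicted state S only if it matches the model under BOTH
    describing modes (e.g. block LBP histogram and contour HOG histogram)."""
    return all(hist_intersection(cand[k], model[k]) >= thresholds[k] for k in model)

# Demo: identical descriptions pass; a flipped HOG distribution fails.
model = {'lbp': np.array([4.0, 4.0, 2.0]), 'hog': np.array([1.0, 3.0])}
th = {'lbp': 0.7, 'hog': 0.7}
ok = verify_prediction({'lbp': model['lbp'], 'hog': model['hog']}, model, th)
bad = verify_prediction({'lbp': model['lbp'], 'hog': np.array([3.0, 1.0])}, model, th)
```

Requiring agreement on both modes is stricter than either alone, which matches the AND condition stated above.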
(6) local detection
In dynamic hand-target tracking, it is necessary not only to obtain the position of the moving hand through tracking, but also to recognize the gesture shape in each frame of the process.
Many systems recognize the static gesture during tracking by classifying the image region corresponding to the predicted state S, but this approach has two problems: a) when the tracking gradually drifts, the image region corresponding to state S no longer coincides exactly with the real gesture region; it may, for example, be a portion of the hand and arm centered on the wrist, and recognition results on such a region are mostly unreliable; b) even when the tracking is correct, performing a single one-shot classification on the image corresponding to S still carries a relatively high probability of error.
In view of this, the present invention proposes a scheme that uses multi-scale sliding-window detection within the above tracking restricted area to detect predefined gesture types other than the gesture being tracked. For each class, the detected target windows are clustered into several clusters; among the windows of all gesture clusters, the one with the highest confidence is selected, and its corresponding hand position and type are computed as the detection output. If no target window is detected for any class, or no cluster obtained by clustering satisfies the requirements, the local detection of the current frame has no output.
If the local detection has no output, the classification result for the current-frame gesture is the gesture type recorded in the tracking model. Otherwise, if there is output, the gesture type is considered to have changed during tracking; the classification result is then the gesture class output by detection, the gesture pose recorded in the tracking model is set to the detection result, and the tracking model is reinitialized with this detection result.
Using sliding-window detection results for classification improves classification accuracy; this is based on the understanding that, because this process produces a large number of windows containing the target gesture, the confidence of repeated classification is higher than that of a single classification.
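The window enumeration and clustering step might be sketched as follows; the scales, step size, IoU threshold, and greedy highest-confidence-per-cluster rule are illustrative assumptions (the patent does not fix a particular clustering algorithm):

```python
def sliding_windows(region, win, scales=(1.0, 1.25, 1.5), step_frac=0.25):
    """Enumerate multi-scale windows inside the tracking restricted area."""
    rx, ry, rw, rh = region
    for s in scales:
        w, h = int(win[0] * s), int(win[1] * s)
        step = max(1, int(w * step_frac))
        for y in range(ry, ry + rh - h + 1, step):
            for x in range(rx, rx + rw - w + 1, step):
                yield (x, y, w, h)

def cluster_detections(dets, iou_thresh=0.3):
    """Greedy overlap clustering; keep the highest-confidence window per cluster."""
    def iou(a, b):
        ax, ay, aw, ah = a; bx, by, bw, bh = b
        ix = max(0, min(ax + aw, bx + bw) - max(ax, bx))
        iy = max(0, min(ay + ah, by + bh) - max(ay, by))
        inter = ix * iy
        return inter / float(aw * ah + bw * bh - inter) if inter else 0.0
    dets = sorted(dets, key=lambda d: -d[1])     # each det is (window, confidence)
    keep = []
    for win, conf in dets:
        if all(iou(win, k[0]) < iou_thresh for k in keep):
            keep.append((win, conf))
    return keep

# Demo: three detections, two overlapping; clustering keeps one window per cluster.
wins = list(sliding_windows((0, 0, 40, 40), (20, 20), scales=(1.0,)))
dets = [((0, 0, 10, 10), 0.9), ((1, 1, 10, 10), 0.8), ((50, 50, 10, 10), 0.7)]
best = cluster_detections(dets)
```

Restricting the enumeration to the tracking restricted area is what keeps the window count, and hence the cost, low on embedded hardware.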
The beneficial effects brought by this method are as follows:
(a) it improves the accuracy of static-gesture classification during tracking;
(b) it resolves tracking failures caused by sudden gesture changes that the model has no time to learn;
(c) compared with an online-learned model, the classifier used for detection is trained by supervised learning, has higher confidence, and is less prone to false detections.
(7) Update of the target model
When tracking verification succeeds, the target model needs to be updated gradually so that it can adapt to slow changes in the target's appearance during motion; the update algorithm depends on the specific features, prediction method, and verification method used in the model. In one implementation of the present invention, tracking prediction is done with a color histogram, and verification with a block-wise LBP histogram and an edge gradient orientation histogram; these features are updated as follows:
H_c(i) = a·H_c(i) + (1 − a)·H_c^t(i),  i = 1, …, N_c
H_l(j) = b·H_l(j) + (1 − b)·H_l^t(j),  j = 1, …, N_l
H_e(k) = g·H_e(k) + (1 − g)·H_e^t(k),  k = 1, …, N_e
where H_c, H_l, and H_e denote the color histogram, block-wise LBP histogram, and edge gradient orientation histogram in the model representation, respectively; H_c^t, H_l^t, and H_e^t denote the corresponding description histograms of the current-frame target image; N_c, N_l, and N_e denote the dimensions of the respective histograms; H_c(i) denotes the component on the i-th dimension of the histogram; and a, b, g are the update rates of the respective describing modes.
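These running-average updates can be written directly; the rate value 0.9 below is illustrative, as the patent does not give numeric update rates:

```python
import numpy as np

def update_histogram(model_hist, frame_hist, rate):
    """Gradual update H = a*H + (1 - a)*H^t: keep `rate` of the old model
    component and blend in (1 - rate) of the current frame's description."""
    return rate * model_hist + (1.0 - rate) * frame_hist

# Demo with the color histogram and an assumed update rate a = 0.9.
H_c = np.array([1.0, 0.0, 0.0])        # model histogram
H_c_t = np.array([0.0, 1.0, 0.0])      # current-frame histogram
H_c = update_histogram(H_c, H_c_t, rate=0.9)
```

The same function applies to the LBP and edge-gradient histograms with their own rates b and g.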
During tracking, if the gesture type does not change, the model can be updated according to the above scheme; if the gesture class changes, the target is reinitialized according to the current tracking state.
Using the local detection and tracking results, the update rules for the target model are as follows:
(a) if the local detection stage successfully detects a target of another gesture class, a sudden gesture change has occurred and the old model is thoroughly invalid; the target model parameters are then reinitialized from the detection result;
(b) if the local detection stage detects no other gesture target and the tracking of the current frame fails (the target is in the transient-loss state, or prediction verification shows the prediction is invalid), the target model is not updated;
(c) if the current frame is tracked successfully, i.e. the prediction of the current frame passes verification, the target model is updated progressively.
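Rules (a)-(c) can be sketched as a small dispatcher; the dict-of-histograms model representation and the `update_target_model` name are assumptions made for illustration:

```python
import numpy as np

def update_target_model(model, detection, tracked_ok, frame_hists, rates):
    """Dispatch the three update rules.
    `model`, `detection`, and `frame_hists` map feature names to histograms."""
    if detection is not None:          # (a) another gesture class detected: reinitialize
        return dict(detection)
    if not tracked_ok:                 # (b) current-frame tracking failed: leave model unchanged
        return model
    return {k: rates[k] * model[k] + (1.0 - rates[k]) * frame_hists[k]  # (c) gradual update
            for k in model}

# Demo of rules (b) and (c).
m = {'color': np.array([1.0, 0.0])}
f = {'color': np.array([0.0, 1.0])}
kept = update_target_model(m, None, False, f, {'color': 0.9})      # (b): untouched
blended = update_target_model(m, None, True, f, {'color': 0.9})    # (c): blended
```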
(8) On the visibility of the target
The system of the present invention divides the visibility of the target into three states: "visible", "transiently lost", and "permanently lost". The visible state means that the target is tracked in the current frame and passes prediction verification. At a given frame of the tracking stage, if verification of the prediction result fails, tracking of that frame fails and the target enters the "transiently lost" stage. In the "transiently lost" stage, a tracking restricted area can still be determined from the last tracked state, and local detection and tracking are carried out in this area; during this time the target state may be converted back to "visible" under either of two conditions: (a) the target is tracked again, or (b) local detection detects some predefined gesture. Otherwise, if the target remains in the transiently-lost stage for a certain time, the target state is converted from "transiently lost" to "permanently lost"; the target model is then destroyed, the tracking process stops, and the initialization detection stage is entered again.
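The three-state visibility logic described above can be sketched as a per-frame state machine; the `max_lost` threshold of 30 frames is an assumed value for the "certain time" mentioned in the text:

```python
from enum import Enum

class Visibility(Enum):
    VISIBLE = 1
    TRANSIENT_LOSS = 2
    PERMANENT_LOSS = 3

def step_visibility(state, verified, detected, lost_frames, max_lost=30):
    """One frame of the visibility state machine; returns (new_state, lost_frames)."""
    if state is Visibility.VISIBLE:
        return (Visibility.VISIBLE, 0) if verified else (Visibility.TRANSIENT_LOSS, 1)
    if state is Visibility.TRANSIENT_LOSS:
        if verified or detected:       # re-tracked, or a predefined gesture re-detected
            return (Visibility.VISIBLE, 0)
        if lost_frames + 1 >= max_lost:
            return (Visibility.PERMANENT_LOSS, lost_frames + 1)  # destroy model, re-detect
        return (Visibility.TRANSIENT_LOSS, lost_frames + 1)
    return (Visibility.PERMANENT_LOSS, lost_frames)
```

On entering `PERMANENT_LOSS`, the caller would discard the target model and restart the initialization detection stage.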
Verification results and beneficial effects of the present invention
This gesture tracking method and system were tested on an Android platform with smart-television hardware support. The hardware configuration is: a processor with a 700 MHz main frequency and 200 MB of system memory; an ordinary web camera connected through a USB interface performs video capture, and the video image is displayed in the upper-left corner of the television. Experimental results prove that the beneficial effects of the present invention are as follows:
(1) Fast processing speed with good real-time performance. Based on the continuity of gesture motion in a gesture recognition system, the present invention sets a tracking restricted area, which narrows the scope of tracking prediction; detection is performed only in this local area, reducing the number of sliding windows and improving operating efficiency on the embedded platform. Experiments show that, after tracking initialization, the complete pipeline, including tracking prediction, local detection, and model updating, reaches a speed of 30 ms/frame on the above television system.
(2) Good stability and strong robustness of the tracker. Because the tracking restricted area is adjusted in real time during tracking, unnecessary matching in background regions is reduced; tracking prediction continues during transient loss of the target; and detection within the local area solves the target-model replacement problem when the gesture changes suddenly. This guarantees the stability and robustness of the tracker and avoids problems such as the tracking interruptions caused in existing methods by hand deformation, environmental interference, and other factors.
(3) High accuracy of gesture recognition during tracking. Because sliding-window detection is carried out in the local area, a gesture that is present produces many windows containing it, so multiple successful detections verify the existence of the corresponding gesture. Compared with systems that classify only the single target image corresponding to the tracking state, the probability of correct classification is greatly improved. Experiments show that when arbitrary switching among the four gestures shown in Figure 5 is allowed during tracking, the recognition accuracy is above 99%.
(4) This method is based on an ordinary camera and is realized in real time through image recognition; the user does not need to wear extra auxiliary equipment, no expensive 3D scanning device is needed, and no hardware cost is added.
Description of uses of the present invention
The embodiment of the present invention is a smart-television system, but the invention can also be used in other intelligent household appliances. For example: on a mobile-phone terminal, the phone camera captures the hand-motion picture, enabling hand motion to control the screen cursor; on air-conditioning equipment, gesture tracking enables hand motion to control the air-conditioning wind direction; on a PC platform, tracking the hand target enables motion control of the screen mouse. In addition, other kinds of interactive operations can be performed through trajectory-recognition technology based on the tracked movement locus.
The above are merely preferred embodiments of the present invention and do not thereby limit the claims of the present invention. Every equivalent structure or process transformation made using the contents of the specification and accompanying drawings of the present invention, whether used directly or indirectly in other related technical fields, is likewise included within the scope of patent protection of the present invention.

Claims (11)

1. A gesture tracking method, characterized in that it comprises the following steps:
designing an appearance model of the gesture, comprising the image describing modes used for tracking prediction and prediction verification;
performing gesture detection to obtain the initial state of the target, i.e. the position and size information of the target;
initializing a tracker for the target according to said initial state, comprising initializing the appearance model, i.e. initializing the image description templates used for tracking prediction and prediction verification, and initializing the class, state, and visibility of the tracked gesture recorded by the tracker, wherein the state comprises position and size information;
making a final estimate of the state and visibility of the target through tracking processing according to said tracker information;
judging the visibility of the target, wherein, if the target is permanently lost, detection must be restarted to obtain a tracking target; otherwise, tracking continues.
2. The gesture tracking method as claimed in claim 1, characterized in that it further comprises the step of: setting a tracking restricted area R according to the target state of the previous frame, used for tracking the target in the current frame.
3. The gesture tracking method as claimed in claim 2, characterized in that it further comprises the step of:
confining the operations in said tracking processing, comprising prediction, verification, and local detection, to be carried out only within said tracking restricted area R.
4. The gesture tracking method as claimed in claim 3, characterized in that it further comprises the step of:
detecting, within said tracking restricted area R, gestures other than the tracked gesture, used for updating the appearance model of the gesture when the gesture changes suddenly.
5. The gesture tracking method as claimed in claim 3, characterized in that, when a change of gesture class is found from the local detection result, the original gesture model is discarded, and the tracker information and appearance model are reinitialized with the detection result.
6. The gesture tracking method as claimed in claim 3, characterized in that, in the step of predicting the target state, a method combining a color histogram with CamShift is adopted, and the target state in the current frame is predicted according to the target state of the previous frame or several preceding frames.
7. The gesture tracking method as claimed in claim 3, characterized in that, in the step of verifying the prediction result, two describing modes are adopted: the block-wise LBP histogram and the edge gradient orientation histogram.
8. The gesture tracking method as claimed in claim 1, characterized in that it further comprises the step of:
updating the information of said tracker according to the result of said tracking processing, comprising updating the appearance model and updating the gesture type, state, and visibility recorded by the tracker.
9. The gesture tracking method as claimed in claim 1, further characterized in that, when transient loss of the target occurs, the tracking process is not stopped immediately; instead, a larger tracking restricted area is set according to the state of the previous frame, and tracking processing continues within this restricted area for several subsequent frames.
10. A gesture tracking system, applied in a system platform having an image acquisition module and a gesture detection module, characterized in that said gesture tracking system comprises:
an appearance model of the gesture, comprising the image describing modes used for tracking prediction and prediction verification;
a tracker initialization module, used to detect predefined gestures with said gesture detection module and, when a certain class of gesture is detected, to initialize the tracker, comprising initializing the appearance model and initializing the gesture class, state, and visibility recorded in the tracker;
a tracking prediction module, used to predict the target state in the current frame from the target state in the previous frame or several preceding frames, in combination with the appearance model description of the gesture;
a prediction verification module, which extracts the features used for prediction verification from the target image corresponding to the predicted current-frame state, compares them with the corresponding image features used for prediction verification in the appearance model of the gesture, and determines whether the prediction result is valid;
a model update module, used to update, according to the result of said tracking processing, the gesture class, state, and visibility information recorded by the tracker in said tracker initialization module, and to update the appearance model of the gesture.
11. The gesture tracking system as claimed in claim 10, characterized in that it further comprises a local detection module, used to determine the tracking restricted area according to the target state of the previous frame and to detect gestures other than the tracked gesture.
CN201210290337.1A 2012-08-15 2012-08-15 Gesture tracking method and system Active CN102831439B (en)

Priority Applications (1)

Application Number Priority Date Filing Date Title
CN201210290337.1A CN102831439B (en) 2012-08-15 2012-08-15 Gesture tracking method and system


Publications (2)

Publication Number Publication Date
CN102831439A true CN102831439A (en) 2012-12-19
CN102831439B CN102831439B (en) 2015-09-23

Family

ID=47334567

Family Applications (1)

Application Number Title Priority Date Filing Date
CN201210290337.1A Active CN102831439B (en) 2012-08-15 2012-08-15 Gesture tracking method and system

Country Status (1)

Country Link
CN (1) CN102831439B (en)


Citations (2)

* Cited by examiner, † Cited by third party
Publication number Priority date Publication date Assignee Title
CN102216957A (en) * 2008-10-09 2011-10-12 埃西斯创新有限公司 Visual tracking of objects in images, and segmentation of images
WO2012103874A1 (en) * 2011-02-04 2012-08-09 Eads Deutschland Gmbh Camera system for recording and tracking remote moving objects


Non-Patent Citations (1)

* Cited by examiner, † Cited by third party
Title
SUN XIAOXIAO, JIA QIULING: "Research on Tracking of Deformable Targets", Modern Electronics Technique (《现代电子技术》) *

Cited By (53)

* Cited by examiner, † Cited by third party
Publication number Priority date Publication date Assignee Title
CN103366188A (en) * 2013-07-08 2013-10-23 中科创达软件股份有限公司 Gesture tracking method adopting fist detection as auxiliary information
CN103353935A (en) * 2013-07-19 2013-10-16 电子科技大学 3D dynamic gesture identification method for intelligent home system
CN103353935B (en) * 2013-07-19 2016-06-08 电子科技大学 A kind of 3D dynamic gesture identification method for intelligent domestic system
CN103413323B (en) * 2013-07-25 2016-01-20 华南农业大学 Based on the object tracking methods of component-level apparent model
CN103413323A (en) * 2013-07-25 2013-11-27 华南农业大学 Object tracking method based on component-level appearance model
CN103413120A (en) * 2013-07-25 2013-11-27 华南农业大学 Tracking method based on integral and partial recognition of object
CN103413120B (en) * 2013-07-25 2016-07-20 华南农业大学 Tracking based on object globality and locality identification
CN104424634A (en) * 2013-08-23 2015-03-18 株式会社理光 Object tracking method and device
CN104424634B (en) * 2013-08-23 2017-05-03 株式会社理光 Object tracking method and device
CN104699238A (en) * 2013-12-10 2015-06-10 现代自动车株式会社 System and method for gesture recognition of vehicle
CN104699238B (en) * 2013-12-10 2019-01-22 现代自动车株式会社 System and method of the gesture of user to execute the operation of vehicle for identification
WO2015096584A1 (en) * 2013-12-25 2015-07-02 乐视网信息技术(北京)股份有限公司 Method and system for setting position of moving cursor in display page with links
CN111166479B (en) * 2014-05-14 2024-02-13 斯瑞克欧洲控股I公司 Navigation system and method for tracking the position of a work object
CN111166479A (en) * 2014-05-14 2020-05-19 斯瑞克欧洲控股I公司 Navigation system and method for tracking position of work target
US10171753B2 (en) 2014-08-28 2019-01-01 Nubia Technology Co., Ltd. Shooting method, shooting device and computer storage medium
WO2016029746A1 (en) * 2014-08-28 2016-03-03 努比亚技术有限公司 Shooting method, shooting device and computer storage medium
CN104159040A (en) * 2014-08-28 2014-11-19 深圳市中兴移动通信有限公司 Photographing method and photographing device
CN104731323A (en) * 2015-02-13 2015-06-24 北京航空航天大学 Multi-rotating direction SVM model gesture tracking method based on HOG characteristics
CN104731323B (en) * 2015-02-13 2017-07-04 北京航空航天大学 A kind of gesture tracking method of many direction of rotation SVM models based on HOG features
CN106469293A (en) * 2015-08-21 2017-03-01 上海羽视澄蓝信息科技有限公司 The method and system of quick detection target
CN105528078B (en) * 2015-12-15 2019-03-22 小米科技有限责任公司 The method and device of controlling electronic devices
CN105528078A (en) * 2015-12-15 2016-04-27 小米科技有限责任公司 Method and device controlling electronic equipment
CN107292223A (en) * 2016-04-13 2017-10-24 芋头科技(杭州)有限公司 A kind of online verification method and system of real-time gesture detection
US10885639B2 (en) 2016-06-23 2021-01-05 Advanced New Technologies Co., Ltd. Hand detection and tracking method and device
TWI703507B (en) * 2016-06-23 2020-09-01 香港商阿里巴巴集團服務有限公司 Human hand detection and tracking method and device
WO2017219875A1 (en) * 2016-06-23 2017-12-28 阿里巴巴集团控股有限公司 Hand detecting and tracking method and device
US10885638B2 (en) 2016-06-23 2021-01-05 Advanced New Technologies Co., Ltd. Hand detection and tracking method and device
CN106920251A (en) * 2016-06-23 2017-07-04 阿里巴巴集团控股有限公司 Staff detecting and tracking method and device
KR20190020783A (en) * 2016-06-23 2019-03-04 알리바바 그룹 홀딩 리미티드 Hand detection and tracking methods and devices
KR102227083B1 (en) * 2016-06-23 2021-03-16 어드밴스드 뉴 테크놀로지스 씨오., 엘티디. Hand detection and tracking method and device
JP2019519049A (en) * 2016-06-23 2019-07-04 アリババ グループ ホウルディング リミテッド Hand detection and tracking method and apparatus
EP3477593A4 (en) * 2016-06-23 2019-06-12 Alibaba Group Holding Limited Hand detecting and tracking method and device
CN106875428A (en) * 2017-01-19 2017-06-20 博康智能信息技术有限公司 A kind of multi-object tracking method and device
US11169614B2 (en) 2017-10-24 2021-11-09 Boe Technology Group Co., Ltd. Gesture detection method, gesture processing device, and computer readable storage medium
CN109697394A (en) * 2017-10-24 2019-04-30 京东方科技集团股份有限公司 Gesture detecting method and gestures detection equipment
CN108108707A (en) * 2017-12-29 2018-06-01 北京奇虎科技有限公司 Gesture processing method and processing device based on video data, computing device
CN108108709A (en) * 2017-12-29 2018-06-01 纳恩博(北京)科技有限公司 A kind of recognition methods and device, computer storage media
CN108491767B (en) * 2018-03-06 2022-08-09 北京因时机器人科技有限公司 Autonomous rolling response method and system based on online video perception and manipulator
CN108491767A (en) * 2018-03-06 2018-09-04 北京因时机器人科技有限公司 Autonomous roll response method, system and manipulator based on Online Video perception
CN113395450A (en) * 2018-05-29 2021-09-14 深圳市大疆创新科技有限公司 Tracking shooting method, device and storage medium
CN109101872A (en) * 2018-06-20 2018-12-28 济南大学 A kind of generation method of 3D gesture mouse
CN109101872B (en) * 2018-06-20 2023-04-18 济南大学 Method for generating 3D gesture mouse
CN109145793A (en) * 2018-08-09 2019-01-04 东软集团股份有限公司 Establish method, apparatus, storage medium and the electronic equipment of gesture identification model
CN109613930B (en) * 2018-12-21 2022-05-24 中国科学院自动化研究所南京人工智能芯片创新研究院 Control method and device for unmanned aerial vehicle, unmanned aerial vehicle and storage medium
CN109613930A (en) * 2018-12-21 2019-04-12 中国科学院自动化研究所南京人工智能芯片创新研究院 Control method, device, unmanned vehicle and the storage medium of unmanned vehicle
CN109960403A (en) * 2019-01-07 2019-07-02 西南科技大学 For the visualization presentation of medical image and exchange method under immersive environment
CN110095111A (en) * 2019-05-10 2019-08-06 广东工业大学 A kind of construction method of map scene, building system and relevant apparatus
CN110493618A (en) * 2019-08-02 2019-11-22 广州长嘉电子有限公司 Android method for intelligently controlling televisions and system based on USB3.0 interface
CN111061367A (en) * 2019-12-05 2020-04-24 神思电子技术股份有限公司 Method for realizing gesture mouse of self-service equipment
CN111061367B (en) * 2019-12-05 2023-04-07 神思电子技术股份有限公司 Method for realizing gesture mouse of self-service equipment
CN112132017A (en) * 2020-09-22 2020-12-25 广州华多网络科技有限公司 Image processing method and device and electronic equipment
CN112132017B (en) * 2020-09-22 2024-04-02 广州方硅信息技术有限公司 Image processing method and device and electronic equipment
CN113989611A (en) * 2021-12-20 2022-01-28 北京优幕科技有限责任公司 Task switching method and device

Also Published As

Publication number Publication date
CN102831439B (en) 2015-09-23

Similar Documents

Publication Publication Date Title
CN102831439B (en) Gesture tracking method and system
CN105825524B (en) Method for tracking target and device
Zhou et al. A novel finger and hand pose estimation technique for real-time hand gesture recognition
Kim et al. Simultaneous gesture segmentation and recognition based on forward spotting accumulative HMMs
JP6079832B2 (en) Human computer interaction system, hand-to-hand pointing point positioning method, and finger gesture determination method
Choi et al. A general framework for tracking multiple people from a moving camera
US20180307319A1 (en) Gesture recognition
CN110751022A (en) Urban pet activity track monitoring method based on image recognition and related equipment
CN104992171A (en) Method and system for gesture recognition and man-machine interaction based on 2D video sequence
Schwarz et al. Manifold learning for tof-based human body tracking and activity recognition.
CN103105924B (en) Man-machine interaction method and device
Tan et al. Dynamic hand gesture recognition using motion trajectories and key frames
Li et al. Robust multiperson detection and tracking for mobile service and social robots
CN102855461A (en) Method and equipment for detecting fingers in images
CN105335701A (en) Pedestrian detection method based on HOG and D-S evidence theory multi-information fusion
She et al. A real-time hand gesture recognition approach based on motion features of feature points
Liao et al. A two-stage method for hand-raising gesture recognition in classroom
Guo et al. Gesture recognition of traffic police based on static and dynamic descriptor fusion
US20220300774A1 (en) Methods, apparatuses, devices and storage media for detecting correlated objects involved in image
Avola et al. Machine learning for video event recognition
CN109325387B (en) Image processing method and device and electronic equipment
CN108108648A (en) A kind of new gesture recognition system device and method
Thabet et al. Algorithm of local features fusion and modified covariance-matrix technique for hand motion position estimation and hand gesture trajectory tracking approach
Manresa-Yee et al. Towards hands-free interfaces based on real-time robust facial gesture recognition
CN110197123A (en) A kind of human posture recognition method based on Mask R-CNN

Legal Events

Date Code Title Description
C06 Publication
PB01 Publication
C10 Entry into substantive examination
SE01 Entry into force of request for substantive examination
C14 Grant of patent or utility model
GR01 Patent grant