CN102509074B - Target identification method and device - Google Patents

Target identification method and device

Info

Publication number
CN102509074B
CN102509074B (application CN201110317260.8A)
Authority
CN
China
Prior art keywords
depth
limbs
target
target area
action
Prior art date
Legal status
Active
Application number
CN201110317260.8A
Other languages
Chinese (zh)
Other versions
CN102509074A (en)
Inventor
李相涛 (Li Xiangtao)
Current Assignee
TCL Corp
Original Assignee
TCL Corp
Priority date
Filing date
Publication date
Application filed by TCL Corp filed Critical TCL Corp
Priority to CN201110317260.8A
Publication of CN102509074A
Application granted
Publication of CN102509074B
Legal status: Active


Abstract

The invention is applicable to the field of electronics and provides a target identification method and device. The method comprises the following steps: judging, according to the change in depth values between two adjacent depth maps in a depth frame sequence, whether a startup action exists in a preset region of interest (ROI) of the later depth map; detecting the coinciding region of the synchronized color frame according to a preset limb target model, and judging a region that matches the limb target model to be a limb target region; saving the feature set parameters of the limb target region; tracking the region judged to be a limb target region through subsequent depth frames, and detecting the coinciding region of each synchronized color frame using the preset limb target model and the saved feature set parameters of the previous limb target region, thereby obtaining the limb target region; and acquiring the coordinates of each limb target region and recognizing the target action according to the acquired coordinate values. In the target identification method and device provided by the embodiments of the invention, a depth image sequence and a color image sequence are used together to detect the limb target regions, which effectively improves detection accuracy.

Description

Target identification method and device
Technical field
The invention belongs to the field of human-computer interaction technology, and in particular relates to a target identification method and device.
Background art
With existing human-computer interaction devices (for example, smart televisions), a user can carry out simple man-machine interaction. Such devices often rely on target action recognition technology: the device responds to the user's control by recognizing a target action, for example a gesture.
Existing target identification methods usually recognize a person's limb actions by processing two-dimensional images. Because a two-dimensional image can hardly reflect real objects comprehensively and accurately, such methods have difficulty distinguishing the target and recognizing limb actions, and the probability of misrecognition is large.
Summary of the invention
The embodiments of the present invention provide a target identification method, intended to solve the problem that existing target identification methods have a large misrecognition probability when recognizing target actions.
The embodiments of the present invention are realized as a target identification method comprising the following steps:
synchronously acquiring a depth frame sequence and a color frame sequence, the depth frame sequence being a number of depth maps captured by a depth camera;
judging, according to the change in depth values between two adjacent depth maps in the depth frame sequence, whether a startup action exists in a preset region of interest (ROI) of the later depth frame;
when a startup action exists in the ROI of the depth frame, detecting the region of the synchronized color frame that coincides with the ROI according to a preset limb target model, and judging a region that matches the limb target model to be a limb target region;
saving the feature set parameters of the region judged to be a limb target region, and taking the detected limb target region as the tracking target;
tracking said tracking target in subsequent depth frames, correcting the coinciding region of the color frame synchronized with each subsequent depth frame by using the preset limb target model and the saved feature set parameters of the previous limb target region, and judging a region that matches the limb target model and the feature set parameters of the previous limb target region to be a limb target region;
acquiring all coordinate values of each limb target region, judging the direction of motion of the target according to the acquired coordinate values, and recognizing the target action according to the direction of motion of the target.
The step of judging, according to the change in depth values between two adjacent depth maps in the depth frame sequence, whether a startup action exists in the preset region of interest (ROI) of the later depth map comprises:
acquiring the depth value groups in the preset region of interest (ROI) of two successive depth frames respectively, and calculating the difference of the depth value groups of the two depth frames;
judging whether the acquired difference of the depth value groups meets a preset startup condition; when the preset startup condition is met, judging that a startup action exists in the ROI, and otherwise that no startup action exists.
Another object of the embodiments of the present invention is to provide a target action recognition device, comprising:
an image acquisition unit, for synchronously acquiring a depth frame sequence and a color frame sequence, the depth frame sequence being a number of depth maps captured by a depth camera;
a moving region judging unit, for judging, according to the change in depth values between two adjacent depth maps in the depth frame sequence, whether a startup action exists in a preset region of interest (ROI) of the later depth map;
a limb target region judging unit, for detecting, when a startup action exists in the ROI of the depth frame, the region of the synchronized color frame that coincides with the ROI according to a preset limb target model, and judging a region that matches the limb target model to be a limb target region;
a parameter storage unit, for saving the feature set parameters of the region judged to be a limb target region, and taking the detected limb target region as the tracking target;
a limb target region correcting unit, for tracking said tracking target in subsequent depth frames, correcting the coinciding region of the color frame synchronized with each subsequent depth frame by using the preset limb target model and the saved feature set parameters of the previous limb target region, and judging a region that matches the limb target model and the feature set parameters of the previous limb target region to be a limb target region;
a target action recognition unit, for acquiring all coordinate values of each limb target region, judging the direction of motion of the target according to the acquired coordinate values, and recognizing the target action according to the direction of motion of the target.
The moving region judging unit first acquires the depth value groups in the preset region of interest (ROI) of two successive depth frames respectively and calculates the difference of the depth value groups of the two depth frames, then judges whether the acquired difference of the depth value groups meets a preset startup condition; when the preset startup condition is met, it judges that a startup action exists in the ROI, and otherwise that no startup action exists.
In the embodiments of the present invention, a depth image sequence and a color image sequence are first acquired; the depth image sequence is used to detect the regions in a state of motion and to track the limb target region, and the color image sequence is used to filter the regions in a state of motion and confirm the limb target region. Compared with the prior art, the embodiments of the present invention detect limb target regions by combining the depth image sequence with the color image sequence, which effectively improves detection accuracy and reduces the probability of misrecognition.
Brief description of the drawings
Fig. 1 is a flowchart of the target identification method provided by the first embodiment of the invention;
Fig. 2 is a flowchart of the target identification method provided by the second embodiment of the invention;
Fig. 3 is a flowchart of the target identification method provided by the third embodiment of the invention;
Fig. 4 is a flowchart of the target identification method provided by the fourth embodiment of the invention;
Fig. 5 is a structural diagram of the target action recognition device provided by the fifth embodiment of the invention;
Fig. 6 is a structural diagram of the target action recognition device provided by the sixth embodiment of the invention;
Fig. 7 is a structural diagram of the target action recognition device provided by the seventh embodiment of the invention;
Fig. 8 is a structural diagram of the target action recognition device provided by the eighth embodiment of the invention.
Detailed description of the embodiments
In order to make the objects, technical solutions and advantages of the present invention clearer, the present invention is further elaborated below with reference to the drawings and embodiments. It should be understood that the specific embodiments described here serve only to explain the present invention and are not intended to limit it.
The technical solutions of the invention are described below through specific embodiments.
Embodiment 1:
Fig. 1 shows the flowchart of the target identification method provided by the first embodiment of the invention. The method comprises:
Step S11: synchronously acquire a depth frame sequence and a color frame sequence, the depth frame sequence being a number of depth maps captured by a depth camera.
In this embodiment, the depth camera captures more than 10 depth frames per second, typically 25 depth maps per second. Each depth map is numbered in increasing order, and these numbered depth maps form the depth frame sequence.
In this embodiment, a common camera or the depth camera is used to capture color images. The color images are captured at the same frequency as the depth maps and of the same scene; each color image is likewise numbered in increasing order, and these numbered color images form the color frame sequence.
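The following is a minimal sketch of such synchronized acquisition, assuming the depth camera and the color camera are exposed as OpenCV capture devices (the device indices are assumptions) and frames are paired by their incrementally increasing index:

```python
import cv2

depth_cap = cv2.VideoCapture(0)   # depth camera (device index assumed)
color_cap = cv2.VideoCapture(1)   # common color camera of the same scene

frames = []                       # paired (index, depth map, color frame)
for index in range(25):           # e.g. one second at 25 frames per second
    ok_d, depth_map = depth_cap.read()
    ok_c, color_frame = color_cap.read()
    if not (ok_d and ok_c):
        break
    frames.append((index, depth_map, color_frame))

depth_cap.release()
color_cap.release()
```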
Step S12: judge, according to the change in depth values between two adjacent depth maps in the depth frame sequence, whether a startup action exists in the preset region of interest (ROI) of the later depth frame; if not, execute step S13; if so, execute step S14.
In this embodiment, the state of motion is determined from the depth values: the change of the depth values of the region of interest between the two adjacent depth maps is obtained, and it is judged whether the change meets a preset condition. If it does, a startup action is judged to exist in the ROI of the later depth frame and step S14 is executed; if no startup action exists in the ROI, step S13 is executed.
In this embodiment, the ROI can be chosen flexibly according to the position of the target to be recognized. For example, if the recognized target is a person's head, the region can be a reasonable region around the head; if the recognized target is a foot, the region can be a reasonable region around the foot. The shape of the region can be square or circular.
In addition, the startup action can comprise actions of different limb targets (including but not limited to a hand target, a foot target, a head target and so on), for example one or more of head actions, foot actions and hand actions; once a preset head action, foot action or hand action is detected, the further identification process (step S14) can be started.
Step S13: continue monitoring.
In this embodiment, if it is judged that no startup action exists in the previous depth frame, the next frame continues to be monitored.
Step S14: detect the region of the synchronized color frame that coincides with the ROI according to the preset limb target model, and judge a region that matches the limb target model to be a limb target region.
In this embodiment, different recognition targets correspond to different limb target models. For example, if the recognized target is a person's head, the limb target model can be a head model and the corresponding limb target region is the head region; if the recognized target is a foot, the limb target model can be a human foot model and the corresponding region is the foot region; if the recognized target is a hand, the limb target model can be a hand model and the corresponding region is the hand region.
In this embodiment, when the ROI in which the startup action exists contains non-limb target regions, for example regions whose color differs from human skin color, the non-limb target regions can first be detected in the coinciding region of the color frame, narrowing the detection range and reducing the amount of computation. When the contained non-limb target region is a color region different from human skin color, the skin color model can detect the colors that differ from human skin color, and the regions of those colors are discarded, narrowing the detection range. Because the depth image sequence only needs the depth values to determine the regions in a state of motion, the regions not in motion are filtered out, so that the color image sequence avoids computing over the entire image when confirming the limb target region; the algorithm is thus optimized and the computation is simpler.
Step S15: save the feature set parameters of the region judged to be a limb target region, and take the detected limb target region as the tracking target.
In this embodiment, after the limb target region is obtained, the feature set parameters of this limb target region are saved.
Step S16: track this tracking target in subsequent depth frames, correct the coinciding region of the color frame synchronized with each subsequent depth frame by using the preset limb target model and the saved feature set parameters of the previous limb target region, and judge a region that matches the limb target model and the feature set parameters of the previous limb target region to be a limb target region.
In this embodiment, after a limb target region is obtained, it continues to be tracked in the subsequent depth frame sequence and is continually corrected in the corresponding regions of the color frame sequence, improving the accuracy with which the limb target region is judged. If tracking of the limb target region fails, the method returns to step S12 and the tracking target is redetermined. For example, after tracking of a hand target fails, the method returns to S12 and the person's head or foot is redetermined as the tracking target for identification.
Step S17: acquire all coordinate values of each limb target region, judge the direction of motion of the target according to the acquired coordinate values, and recognize the target action according to the direction of motion of the target.
In this embodiment, after each target motion, the group of coordinate values formed by the current motion is recorded; all coordinate values of each limb target region are acquired, the direction of motion of the target is judged from the multiple acquired coordinate values, and the target action is then recognized from the result of the judgement, so that the corresponding operation can be carried out.
In the first embodiment of the invention, a depth image sequence and a color image sequence are first acquired; the depth image sequence is used to detect the regions in a state of motion and to track the limb target region, and the color image sequence is used to filter the regions in a state of motion and confirm the limb target region. Because the depth image sequence and the color image sequence are used together to detect the limb target region, the detection accuracy is effectively improved and the probability of misrecognition is reduced.
Embodiment 2:
Fig. 2 shows the flow of the target identification method provided by the second embodiment of the invention. This embodiment mainly describes step S12 of Embodiment 1 in more detail. The target identification method provided by Embodiment 2 mainly comprises:
Step S21: synchronously acquire a depth frame sequence and a color frame sequence, the depth frame sequence being a number of depth maps captured by a depth camera.
In this embodiment, step S21 is implemented in the same way as step S11 in Embodiment 1 above and is not described again here.
Step S22: acquire the depth value groups in the preset region of interest (ROI) of the two successive depth frames respectively, and calculate the difference of the depth value groups of the two depth frames.
In this embodiment, the depth value group in the preset ROI of the earlier depth map and the depth value group in the preset ROI of the later depth map are acquired, and the difference of the depth value groups of the two depth frames is calculated. The ROI contains a number of depth values, which form a depth value group; by acquiring the depth value group in the ROI of the earlier depth map and that of the later depth map, the difference of the two depth value groups can be calculated.
Step S23: judge whether the acquired difference of the depth value groups meets a preset startup condition; if not, execute step S24; if so, execute step S25.
When the difference of the depth value groups meets the preset startup condition, a startup action is judged to exist in the ROI of the later depth map; otherwise, no startup action is judged to exist in the ROI of the later depth map.
In this embodiment, the preset startup condition is a set: the differences between all depth values of the previous depth frame and all depth values of the current depth frame are calculated, and it is judged whether any calculated difference lies in the set of preset startup conditions.
For example, whether a startup action exists in an ROI is judged by comparing all depth values of two adjacent depth frames. Suppose a depth value of the previous depth frame is DEPTH_pre(x, y, z) and the depth value group is DEPTH_pre[x, y, z]; a depth value of the current depth frame is DEPTH_now(x, y, z) and the depth value group is DEPTH_now[x, y, z]; a change value between the depth frames is DEPTH_C(x, y, z) and the change value group is DEPTH_C[x, y, z]; and the set of preset startup conditions is STARTUP(D). Then:
DEPTH_C[x, y, z] = DEPTH_now[x, y, z] − DEPTH_pre[x, y, z]
When at least one DEPTH_C(x, y, z) ∈ STARTUP(D) exists in DEPTH_C[x, y, z], a startup action exists in this ROI.
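A minimal sketch of this test, assuming the depth frames are numpy arrays of per-pixel depth values, the ROI is a rectangular slice, and STARTUP(D) is modelled as an interval of admissible depth changes (the threshold values are illustrative assumptions, not taken from the patent):

```python
import numpy as np

def startup_action_exists(depth_prev, depth_now, roi, startup=(200, 800)):
    """Return True if at least one change value in the ROI falls in STARTUP(D)."""
    x0, y0, x1, y1 = roi
    # DEPTH_C = DEPTH_now - DEPTH_pre, restricted to the ROI
    change = (depth_now[y0:y1, x0:x1].astype(np.int32)
              - depth_prev[y0:y1, x0:x1].astype(np.int32))
    lo, hi = startup
    # A pixel moving toward the camera by lo..hi units counts as a startup action.
    in_set = (-change >= lo) & (-change <= hi)
    return bool(in_set.any())
```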
Step S24: continue monitoring.
Step S25: detect the region of the synchronized color frame that coincides with the ROI according to the preset limb target model, and judge a region that matches the limb target model to be a limb target region.
Step S26: save the feature set parameters of the region judged to be a limb target region, and take the detected limb target region as the tracking target.
In the subsequent depth frame sequence and color frame sequence, the following step S27 is executed:
Step S27: track this tracking target in subsequent depth frames, correct the coinciding region of the color frame synchronized with each subsequent depth frame by using the preset limb target model and the saved feature set parameters of the previous limb target region, and judge a region that matches the limb target model and the feature set parameters of the previous limb target region to be a limb target region.
Step S28: acquire all coordinate values of each limb target region, judge the direction of motion of the target according to the acquired coordinate values, and recognize the target action according to the direction of motion of the target.
In this embodiment, steps S24-S28 are implemented in the same way as steps S13-S17 in Embodiment 1 above and are not described again here.
In the second embodiment of the invention, by acquiring the depth value groups in the ROI of two successive depth frames, obtaining the difference of the depth value groups of the two frames, and comparing the acquired difference with the preset startup condition, it can be judged whether a startup action exists in the ROI. Because the difference of the depth values in the same region reflects the state of motion of that region, this embodiment obtains more accurate detection results.
Embodiment 3:
Fig. 3 shows the flow of the target identification method provided by the third embodiment of the invention. The limb target selected in this embodiment is the human hand, and this embodiment mainly describes in more detail steps S14 to S16 of Embodiment 1 and steps S25 to S27 of Embodiment 2:
Step S31: synchronously acquire a depth frame sequence and a color frame sequence, the depth frame sequence being a number of depth maps captured by a depth camera.
Step S32: judge, according to the change in depth values between two adjacent depth maps in the depth frame sequence, whether a startup action exists in the preset region of interest of the later depth map; if not, execute step S33; if so, execute step S34.
Step S33: continue monitoring.
In this embodiment, steps S31-S33 are implemented in the same way as steps S11-S13 in Embodiment 1 above and are not described again here.
In this embodiment, the process of detecting the region of the synchronized color frame that coincides with the ROI according to the preset limb target model, and judging a region that matches the limb target model to be a limb target region (step S14 of Embodiment 1 and step S25 of Embodiment 2), specifically comprises steps S34-S36.
Step S34: detect, according to the skin color model, whether the coinciding region of the synchronized color frame matches the skin color model.
In this embodiment, the limb target model comprises a skin color model and a feature model. The skin color model is a numerical relationship model that separates the skin color from other colors in a color space: it expresses, in algebraic or look-up-table form, which pixel colors belong to the skin color, or quantifies the similarity between the color of a pixel and the skin color. In this embodiment, the color frame is first transformed into the chromatic color space, the skin color model established in the chromatic space is used to segment the skin color from the image, and it is then judged whether the region matches the preset skin color model. If it matches, step S35 is executed; if not, monitoring continues.
The chromatic color space adapts well to changes in brightness. The formula that projects a color frame (an RGB image) to the chromatic color space is:
r = R / (R + G + B), g = G / (R + G + B), b = B / (R + G + B)
A histogram of the skin color is computed in the chromatic space; since its distribution approximates a Gaussian distribution, a two-dimensional Gaussian skin color model N(m, Σ²) is established, where the mean m = (μ_r, μ_g), with μ_r and μ_g the means of the r and g components respectively, and the covariance
Σ² = [ δ_rr  δ_rg ]
     [ δ_rg  δ_gg ]
where δ_rr is the variance of the r component, δ_rg the covariance of the r and g components, and δ_gg the variance of the g component. Of course, the formula projecting the color frame (RGB image) to the chromatic color space can also be changed according to actual conditions, which is not repeated here.
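A sketch of this two-dimensional Gaussian skin color model in the chromatic (normalized r-g) space; in practice the mean and covariance would be estimated from labelled skin pixels, and the parameter values below are placeholders, not values from the patent:

```python
import numpy as np

def to_chromatic(rgb):
    """Project an RGB image to (r, g) chromaticity coordinates."""
    rgb = rgb.astype(np.float64)
    s = rgb.sum(axis=-1) + 1e-9                  # R + G + B, guarded against black pixels
    return np.stack([rgb[..., 0] / s, rgb[..., 1] / s], axis=-1)

def skin_likelihood(rgb, mean, cov):
    """Per-pixel Gaussian likelihood N(m, Sigma^2) of being skin."""
    d = to_chromatic(rgb) - mean                 # deviation from the skin mean
    inv = np.linalg.inv(cov)
    m2 = np.einsum('...i,ij,...j->...', d, inv, d)   # squared Mahalanobis distance
    return np.exp(-0.5 * m2)

# Placeholder model parameters (assumed for illustration):
mean = np.array([0.45, 0.31])                    # (mu_r, mu_g)
cov = np.array([[2.5e-3, 1.0e-3],                # [[d_rr, d_rg],
                [1.0e-3, 1.5e-3]])               #  [d_rg, d_gg]]
skin_mask = lambda img: skin_likelihood(img, mean, cov) > 0.5
```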
Step S35: detect, according to the feature model, whether the region that matches the skin color model also matches the feature model.
The feature model mainly comprises parameters that reflect shape, for example:
(a) The rectangle fitting factor R = A_O / A_MER, where A_O is the area of the object and A_MER the area of the object's minimum enclosing rectangle (MER). R reflects how fully an object fills its MER: for a rectangular object R reaches its maximum value 1, for a circular object R is π/4, and the value of the rectangle fitting factor is in general bounded between 0 and 1.
(b) The aspect ratio A = W/L, where A equals the ratio of the width to the length of the enclosing rectangle MER. This parameter distinguishes slender objects from square or circular objects.
(c) Moment invariants: a moment is a linear feature that is invariant to rotation, scaling and translation of the image. Of course, the parameters of the feature model are not limited to those enumerated above and can be adapted to actual conditions.
In this embodiment, it is judged whether the region that matches the skin color model falls within the ranges of the above parameters; if it does, step S36 is executed, otherwise monitoring continues. A sketch of these shape checks is given after step S36.
Step S36: judge the region that matches the feature model to be a limb target region.
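The following sketch shows the shape checks of the feature model, assuming OpenCV is available, `mask` is a binary image of a region that already matched the skin color model, and the accepted ranges are illustrative assumptions rather than values from the patent:

```python
import numpy as np
import cv2

def shape_features(mask):
    """Rectangle fitting factor R = A_O / A_MER and aspect ratio A = W / L."""
    contours, _ = cv2.findContours(mask.astype(np.uint8),
                                   cv2.RETR_EXTERNAL, cv2.CHAIN_APPROX_SIMPLE)
    c = max(contours, key=cv2.contourArea)     # largest skin-colored blob
    area = cv2.contourArea(c)                  # A_O
    (_, _), (w, l), _ = cv2.minAreaRect(c)     # minimum enclosing rectangle (MER)
    w, l = min(w, l), max(w, l)                # width W <= length L
    rect_fit = area / (w * l + 1e-9)           # 1 for rectangles, pi/4 for circles
    aspect = w / (l + 1e-9)
    return rect_fit, aspect

def matches_feature_model(mask, fit_range=(0.4, 0.95), aspect_range=(0.5, 1.0)):
    rect_fit, aspect = shape_features(mask)
    return (fit_range[0] <= rect_fit <= fit_range[1]
            and aspect_range[0] <= aspect <= aspect_range[1])
```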
Step S37: save the feature set parameters of the region judged to be a limb target region, and take the detected limb target region as the tracking target.
In this embodiment, the feature set parameters of the limb target region comprise the target skin color value range and the target shape parameters, such as a description of the target shape as a rectangle, the aspect ratio of the fingers, and so on.
In the subsequent depth frame sequence and color frame sequence, the following step S38 is executed:
Step S38: track this tracking target in subsequent depth frames, correct the coinciding region of the color frame synchronized with each subsequent depth frame by using the preset limb target model and the saved feature set parameters of the previous limb target region, and judge a region that matches the limb target model and the feature set parameters of the previous limb target region to be a limb target region.
Specifically, suppose the moving region obtained in the first depth frame is DOBJ_0'. In the first color frame, synchronized with the first depth frame, DOBJ_0' is corrected to obtain the first limb target region, labelled DOBJ_0, and the relevant parameters of DOBJ_0 are saved; these parameters can be the target skin color value range, the target shape parameters and so on. In the second depth frame, DOBJ_0 is tracked, and the region corresponding to DOBJ_0 in the second color frame, synchronized with the second depth frame, is corrected. Suppose the corrected region in the color frame is DOBJ_1: if the relevant parameters of DOBJ_1 do not differ much from the first saved parameters of DOBJ_0 (that is, they match the limb target model and the parameters of DOBJ_0), the corrected region DOBJ_1 is judged to be the limb target region and its parameters are saved. The third depth frame then continues to track DOBJ_1, the region corresponding to DOBJ_1 in the third synchronized color frame is corrected, and the parameters of the corrected region are compared with the saved parameters of DOBJ_1 to obtain the limb target region. These steps are repeated in a loop until all regions obtained from the depth frame sequence have been corrected.
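At pseudocode level, this loop can be sketched as follows; `track`, `correct_region`, `extract_params` and `matches_model` are hypothetical placeholders for the depth-frame tracking, color-frame correction and model checks described above:

```python
def track_limb_target(depth_frames, color_frames, dobj_0, params_0):
    """Loop over synchronized frames, correcting DOBJ_i into DOBJ_{i+1}."""
    target, params = dobj_0, params_0
    regions = [dobj_0]
    for depth, color in zip(depth_frames, color_frames):
        candidate = track(target, depth)              # follow DOBJ_i in the depth frame
        corrected = correct_region(candidate, color)  # refine in the synchronized color frame
        new_params = extract_params(corrected)        # skin range, shape parameters, ...
        if not matches_model(new_params, params):     # compare against saved DOBJ_i parameters
            return regions, False                     # tracking failed: return to step S12
        target, params = corrected, new_params        # corrected region becomes DOBJ_{i+1}
        regions.append(target)
    return regions, True
```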
Step S39: acquire all coordinate values of each limb target region, judge the direction of motion of the target according to the acquired coordinate values, and recognize the target action according to the direction of motion of the target.
In this embodiment, one coordinate is obtained from each limb target region obtained after correction in the color frame with the saved feature set parameters of the limb target region; for example, the coordinate values of the limb target regions can be calculated with the following formulas:
x_i = Σ_{x,y∈DOBJ_i} x·v_c(x, y) / Σ_{x,y∈DOBJ_i} v_c(x, y)
y_i = Σ_{x,y∈DOBJ_i} y·v_c(x, y) / Σ_{x,y∈DOBJ_i} v_c(x, y)
z_i = Σ_{x,y,z∈DOBJ_i} z·v_d(x, y) / Σ_{x,y∈DOBJ_i} v_d(x, y)
where x_i, y_i, z_i are the coordinate values of the i-th corrected limb target region; x_i and y_i are the first moments of the limb target region in the color image; z_i is the mean depth value of the limb target region in the depth map; v_c(x, y) is the pixel value corresponding to point (x, y) in the color image; v_d(x, y) is the pixel value corresponding to point (x, y) in the depth image; and DOBJ_i denotes the i-th limb target region.
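A sketch of these centroid formulas, assuming `color_gray` is a single-channel view of the color image, `depth_map` holds the per-pixel depth values (so the z at a point equals v_d(x, y)), and `region_mask` is a boolean mask selecting DOBJ_i:

```python
import numpy as np

def region_coordinate(color_gray, depth_map, region_mask):
    """First moments in the color image and depth-weighted mean in the depth map."""
    ys, xs = np.nonzero(region_mask)
    v_c = color_gray[ys, xs].astype(np.float64)   # v_c(x, y) over DOBJ_i
    v_d = depth_map[ys, xs].astype(np.float64)    # v_d(x, y) over DOBJ_i
    x_i = (xs * v_c).sum() / v_c.sum()
    y_i = (ys * v_c).sum() / v_c.sum()
    z_i = (v_d * v_d).sum() / v_d.sum()           # z = v_d(x, y) at each point
    return x_i, y_i, z_i
```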
In the third embodiment of the invention, the limb target model comprises a skin color model and a feature model. After the startup region is obtained from the depth frame, the skin color model is used to detect the same startup region of the color frame and judge whether the region meets the requirements of the skin color model; after the region matches the skin color model, the feature model is used to judge whether the region meets the requirements of the feature model. The acquired limb target region can thereby be filtered further, so that the obtained target is more accurate.
Embodiment 4:
Fig. 4 shows the flow of the target identification method provided by the fourth embodiment of the invention. The limb target selected in this embodiment is the human hand, and this embodiment mainly describes in more detail step S17 of Embodiment 1, step S28 of Embodiment 2 and step S39 of Embodiment 3:
Step S401: synchronously acquire a depth frame sequence and a color frame sequence, the depth frame sequence being a number of depth maps captured by a depth camera.
Step S402: judge, according to the change in depth values between two adjacent depth maps in the depth frame sequence, whether a startup action exists in the preset region of interest of the later depth map; if not, execute step S403; if so, execute step S404.
Step S403: continue monitoring.
Step S404: detect the region of the synchronized color frame that coincides with the ROI according to the preset limb target model, and judge a region that matches the limb target model to be a limb target region.
Step S405: save the feature set parameters of the region judged to be a limb target region, and take the detected limb target region as the tracking target.
In the subsequent depth frame sequence and color frame sequence, the following step S406 is executed:
Step S406: track this tracking target in subsequent depth frames, correct the coinciding region of the color frame synchronized with each subsequent depth frame by using the preset limb target model and the saved feature set parameters of the previous limb target region, and judge a region that matches the limb target model and the feature set parameters of the previous limb target region to be a limb target region.
In this embodiment, steps S401-S406 are implemented in the same way as steps S11-S16 in Embodiment 1 above and are not described again here.
In this embodiment, the step of judging the direction of motion of the target according to the acquired coordinate values and recognizing the target action according to the direction of motion of the target (steps S17, S28 and S39 in the above embodiments) mainly comprises the following steps:
Step S407: acquire all coordinate values of each limb target region, obtain the maximum depth value and the minimum depth value from the acquired coordinate values, and calculate the difference between the maximum depth value and the minimum depth value.
Step S408: judge whether the difference between the maximum depth value and the minimum depth value is greater than a preset difference threshold; if it is, execute step S409, otherwise execute step S410.
Step S409: judge the direction of the target action to be the front-back direction.
Step S410: judge the direction of the target action to be the in-plane direction.
In this embodiment, after each target motion, the group of coordinate values formed by the current motion is recorded; suppose this group of coordinate values is (x_1, y_1, z_1) to (x_i, y_i, z_i). A difference threshold is first preset; the maximum depth value and the minimum depth value in this group of coordinate values are obtained, and their difference is calculated. It is then judged whether the difference between the maximum and minimum depth values is greater than the preset difference threshold: if it is greater, the direction of motion of the current target action is judged to be the front-back direction; otherwise it is judged to be the in-plane direction. The in-plane target action can be upward, downward, leftward or rightward.
Further, when the direction of motion of the target action is judged to be the front-back direction, the variation trend of the depth value z over the whole group of coordinates determines whether the target action is a forward shake or a backward shake: a z value that goes from 0 to a negative value is judged to be a forward shake, and a z value that goes from 0 to a positive value is judged to be a backward shake.
In this embodiment, if the direction of motion of the target action is judged to be in-plane, a straight line is fitted with the multi-point fitting method through the abscissa x values and ordinate y values of the whole group of coordinates. If the angle between this line and the positive x axis lies between two preset angle thresholds, the target action is judged to be an up-down motion; otherwise it is judged to be a left-right motion. The two preset angle thresholds can be set at 45° and 135°, for example.
Further, when the target action is judged to be an up-down motion, the variation trend of the y values of the whole group of coordinates distinguishes whether the current target action is an upward shake or a downward shake: a decreasing y value is judged to be an upward shake, an increasing y value a downward shake.
Further, when the target action is a left-right motion, the variation trend of the x values of the whole group of coordinates distinguishes whether the current target action is a leftward shake or a rightward shake: a decreasing x value is judged to be a leftward shake, an increasing x value a rightward shake.
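A sketch of this direction judgement, assuming one recorded group of (x, y, z) coordinate values per motion; the depth difference threshold is an illustrative assumption, and the 45°-135° angle window follows the example above:

```python
import numpy as np

def classify_motion(coords, depth_diff_threshold=150.0,
                    angle_low=45.0, angle_high=135.0):
    """Classify one target motion as forward/backward/up/down/left/right."""
    coords = np.asarray(coords, dtype=np.float64)
    x, y, z = coords[:, 0], coords[:, 1], coords[:, 2]

    # Front-back test: span between the maximum and minimum depth value.
    if z.max() - z.min() > depth_diff_threshold:
        # z trending negative means motion toward the camera (forward shake).
        return 'forward' if z[-1] - z[0] < 0 else 'backward'

    # In-plane motion: fit a line through all (x, y) points (multi-point fitting)
    # and measure its angle against the positive x axis.
    slope, _intercept = np.polyfit(x, y, 1)
    angle = np.degrees(np.arctan(slope)) % 180.0
    if angle_low <= angle <= angle_high:
        # Image y grows downward, so a decreasing y value means an upward shake.
        return 'up' if y[-1] - y[0] < 0 else 'down'
    return 'left' if x[-1] - x[0] < 0 else 'right'
```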
In the fourth embodiment of the invention, comparing the difference between the maximum and minimum depth values with the preset difference threshold judges whether the direction of the target action is the front-back direction or the in-plane direction. When the direction of the target action is the front-back direction, the change of the depth values over the whole group of coordinates judges whether the action is a forward or a backward shake; when the direction of the target action is in-plane, the multi-point fitting method judges whether the direction of the target action is up-down or left-right. Because the multi-point fitting method is used when recognizing the target action, the anti-interference capability is strengthened, and the probability of misrecognizing the up-down and left-right directions of the action is effectively reduced at the same time.
Embodiment 5:
Fig. 5 shows the structural diagram of the target action recognition device provided by the fifth embodiment of the invention. For convenience of explanation, Fig. 5 shows only the parts relevant to the embodiment of the invention.
This target action recognition device can be used in various information processing terminals, for example mobile phones, pocket PCs (PPC), palmtop computers, computers, notebook computers, personal digital assistants (PDA) and so on. It can be a hardware unit running in these terminals or a unit combining software and hardware, or be integrated into these terminals or into application systems running on these terminals as an independent component.
The device provided by the fifth embodiment of the invention comprises:
Image acquisition unit 51, for synchronously acquiring a depth frame sequence and a color frame sequence, the depth frame sequence being a number of depth maps captured by a depth camera.
In this embodiment, the same depth camera can be used to capture both the depth frame sequence and the color frame sequence; of course, a common camera can also be used to capture the color frame sequence.
Moving region judging unit 52, for judging, according to the change in depth values between two adjacent depth maps in the depth frame sequence, whether a startup action exists in the preset region of interest (ROI) of the later depth map.
Limb target region judging unit 53, for detecting, when a startup action exists in the ROI of the depth frame, the region of the synchronized color frame that coincides with the ROI according to the preset limb target model, and judging a region that matches the limb target model to be a limb target region.
Parameter storage unit 54, for saving the feature set parameters of the region judged to be a limb target region, and taking the detected limb target region as the tracking target.
Limb target region correcting unit 55, for tracking this tracking target in subsequent depth frames, correcting the coinciding region of the color frame synchronized with each subsequent depth frame by using the preset limb target model and the saved feature set parameters of the previous limb target region, and judging a region that matches the limb target model and the feature set parameters of the previous limb target region to be a limb target region.
In this embodiment, the limb target region correcting unit 55 repeatedly corrects the regions judged to be target regions, making the finally obtained limb target region more accurate.
Target action recognition unit 56, for acquiring all coordinate values of each limb target region, judging the direction of motion of the target according to the acquired coordinate values, and recognizing the target action according to the direction of motion of the target.
In this embodiment, after the corrected limb target regions are abstracted into coordinate values, the target action recognition unit 56 acquires the coordinate values of the limb target regions corrected by the limb target region correcting unit 55 and recognizes the target action according to the acquired coordinate values.
In the fifth embodiment of the invention, the moving region judging unit 52 detects the regions in a state of motion according to the depth frame sequence acquired by the image acquisition unit 51; the limb target region judging unit 53 then detects, according to the color frame sequence acquired by the image acquisition unit 51, whether the regions in a state of motion are limb target regions and determines the initial limb target region; the limb target region correcting unit 55 obtains the limb target regions of the subsequent depth frame sequence and color frame sequence; and the target action recognition unit 56 obtains the coordinate values of each corrected limb target region and recognizes the target action according to these coordinate values. Because the depth frame sequence and the color frame sequence are used together to confirm the limb target region, the detection accuracy is effectively improved; and after the limb target region has been repeatedly corrected, the amount of image data taking part in the computation is reduced, the algorithm is optimized, the computation speed is improved, and the probability of misrecognizing the direction of motion of the target action is reduced.
Embodiment 6:
Fig. 6 shows the structure of the target action recognition device provided by the sixth embodiment of the invention:
Image acquisition unit 61, for synchronously acquiring a depth frame sequence and a color frame sequence, the depth frame sequence being a number of depth maps captured by a depth camera.
Moving region judging unit 62, for judging, according to the change in depth values between two adjacent depth maps in the depth frame sequence, whether a startup action exists in the preset region of interest (ROI) of the later depth map.
In this embodiment, the moving region judging unit 62 comprises a depth difference determination module 621 and a motion state judging module 622.
The depth difference determination module 621 acquires the depth value groups in the preset region of interest (ROI) of two successive depth frames respectively, and calculates the difference of the depth value groups of the two depth frames.
In this embodiment, when the depth values of the two successive depth frames differ, the region in which they differ may be a region in a state of motion.
The motion state judging module 622 judges whether the acquired difference of the depth value groups meets the preset startup condition; when the preset startup condition is met, it judges that a startup action exists in the ROI, and otherwise that no startup action exists.
In this embodiment, the limb target region judging unit 63, parameter storage unit 64, limb target region correcting unit 65 and target action recognition unit 66 are identical to the limb target region judging unit 53, parameter storage unit 54, limb target region correcting unit 55 and target action recognition unit 56 of Embodiment 5 and are not described again here.
Embodiment 7:
Fig. 7 shows the structure of the target action recognition device provided by the seventh embodiment of the invention:
The image acquisition unit 71 and moving region judging unit 72 of this embodiment are identical to the image acquisition unit 61 and moving region judging unit 62 of Embodiment 6 and are not described again here.
Limb target region judging unit 73, for detecting, when a startup action exists in the ROI of the depth frame, the region of the synchronized color frame that coincides with the ROI according to the preset limb target model, and judging a region that matches the limb target model to be a limb target region.
In this embodiment, the preset limb target model comprises a skin color model and a feature model. Whether the skin color corresponding to the pixel values of the region is the skin color distinctive of the target can be detected according to the skin color model, and whether the shape and size of the region matching the target-specific skin color meet the features of the target can be detected according to the feature model.
Optionally, the limb target region judging unit 73 comprises a region skin color detection module 731, a region feature detection module 732 and a limb target region determination module 733.
The region skin color detection module 731 detects, according to the skin color model, whether the coinciding region of the synchronized color frame matches the skin color model.
The region feature detection module 732 detects, according to the feature model, whether the region that matches the skin color model also matches the feature model.
The limb target region determination module 733 judges the region that matches the feature model to be a limb target region.
Parameter storage unit 74, for saving the feature set parameters of the region judged to be a limb target region, and taking the detected limb target region as the tracking target.
In this embodiment, the feature set parameters of the limb target region comprise the target skin color value range and the target shape parameters.
In this embodiment, the limb target region correcting unit 75 and target action recognition unit 76 are identical to the limb target region correcting unit 65 and target action recognition unit 66 of Embodiment 6 and are not described again here.
In the seventh embodiment of the invention, combining the region skin color detection module 731, the region feature detection module 732 and the limb target region determination module 733 to identify limb target regions improves the accuracy of limb target region identification.
Embodiment 8:
Fig. 8 shows the structure of the target action recognition device provided by the eighth embodiment of the invention:
In this embodiment, the image acquisition unit 81, moving region judging unit 82, limb target region judging unit 83, parameter storage unit 84 and limb target region correcting unit 85 are identical to the image acquisition unit 71, moving region judging unit 72, limb target region judging unit 73, parameter storage unit 74 and limb target region correcting unit 75 of Embodiment 7 and are not described again here.
Target action recognition unit 86, for acquiring all coordinate values of each limb target region, judging the direction of motion of the target according to the acquired coordinate values, and recognizing the target action according to the direction of motion of the target.
Optionally, the target action recognition unit 86 comprises a depth difference determination module 861, a depth difference judging module 862 and a direction judging module 863.
The depth difference determination module 861 acquires all coordinate values of each limb target region, obtains the maximum depth value and the minimum depth value from the acquired coordinate values, and calculates the difference between the maximum depth value and the minimum depth value.
The depth difference judging module 862 judges whether the difference between the maximum depth value and the minimum depth value is greater than the preset difference threshold.
The direction judging module 863 judges the direction of the target action to be the front-back direction when the difference between the maximum and minimum depth values is greater than the preset difference threshold, and otherwise judges the direction of the target action to be the in-plane direction.
Optionally, the direction judging module 863 comprises a front-back direction judging submodule 91 and an in-plane direction judging submodule 92.
The front-back direction judging submodule 91 judges the direction of the target action to be the front-back direction when the difference between the maximum and minimum depth values is greater than the preset difference threshold.
In this embodiment, when the direction of motion of the target action is the front-back direction, the variation trend of the depth value z over the whole group of coordinates judges whether the target action is a forward or a backward shake: a z value going from 0 to a negative value is judged to be a forward shake, and a z value going from 0 to a positive value is judged to be a backward shake.
The in-plane direction judging submodule 92 judges the direction of the target action to be in-plane when the difference between the maximum and minimum depth values is not greater than the preset difference threshold, fits a straight line with the abscissa x values and ordinate y values of all acquired coordinates, and further judges whether the angle between the line and the positive x axis lies between the two preset angle thresholds; if it does, the direction of the target action is judged to be the up-down direction, and otherwise the left-right direction.
In the eighth embodiment of the invention, the preset difference threshold and angle thresholds determine whether the direction of the target action is the front-back direction, the up-down direction or the left-right direction. Because multi-point fitting is used to recognize the target action, the anti-interference capability is effectively strengthened and the probability of misrecognizing the direction of motion of the target action is reduced.
In the embodiments of the present invention, a depth image sequence and a color image sequence are first acquired; the depth image sequence is used to detect the regions in a state of motion and to track the limb target region, and the color image sequence, together with the saved feature set parameters of the previous limb target region, is used to filter the regions in a state of motion and confirm the limb target region. Because the depth image sequence and the color image sequence are used together to detect the limb target region, the detection accuracy is effectively improved; at the same time, because the depth image sequence only needs the depth values to determine the regions in a state of motion, the regions not in motion are filtered out, so that the color image sequence avoids computing over the entire image when confirming the limb target region, optimizing the algorithm and simplifying the computation. After the limb target region is determined, the embodiments determine the target coordinates in each frame from the multiple coordinate values of the limb target region and recognize the target action from these target coordinates; because the multi-point fitting method is used when recognizing the target action, the anti-interference capability is strengthened, and the probability of misrecognizing the direction of motion of the target action is effectively reduced at the same time.
The foregoing is only a preferred embodiment of the present invention and is not intended to limit the present invention. Any modification, equivalent replacement and improvement made within the spirit and principles of the present invention shall be included in the protection scope of the present invention.

Claims (9)

1. A target identification method, characterized in that the method comprises the following steps:
synchronously acquiring a depth frame sequence and a color frame sequence, the depth frame sequence being a number of depth maps captured by a depth camera;
judging, according to the change in depth values between two adjacent depth maps in the depth frame sequence, whether a startup action exists in a preset region of interest (ROI) of the later depth frame;
when a startup action exists in the ROI of the depth frame, detecting the region of the synchronized color frame that coincides with the ROI according to a preset limb target model, and judging a region that matches the limb target model to be a limb target region;
saving the feature set parameters of the region judged to be a limb target region, and taking the detected limb target region as the tracking target;
tracking said tracking target in subsequent depth frames, correcting the coinciding region of the color frame synchronized with each subsequent depth frame by using the preset limb target model and the saved feature set parameters of the previous limb target region, and judging a region that matches the limb target model and the feature set parameters of the previous limb target region to be a limb target region;
acquiring all coordinate values of each limb target region, judging the direction of motion of the target according to the acquired coordinate values, and recognizing the target action according to the direction of motion of the target;
wherein the step of judging, according to the change in depth values between two adjacent depth maps in the depth frame sequence, whether a startup action exists in the preset region of interest (ROI) of the later depth map comprises:
acquiring the depth value groups in the preset region of interest (ROI) of two successive depth frames respectively, and calculating the difference of the depth value groups of the two depth frames;
judging whether the acquired difference of the depth value groups meets a preset startup condition; when the preset startup condition is met, judging that a startup action exists in the ROI, and otherwise that no startup action exists.
2. the method for claim 1, is characterized in that,
Described limbs object module comprises complexion model and characteristic model, and the characteristic set parameter of described limbs target area comprises target colour of skin value range, target shape parameter;
The described ROI at degree of depth frame exists while starting action, according to default limbs object module, region identical with ROI in synchronous color framing is detected, and the step that is limbs target area by the regional determination that meets limbs object module is specially:
According to described complexion model, detect region identical in synchronous color framing and whether meet complexion model;
Whether the region that meets described complexion model according to described characteristic model detection meets characteristic model;
By the regional determination that meets characteristic model, it is limbs target area.
3. the method for claim 1, is characterized in that,
According to x i = Σ x , y ∈ DOBJ i x × v c ( x , y ) Σ x , y ∈ DOBJ i v c ( x , y ) .
y i = Σ x , y ∈ DOBJ i y × v c ( x , y ) Σ x , y ∈ DOBJ i v c ( x , y ) . And
z i = Σ x , y , z ∈ DOBJ i z × v d ( x , y ) Σ x , y ∈ DOBJ i v d ( x , y )
Obtain the coordinate figure after proofread and correct limbs target area, wherein x i, y i, z ibe respectively the coordinate figure of i the limbs target area after correction, v c(x, y) represents pixel value corresponding to point (x, y) in coloured image, v d(x, y) represents pixel value corresponding to point (x, y) in depth image, DOBJ irepresent i limbs target area.
4. The method of claim 1, wherein the step of obtaining the coordinate values of each limb target region, judging the direction of motion of the target from the obtained coordinate values, and recognizing the target action from the direction of motion comprises:
Obtaining all coordinate values in each limb target region, finding the maximum and minimum depth values among the obtained coordinate values, and calculating the difference between the maximum and minimum depth values;
Judging whether the difference between the maximum and minimum depth values is greater than a preset difference threshold;
If the difference between the maximum and minimum depth values is greater than the preset difference threshold, judging that the direction of the target action is the fore-and-aft (depth) direction; otherwise, judging that the direction of the target action is the in-plane direction.
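The depth-spread test of claim 4 reduces to a comparison of the extreme depth values; the difference threshold is an assumed preset:

```python
def motion_direction(coords, depth_diff_threshold=150.0):
    """coords: (x, y, z) values collected from the limb target regions.

    Returns 'fore-and-aft' when the depth spread exceeds the preset
    threshold, otherwise 'in-plane'.
    """
    zs = [z for _, _, z in coords]
    if max(zs) - min(zs) > depth_diff_threshold:
        return "fore-and-aft"
    return "in-plane"
```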
5. The method of claim 4, wherein:
When the direction of the target action is judged to be the in-plane direction, a straight line is fitted to the horizontal (x) and vertical (y) values of all obtained coordinates;
It is then judged whether the angle between the fitted line and the positive x-axis lies between two preset angle thresholds; if so, the direction of the target action is judged to be the up-down direction; otherwise, it is judged to be the left-right direction.
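Claim 5's in-plane classification can be sketched with a least-squares line fit; the two angle thresholds (here 45 and 135 degrees) are illustrative presets, not values fixed by the claim:

```python
import math
import numpy as np

def plane_direction(coords, low_deg=45.0, high_deg=135.0):
    """Fit a line to the (x, y) values of the collected coordinates and
    classify by its angle with the positive x-axis."""
    xs = np.array([x for x, _, _ in coords], dtype=np.float64)
    ys = np.array([y for _, y, _ in coords], dtype=np.float64)
    if np.ptp(xs) < 1e-6:
        return "up-down"  # vertical trace; slope would be undefined
    slope, _ = np.polyfit(xs, ys, 1)  # least-squares straight-line fit
    angle = math.degrees(math.atan(slope)) % 180.0
    # Between the two preset angle thresholds -> up-down, otherwise left-right.
    return "up-down" if low_deg <= angle <= high_deg else "left-right"
```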
6. A target motion recognition device, comprising:
An image acquisition unit for synchronously acquiring a depth frame sequence and a color frame sequence, the depth frame sequence being a number of depth maps captured by a depth camera;
A moving-region judging unit for judging, according to the change in depth values between two adjacent depth maps in the depth frame sequence, whether a startup action exists in a preset region of interest (ROI) of the following depth map;
A limb-target-region judging unit for, when a startup action exists in the ROI of the depth frame, detecting the region of the synchronized color frame that corresponds to the ROI according to a preset limb target model, and judging a region that matches the limb target model to be a limb target region;
A parameter storage unit for saving the feature-set parameters of the region judged to be a limb target region, and taking one detected limb target region as the tracking target;
A limb-target-region correcting unit for tracking the tracking target in subsequent depth frames, correcting the corresponding region of the color frame synchronized with each subsequent depth frame by means of the preset limb target model and the saved feature-set parameters of the previous limb target region, and judging a region that matches both the limb target model and the feature-set parameters of the previous limb target region to be a limb target region;
A target action recognition unit for obtaining all coordinate values in each limb target region, judging the direction of motion of the target from the obtained coordinate values, and recognizing the target action from the direction of motion;
Wherein the moving-region judging unit first obtains the depth value groups in the preset ROI of the two consecutive depth frames respectively and calculates the difference between the two depth value groups, then judges whether the obtained difference satisfies a preset entry condition; if so, the unit judges that a startup action exists in the ROI; otherwise, that no startup action exists.
7. The device of claim 6, wherein:
The limb target model comprises a skin color model and a feature model, and the feature-set parameters of the limb target region comprise a target skin color value range and target shape parameters;
The limb-target-region judging unit comprises:
A region skin-color detection module for detecting, according to the skin color model, whether the corresponding region of the synchronized color frame matches the skin color model;
A region feature detection module for detecting, according to the feature model, whether a region that matches the skin color model also matches the feature model;
A limb-target-region determination module for judging a region that matches the feature model to be a limb target region.
8. The device of claim 6, wherein the target action recognition unit comprises:
A depth-difference determination module for obtaining all coordinate values in each limb target region, finding the maximum and minimum depth values among the obtained coordinate values, and calculating the difference between the maximum and minimum depth values;
A depth-difference judging module for judging whether the difference between the maximum and minimum depth values is greater than a preset difference threshold;
A direction judging module for judging that the direction of the target action is the fore-and-aft (depth) direction when the difference between the maximum and minimum depth values is greater than the preset difference threshold, and otherwise judging that the direction of the target action is the in-plane direction.
9. The device of claim 8, wherein, when the direction of the target action is judged to be the in-plane direction, the direction judging module fits a straight line to the horizontal (x) and vertical (y) values of all obtained coordinates, and judges whether the angle between the fitted line and the positive x-axis lies between two preset angle thresholds; if so, the direction of the target action is judged to be the up-down direction; otherwise, it is judged to be the left-right direction.
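Claims 6 through 9 restate the method as functional units. For the limb-target-region correcting unit, the patent tracks the target through subsequent depth frames and re-validates the matching region of the synchronized color frame; the sketch below substitutes a color back-projection (CamShift) tracker for illustration, with a stored hue histogram standing in for the saved feature-set parameters. Names and parameters are assumptions, not the patent's implementation:

```python
import cv2

def track_and_correct(color_frame, prev_window, saved_hist, limb_check):
    """Track the previous limb target window into the new frame, then
    re-check the matched color region against the limb target model.

    prev_window: (x, y, w, h) of the last limb target region.
    saved_hist: hue histogram stored by the parameter storage unit (assumed form).
    limb_check: callable applying the limb target model, e.g. is_limb_target.
    """
    hsv = cv2.cvtColor(color_frame, cv2.COLOR_BGR2HSV)
    backproj = cv2.calcBackProject([hsv], [0], saved_hist, [0, 180], 1)
    criteria = (cv2.TERM_CRITERIA_EPS | cv2.TERM_CRITERIA_COUNT, 10, 1.0)
    _, window = cv2.CamShift(backproj, prev_window, criteria)
    x, y, w, h = window
    region = color_frame[y:y + h, x:x + w]
    # Correction step: keep the tracked window only if it still matches the model.
    return window if region.size and limb_check(region) else None
```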
Application CN201110317260.8A, Target identification method and device, filed 2011-10-18 (priority date 2011-10-18), status: Active, granted as CN102509074B.

Publications (2)

CN102509074A, published 2012-06-20
CN102509074B (grant), published 2014-01-29

Family ID: 46221155

Legal Events

C06 / PB01: Publication
C10: Entry into substantive examination
SE01: Entry into force of request for substantive examination
C14: Grant of patent or utility model
GR01: Patent grant