CN103186230A - Man-machine interaction method based on color identification and tracking - Google Patents

Man-machine interaction method based on color identification and tracking

Info

Publication number
CN103186230A
Authority
CN
China
Prior art keywords
color
tracking
color body
man
tracks
Prior art date
Legal status (The legal status is an assumption and is not a legal conclusion. Google has not performed a legal analysis and makes no representation as to the accuracy of the status listed.)
Granted
Application number
CN2011104566630A
Other languages
Chinese (zh)
Other versions
CN103186230B (en)
Inventor
王铮
Current Assignee (The listed assignees may be inaccurate. Google has not performed a legal analysis and makes no representation or warranty as to the accuracy of the list.)
BEIJING CHAOGE DIGITAL TECHNOLOGY Co Ltd
Original Assignee
BEIJING CHAOGE DIGITAL TECHNOLOGY Co Ltd
Priority date (The priority date is an assumption and is not a legal conclusion. Google has not performed a legal analysis and makes no representation as to the accuracy of the date listed.)
Filing date
Publication date
Application filed by BEIJING CHAOGE DIGITAL TECHNOLOGY Co Ltd filed Critical BEIJING CHAOGE DIGITAL TECHNOLOGY Co Ltd
Priority to CN201110456663.0A priority Critical patent/CN103186230B/en
Publication of CN103186230A publication Critical patent/CN103186230A/en
Application granted granted Critical
Publication of CN103186230B publication Critical patent/CN103186230B/en
Active legal-status Critical Current
Anticipated expiration legal-status Critical

Links

Images

Landscapes

  • Image Analysis (AREA)

Abstract

The invention relates to a man-machine interaction method based on color identification and tracking, which comprises the following steps: the color of a candidate object is identified from an image of the candidate object, and when the color of the candidate object belongs to one of several predetermined colors, that color is set as the tracking color and the candidate object is set as the tracking color body; the center position of the tracking color body in the next image is predicted from its center position in the current image; within a preset coordinate area near the predicted center position, each pixel of the coordinate area is judged as belonging or not belonging to the tracking color body; the center position of the tracking color body is calculated from the obtained pixels of the tracking color body, and the center position of the tracking color body in the next image continues to be predicted from it; the shape of the tracking color body is obtained from the pixels of the tracking color body; and man-machine interaction is carried out by judging control behaviors from the coordinates of the shape of the tracking color body in the image. The interaction method is low in cost and high in accuracy.

Description

Man-machine interaction method based on color recognition and tracking
Technical field
The present application relates to the field of human-computer interaction, and in particular to a man-machine interaction method based on color recognition and tracking.
Background art
Human-computer interaction is mainly achieved by controlling the operation of a device through external input and output peripherals. The mouse and keyboard are the most common, and also the most classic, human-computer interaction hardware. In 1983 the first mouse was released together with Apple's Lisa computer; Microsoft's Windows operating system subsequently announced its compatibility, and from then on the combination of a mouse-operated system and a computer gradually became a standard configuration as computers spread. The advent of the mouse in particular, with its graphical and intuitive control, better suited people's natural habits; this was the first revolution in human-computer interaction.
The multi-touch technology that has spread rapidly in recent years was the second revolution in the history of human-computer interaction, led by Apple and its revolutionary iPhone. Interaction schemes for personal mobile intelligent terminals were at first confined to the classic keyboard-and-mouse setup of conventional PCs, but the poor portability of the keyboard and mouse could not satisfy the demands of intelligent terminals. Multi-touch opened another window: it made everyone realize that the keyboard can in fact become part of the touch surface, and that many commands can be completed by different swiping patterns of several fingers on a touch screen. Such integration let mobile terminals truly break away from the thinking yoke of the conventional PC terminal. Multi-touch completes human-computer interaction with gestures, which is easier to pick up and also more natural.
With the substantial increase in computing power in recent years and the development of technologies such as pattern recognition, motion-sensing control has gradually moved from the background to the foreground and is developing into the third revolution in the history of human-computer interaction. The main idea of motion-sensing control is to capture the real-time motion state of the human body through devices such as cameras and external handles, and to complete various commands according to the body's motion. Well-known examples include Microsoft's Kinect, Nintendo's Wii, and Sony's PS Move, which generally use cameras to capture the user's limb movements or perform face recognition. The sensors also have built-in microphones that can be used to recognize voice commands, but such devices are generally rather expensive and are at present used mostly in the gaming field.
Although motion-sensing control technology has made significant progress in recent years, its drawbacks are also clear: on the one hand, motion-sensing control equipment is expensive and inconvenient to carry; on the other hand, it is easily disturbed by various conditions, so its precision cannot be guaranteed. It is therefore necessary to find a cheaper and more accurate man-machine interaction method.
Summary of the invention
A brief summary of the invention is provided below in order to give a basic understanding of certain aspects of the invention. It should be understood that this summary is not an exhaustive overview of the invention. It is not intended to identify key or critical parts of the invention, nor to delimit the scope of the invention. Its sole purpose is to present some concepts in simplified form as a prelude to the more detailed description discussed later.
A primary object of the present invention is to provide a man-machine interaction method that is lower in cost and higher in precision.
To achieve the above object, the invention provides a man-machine interaction method based on color recognition and tracking, comprising the following steps:
Step 1: identifying the color of a candidate object from a captured image of the candidate object, and, when the color of the candidate object belongs to one of several predetermined colors, setting the color of the candidate object as the tracking color and setting the candidate object as the tracking color body;
Step 2: predicting the center position of the tracking color body in the next frame image from the center position of the tracking color body in the current image;
Step 3: within a preset coordinate area near the predicted center position, judging whether each pixel of the coordinate area is a pixel of the tracking color body;
Step 4: calculating the center position of the tracking color body from the obtained pixels of the tracking color body, and returning to step 2 to continue predicting the center position of the tracking color body in the following next frame image from the calculated center position;
Step 5: obtaining the outline of the tracking color body from all detected pixels belonging to the tracking color body; and
Step 6: judging control behaviors from the coordinates and the coordinate motion track of the outline of the tracking color body in the image, so as to carry out man-machine interaction.
Aimed at the application scenarios of motion-sensing control, the invention proposes a man-machine interaction method based on color recognition and tracking: the tracking color body can be set through predetermined colors and color identification, and the color can be accurately tracked through position prediction and correction, which both reduces the cost of motion sensing and improves the tracking accuracy.
Description of drawings
The above and other objects, features, and advantages of the present invention can be more easily understood with reference to the following description of embodiments of the invention in conjunction with the accompanying drawings. The components in the drawings are only meant to illustrate the principles of the invention. In the drawings, the same or similar technical features or components are denoted by the same or similar reference numerals.
Fig. 1 is a flowchart of an embodiment of the man-machine interaction method based on color recognition and tracking of the present invention.
Fig. 2 is a flowchart of step S1 in Fig. 1.
Fig. 3 is a flowchart of step S2 in Fig. 1.
Embodiment
Embodiments of the invention are described below with reference to the drawings. Elements and features described in one drawing or embodiment of the invention can be combined with elements and features shown in one or more other drawings or embodiments. It should be noted that, for the sake of clarity, parts irrelevant to the invention, as well as representations and descriptions of processing known to those of ordinary skill in the art, have been omitted from the drawings and the description.
Referring to Fig. 1 to Fig. 3, the man-machine interaction method based on color recognition and tracking of the present invention can provide man-machine control functions for a variety of hardware products such as computers, game consoles, IPTV set-top boxes, and high-definition players, and comprises the following steps S1-S6:
Step S1: identifying the color of a candidate object from a captured image of the candidate object, and, when the color of the candidate object belongs to one of several predetermined colors, setting the color of the candidate object as the tracking color and setting the candidate object as tracking color body I.
In an embodiment of the invention, seven colors defined from the rainbow (red, orange, yellow, green, blue, indigo, and purple) are used as the predetermined colors. The three components of each of the seven colors in HSV (hue, saturation, value) space are assumed to each follow a normal distribution:

H component probability distribution:

$$f_{Hi}(x) = \frac{1}{\sigma_{Hi}\sqrt{2\pi}} \exp\left(-\frac{(x-\mu_{Hi})^2}{2\sigma_{Hi}^2}\right) \qquad (1)$$

S component probability distribution:

$$f_{Si}(x) = \frac{1}{\sigma_{Si}\sqrt{2\pi}} \exp\left(-\frac{(x-\mu_{Si})^2}{2\sigma_{Si}^2}\right) \qquad (2)$$

V component probability distribution:

$$f_{Vi}(x) = \frac{1}{\sigma_{Vi}\sqrt{2\pi}} \exp\left(-\frac{(x-\mu_{Vi})^2}{2\sigma_{Vi}^2}\right) \qquad (3)$$

where $i \in \{1, 2, 3, \ldots, 7\}$ denotes the i-th color, and $f_{Hi}(x)$, $f_{Si}(x)$, and $f_{Vi}(x)$ denote the probabilities that the H, S, and V components of a single pixel x, respectively, belong to the i-th color.
Optionally, step S1 comprises the following steps:
Step S11: capturing an initialization interface;
Step S12: displaying an initialization position range in the initialization interface; the initialization position range can be marked in the initialization interface with a rectangular box;
Step S13: capturing a target object within the initialization position range, i.e. within the rectangular box (such as a red square object, a yellow pen, or a green plastic bottle); and
Step S14: identifying the color of the target object and, when the color of the target object belongs to one of the predetermined colors, setting the color of the target object as the tracking color, with the target object as tracking color body I.
The color to which each pixel x within the initialization position range belongs is:

$$\max_i \ f_{Hi}(x_h) \cdot f_{Si}(x_s) \cdot f_{Vi}(x_v), \qquad \text{s.t. } i \in \{1, 2, 3, \ldots, 7\} \qquad (4)$$

where $x_h$, $x_s$, and $x_v$ denote the values of the H, S, and V components of that pixel, respectively. Tracking color body I can then be obtained by solving equation (4) for each pixel x within the initialization position range.
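As an illustration only, the following Python sketch shows how the per-pixel decision of equation (4) could be implemented, assuming the per-color Gaussian parameters have already been estimated from sample pixels. The numerical parameter values, the numpy dependency, and the function names are assumptions for illustration, not part of the patent.

```python
import numpy as np

# Hypothetical Gaussian parameters (mu, sigma) per HSV component for the
# 7 predetermined rainbow colors; real values would be estimated from
# labeled sample pixels. H is taken here in degrees [0, 360).
MU_H = np.array([0.0, 30.0, 60.0, 120.0, 210.0, 250.0, 285.0])
SIGMA_H = np.full(7, 10.0)
MU_S = np.full(7, 0.8); SIGMA_S = np.full(7, 0.15)
MU_V = np.full(7, 0.7); SIGMA_V = np.full(7, 0.20)

def gauss_pdf(x, mu, sigma):
    """Normal probability density, equations (1)-(3)."""
    return np.exp(-(x - mu) ** 2 / (2 * sigma ** 2)) / (sigma * np.sqrt(2 * np.pi))

def classify_pixel(x_h, x_s, x_v):
    """Equation (4): return the color index i maximizing
    f_Hi(x_h) * f_Si(x_s) * f_Vi(x_v), plus its likelihood."""
    p = (gauss_pdf(x_h, MU_H, SIGMA_H)
         * gauss_pdf(x_s, MU_S, SIGMA_S)
         * gauss_pdf(x_v, MU_V, SIGMA_V))
    return int(np.argmax(p)), float(p.max())
```

Applying classify_pixel to every pixel inside the rectangular initialization box and taking the dominant color would then yield the tracking color of tracking color body I.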
In step S14, capturing the target object within the initialization position range includes capturing the target object as it moves a certain distance from the initialization position range along a preset route.
After the tracking color body has been set, the set tracking color body can be tracked; for example, when the target object is yellow, the color is set to yellow, and man-machine interaction is carried out by tracking yellow.
Step S2: predicting the center position of the tracking color body in the next frame image from the center position of the tracking color body in the current image. Optionally, in step S2 the center position of tracking color body I is predicted by a Kalman filter.
The system state of the Kalman filter is defined as $X_k = (s_x, s_y, v_x, v_y)$, where $s_x$ and $s_y$ are the coordinates of the center of tracking color body I on the x axis and y axis, and $v_x$ and $v_y$ are the velocities of the center of tracking color body I along the x axis and y axis, respectively. Since only the position of tracking color body I can be observed in the image, a two-dimensional observation vector $Z_k = (x_{Zk}, y_{Zk})$ is defined, where $x_{Zk}$ and $y_{Zk}$ are the observed coordinates of the center of tracking color body I on the x axis and y axis, respectively.
Since tracking color body I is assumed to move at a uniform velocity within a unit time interval, the state transition matrix A is defined as:

$$A = \begin{bmatrix} 1 & 0 & \Delta t & 0 \\ 0 & 1 & 0 & \Delta t \\ 0 & 0 & 1 & 0 \\ 0 & 0 & 0 & 1 \end{bmatrix} \qquad (5)$$

where $\Delta t$ denotes the time interval between two consecutive frame images.

The state equation of the Kalman filter is:

$$X_{k+1} = A X_k + W_k \qquad (6)$$

From the relation between the system state and the observed state, the observation matrix of the Kalman filter is:

$$H = \begin{bmatrix} 1 & 0 & 0 & 0 \\ 0 & 1 & 0 & 0 \end{bmatrix} \qquad (7)$$

The observation equation of the Kalman filter is:

$$Z_k = H X_k + V_k \qquad (8)$$

In addition, $W_k$ and $V_k$ can be assumed to be zero-mean, mutually independent noise vectors, so their covariance matrices $Q_k$ and $R_k$ are set respectively to:

$$Q_k = \begin{bmatrix} 1 & 0 & 0 & 0 \\ 0 & 1 & 0 & 0 \\ 0 & 0 & 1 & 0 \\ 0 & 0 & 0 & 1 \end{bmatrix} \qquad (9)$$

$$R_k = \begin{bmatrix} 1 & 0 \\ 0 & 1 \end{bmatrix} \qquad (10)$$

The system state prediction equation is:

$$\hat{X}_{k+1} = A X_k \qquad (11)$$

The error covariance prediction equation is:

$$\hat{P}_{k+1} = A P_k A^T + Q_k \qquad (12)$$

where $A^T$ is the transpose of the state transition matrix A.

The Kalman gain coefficient equation is:

$$G_{k+1} = \hat{P}_{k+1} H^T \left( H \hat{P}_{k+1} H^T + R_k \right)^{-1} \qquad (13)$$

where $H^T$ is the transpose of the observation matrix H.

The state update equation of the system state is:

$$X_{k+1} = \hat{X}_{k+1} + G_{k+1} \left( Z_{k+1} - H \hat{X}_{k+1} \right) \qquad (14)$$

The update equation of the error covariance is:

$$P_{k+1} = (I - G_{k+1} H) \hat{P}_{k+1} \qquad (15)$$

where I is the identity matrix.

In the above formulas, the symbol "^" denotes the a priori state estimate (prior state estimate) in the Kalman filter.
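For concreteness, here is a minimal Python sketch of the constant-velocity Kalman filter defined by equations (5)-(15). The matrices follow the equations above; the dt default, the class interface, and the initial error covariance are illustrative assumptions.

```python
import numpy as np

class ColorBodyKalman:
    """Constant-velocity Kalman filter over X = (s_x, s_y, v_x, v_y)."""

    def __init__(self, cx, cy, dt=1.0 / 30):
        self.X = np.array([cx, cy, 0.0, 0.0])      # initial velocity 0 (S21)
        self.A = np.array([[1, 0, dt, 0],
                           [0, 1, 0, dt],
                           [0, 0, 1, 0],
                           [0, 0, 0, 1]], float)   # eq. (5)
        self.H = np.array([[1, 0, 0, 0],
                           [0, 1, 0, 0]], float)   # eq. (7)
        self.Q = np.eye(4)                          # eq. (9)
        self.R = np.eye(2)                          # eq. (10)
        self.P = np.eye(4)                          # assumed initial covariance

    def predict(self):
        """State and covariance prediction, eqs. (11)-(12)."""
        self.X = self.A @ self.X
        self.P = self.A @ self.P @ self.A.T + self.Q
        return self.X[:2]                           # predicted center

    def correct(self, zx, zy):
        """Gain, state update, and covariance update, eqs. (13)-(15)."""
        S = self.H @ self.P @ self.H.T + self.R
        G = self.P @ self.H.T @ np.linalg.inv(S)
        self.X = self.X + G @ (np.array([zx, zy]) - self.H @ self.X)
        self.P = (np.eye(4) - G @ self.H) @ self.P
        return self.X[:2]                           # corrected center
```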
Overall, the above method of predicting the position of tracking color body I using Kalman filtering mainly comprises four stages: filter initialization, state prediction, obtaining an approximate measurement, and state correction. With reference to Fig. 3, step S2 comprises the following steps:
Step S21: establishing the system state $X_k$ of the Kalman filter from the center position and velocity of tracking color body I, and assigning the center position and velocity of tracking color body I in the current image to the system state; for the first frame image the velocity is set to 0, and the center position of tracking color body I is represented by its coordinates on the x axis and y axis;
Step S22: tentatively predicting the position of tracking color body I in the next frame image from its center position in the current image; this step comprises prediction of the system state ($\hat{X}_{k+1} = A X_k$), prediction of the observation vector ($\hat{Z}_{k+1} = H \hat{X}_{k+1}$), and prediction of the error covariance ($\hat{P}_{k+1} = A P_k A^T + Q_k$);
Step S23: performing a color search and matching near the tentatively predicted position (the search and matching can be performed with equation (4)), and determining the approximate position of tracking color body I near the tentatively predicted position; and
Step S24: performing a state correction based on the tentatively predicted position of tracking color body I and the approximate position obtained in step S23, so as to obtain the center position of tracking color body I in the next frame image.
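The four stages can be tied together roughly as in the loop below. This is a sketch under the assumptions of the previous code block; find_color_center stands in for the local color search of step S23 and is sketched under step S3 below.

```python
def track(hsv_frames, init_center):
    """Yield the corrected center of tracking color body I per frame."""
    kf = ColorBodyKalman(*init_center)        # filter initialization (S21)
    for hsv in hsv_frames:                    # frames already in HSV space
        pred = kf.predict()                   # tentative prediction (S22)
        approx = find_color_center(hsv, pred) # search near prediction (S23)
        if approx is not None:
            yield kf.correct(*approx)         # state correction (S24)
```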
Step S3: within a preset coordinate area near the predicted center position, judging whether each pixel of the coordinate area is a pixel of the tracking color body. Optionally, step S3 comprises: within a preset coordinate area near the predicted center position, judging whether the probability that each pixel x of the coordinate area belongs to color I is greater than a preset threshold ε, where the probability that pixel x belongs to color I is:

$$P\{x \in I\} = f_{Hi}(x_h) \cdot f_{Si}(x_s) \cdot f_{Vi}(x_v) \qquad (16)$$

If the probability $P\{x \in I\}$ that a pixel x belongs to color I is greater than the preset threshold ε, the pixel is judged to be a pixel of tracking color body I; otherwise the pixel is judged to be a background pixel.
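A sketch of this thresholding over a search window around the predicted center, reusing gauss_pdf and the hypothetical Gaussian parameters from the earlier sketch, could look as follows; the window half-size and the threshold eps are assumed values, and hsv is a float HSV image.

```python
def find_color_center(hsv, pred, i=0, half=40, eps=1e-4):
    """Collect pixels near pred whose probability of belonging to color i
    exceeds eps (equation 16) and return their mean position, else None."""
    h, w = hsv.shape[:2]
    x0, x1 = max(0, int(pred[0]) - half), min(w, int(pred[0]) + half)
    y0, y1 = max(0, int(pred[1]) - half), min(h, int(pred[1]) + half)
    win = hsv[y0:y1, x0:x1]
    p = (gauss_pdf(win[..., 0], MU_H[i], SIGMA_H[i])
         * gauss_pdf(win[..., 1], MU_S[i], SIGMA_S[i])
         * gauss_pdf(win[..., 2], MU_V[i], SIGMA_V[i]))
    ys, xs = np.nonzero(p > eps)              # tracking-body pixels
    if xs.size == 0:
        return None                           # only background in window
    return x0 + xs.mean(), y0 + ys.mean()     # approximate center
```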
Step S4: calculating the center position of tracking color body I from the pixels of tracking color body I obtained in step S3, and repeating steps S2-S4 to continue predicting the moving target, that is, predicting the center position of tracking color body I in the following next frame image. In this step, the Kalman filter is updated with the calculated center position of tracking color body I for further prediction.
Step S5: obtaining the outline of the tracking color body from all pixels detected in step S3 as belonging to tracking color body I. Optionally, step S5 comprises obtaining the minimum bounding shape of tracking color body I and highlighting that minimum bounding shape. In this step, noise can be removed and tracking color body I highlighted by methods such as erosion and dilation: the largest connected component of all pixels belonging to tracking color body I is extracted, and its minimum bounding shape is computed to highlight the body.
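As an illustrative OpenCV sketch, assuming the minimum bounding shape is realized as a minimum-area rotated rectangle and that mask is the binary image of pixels judged to belong to tracking color body I (the kernel size is an assumed choice):

```python
import cv2
import numpy as np

def body_outline(mask):
    """Denoise by erosion and dilation, keep the largest connected
    component, and return its minimum-area bounding rectangle."""
    kernel = np.ones((5, 5), np.uint8)
    clean = cv2.dilate(cv2.erode(mask, kernel), kernel)
    n, labels, stats, _ = cv2.connectedComponentsWithStats(clean)
    if n < 2:
        return None                           # no foreground component
    largest = 1 + int(np.argmax(stats[1:, cv2.CC_STAT_AREA]))
    ys, xs = np.nonzero(labels == largest)
    pts = np.column_stack((xs, ys)).astype(np.float32)
    return cv2.minAreaRect(pts)               # ((cx, cy), (w, h), angle)
```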
At this point, for each input frame image, the tracking color body I confirmed at initialization remains locked throughout; even if tracking color body I is occluded or leaves the frame boundary, it is locked again as soon as it reappears in the image, achieving real-time tracking of tracking color body I. By tracking color body I, the spatial coordinates of the target object can be identified, and control behaviors can then be judged to carry out man-machine interaction.
Step S6: judging control behaviors from the coordinates of the minimum bounding shape of tracking color body I in the image, the angle of the minimum bounding shape, and the coordinate motion track, so as to carry out man-machine interaction. The angle of the minimum bounding shape can be determined from the coordinates of the minimum bounding shape in the image.
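Purely as an illustration of one possible control mapping (the patent does not fix a particular one), the rectangle center could drive a cursor as below; the frame and screen sizes are assumed values.

```python
def rect_to_cursor(rect, frame_size=(640, 480), screen_size=(1920, 1080)):
    """Map the tracked rectangle's center from image coordinates to screen
    coordinates, mirroring the x axis so movement matches the user's view."""
    (cx, cy), _size, _angle = rect
    fx, fy = frame_size
    sx, sy = screen_size
    return (fx - cx) / fx * sx, cy / fy * sy
```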
Aimed at the application scenarios of motion-sensing control, the invention proposes a man-machine interaction method based on color recognition and tracking that combines a trained multi-Gaussian color classifier with a Kalman filter to handle the problem of tracking a colored object, which both reduces the cost of motion sensing and improves the tracking accuracy. When the method of the invention is used for man-machine interaction on an ordinary computer, the processing speed can reach 170 frames per second, giving it very good practicality.
In the method of the invention, the components or steps may obviously be decomposed, combined, and/or recombined after decomposition. Such decompositions and/or recombinations should be regarded as equivalent solutions of the invention. Also, in the above description of specific embodiments of the invention, features described and/or illustrated for one embodiment may be used in the same or a similar way in one or more other embodiments, combined with features in other embodiments, or substituted for features in other embodiments.
It should be emphasized that the term "comprises/comprising", when used herein, refers to the presence of a feature, element, step, or component, but does not exclude the presence or addition of one or more other features, elements, steps, or components.
Although the invention and its advantages have been described in detail, it should be understood that various changes, substitutions, and alterations can be made without departing from the spirit and scope of the invention as defined by the appended claims. Moreover, the scope of the present application is not limited to the specific embodiments of the processes, devices, means, methods, and steps described in the specification. One of ordinary skill in the art will readily appreciate from the disclosure of the invention that processes, devices, means, methods, or steps, whether presently existing or to be developed in the future, that perform substantially the same function or achieve substantially the same result as the corresponding embodiments described herein may be utilized according to the invention. Accordingly, the appended claims are intended to include such processes, devices, means, methods, or steps within their scope.

Claims (9)

1. A man-machine interaction method based on color recognition and tracking, comprising the following steps:
Step 1: identifying the color of a candidate object from a captured image of the candidate object, and, when the color of the candidate object belongs to one of several predetermined colors, setting the color of said candidate object as the tracking color and setting the candidate object as the tracking color body;
Step 2: predicting the center position of the tracking color body in the next frame image from the center position of the tracking color body in the current image;
Step 3: within a preset coordinate area near the predicted center position, judging whether each pixel of said coordinate area is a pixel of said tracking color body;
Step 4: calculating the center position of said tracking color body from the obtained pixels of the tracking color body, and returning to step 2 to continue predicting the center position of said tracking color body in the following next frame image from the calculated center position;
Step 5: obtaining the outline of said tracking color body from all detected pixels belonging to the tracking color body; and
Step 6: judging control behaviors from the coordinates of the outline of said tracking color body in the image, so as to carry out man-machine interaction.
2. The man-machine interaction method based on color recognition and tracking as claimed in claim 1, wherein said step 1 comprises:
capturing an initialization interface;
displaying an initialization position range in said initialization interface;
capturing a target object within said initialization position range; and
identifying the color of said target object and, when the color of said target object belongs to one of the predetermined colors, setting the color of said target object as the tracking color, with the target object as the tracking color body.
3. The man-machine interaction method based on color recognition and tracking as claimed in claim 2, wherein said capturing a target object within said initialization position range comprises capturing the target object as it moves a certain distance from said initialization position range along a preset route.
4. The man-machine interaction method based on color recognition and tracking as claimed in claim 1, wherein said step 2 comprises predicting the center position of the tracking color body in the next frame image by Kalman filtering.
5. The man-machine interaction method based on color recognition and tracking as claimed in claim 4, wherein predicting the center position of the tracking color body in the next frame image by Kalman filtering comprises:
constructing a Kalman filter whose system state is $X_k = (s_x, s_y, v_x, v_y)$, where $s_x$ and $s_y$ respectively represent the coordinates of the center position of said tracking color body on the x axis and y axis, and $v_x$ and $v_y$ are respectively the velocities of said tracking color body along the x axis and y axis, the initial velocity being set to 0;
the two-dimensional observation vector of said tracking color body in the image is $Z_k = (x_{Zk}, y_{Zk})$, where $x_{Zk}$ is the observed coordinate of the center position of said tracking color body on the x axis and $y_{Zk}$ is the observed coordinate of the center position of said tracking color body on the y axis;
the system state equation of said Kalman filter is $X_{k+1} = A X_k + W_k$, where A is the state transition matrix and $W_k$ is a noise vector;
said state transition matrix A is:

$$A = \begin{bmatrix} 1 & 0 & \Delta t & 0 \\ 0 & 1 & 0 & \Delta t \\ 0 & 0 & 1 & 0 \\ 0 & 0 & 0 & 1 \end{bmatrix}$$

where $\Delta t$ denotes the time interval between two consecutive frame images;
the observation matrix of said Kalman filter is:

$$H = \begin{bmatrix} 1 & 0 & 0 & 0 \\ 0 & 1 & 0 & 0 \end{bmatrix}$$

the observation equation of said Kalman filter is:

$$Z_k = H X_k + V_k$$

where $V_k$ is a noise vector, and the covariance matrices $Q_k$ and $R_k$ of $W_k$ and $V_k$ are respectively:

$$Q_k = \begin{bmatrix} 1 & 0 & 0 & 0 \\ 0 & 1 & 0 & 0 \\ 0 & 0 & 1 & 0 \\ 0 & 0 & 0 & 1 \end{bmatrix}$$

$$R_k = \begin{bmatrix} 1 & 0 \\ 0 & 1 \end{bmatrix}$$

the prediction equation of the system state of said Kalman filter is:

$$\hat{X}_{k+1} = A X_k$$

the error covariance prediction equation of said Kalman filter is:

$$\hat{P}_{k+1} = A P_k A^T + Q_k$$

the gain coefficient equation of said Kalman filter is:

$$G_{k+1} = \hat{P}_{k+1} H^T \left( H \hat{P}_{k+1} H^T + R_k \right)^{-1}$$

the state update equation of the system state of said Kalman filter is:

$$X_{k+1} = \hat{X}_{k+1} + G_{k+1} \left( Z_{k+1} - H \hat{X}_{k+1} \right)$$

and the update equation of the error covariance prediction equation of said Kalman filter is:

$$P_{k+1} = (I - G_{k+1} H) \hat{P}_{k+1}$$

where I is the identity matrix.
6. The man-machine interaction method based on color recognition and tracking as claimed in claim 1, wherein said predetermined colors comprise red, orange, yellow, green, blue, indigo, and purple.
7. The man-machine interaction method based on color recognition and tracking as claimed in claim 1, wherein obtaining the outline of the tracking color body in step 5 comprises obtaining the minimum bounding shape of the tracking color body.
8. The man-machine interaction method based on color recognition and tracking as claimed in claim 7, wherein obtaining the outline of the tracking color body in step 5 comprises highlighting the minimum bounding shape of the tracking color body.
9. The man-machine interaction method based on color recognition and tracking as claimed in claim 1, wherein judging in step 3 whether each pixel of said coordinate area is a pixel of said tracking color body comprises judging whether the probability that each pixel of said coordinate area belongs to said tracking color is greater than a preset threshold; if the probability that a pixel of said coordinate area belongs to said tracking color is greater than the preset threshold, that pixel is judged to be a pixel of said tracking color body.
CN201110456663.0A 2011-12-30 2011-12-30 Man-machine interaction method based on colour recognition with tracking Active CN103186230B (en)

Priority Applications (1)

Application Number Priority Date Filing Date Title
CN201110456663.0A CN103186230B (en) 2011-12-30 2011-12-30 Man-machine interaction method based on colour recognition with tracking

Applications Claiming Priority (1)

Application Number Priority Date Filing Date Title
CN201110456663.0A CN103186230B (en) 2011-12-30 2011-12-30 Man-machine interaction method based on colour recognition with tracking

Publications (2)

Publication Number Publication Date
CN103186230A true CN103186230A (en) 2013-07-03
CN103186230B CN103186230B (en) 2017-06-06

Family

ID=48677430

Family Applications (1)

Application Number Title Priority Date Filing Date
CN201110456663.0A Active CN103186230B (en) 2011-12-30 2011-12-30 Man-machine interaction method based on colour recognition with tracking

Country Status (1)

Country Link
CN (1) CN103186230B (en)

Cited By (9)

* Cited by examiner, † Cited by third party
Publication number Priority date Publication date Assignee Title
CN104460988A (en) * 2014-11-11 2015-03-25 陈琦 Input control method of intelligent cell phone virtual reality device
CN104615231A (en) * 2013-11-01 2015-05-13 中国移动通信集团公司 Determination method for input information, and equipment
CN104935900A (en) * 2014-03-19 2015-09-23 智原科技股份有限公司 Image sensing device, color correction matrix correction method and lookup table establishment method
CN105975119A (en) * 2016-04-21 2016-09-28 北京集创北方科技股份有限公司 Multi-target tracking method, and touch screen control method and system
CN107077624A (en) * 2014-09-23 2017-08-18 微软技术许可有限责任公司 Track hand/body gesture
WO2017190614A1 (en) * 2016-05-06 2017-11-09 深圳市国华识别科技开发有限公司 Intelligent terminal based man-machine interaction method and system
CN108652678A (en) * 2018-03-13 2018-10-16 北京峰誉科技有限公司 A kind of method and device from motion tracking urine
CN110414495A (en) * 2019-09-24 2019-11-05 图谱未来(南京)人工智能研究院有限公司 A kind of gesture identification method, device, electronic equipment and readable storage medium storing program for executing
CN111104948A (en) * 2018-10-26 2020-05-05 中国科学院长春光学精密机械与物理研究所 Target tracking method based on adaptive fusion of double models

Citations (2)

* Cited by examiner, † Cited by third party
Publication number Priority date Publication date Assignee Title
CN101896877A (en) * 2007-12-18 2010-11-24 松下电器产业株式会社 Spatial input operation display apparatus
CN102298443A (en) * 2011-06-24 2011-12-28 华南理工大学 Smart home voice control system combined with video channel and control method thereof

Patent Citations (2)

* Cited by examiner, † Cited by third party
Publication number Priority date Publication date Assignee Title
CN101896877A (en) * 2007-12-18 2010-11-24 松下电器产业株式会社 Spatial input operation display apparatus
CN102298443A (en) * 2011-06-24 2011-12-28 华南理工大学 Smart home voice control system combined with video channel and control method thereof

Cited By (13)

* Cited by examiner, † Cited by third party
Publication number Priority date Publication date Assignee Title
CN104615231A (en) * 2013-11-01 2015-05-13 中国移动通信集团公司 Determination method for input information, and equipment
CN104615231B (en) * 2013-11-01 2019-01-04 中国移动通信集团公司 A kind of determination method and apparatus inputting information
CN104935900A (en) * 2014-03-19 2015-09-23 智原科技股份有限公司 Image sensing device, color correction matrix correction method and lookup table establishment method
CN107077624A (en) * 2014-09-23 2017-08-18 微软技术许可有限责任公司 Track hand/body gesture
CN104460988B (en) * 2014-11-11 2017-12-22 陈琦 A kind of input control method of smart mobile phone virtual reality device
CN104460988A (en) * 2014-11-11 2015-03-25 陈琦 Input control method of intelligent cell phone virtual reality device
CN105975119B (en) * 2016-04-21 2018-11-30 北京集创北方科技股份有限公司 Multi-target tracking method, touch screen control method and system
CN105975119A (en) * 2016-04-21 2016-09-28 北京集创北方科技股份有限公司 Multi-target tracking method, and touch screen control method and system
WO2017190614A1 (en) * 2016-05-06 2017-11-09 深圳市国华识别科技开发有限公司 Intelligent terminal based man-machine interaction method and system
CN108652678A (en) * 2018-03-13 2018-10-16 北京峰誉科技有限公司 A kind of method and device from motion tracking urine
CN108652678B (en) * 2018-03-13 2024-05-17 上海科勒电子科技有限公司 Method and device for automatically tracking urine
CN111104948A (en) * 2018-10-26 2020-05-05 中国科学院长春光学精密机械与物理研究所 Target tracking method based on adaptive fusion of double models
CN110414495A (en) * 2019-09-24 2019-11-05 图谱未来(南京)人工智能研究院有限公司 A kind of gesture identification method, device, electronic equipment and readable storage medium storing program for executing

Also Published As

Publication number Publication date
CN103186230B (en) 2017-06-06

Similar Documents

Publication Publication Date Title
CN103186230A (en) Man-machine interaction method based on color identification and tracking
JP6765545B2 (en) Dynamic gesture recognition method and device, gesture dialogue control method and device
CN103098076B (en) Gesture recognition system for TV control
Chen et al. Human action recognition using star skeleton
EP2426598B1 (en) Apparatus and method for user intention inference using multimodal information
Chaudhary et al. Intelligent approaches to interact with machines using hand gesture recognition in natural way: a survey
Raheja et al. Robust gesture recognition using Kinect: A comparison between DTW and HMM
Barros et al. A dynamic gesture recognition and prediction system using the convexity approach
Chen et al. A real-time dynamic hand gesture recognition system using kinect sensor
CN106934333B (en) Gesture recognition method and system
US20080085048A1 (en) Robotic gesture recognition system
Qi et al. Computer vision-based hand gesture recognition for human-robot interaction: a review
CN104123007A (en) Multidimensional weighted 3D recognition method for dynamic gestures
CN103150740A (en) Method and system for moving target tracking based on video
EP2538372A1 (en) Dynamic gesture recognition process and authoring system
CN103793056A (en) Mid-air gesture roaming control method based on distance vector
Dhule et al. Computer vision based human-computer interaction using color detection techniques
Itkarkar et al. A survey of 2D and 3D imaging used in hand gesture recognition for human-computer interaction (HCI)
Wang et al. Immersive human–computer interactive virtual environment using large-scale display system
CN105261038A (en) Bidirectional optical flow and perceptual hash based fingertip tracking method
Ingersoll Vision based multiple target tracking using recursive RANSAC
CN103000054A (en) Intelligent teaching machine for kitchen cooking and control method thereof
CN105760822B (en) A kind of vehicle drive control method and system
Sanchez-Matilla et al. Motion prediction for first-person vision multi-object tracking
Araki et al. Real-time both hands tracking using camshift with motion mask and probability reduction by motion prediction

Legal Events

Date Code Title Description
C06 Publication
PB01 Publication
C10 Entry into substantive examination
SE01 Entry into force of request for substantive examination
GR01 Patent grant
GR01 Patent grant
PE01 Entry into force of the registration of the contract for pledge of patent right

Denomination of invention: Man-machine interaction method based on color identification and tracking

Effective date of registration: 20190610

Granted publication date: 20170606

Pledgee: Zhongguancun Beijing technology financing Company limited by guarantee

Pledgor: Beijing Chaoge Digital Technology Co., Ltd.

Registration number: 2019990000529

PE01 Entry into force of the registration of the contract for pledge of patent right
PC01 Cancellation of the registration of the contract for pledge of patent right

Date of cancellation: 20210928

Granted publication date: 20170606

Pledgee: Zhongguancun Beijing technology financing Company limited by guarantee

Pledgor: BEIJING CHAOGE DIGITAL TECHNOLOGY Co.,Ltd.

Registration number: 2019990000529

PC01 Cancellation of the registration of the contract for pledge of patent right
PE01 Entry into force of the registration of the contract for pledge of patent right

Denomination of invention: Human computer interaction method based on color recognition and tracking

Effective date of registration: 20210928

Granted publication date: 20170606

Pledgee: Zhongguancun Beijing technology financing Company limited by guarantee

Pledgor: BEIJING CHAOGE DIGITAL TECHNOLOGY Co.,Ltd.

Registration number: Y2021990000903

PE01 Entry into force of the registration of the contract for pledge of patent right