CN104123008A - Man-machine interaction method and system based on static gestures - Google Patents

Man-machine interaction method and system based on static gestures

Info

Publication number
CN104123008A
CN104123008A (application CN201410371319.5A; granted as CN104123008B)
Authority
CN
China
Prior art keywords
gesture
skin
colour
model
instruction
Prior art date
Legal status (The legal status is an assumption and is not a legal conclusion. Google has not performed a legal analysis and makes no representation as to the accuracy of the status listed.)
Granted
Application number
CN201410371319.5A
Other languages
Chinese (zh)
Other versions
CN104123008B (en)
Inventor
王鸿鹏
尤磊
谭典雄
杨祥红
Current Assignee (The listed assignees may be inaccurate. Google has not performed a legal analysis and makes no representation or warranty as to the accuracy of the list.)
Shenzhen Graduate School Harbin Institute of Technology
Original Assignee
Shenzhen Graduate School Harbin Institute of Technology
Priority date (The priority date is an assumption and is not a legal conclusion. Google has not performed a legal analysis and makes no representation as to the accuracy of the date listed.)
Filing date
Publication date
Application filed by Shenzhen Graduate School Harbin Institute of Technology filed Critical Shenzhen Graduate School Harbin Institute of Technology
Priority to CN201410371319.5A priority Critical patent/CN104123008B/en
Publication of CN104123008A publication Critical patent/CN104123008A/en
Application granted granted Critical
Publication of CN104123008B publication Critical patent/CN104123008B/en
Active legal-status Critical Current
Anticipated expiration legal-status Critical

Landscapes

  • Image Analysis (AREA)
  • Image Processing (AREA)

Abstract

The invention provides a man-machine interaction method and system based on static gestures. The man-machine interaction method comprises a gesture recognition method, which in turn comprises the steps of building a real-time skin color model, building a gesture geometric model, building a tracking model, and performing recognition. The method and system allow people to interact with a machine and issue instructions to it through hand gestures alone. In terms of interaction mode, a novel, concise, and user-friendly mode of man-machine interaction is provided. In terms of system implementation, the gesture recognition unit used in the system effectively overcomes the shortcomings of traditional gesture recognition: poor stability, heavy dependence on a PC platform, and a small, non-extensible instruction set.

Description

Man-machine interaction method and system based on static gestures
Technical field
The present invention relates to the field of artificial intelligence, and in particular to a man-machine interaction method and system based on static gestures.
Background technology
Gesture interaction is one of the ideal modes of man-machine interaction, and gesture recognition technology is the key enabling technology for gesture interaction. Gesture recognition generally comprises four parts: skin color segmentation, gesture region extraction, gesture feature extraction, and gesture feature classification (recognition). Traditional gesture recognition techniques implement these modules in a variety of ways, but overall there is room for improvement. Traditional gesture recognition generally has the following shortcomings:
First, in the skin color segmentation part of traditional gesture recognition methods, the image is generally converted from the RGB color space to the YCbCr or HSV color space, and a skin color segmentation mask is then obtained by thresholding. Such a segmentation approach often becomes unstable under changing ambient illumination, cannot handle interference from skin-like color regions in the environment, and therefore places high demands on the usage environment.
Second, in the gesture region extraction part of traditional gesture recognition methods, the skin color region is often treated directly as the gesture region; even when gesture region judgment is added, it usually amounts to taking the largest connected skin color region as the gesture region. Such weak-feature judgment has difficulty distinguishing the face region from the hand region in practical applications, ultimately causing gesture recognition to fail.
Third, in the gesture feature extraction part of traditional gesture recognition methods, the convex hull area ratio is generally adopted as the key feature of the gesture type; the number of fingertip points, the gesture centroid, and so on are also often used as gesture type features. These features can provide discriminative information for gesture classification to some extent, but they show obvious limitations when the number of gesture types grows, and the misclassification rate rises noticeably.
Fourth, in the gesture feature classification part, traditional gesture recognition methods generally focus on static gesture recognition in a single image, ignoring the continuity of the video data and failing to make full use of it to further improve the reliability of recognition.
Summary of the invention
To solve these problems in the prior art, the invention provides a man-machine interaction method based on static gestures.
The invention provides a man-machine interaction method based on static gestures, comprising a gesture recognition method, wherein the gesture recognition method comprises:
A real-time skin color model building step: extracting skin color blocks from the image;
A gesture geometric model building step: extracting gesture features from the skin color image and defining static gesture instructions;
A tracking model building step: tracking the gesture image;
A recognition step: recognizing the gesture instruction.
The real-time skin color model step comprises:
An initial skin color acquisition step: obtaining initial skin color data blocks using strict skin color thresholding and dynamic frame differencing (a sketch follows this list);
A skin color model calculation step: calculating a luminance-indexed multi-Gaussian skin color model from an existing skin color database, and updating the model parameters in real time from the currently acquired skin color image during gesture recognition;
A skin color judgment step: judging each pixel against the calculated skin color model, classifying it as skin color when its probability exceeds a set threshold and as non-skin color otherwise.
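The initial skin color acquisition step can be illustrated with a short sketch. The following Python fragment is a minimal illustration rather than the patented implementation: it assumes OpenCV and NumPy, uses conventional strict YCbCr skin bounds (the patent does not publish its thresholds), and restricts the seed pixels to moving regions with a simple absolute frame difference.

```python
import cv2
import numpy as np

# Conventional strict YCbCr skin bounds; the patent does not specify its values.
CR_RANGE = (135, 170)
CB_RANGE = (85, 130)

def initial_skin_blocks(prev_bgr, curr_bgr, motion_thresh=20):
    """Seed skin pixels: strict YCbCr threshold AND-ed with a frame difference."""
    ycrcb = cv2.cvtColor(curr_bgr, cv2.COLOR_BGR2YCrCb)
    _, cr, cb = cv2.split(ycrcb)
    skin = ((cr >= CR_RANGE[0]) & (cr <= CR_RANGE[1]) &
            (cb >= CB_RANGE[0]) & (cb <= CB_RANGE[1]))

    # Dynamic frame difference: keep only skin-colored pixels that moved.
    gray_prev = cv2.cvtColor(prev_bgr, cv2.COLOR_BGR2GRAY)
    gray_curr = cv2.cvtColor(curr_bgr, cv2.COLOR_BGR2GRAY)
    moving = cv2.absdiff(gray_curr, gray_prev) > motion_thresh

    return (skin & moving).astype(np.uint8) * 255
```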
As a further improvement of the present invention, the gesture geometric model step comprises:
A gesture model construction step: geometrically reconstructing the hand shape of every skin color region using line segments and circles;
A hand-shape skin color region judgment step: on the basis of the constructed gesture geometric model, judging whether the model conforms to the actual characteristics of a hand; if it is reasonable, the region is judged as gesture skin color, otherwise as non-gesture skin color;
A static gesture instruction entry step: supporting user-defined gesture instructions.
As a further improvement of the present invention, the tracking model step can track one or two of the user's hands. The tracking model step obtains its input data from the real-time skin color model step and its initial tracking window from the gesture geometric model step, and finally tracks the specific gesture skin color block. During tracking, inter-frame information is used to provide gesture skin color position information to the gesture geometric model step and the recognition step.
As a further improvement of the present invention, in the recognition step the specific gesture instruction is recognized from the geometric features obtained by the gesture geometric model and the tracking information obtained by the tracking model.
As a further improvement of the present invention, the man-machine interaction method comprises:
A video acquisition step: collecting the user's gesture data and transmitting it to the core processing step;
A core processing step: analyzing the gesture instruction in the video using the gesture recognition method, then dispatching the gesture instruction to the instruction execution step;
An instruction execution step: executing the instruction routine corresponding to the gesture instruction.
The present invention also provides a man-machine interaction system based on static gestures, comprising a gesture recognition unit, wherein the gesture recognition unit comprises:
A real-time skin color model module: for extracting skin color blocks from the image;
A gesture geometric model module: for extracting gesture features from the skin color image and defining static gesture instructions;
A tracking model module: for tracking the gesture image;
A recognition module: for recognizing the gesture instruction.
The real-time skin color model module comprises:
An initial skin color acquisition module: for obtaining initial skin color data blocks using strict skin color thresholding and dynamic frame differencing;
A skin color model calculation module: for calculating a luminance-indexed multi-Gaussian skin color model from an existing skin color database, and updating the model parameters in real time from the currently acquired skin color image during gesture recognition;
A skin color judgment module: for judging each pixel against the calculated skin color model, classifying it as skin color when its probability exceeds a set threshold and as non-skin color otherwise.
As a further improvement of the present invention, the gesture geometric model module comprises:
A gesture model construction module: for geometrically reconstructing the hand shape of every skin color region using line segments and circles;
A hand-shape skin color region judgment module: for judging, on the basis of the constructed gesture geometric model, whether the model conforms to the actual characteristics of a hand; if it is reasonable, the region is judged as gesture skin color, otherwise as non-gesture skin color;
A static gesture instruction entry module: for supporting user-defined gesture instructions.
As a further improvement of the present invention, the tracking model module can track one or two of the user's hands. The tracking model module obtains its input data from the real-time skin color model module and its initial tracking window from the gesture geometric model module, and finally tracks the specific gesture skin color block. During tracking, inter-frame information is used to provide gesture skin color position information to the gesture geometric model module and the recognition module.
As a further improvement of the present invention, in the recognition module the specific gesture instruction is recognized from the geometric features obtained by the gesture geometric model module and the tracking information obtained by the tracking model module.
As a further improvement of the present invention, the man-machine interaction system comprises:
A video acquisition unit: for collecting the user's gesture data and transmitting it to the core processing unit;
A core processing unit: for analyzing the gesture instruction in the video through the gesture recognition unit, then dispatching the gesture instruction to the instruction execution unit;
An instruction execution unit: for executing the instruction routine corresponding to the gesture instruction.
The beneficial effects of the invention are as follows: the present invention allows people to interact with a machine and issue instructions to it by making gestures. In terms of interaction mode, the invention provides a novel, concise, and user-friendly mode of man-machine interaction. In terms of system implementation, the gesture recognition unit used in the system effectively overcomes the problems of traditional gesture recognition such as poor stability, a small and non-extensible instruction set, and heavy dependence on a PC platform.
Brief description of the drawings
Fig. 1 is a schematic diagram of the static gesture instructions of the present invention;
Fig. 2 is a principle block diagram of the man-machine interaction system of the present invention;
Fig. 3 is a principle block diagram of one embodiment of the man-machine interaction system of the present invention.
Embodiment
The invention discloses a man-machine interaction method based on static gestures, comprising a gesture recognition method, wherein the gesture recognition method comprises:
A real-time skin color model building step: extracting skin color blocks from the image;
A gesture geometric model building step: extracting gesture features from the skin color image and defining static gesture instructions;
A tracking model building step: tracking the gesture image;
A recognition step: recognizing the gesture instruction.
The real-time skin color model step comprises:
An initial skin color acquisition step: obtaining initial skin color data blocks using strict skin color thresholding and dynamic frame differencing;
A skin color model calculation step: calculating a luminance-indexed multi-Gaussian skin color model from an existing skin color database, and updating the model parameters in real time from the currently acquired skin color image during gesture recognition;
A skin color judgment step: judging each pixel against the calculated skin color model, classifying it as skin color when its probability exceeds a set threshold and as non-skin color otherwise.
As an embodiment of the present invention, the real-time skin color model step first requires building a real-time skin color database, which provides the initial data for the multi-Gaussian probability model. The real-time skin color database is built on the basis of the frame difference method and a strict skin color data model. Once the real-time skin color database is complete, the luminance-indexed multi-Gaussian probability skin color model can be established according to the following principle:
Calculate the luminance index:

$$Y = 0.299 \times r + 0.587 \times g + 0.114 \times b$$

Calculate the feature vector:

$$I_1 = (r + g + b)/3$$
$$I_2 = r - b$$
$$I_3 = (2 \times g - r - b)/2$$
$$I_4 = 0.492 \times (b - Y)$$
$$I_5 = 0.877 \times (r - Y)$$

Gaussian probability computation model:

$$p(x) = \sum_{k=1}^{n} w_k \cdot p_k(x_k) = \sum_{k=1}^{n} w_k \cdot \frac{1}{\sqrt{2\pi\sigma_k^2}} \exp\left\{ -\frac{(x_k - \mu_k)^2}{2\sigma_k^2} \right\}$$
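As a rough illustration of the formulas above, the following Python sketch evaluates a luminance-indexed mixture of one-dimensional Gaussians over the five features. The weights, means, and variances shown are placeholders that would in practice be fitted from the real-time skin color database; the patent does not publish its fitted parameters or decision threshold.

```python
import numpy as np

def skin_features(r, g, b):
    """Luminance index Y and feature vector I1..I5 from the description above."""
    y = 0.299 * r + 0.587 * g + 0.114 * b
    return y, np.array([
        (r + g + b) / 3.0,
        r - b,
        (2 * g - r - b) / 2.0,
        0.492 * (b - y),
        0.877 * (r - y),
    ])

def skin_probability(features, weights, means, variances):
    """p(x) = sum_k w_k * N(x_k; mu_k, sigma_k^2), one Gaussian per feature."""
    gauss = np.exp(-(features - means) ** 2 / (2 * variances))
    gauss /= np.sqrt(2 * np.pi * variances)
    return float(np.sum(weights * gauss))

# Placeholder parameters: in the patent these are computed from the
# real-time skin color database and updated frame by frame.
w = np.full(5, 0.2)
mu = np.array([140.0, 60.0, -15.0, -20.0, 35.0])
var = np.array([900.0, 400.0, 200.0, 150.0, 300.0])

_, x = skin_features(180, 120, 90)
is_skin = skin_probability(x, w, mu, var) > 1e-4  # threshold is application-tuned
```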
The gesture geometric model step comprises:
A gesture model construction step: geometrically reconstructing the hand shape of every skin color region using line segments and circles;
A hand-shape skin color region judgment step: on the basis of the constructed gesture geometric model, judging whether the model conforms to the actual characteristics of a hand (for example, whether the fingers intersect the palm, and whether finger length and palm radius stand in a proportional relationship); if it is reasonable, the region is judged as gesture skin color, otherwise as non-gesture skin color;
A static gesture instruction entry step: supporting user-defined gesture instructions.
As an embodiment of the present invention, the gesture geometric model step first requires locating the fingertip points by computing the curvature along the hand contour. A fingertip point is a boundary point $p_i$ satisfying

$$\overrightarrow{p_i p_{i-k}} \times \overrightarrow{p_i p_{i+k}} \ge 0$$

where $p_i$ is a point on the continuous contour, $p_{i-k}$ and $p_{i+k}$ are its k-th neighbors along the contour, and $\Omega$ is the span of fingertip curvatures satisfying the curvature threshold.
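A minimal Python sketch of this curvature test follows, assuming the hand contour comes from OpenCV's findContours. The neighbor offset k and the angle threshold are illustrative stand-ins for the curvature span Ω, which the patent does not quantify, and the sign convention of the cross product depends on contour orientation.

```python
import numpy as np

def fingertip_candidates(contour, k=15, angle_thresh_deg=60.0):
    """Mark contour points whose k-neighbor vectors form a sharp convex corner."""
    pts = contour.reshape(-1, 2).astype(np.float64)  # OpenCV contour is (N, 1, 2)
    n = len(pts)
    tips = []
    for i in range(n):
        v1 = pts[(i - k) % n] - pts[i]
        v2 = pts[(i + k) % n] - pts[i]
        # Convexity test: cross product sign (depends on contour orientation).
        cross = v1[0] * v2[1] - v1[1] * v2[0]
        # Sharpness test: angle between the two neighbor vectors.
        cos_a = np.dot(v1, v2) / (np.linalg.norm(v1) * np.linalg.norm(v2) + 1e-9)
        angle = np.degrees(np.arccos(np.clip(cos_a, -1.0, 1.0)))
        if cross >= 0 and angle < angle_thresh_deg:
            tips.append(tuple(pts[i].astype(int)))
    return tips
```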
After the fingertip points have been determined, the palm center position must be determined; the present invention adopts a distance-transform palm center localization method. The specific principle is as follows:

Calculate the distance image:

$$dist(i,j) = \sqrt{(x_1 - x_2)^2 + (y_1 - y_2)^2}$$

Threshold it to obtain the palm mask template:

$$mask_{palm}(i,j) = \begin{cases} 255 & dist(i,j) > \rho \\ 0 & dist(i,j) \le \rho \end{cases}$$

The centroid of the mask template is the required palm center:

$$P_x = \frac{\sum_{j=1}^{n}\sum_{i=1}^{m} mask_{palm}(i,j) \times i}{\sum_{j=1}^{n}\sum_{i=1}^{m} mask_{palm}(i,j)}$$

$$P_y = \frac{\sum_{j=1}^{n}\sum_{i=1}^{m} mask_{palm}(i,j) \times j}{\sum_{j=1}^{n}\sum_{i=1}^{m} mask_{palm}(i,j)}$$
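This localization maps directly onto standard OpenCV primitives. The sketch below assumes hand_mask is the binary hand region produced by the skin color model; the threshold ρ is taken here as a fixed fraction of the maximum distance value, a choice the patent leaves open.

```python
import cv2
import numpy as np

def palm_center(hand_mask, rho_fraction=0.6):
    """Palm center as the centroid of the thresholded distance-transform template."""
    dist = cv2.distanceTransform(hand_mask, cv2.DIST_L2, 5)
    rho = rho_fraction * dist.max()                      # illustrative choice of rho
    palm_template = np.where(dist > rho, 255, 0).astype(np.uint8)

    m = cv2.moments(palm_template, binaryImage=True)
    if m["m00"] == 0:
        return None
    # Centroid (P_x, P_y) of the mask, as in the formulas above.
    return (m["m10"] / m["m00"], m["m01"] / m["m00"])
```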
Then, using the principle of intersection between circles and straight lines through the obtained fingertip points and palm center point, the palm region is fitted with a circle and the fingers with straight line segments, thereby establishing the gesture geometric model.
The tracking model step can track one or two of the user's hands. The tracking model step obtains its input data from the real-time skin color model step and its initial tracking window from the gesture geometric model step, and finally tracks the specific gesture skin color block. During tracking, inter-frame information is used to provide gesture skin color position information to the gesture geometric model step and the recognition step.
As an embodiment of the present invention, the CAMShift algorithm is used in the tracking model step to track the gesture image. CAMShift requires an initial search window; in the present invention the palm region from the gesture geometric model is used directly as the initial search window, after which the search window in each subsequent frame is determined by the following procedure:
Calculate the zeroth-order moment:

$$M_{00} = \sum_x \sum_y I(x,y)$$

Calculate the first-order moments:

$$M_{10} = \sum_x \sum_y x\, I(x,y)$$

$$M_{01} = \sum_x \sum_y y\, I(x,y)$$

Calculate the search window centroid:

$$x_c = \frac{M_{10}}{M_{00}}, \qquad y_c = \frac{M_{01}}{M_{00}}$$

Adjust the search window size:

Width: $s = \sqrt{M_{00}/256}$

Height: $1.2s$
Tracking of the gesture image is finally achieved by continuously iterating this search window.
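For illustration, the window update can be written straight from the moment equations. This sketch iterates over a skin probability image prob with values in 0..255; the width rule s = sqrt(M00/256) and the height 1.2s follow the description above, while the iteration count and window size clamps are illustrative assumptions, not values from the patent.

```python
import numpy as np

def camshift_step(prob, window):
    """One CAMShift iteration: recenter the search window on the centroid of
    the skin-probability mass and rescale it from the zeroth-order moment."""
    x0, y0, w, h = window
    roi = prob[y0:y0 + h, x0:x0 + w].astype(np.float64)

    m00 = roi.sum()                          # zeroth-order moment M00
    if m00 == 0:
        return window                        # lost target: keep the old window
    ys, xs = np.mgrid[0:roi.shape[0], 0:roi.shape[1]]
    xc = (xs * roi).sum() / m00 + x0         # M10 / M00
    yc = (ys * roi).sum() / m00 + y0         # M01 / M00

    s = np.sqrt(m00 / 256.0)                 # new width, per the description
    new_w = int(np.clip(s, 4, prob.shape[1] // 2))
    new_h = int(np.clip(1.2 * s, 4, prob.shape[0] // 2))
    x = int(np.clip(xc - new_w / 2, 0, prob.shape[1] - new_w))
    y = int(np.clip(yc - new_h / 2, 0, prob.shape[0] - new_h))
    return (x, y, new_w, new_h)

def track_window(prob, window, iters=10):
    """Fixed iteration count for brevity; a convergence test would also work."""
    for _ in range(iters):
        window = camshift_step(prob, window)
    return window
```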
In the recognition step, the specific gesture instruction is recognized from the geometric features obtained by the gesture geometric model and the tracking information obtained by the tracking model. Specifically, the gesture geometric model characteristic parameters are obtained from each image frame, and the inter-frame continuity information obtained from tracking is then combined with them to make the specific gesture judgment, i.e., gesture recognition.
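As a sketch of how per-frame geometric features can be fused with inter-frame continuity, the fragment below takes a per-frame label (for example the finger count from the geometric model) and reports an instruction only when a majority of recent frames agree. The window length and vote threshold are illustrative choices, not values specified by the patent.

```python
from collections import Counter, deque

class GestureRecognizer:
    """Majority vote over a sliding window of per-frame gesture labels."""

    def __init__(self, window=15, min_agree=10):
        self.history = deque(maxlen=window)
        self.min_agree = min_agree

    def update(self, frame_label):
        """Add one frame's label; return a stable instruction or None."""
        self.history.append(frame_label)
        label, votes = Counter(self.history).most_common(1)[0]
        return label if votes >= self.min_agree else None
```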
The man-machine interaction method comprises:
A video acquisition step: collecting the user's gesture data and transmitting it to the core processing step;
A core processing step: analyzing the gesture instruction in the video using the gesture recognition method, then dispatching the gesture instruction to the instruction execution step;
An instruction execution step: executing the instruction routine corresponding to the gesture instruction.
As shown in Fig. 1 to Fig. 3, the invention also discloses a man-machine interaction system based on static gestures, comprising a gesture recognition unit, wherein the gesture recognition unit comprises:
A real-time skin color model module: for extracting skin color blocks from the image;
A gesture geometric model module: for extracting gesture features from the skin color image and defining static gesture instructions;
A tracking model module: for tracking the gesture image;
A recognition module: for recognizing the gesture instruction.
The real-time skin color model module comprises:
An initial skin color acquisition module: for obtaining initial skin color data blocks using strict skin color thresholding and dynamic frame differencing;
A skin color model calculation module: for calculating a luminance-indexed multi-Gaussian skin color model from an existing skin color database, and updating the model parameters in real time from the currently acquired skin color image during gesture recognition;
A skin color judgment module: for judging each pixel against the calculated skin color model, classifying it as skin color when its probability exceeds a set threshold and as non-skin color otherwise.
The gesture geometric model module comprises:
A gesture model construction module: for geometrically reconstructing the hand shape of every skin color region using line segments and circles;
A hand-shape skin color region judgment module: for judging, on the basis of the constructed gesture geometric model, whether the model conforms to the actual characteristics of a hand (for example, whether the fingers intersect the palm, and whether finger length and palm radius stand in a proportional relationship); if it is reasonable, the region is judged as gesture skin color, otherwise as non-gesture skin color;
A static gesture instruction entry module: for supporting user-defined gesture instructions.
The tracking model module can track one or two of the user's hands. The tracking model module obtains its input data from the real-time skin color model module and its initial tracking window from the gesture geometric model module, and finally tracks the specific gesture skin color block. During tracking, inter-frame information is used to provide gesture skin color position information to the gesture geometric model module and the recognition module, further optimizing the calculation of the geometric model.
In the recognition module, the specific gesture instruction is recognized from the geometric features obtained by the gesture geometric model module and the tracking information obtained by the tracking model module.
The man-machine interaction system comprises:
A video acquisition unit: for collecting the user's gesture data and transmitting it to the core processing unit;
A core processing unit: for analyzing the gesture instruction in the video through the gesture recognition unit, then dispatching the gesture instruction to the instruction execution unit;
An instruction execution unit: for executing the instruction routine corresponding to the gesture instruction.
The tracking model module maintains a relationship of mutual data correction with the real-time skin color model module and the gesture geometric model module. Recognition obtains the hand-shape characteristic data from the gesture geometric model module and combines it with the inter-frame continuity information obtained by the tracking model to make the specific gesture judgment.
When the user makes a specific gesture instruction within the visual range of the device's video acquisition unit, the video acquisition unit obtains the video data through its video capture module and transfers it to the gesture recognition unit on the core processing unit. The processing flow is as follows: skin color data blocks are first separated from the image data; gesture skin color data is then obtained from the skin color data blocks according to the reliability of the gesture geometric model; meanwhile, the tracking model module also obtains the corresponding data from the skin color segmentation and then tracks the specific gesture according to the gesture skin color data determined by the geometric model. Finally, all of the processed data is aggregated at the gesture command recognizer, which identifies the specific gesture instruction.
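The flow just described can be summarized as a per-frame loop. In the skeleton below, initial_skin_blocks, track_window, and GestureRecognizer refer to the earlier sketches, while segment_skin and fit_gesture_model are hypothetical helper names standing in for the skin probability and geometric model computations; none of this is the patented implementation itself, only an illustration of how the modules hand data to one another.

```python
import cv2

def run_interaction_loop(execute_instruction, camera_index=0):
    """Per-frame flow: skin color model -> geometric model -> tracking -> recognition."""
    cap = cv2.VideoCapture(camera_index)
    recognizer = GestureRecognizer()
    prev, window = None, None
    while True:
        ok, frame = cap.read()
        if not ok:
            break
        if prev is not None:
            seed = initial_skin_blocks(prev, frame)   # real-time skin color model
            mask, prob = segment_skin(frame, seed)    # hypothetical: mask + probability image
            model = fit_gesture_model(mask)           # hypothetical: fingertips, palm, fingers
            if model is not None:
                window = window or model.palm_window  # geometric model seeds the tracker
                window = track_window(prob, window)   # CAMShift-style tracking
                instruction = recognizer.update(model.finger_count)
                if instruction is not None:
                    execute_instruction(instruction)  # dispatch to the execution unit
        prev = frame
    cap.release()
```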
The present invention has a wide range of applications, for example:
Scheme one: a specific gesture instruction corresponds to a robot action instruction, for example gesture instruction 1 corresponds to the robot's "move forward" action. The user makes gesture instruction 1 within the robot's visual range; the robot's video acquisition unit obtains video data containing gesture instruction 1 and passes it to the core processing unit, where it serves as the input to the gesture recognition program, which outputs gesture command 1. According to the previously defined instruction correspondence, the robot receives the "move forward" command and starts moving forward.
Scheme two: a specific instruction corresponds to a PPT presentation instruction on a PC, for example gesture 1 corresponds to switching the PPT to the next page. The user makes gesture instruction 1 within the visual range of the PC's camera; the PC's video acquisition unit obtains video data containing gesture instruction 1 and passes it to the core processing unit, where it serves as the input to the gesture recognition program, which outputs gesture command 1. According to the previously defined instruction correspondence, the PPT application switches the currently displayed page to the next page.
Scheme three: a specific instruction corresponds to an instruction on a smart TV platform, for example gesture 1 corresponds to switching the television channel. The user makes gesture instruction 1 within the visual range of the smart TV's camera; the TV's video acquisition unit obtains video data containing gesture instruction 1 and passes it to the core processing unit, where it serves as the input to the gesture recognition program, which outputs gesture command 1. According to the previously defined instruction correspondence, the television channel is switched.
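In all three schemes the correspondence amounts to a lookup table from recognized gesture commands to platform actions. A minimal illustration with hypothetical callbacks (the actual bindings depend on the target platform):

```python
# Hypothetical callbacks; one table per target platform.
INSTRUCTION_TABLE = {
    1: lambda: print("robot: move forward"),  # scheme one
    # 1: lambda: ppt_next_page(),             # scheme two (PPT control)
    # 1: lambda: tv_switch_channel(),         # scheme three (smart TV)
}

def dispatch(instruction_id):
    """Look up and execute the action bound to a recognized gesture command."""
    action = INSTRUCTION_TABLE.get(instruction_id)
    if action is not None:
        action()
```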
On the one hand, the present invention remedies several defects of existing gesture recognition schemes; on the other hand, it applies the gesture recognition method to concrete practical scenarios, providing a more convenient and effective way for humans to interact with machines and issue instructions. Compared with traditional gesture recognition methods, the invention provides a more robust recognition model and can effectively reduce the difficulty of fixing skin color thresholds under varying illumination. Regarding the instruction set, a user-defined command interface is provided, so that where necessary users can define their own gesture instructions within a reasonable scope. Compared with traditional man-machine interaction modes, the present invention requires no extra control terminal: the user only needs to make the corresponding gesture instruction with a bare hand within the machine's visual range to interact with the machine and issue instructions.
The present invention allows people to interact with a machine and issue instructions to it by making gestures. In terms of interaction mode, the invention provides a novel, concise, and user-friendly mode of man-machine interaction. In terms of system implementation, the gesture recognition unit used in the system effectively overcomes the problems of traditional gesture recognition such as poor stability, a small and non-extensible instruction set, and heavy dependence on a PC platform.
The present invention also has the following beneficial effects:
First, it effectively improves the man-machine interaction experience and provides a user-friendly interaction mode: with the present invention, people can interact with a machine and issue instructions without an extra control terminal such as a remote control.
Second, the real-time skin color database processing scheme adopted in the present invention can effectively overcome the influence of changing illumination intensity. By building the multi-Gaussian probability model, skin color data is judged effectively; applied in complex scenes, the method significantly reduces misjudgment of skin-like color data.
Third, by building the gesture geometric model, gesture skin color regions can be effectively distinguished from non-gesture skin color regions, weakening the interference of the many skin color regions present in the application environment.
Fourth, the introduction of the gesture tracking model effectively enhances the stability of the gesture recognition results.
The above content is a further detailed description of the present invention in combination with specific preferred embodiments, and the specific implementation of the present invention shall not be considered limited to these descriptions. For those of ordinary skill in the art to which the present invention belongs, several simple deductions or substitutions may be made without departing from the concept of the present invention, all of which shall be regarded as falling within the protection scope of the present invention.

Claims (10)

1. A man-machine interaction method based on static gestures, characterized in that it comprises a gesture recognition method, wherein the gesture recognition method comprises:
A real-time skin color model building step: extracting skin color blocks from the image;
A gesture geometric model building step: extracting gesture features from the skin color image and defining static gesture instructions;
A tracking model building step: tracking the gesture image;
A recognition step: recognizing the gesture instruction;
wherein the real-time skin color model step comprises:
An initial skin color acquisition step: obtaining initial skin color data blocks using strict skin color thresholding and dynamic frame differencing;
A skin color model calculation step: calculating a luminance-indexed multi-Gaussian skin color model from an existing skin color database, and updating the model parameters in real time from the currently acquired skin color image during gesture recognition;
A skin color judgment step: judging each pixel against the calculated skin color model, classifying it as skin color when its probability exceeds a set threshold and as non-skin color otherwise.
2. The man-machine interaction method according to claim 1, characterized in that the gesture geometric model step comprises:
A gesture model construction step: geometrically reconstructing the hand shape of every skin color region using line segments and circles;
A hand-shape skin color region judgment step: on the basis of the constructed gesture geometric model, judging whether the model conforms to the actual characteristics of a hand; if it is reasonable, the region is judged as gesture skin color, otherwise as non-gesture skin color;
A static gesture instruction entry step: supporting user-defined gesture instructions.
3. The man-machine interaction method according to claim 1, characterized in that the tracking model step can track one or two of the user's hands; the tracking model step obtains its input data from the real-time skin color model step and its initial tracking window from the gesture geometric model step, and finally tracks the specific gesture skin color block; during tracking, inter-frame information is used to provide gesture skin color position information to the gesture geometric model step and the recognition step.
4. The man-machine interaction method according to claim 1, characterized in that in the recognition step the specific gesture instruction is recognized from the geometric features obtained by the gesture geometric model and the tracking information obtained by the tracking model.
5. The man-machine interaction method according to any one of claims 1 to 4, characterized in that the man-machine interaction method comprises:
A video acquisition step: collecting the user's gesture data and transmitting it to the core processing step;
A core processing step: analyzing the gesture instruction in the video using the gesture recognition method, then dispatching the gesture instruction to the instruction execution step;
An instruction execution step: executing the instruction routine corresponding to the gesture instruction.
6. A man-machine interaction system based on static gestures, characterized in that it comprises a gesture recognition unit, wherein the gesture recognition unit comprises:
A real-time skin color model module: for extracting skin color blocks from the image;
A gesture geometric model module: for extracting gesture features from the skin color image and defining static gesture instructions;
A tracking model module: for tracking the gesture image;
A recognition module: for recognizing the gesture instruction;
wherein the real-time skin color model module comprises:
An initial skin color acquisition module: for obtaining initial skin color data blocks using strict skin color thresholding and dynamic frame differencing;
A skin color model calculation module: for calculating a luminance-indexed multi-Gaussian skin color model from an existing skin color database, and updating the model parameters in real time from the currently acquired skin color image during gesture recognition;
A skin color judgment module: for judging each pixel against the calculated skin color model, classifying it as skin color when its probability exceeds a set threshold and as non-skin color otherwise.
7. The man-machine interaction system according to claim 6, characterized in that the gesture geometric model module comprises:
A gesture model construction module: for geometrically reconstructing the hand shape of every skin color region using line segments and circles;
A hand-shape skin color region judgment module: for judging, on the basis of the constructed gesture geometric model, whether the model conforms to the actual characteristics of a hand; if it is reasonable, the region is judged as gesture skin color, otherwise as non-gesture skin color;
A static gesture instruction entry module: for supporting user-defined gesture instructions.
8. The man-machine interaction system according to claim 6, characterized in that the tracking model module can track one or two of the user's hands; the tracking model module obtains its input data from the real-time skin color model module and its initial tracking window from the gesture geometric model module, and finally tracks the specific gesture skin color block; during tracking, inter-frame information is used to provide gesture skin color position information to the gesture geometric model module and the recognition module.
9. The man-machine interaction system according to claim 6, characterized in that in the recognition module the specific gesture instruction is recognized from the geometric features obtained by the gesture geometric model module and the tracking information obtained by the tracking model module.
10. The man-machine interaction system according to any one of claims 6 to 9, characterized in that the man-machine interaction system comprises:
A video acquisition unit: for collecting the user's gesture data and transmitting it to the core processing unit;
A core processing unit: for analyzing the gesture instruction in the video through the gesture recognition unit, then dispatching the gesture instruction to the instruction execution unit;
An instruction execution unit: for executing the instruction routine corresponding to the gesture instruction.
CN201410371319.5A 2014-07-30 2014-07-30 A kind of man-machine interaction method and system based on static gesture Active CN104123008B (en)

Priority Applications (1)

Application Number Priority Date Filing Date Title
CN201410371319.5A CN104123008B (en) 2014-07-30 2014-07-30 A kind of man-machine interaction method and system based on static gesture

Applications Claiming Priority (1)

Application Number Priority Date Filing Date Title
CN201410371319.5A CN104123008B (en) 2014-07-30 2014-07-30 A kind of man-machine interaction method and system based on static gesture

Publications (2)

Publication Number Publication Date
CN104123008A true CN104123008A (en) 2014-10-29
CN104123008B CN104123008B (en) 2017-11-03

Family

ID=51768445

Family Applications (1)

Application Number Title Priority Date Filing Date
CN201410371319.5A Active CN104123008B (en) 2014-07-30 2014-07-30 A kind of man-machine interaction method and system based on static gesture

Country Status (1)

Country Link
CN (1) CN104123008B (en)

Patent Citations (5)

* Cited by examiner, † Cited by third party
Publication number Priority date Publication date Assignee Title
WO2008038096A1 (en) * 2006-09-28 2008-04-03 Nokia Corporation Improved user interface
US20110304541A1 (en) * 2010-06-11 2011-12-15 Navneet Dalal Method and system for detecting gestures
CN102520790A (en) * 2011-11-23 2012-06-27 中兴通讯股份有限公司 Character input method based on image sensing module, device and terminal
US20140106735A1 (en) * 2012-10-12 2014-04-17 Crestron Electronics, Inc. User Identification and Location Determination in Control Applications
CN103745193A (en) * 2013-12-17 2014-04-23 小米科技有限责任公司 Skin color detection method and skin color detection device

Non-Patent Citations (2)

* Cited by examiner, † Cited by third party
Title
王辉 (Wang Hui): "Vision-based Real-time Gesture Tracking and Recognition and Its Application in Human-Computer Interaction", China Master's Theses Full-text Database *
许杏 (Xu Xing): "Research on Gesture Recognition Based on Hidden Markov Models", China Master's Theses Full-text Database *

Cited By (5)

* Cited by examiner, † Cited by third party
Publication number Priority date Publication date Assignee Title
CN106020433A (en) * 2015-12-09 2016-10-12 展视网(北京)科技有限公司 3D vehicle terminal man-machine interactive system and interaction method
CN108900820A (en) * 2018-05-14 2018-11-27 河南大学 A kind of control method and device of projector
CN109584507A (en) * 2018-11-12 2019-04-05 深圳佑驾创新科技有限公司 Driver behavior modeling method, apparatus, system, the vehicles and storage medium
CN111967404A (en) * 2020-08-20 2020-11-20 苏州凝眸物联科技有限公司 Automatic snapshot method for specific scene
CN111967404B (en) * 2020-08-20 2024-05-31 苏州凝眸物联科技有限公司 Automatic snapshot method for specific scene

Also Published As

Publication number Publication date
CN104123008B (en) 2017-11-03


Legal Events

Date Code Title Description
C06 Publication
PB01 Publication
C10 Entry into substantive examination
SE01 Entry into force of request for substantive examination
GR01 Patent grant
GR01 Patent grant