CN103268153B - Computer-vision-based human-computer interaction system and interaction method for presentation environments - Google Patents

Computer-vision-based human-computer interaction system and interaction method for presentation environments

Info

Publication number
CN103268153B
CN103268153B (application number CN201310212362.2A)
Authority
CN
China
Prior art keywords
gesture
human body
computer
effect display
presentation screen
Prior art date
Legal status (The legal status is an assumption and is not a legal conclusion. Google has not performed a legal analysis and makes no representation as to the accuracy of the status listed.)
Expired - Fee Related
Application number
CN201310212362.2A
Other languages
Chinese (zh)
Other versions
CN103268153A (en)
Inventor
李梦天
庞明
孟成林
姜承祥
路通
Current Assignee (The listed assignees may be inaccurate. Google has not performed a legal analysis and makes no representation or warranty as to the accuracy of the list.)
Nanjing University
Original Assignee
Nanjing University
Priority date (The priority date is an assumption and is not a legal conclusion. Google has not performed a legal analysis and makes no representation as to the accuracy of the date listed.)
Filing date
Publication date
Application filed by Nanjing University
Priority to CN201310212362.2A
Publication of CN103268153A
Application granted
Publication of CN103268153B
Status: Expired - Fee Related
Anticipated expiration

Landscapes

  • User Interface Of Digital Computer (AREA)

Abstract

The invention discloses a computer-vision-based human-computer interaction system for a presentation environment. The system comprises a presentation screen, a computer, a vision sensor, and a human user; the user and the presentation screen are located within the field of view of the vision sensor; the user stands in front of the presentation screen, with his or her back or side toward it; and the computer contains a gesture recognition module and an effect display module. On the one hand, the system lets the presenter face the audience while presenting; on the other hand, the transition effects between displayed contents, together with other animation effects, can be determined dynamically by the presenter's gestures, which reduces the time spent preparing display effects in advance and improves the presenter's working efficiency. The invention also discloses an interaction method for this system; the method is simple, and the switching of presentation content is controlled by the presenter's gesture movements.

Description

Computer-vision-based human-computer interaction system and interaction method for presentation environments
Technical field
The invention belongs to the field of human-computer interaction, and in particular relates to a computer-vision-based human-computer interaction system and interaction method for presentation environments.
Background art
Presentation behavior refers to displaying presentation content on a screen or by projection while introducing it in the form of a speech. A presentation environment is an environment in which such behavior takes place. At present, with the rapid spread of video capture devices of all kinds, vision-based user behavior analysis and natural interaction technology are becoming increasingly important and can be widely applied in settings such as smart televisions, motion-sensing interaction, and teaching demonstrations. A typical application scenario is the following: presentation systems in common use today generally require control devices such as a keyboard, a mouse, or a wireless remote control, and therefore remain limited in the naturalness and convenience of the interaction. If a more natural interaction mode could be developed for this typical human-computer interaction problem, it would clearly be attractive and of practical value. Human-computer interaction based on computer vision can solve the above problem, but existing interaction schemes of this type generally require the operator to face the screen, with no objects close behind the body. Obviously, such spatial requirements are inappropriate for a presenter in a presentation environment. In addition, to avoid misjudged gestures, the operator's body, and especially the hands, are heavily constrained.
Summary of the invention
Technical problem: The technical problem to be solved by the present invention is to provide a computer-vision-based human-computer interaction system for presentation environments. The system recognizes the presenter's typical hand gestures by visual means and applies them to driving the presentation, thereby supporting natural interaction between the presenter and the presentation equipment from the standpoint of visual recognition. On the one hand, it allows the presenter to face the audience while presenting; on the other hand, the transition effects between displayed contents, together with other animation effects, can be determined dynamically by the presenter's gestures, reducing the time spent preparing display effects in advance and improving the presenter's working efficiency. The invention also provides an interaction method for this system; the method is simple, and the switching of presentation content is controlled by the presenter's gesture movements.
Technical scheme: To solve the above technical problem, the present invention adopts the following technical scheme.
A computer-vision-based human-computer interaction system for a presentation environment comprises a presentation screen, a computer, a vision sensor, and a human user. The user and the presentation screen are located within the field of view of the vision sensor; the user stands in front of the presentation screen, with his or her back or side toward the screen; and the computer contains a gesture recognition module and an effect display module.
The gesture recognition module controls the connection and disconnection of the computer and the vision sensor, obtains data from the vision sensor, analyzes the received data, produces the corresponding gesture control instruction, and passes the gesture control instruction to the effect display module.
The effect display module reads in or draws the content the user presents, establishes and draws the display interface, provides the set of gesture instructions from which the user selects, receives the gesture control instructions sent by the gesture recognition module, and displays the presentation content corresponding to each gesture control instruction.
In operation, the effect display module in the computer draws the presentation content onto the presentation screen for the audience to watch; the vision sensor captures the information within its field of view as visual information and passes it to the gesture recognition module of the computer; the gesture recognition module receives and analyzes this visual information and generates the corresponding gesture control instruction, which it passes to the effect display module; and the effect display module switches the presentation content on the presentation screen according to the gesture control instruction received.
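The division of labor between the two modules can be summarized in a short sketch. Everything here is illustrative: the class names, the read_frame call, and the string instructions are assumptions for the sketch, not part of the patent or of any particular sensor SDK.

```python
from typing import Callable, Dict, Optional

class GestureRecognizer:
    """Analyzes vision sensor data and emits gesture control
    instructions (the analysis itself is sketched in later steps)."""
    def analyze(self, frame) -> Optional[str]:
        ...  # match the frame against the stored standard motion patterns

class EffectDisplay:
    """Holds the instruction-effect lookup table and draws the
    presentation content accordingly."""
    def __init__(self, table: Dict[str, Callable[[], None]]):
        self.table = table

    def apply(self, instruction: str) -> None:
        effect = self.table.get(instruction)
        if effect is not None:
            effect()   # e.g. switch slides, play an animation

def run(sensor, recognizer: GestureRecognizer, display: EffectDisplay):
    """The sensor -> recognition -> display loop described above."""
    while True:
        frame = sensor.read_frame()            # visual information
        instruction = recognizer.analyze(frame)
        if instruction is not None:            # a valid gesture was found
            display.apply(instruction)
        # no valid gesture: the display module performs no operation
```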
Further, the effect display module also receives control instructions entered through auxiliary computer input devices and displays the presentation content corresponding to those instructions.
An interaction method based on the above computer-vision-based human-computer interaction system comprises the following steps:
Step 1: Define the standard motion patterns of the gesture control actions, and store each standard motion pattern together with its recognition criterion in the gesture recognition module. Assign a gesture instruction to the standard motion pattern of each gesture control action, compile each gesture instruction and its corresponding presentation effect into an instruction-effect lookup table, and store this table in the effect display module.
Step 2: Install the system: mount the vision sensor so that the user and the presentation screen are within its field of view; connect the signal output of the vision sensor to the signal input of the computer's gesture recognition module; connect the signal output of the gesture recognition module to the signal input of the effect display module; and connect the signal output of the effect display module to the signal input of the presentation screen. The presentation screen displays the presentation content and its transition animations.
Step 3: Human-computer interaction: the user makes a gesture in front of the presentation screen; the vision sensor captures the information within its field of view and passes it to the gesture recognition module; the gesture recognition module continuously analyzes the information transmitted by the vision sensor and judges whether the gesture it contains belongs to the standard motion patterns of the gesture control actions defined in Step 1. If so, the information is converted into a gesture instruction and passed to the effect display module, which receives it and, according to the instruction-effect lookup table, makes the corresponding change to the content shown on the presentation screen. If not, the gesture recognition module records the result or discards it, and either does not communicate with the effect display module or notifies it that there is currently no valid gesture, in which case the effect display module performs no operation on the presentation content.
Beneficial effects: Compared with the prior art, the present invention has the following advantages.
(1) The transition effects between presented contents, together with other animation effects, are determined dynamically by the presenter's gestures, reducing the time spent preparing display effects in advance and improving the presenter's working efficiency. A prior-art lecture environment generally consists of a presenter, a screen, and a computer; the presenter must switch the presentation content at the computer during the talk, or control it through an extra device such as a remote control. Moreover, the switched content and the transition style are fixed in advance and cannot change dynamically with the needs of the talk. The human-computer interaction system of the present invention comprises a presentation screen, a computer, a vision sensor, and a human user, namely the presenter. The user stands in front of the presentation screen with his or her back or side toward it. The vision sensor captures the information within its field of view as visual information and passes it to the gesture recognition module, which receives and analyzes it and generates the corresponding gesture control instruction; the effect display module then switches the presentation content on the screen according to the instruction received. In this way, the user switches the content on the presentation screen through his or her own gesture control actions while speaking, without having to stop the talk. The interaction system of the invention therefore improves the presenter's working efficiency.
(2) The invention helps preserve the continuity of the talk. In the interaction system of the present invention, the user need not stop speaking to switch the presentation content; a gesture control action alone switches the content on the presentation screen, so the switch happens while the talk continues. This helps preserve the continuity of the talk. In the system of the invention, the user and the devices have a spatial arrangement suited to a presentation environment, and the control gestures do not constrain the body language that normal speaking requires. Taking the presence of the audience into account, the invention keeps the transition effect of the presentation content consistent with the gesture motion, so that the audience perceives the smoothness of the presenter's content switching, helping the presenter achieve a better stage effect.
(3) The presenter faces the audience while presenting. In prior-art human-computer interaction, the user generally must face the screen to interact. In the system and method of the present invention, the user faces the audience while speaking and can still face the audience when switching presentation content; the interaction does not require the user to face the screen. Existing vision-based human-body interaction systems often cannot make certain gesture judgments because the whole body cannot be fully recognized. The technical scheme of this patent allows the user to stand close to the background: in the system of the invention, the user may even operate by gesture with the palm flattened against the screen, which amounts to zero-distance contact with the background.
Brief description of the drawings
Fig. 1 is a flow block diagram of the interaction method of the present invention.
Fig. 2 is a schematic diagram of the gesture action of Embodiment 1 of the present invention.
Fig. 3 is a schematic diagram of the gesture action of Embodiment 2 of the present invention.
Fig. 4 is a schematic diagram of the gesture action of Embodiment 3 of the present invention.
Detailed description of the invention
The technical scheme of the present invention is described in detail below in conjunction with the embodiments.
The computer-vision-based human-computer interaction system for a presentation environment of the present invention comprises a presentation screen, a computer, a vision sensor, and a human user. The vision sensor is preferably one that provides depth data, such as Microsoft's Kinect for Windows. The field of view of the vision sensor is a conical region with the sensor at its apex, and this field of view covers all or part of the presentation screen. The user and the presentation screen are located within the field of view of the vision sensor; the user stands in front of the presentation screen, with his or her back or side toward the screen; and the computer contains a gesture recognition module and an effect display module. The gesture recognition module controls the connection and disconnection of the computer and the vision sensor, obtains data from the vision sensor, analyzes the received data, produces the corresponding gesture control instruction, and passes it to the effect display module. The effect display module reads in or draws the content the user presents, establishes and draws the display interface, provides the set of gesture instructions from which the user selects, receives the gesture control instructions sent by the gesture recognition module, and displays the presentation content corresponding to each instruction. The effect display module draws and displays the presentation content corresponding to a gesture control instruction, and can also drive other presentation software, such as Microsoft PowerPoint, according to the control instruction in order to switch the presentation content.
The system works as follows: the effect display module in the computer draws the presentation content onto the presentation screen for the audience to watch; the vision sensor captures the information within its field of view as visual information and passes it to the gesture recognition module of the computer; the gesture recognition module receives and analyzes this visual information and generates the corresponding gesture control instruction, which it passes to the effect display module; and the effect display module switches the presentation content on the presentation screen according to the instruction received.
The human-computer interaction system described above may also include a projector, in which case the presentation screen is a projection screen, and the projection screen and projector together transmit and display the presentation content. The signal output of the effect display module of the computer is connected to the signal input of the projector, and the output of the projector faces the projection screen.
Alternatively, the presentation screen may simply be a display screen whose signal input is connected to the signal output of the effect display module of the computer. The display type here may include LED, CRT, or plasma. The content shown on the presentation screen is multimedia content, including any one of, or any combination of, text, pictures, video, and music.
Further, the effect display module also receives control instructions entered through auxiliary computer input devices and displays the presentation content corresponding to those instructions. The auxiliary input devices include any one of, or any combination of, a keyboard, a mouse, a TrackPoint, a touchpad, and a remote control. Thus, in the interaction system of the invention, presentation content can be switched not only by the user's gestures but also by control instructions entered through auxiliary input devices, which broadens the available control modes.
As shown in Fig. 1, the interaction method based on the above computer-vision-based human-computer interaction system comprises the following steps:
Step 1: Define the standard motion patterns of the gesture control actions, and store each standard motion pattern together with its recognition criterion in the gesture recognition module. Assign a gesture instruction to the standard motion pattern of each gesture control action, compile each gesture instruction and its corresponding presentation effect into an instruction-effect lookup table, and store this table in the effect display module.
Step 2: Install the system: mount the vision sensor so that the user and the presentation screen are within its field of view; connect the signal output of the vision sensor to the signal input of the computer's gesture recognition module; connect the signal output of the gesture recognition module to the signal input of the effect display module; and connect the signal output of the effect display module to the signal input of the presentation screen. The presentation screen displays the presentation content and its transition animations.
Step 3: Human-computer interaction: the user makes a gesture in front of the presentation screen; the vision sensor captures the information within its field of view and passes it to the gesture recognition module; the gesture recognition module continuously analyzes the information transmitted by the vision sensor and judges whether the gesture it contains belongs to the standard motion patterns of the gesture control actions defined in Step 1. If so, the information is converted into a gesture instruction and passed to the effect display module, which receives it and, according to the instruction-effect lookup table, makes the corresponding change to the content shown on the presentation screen. If not, the gesture recognition module records the result or discards it, and either does not communicate with the effect display module or notifies it that there is currently no valid gesture, in which case the effect display module performs no operation on the presentation content.
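As an illustration of Step 1's instruction-effect lookup table and its use in Step 3, consider the following sketch. The instruction names and effect callbacks are placeholders; the up/left = next, down/right = previous mapping is the one used in Embodiment 2 below.

```python
# Placeholder effects; a real system would drive the presentation
# software here.
def next_content():  print("switch to the next presentation content")
def prev_content():  print("switch to the previous presentation content")
def play_audio():    print("play the predefined audio file")
def stop_audio():    print("stop audio playback")

# Step 1: gesture instruction -> presentation effect, stored in the
# effect display module.
INSTRUCTION_EFFECT_TABLE = {
    "move_up":    next_content,
    "move_left":  next_content,
    "move_down":  prev_content,
    "move_right": prev_content,
    "music_open": play_audio,
    "music_stop": stop_audio,
}

def on_gesture(instruction):
    """Step 3: apply the table entry for a recognized gesture; do
    nothing when no valid gesture was detected."""
    effect = INSTRUCTION_EFFECT_TABLE.get(instruction)
    if effect is not None:
        effect()

on_gesture("move_up")   # -> switch to the next presentation content
on_gesture(None)        # no valid gesture: no operation
```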
Further, in Step 1 described above, a variety of motion patterns may be defined for the gesture control actions. In this patent, however, four motion patterns are preferred: moving left, moving right, moving up, and moving down. When these four motion patterns are defined, a gesture judgment region is also established in front of the presentation screen, located within the field of view of the vision sensor. Then, in the human-computer interaction of Step 3, the process by which the gesture recognition module analyzes the information transmitted by the vision sensor comprises the following steps:
Step 101) Compute the position of the user's hand according to a computer vision algorithm.
Step 102) Determine whether the hand is in the gesture judgment region: define the region within n meters of the front of the presentation screen as the gesture judgment region, and compute the distance h, in meters, from the user's hand to the presentation screen. If h ≤ n, the hand is inside the gesture judgment region; proceed to Step 103). If h > n, the hand is outside the gesture judgment region, and the hand action does not belong to the standard motion patterns of the gesture control actions defined in Step 1. n is preferably 0.05-0.5.
Step 103) Let D1 be the threshold for moving the hand up, D2 the threshold for moving it down, D3 the threshold for moving it left, and D4 the threshold for moving it right. Then judge whether, along the hand's motion track from origin to destination, the distance moved in each of the four directions up, down, left, and right reaches the threshold of that direction. If a threshold is reached, the hand action belongs to the motion patterns of the gesture control actions defined in Step 1; if no threshold is reached, it does not. If the thresholds of two or more directions are all reached, the direction whose threshold was reached first is taken as the standard motion pattern of the gesture control action defined in Step 1. Preferably, D1 = D2 = 0.15H and D3 = D4 = 0.12H, where H is the user's height.
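Steps 102) and 103) translate directly into code. The sketch below is a minimal illustration, assuming hand positions are available in meters; n = 0.3 and H = 1.70 are example choices within the preferred ranges, and the track format is hypothetical.

```python
N = 0.3    # gesture judgment region depth n in meters (preferred 0.05-0.5)
H = 1.70   # user height H in meters (example value)

# Step 103 thresholds: D1 = D2 = 0.15*H (up/down), D3 = D4 = 0.12*H.
THRESHOLDS = {
    "move_up":    0.15 * H,
    "move_down":  0.15 * H,
    "move_left":  0.12 * H,
    "move_right": 0.12 * H,
}

def in_judgment_region(h):
    """Step 102: the hand is within n meters of the screen front."""
    return h <= N

def classify(track):
    """Step 103: walk the hand track from origin to destination and
    return the direction whose threshold is reached first, or None."""
    x0, y0 = track[0]
    for x, y in track[1:]:
        moved = {
            "move_up":    y - y0,
            "move_down":  y0 - y,
            "move_left":  x0 - x,
            "move_right": x - x0,
        }
        for direction, dist in moved.items():
            if dist >= THRESHOLDS[direction]:
                return direction    # first threshold reached wins
    return None

# Example: a 30 cm upward sweep made inside the judgment region.
if in_judgment_region(0.08):
    print(classify([(0.00, 0.00), (0.02, 0.15), (0.03, 0.30)]))  # move_up
```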
This patent computes the position of the user's hand with a computer vision algorithm. Such algorithms belong to the prior art, for example the one disclosed in the paper "Real-time human pose recognition in parts from single depth images" (CVPR '11: Proceedings of the 2011 IEEE Conference on Computer Vision and Pattern Recognition, pages 1297-1304). One method of computing the hand position in Step 101) is described below:
(1011) Using the vision sensor data as input, apply a computer vision algorithm to obtain the position of each body joint, including the hands, and take the three-dimensional hand position for subsequent analysis and computation. The algorithm here may be an existing computer vision algorithm, such as one provided in the Kinect SDK for Windows.
(1012) If, for the current data, the algorithm in (1011) cannot compute the hand position, or if the result has low confidence, discard the result of (1011) and proceed to (1013); otherwise proceed to Step 102) of this patent and determine whether the hand position is in the gesture judgment region.
(1013) Apply a computer vision algorithm to separate the body from the background in the vision sensor data. The algorithm here may be an existing computer vision algorithm, such as one provided in the Kinect SDK for Windows. From the body region obtained in (1012), take the two horizontally outermost edge points on the left and right after noise removal as candidate hand positions (the left one preferably), and take the spatial position of that point as an approximation of the hand's spatial position; then proceed to Step 102) of this patent and determine whether the hand position is in the gesture judgment region.
The above is only one concrete algorithm; those skilled in the art may adopt other existing algorithms, as long as the position of the hand can be computed.
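For illustration, the selection logic of (1011)-(1013) might look as follows. The skeleton object with hand_confidence and hand_position attributes is a hypothetical stand-in for whatever skeleton-tracking API is used, not an actual SDK interface, and the confidence threshold is an assumed value.

```python
def silhouette_hand(body_pixels, prefer_left=True):
    """(1013): after background separation and noise removal, take the
    horizontally outermost point of the body as the hand position
    (left side preferred, as in the text). body_pixels is a list of
    (x, y, z) points."""
    if not body_pixels:
        return None
    pick = min if prefer_left else max
    return pick(body_pixels, key=lambda p: p[0])

def hand_position(skeleton, body_pixels, min_confidence=0.5):
    """(1011)-(1013): prefer the skeleton's hand joint; fall back to
    the silhouette extreme when tracking fails or confidence is low."""
    if skeleton is not None and skeleton.hand_confidence >= min_confidence:
        return skeleton.hand_position          # (1011), accepted in (1012)
    return silhouette_hand(body_pixels)        # (1013) fallback
```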
In prior-art human-computer interaction devices, the user generally must face the screen to interact. In the present invention, the user's back or side faces the screen, so during a talk the presenter need not face the screen when controlling the switching of the presentation content by gesture. This makes it convenient for the presenter to control the switching of the displayed content by gesture while speaking. The human-computer interaction scenario of the present invention is a presentation environment: the presenter gives a show to a group of spectators, using plenty of body language to help illustrate the talk while controlling the switching of the content on the presentation screen with gesture actions.
Embodiments are enumerated below.
Embodiment 1: Windowing
As shown in Fig. 2, the interaction system and method of the present invention are used to perform a windowing operation. The standard motion pattern of the windowing gesture control action is: the presenter extends both hands along a direction parallel with the presentation screen, keeping a gap between the hands in the vertical direction, with both hands 5-100 cm from the presentation screen; the two hands then open upward and downward respectively.
The gesture recognition module continuously computes on and analyzes the information captured by the vision sensor. A computer vision algorithm yields the positions of the user's hands and torso. A hand is defined to be in the extended state when it differs from the torso by d meters in the horizontal direction; d is preferably 1. When the gesture recognition module recognizes the three stages (the user extends the first hand; then extends the second hand in the same direction as the first; then the two hands open up and down), it sends the gesture instruction for the windowing action to the effect display module. The effect display module receives the information sent by the gesture recognition module and, according to the instruction-effect lookup table, controls the display effect of the presentation screen, namely: as the two hands open up and down, the current presentation content is cut apart horizontally and moved toward the top and bottom of the screen respectively while the next presentation content appears on the presentation screen; the switch finishes when the cut-apart previous content has moved fully to the edges of the screen. (A state-machine sketch of this three-stage recognition follows.)
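The three-stage recognition just described maps naturally onto a small state machine. The sketch below is illustrative: hand and torso positions are assumed to be (x, y) coordinates in meters, d = 1 m is the preferred extension offset from the text, and the 0.4 m opening gap is an assumed value, not one given in the patent.

```python
D_EXTEND = 1.0   # d: horizontal hand-torso offset for "extended" (m)
GAP_OPEN = 0.4   # vertical gap treated as "hands opened"; assumed value

def is_extended(hand, torso):
    """A hand is extended when it is at least d meters from the torso
    horizontally (d = 1 preferred in the text)."""
    return hand is not None and abs(hand[0] - torso[0]) >= D_EXTEND

class WindowGesture:
    """Stages of Embodiment 1: one hand extended, both hands extended,
    then the hands opening up and down."""
    def __init__(self):
        self.stage = 0

    def update(self, left, right, torso):
        """Feed one frame of (x, y) positions; True when complete."""
        out = (is_extended(left, torso), is_extended(right, torso))
        if self.stage == 0 and any(out):
            self.stage = 1                        # first hand extended
        elif self.stage == 1 and all(out):
            self.stage = 2                        # second hand, same side
        elif self.stage == 2 and all(out) \
                and abs(left[1] - right[1]) >= GAP_OPEN:
            self.stage = 0                        # hands opened apart
            return True                           # emit "open window"
        return False
```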
Embodiment 2: Up, down, left, and right push gestures
These are the basic control actions for driving playback in a presentation system (movement in the four directions up, down, left, and right), where up, down, left, and right each correspond to the direction in which the presentation content moves when switched. Up and left mean switching to the next presentation content; right and down mean switching to the previous one.
As shown in Fig. 3, the interaction system and method of the present invention are used to perform the up, down, left, and right push gestures. While performing them, the presenter keeps the body as parallel with the screen as possible and at a distance of more than 20 cm from it. To switch the page, the presenter puts a hand into the gesture judgment region within ten centimeters of the screen and performs an up, down, left, or right push gesture.
The standard motion patterns of the gesture control actions are: move up: the hand enters the gesture judgment region, moves up, and leaves the region after the movement; move down: the hand enters the gesture judgment region, moves down, and leaves the region after the movement; move left: the hand enters the gesture judgment region, moves left, and leaves the region after the movement; move right: the hand enters the gesture judgment region, moves right, and leaves the region after the movement.
The gesture recognition module continuously computes on and analyzes the information captured by the vision sensor. Once it judges that the gesture contained in this information belongs to the predefined motion patterns of the gesture control actions, it passes the corresponding gesture instruction to the effect display module, which receives it and, according to the instruction-effect lookup table, performs the corresponding switching operation on the content of the presentation screen.
The display effect of the presentation screen is: according to the direction in which the hand moves (if the hand moves diagonally, it is assigned by angle to the nearest of the four directions up, down, left, and right; see the sketch after this embodiment), the current presentation content moves toward the screen edge in the direction of the hand while the next presentation content moves in from the opposite side, until the next presentation content completely covers the screen. When the hand pushes up or to the left, the presentation screen switches to the next slide; when the hand pushes down or to the right, it switches to the previous one.
When putting the hand into the gesture judgment region in front of the screen to perform a downward action, the hand should enter the region already at its highest point, rather than entering lower and then being lifted to the highest point. One action must finish, and the page switch complete, before the next action can begin.
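The angle binning mentioned above (a diagonal movement is assigned to the nearest of the four cardinal directions) can be written in a few lines. The sketch below works on the hand's net displacement and is illustrative only.

```python
import math

def bin_direction(dx, dy):
    """Assign a hand displacement to the nearest cardinal direction.
    dx is the rightward displacement, dy the upward displacement."""
    angle = math.degrees(math.atan2(dy, dx))   # -180..180, 0 = right
    if -45 <= angle < 45:
        return "move_right"
    if 45 <= angle < 135:
        return "move_up"
    if -135 <= angle < -45:
        return "move_down"
    return "move_left"                         # |angle| >= 135

print(bin_direction(0.10, 0.30))   # mostly upward -> "move_up"
```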
Embodiment 3: Starting music
As shown in Fig. 4, the interaction system and method of the present invention are used to perform a music-start operation.
The standard motion pattern of the music-start gesture control action is: the presenter stands directly in front of the vision sensor with both hands hanging naturally; both hands, from a position at least 10 cm in front of the body, are then slowly lifted simultaneously until they reach shoulder height.
The gesture recognition module continuously computes on and analyzes the information captured by the vision sensor. A computer vision algorithm yields the positions of the user's hands and torso. A hand is defined to be in the extended state when it differs from the torso by 10 cm in the horizontal direction. When the user is found to pass in sequence through the three stages (both hands hanging naturally; extended forward and lifting simultaneously; both hands at or above shoulder height), the gesture recognition module sends the music-start gesture instruction to the effect display module. On receiving the instruction, the effect display module looks up the music-start effect in the instruction-effect lookup table and performs the music-start operation on the presentation screen. The display effect of the presentation screen is: a predefined audio file is played.
Embodiment 4: Stopping music
The standard motion pattern of the music-stop gesture control action is: the presenter stands directly in front of the vision sensor with both hands at or above shoulder height; both hands, from a position at least 10 cm in front of the body, are then slowly lowered simultaneously until they hang naturally.
The gesture recognition module continuously computes on and analyzes the information captured by the vision sensor. A computer vision algorithm yields the positions of the user's hands and torso. A hand is defined to be in the extended state when it differs from the torso by 10 cm in the horizontal direction. When the gesture recognition module recognizes that the user passes in sequence through the three stages (both hands at or above shoulder height and extended forward; lowering simultaneously; both hands hanging naturally), it sends the music-stop gesture instruction to the effect display module. On receiving the gesture instruction, the effect display module looks up the music-stop effect in the instruction-effect lookup table and performs the music-stop operation on the presentation screen. The display effect of the presentation screen is: if audio is currently playing, playback ends. (A sketch of this three-phase detection, covering both this embodiment and Embodiment 3, follows.)
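The three-phase sequences of Embodiments 3 and 4 lend themselves to a small state machine. The sketch below assumes that an upstream step has already classified each frame's posture into one of the phases "drooping", "extended", or "raised" from the hand, torso, and shoulder positions described above; that classification, and all names here, are illustrative rather than part of the patent.

```python
class PhaseSequenceGesture:
    """Fires when the posture passes through a fixed phase sequence."""
    def __init__(self, phases):
        self.phases = phases
        self.index = 0

    def update(self, phase):
        """Feed the current posture phase; returns True on completion."""
        if phase == self.phases[self.index]:
            self.index += 1
            if self.index == len(self.phases):
                self.index = 0
                return True
        elif phase == self.phases[0]:
            self.index = 1   # restart the sequence from its first phase
        return False

music_open = PhaseSequenceGesture(["drooping", "extended", "raised"])
music_stop = PhaseSequenceGesture(["raised", "extended", "drooping"])

for phase in ["drooping", "extended", "raised"]:
    if music_open.update(phase):
        print("emit music_open gesture instruction")
```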

Claims (10)

1. A computer-vision-based human-computer interaction system for a presentation environment, characterized in that the system comprises a presentation screen, a computer, a vision sensor, and a human user; the user and the presentation screen are located within the field of view of the vision sensor; the user is positioned in front of the presentation screen; the user's back or side faces the presentation screen; and the computer contains a gesture recognition module and an effect display module;
the gesture recognition module controls the connection and disconnection of the computer and the vision sensor, obtains data from the vision sensor, analyzes the received data, produces the corresponding gesture control instruction, and passes the gesture control instruction to the effect display module;
the effect display module reads in or draws the content the user presents, establishes and draws the display interface, provides the set of gesture instructions from which the user selects, receives the gesture control instructions sent by the gesture recognition module, and displays the presentation content corresponding to each gesture control instruction;
the effect display module in the computer draws the presentation content onto the presentation screen for the audience to watch; the vision sensor captures the information within its field of view as visual information and passes it to the gesture recognition module of the computer; the gesture recognition module receives and analyzes this visual information, then generates the gesture control instruction corresponding to it and passes it to the effect display module; and the effect display module switches the presentation content on the presentation screen according to the gesture control instruction received.
2. The computer-vision-based human-computer interaction system for a presentation environment according to claim 1, characterized in that the system further comprises a projector, the presentation screen is a projection screen, the signal output of the effect display module of the computer is connected to the signal input of the projector, and the output of the projector faces the projection screen.
3. The computer-vision-based human-computer interaction system for a presentation environment according to claim 1, characterized in that the presentation screen is a display screen, the signal input of which is connected to the signal output of the effect display module of the computer.
4. The computer-vision-based human-computer interaction system for a presentation environment according to claim 1, characterized in that the effect display module further receives control instructions entered through auxiliary computer input devices and displays the presentation content corresponding to those instructions.
5. The computer-vision-based human-computer interaction system for a presentation environment according to claim 4, characterized in that the auxiliary computer input devices include any one of, or any combination of, a keyboard, a mouse, a TrackPoint, a touchpad, and a remote control.
6. The computer-vision-based human-computer interaction system for a presentation environment according to claim 1, characterized in that the vision sensor is a vision sensor with depth data.
7. The computer-vision-based human-computer interaction system for a presentation environment according to claim 1, characterized in that the presentation content includes any one of, or any combination of, text, pictures, video, and music.
8. An interaction method for the computer-vision-based human-computer interaction system for a presentation environment according to claim 1, characterized in that the interaction method comprises the following steps:
Step 1: Define the standard motion patterns of the gesture control actions, and store each standard motion pattern together with its recognition criterion in the gesture recognition module; assign a gesture instruction to the standard motion pattern of each gesture control action, compile each gesture instruction and its corresponding presentation effect into an instruction-effect lookup table, and store this table in the effect display module;
Step 2: Install the system: mount the vision sensor so that the user and the presentation screen are within its field of view; connect the signal output of the vision sensor to the signal input of the computer's gesture recognition module; connect the signal output of the gesture recognition module to the signal input of the effect display module; and connect the signal output of the effect display module to the signal input of the presentation screen; the presentation screen displays the presentation content and its transition animations;
Step 3: Human-computer interaction: the user makes a gesture in front of the presentation screen; the vision sensor captures the information within its field of view and passes it to the gesture recognition module; the gesture recognition module continuously analyzes the information transmitted by the vision sensor and judges whether the gesture it contains belongs to the standard motion patterns of the gesture control actions defined in Step 1; if so, the information is converted into a gesture instruction and passed to the effect display module, which receives it and, according to the instruction-effect lookup table, makes the corresponding change to the content shown on the presentation screen; if not, the gesture recognition module records the result or discards it, and either does not communicate with the effect display module or notifies it that there is currently no valid gesture, in which case the effect display module performs no operation on the presentation content.
9. The interaction method for the computer-vision-based human-computer interaction system for a presentation environment according to claim 8, characterized in that, in Step 1, the motion patterns of the gesture control actions are defined as moving left, moving right, moving up, and moving down, and a gesture judgment region is established in front of the presentation screen at the same time, the gesture judgment region being located within the field of view of the vision sensor.
10. The interaction method for the computer-vision-based human-computer interaction system for a presentation environment according to claim 9, characterized in that, in Step 3, the process by which the gesture recognition module analyzes the information transmitted by the vision sensor comprises the following steps:
Step 101) Compute the position of the user's hand according to a computer vision algorithm;
Step 102) Determine whether the hand is in the gesture judgment region: define the region within n meters of the front of the presentation screen as the gesture judgment region, and compute the distance h, in meters, from the user's hand to the presentation screen; if h ≤ n, the hand is inside the gesture judgment region, and Step 103) follows; if h > n, the hand is outside the gesture judgment region, and the hand action does not belong to the standard motion patterns of the gesture control actions defined in Step 1;
Step 103) Let D1 be the threshold for moving the hand up, D2 the threshold for moving it down, D3 the threshold for moving it left, and D4 the threshold for moving it right; then judge whether, along the hand's motion track from origin to destination, the distance moved in each of the four directions up, down, left, and right reaches the threshold of that direction; if a threshold is reached, the hand action belongs to the motion patterns of the gesture control actions defined in Step 1; if no threshold is reached, it does not; if the thresholds of two or more directions are all reached, the direction whose threshold was reached first is taken as the standard motion pattern of the gesture control action defined in Step 1.
CN201310212362.2A 2013-05-31 2013-05-31 Computer-vision-based human-computer interaction system and interaction method for presentation environments Expired - Fee Related CN103268153B (en)

Priority Applications (1)

Application Number Priority Date Filing Date Title
CN201310212362.2A CN103268153B (en) 2013-05-31 2013-05-31 Computer-vision-based human-computer interaction system and interaction method for presentation environments


Publications (2)

Publication Number Publication Date
CN103268153A CN103268153A (en) 2013-08-28
CN103268153B 2016-07-06

Family

ID=49011788

Family Applications (1)

Application Number Title Priority Date Filing Date
CN201310212362.2A Expired - Fee Related CN103268153B (en) 2013-05-31 2013-05-31 Computer-vision-based human-computer interaction system and interaction method for presentation environments

Country Status (1)

Country Link
CN (1) CN103268153B (en)

Families Citing this family (16)

* Cited by examiner, † Cited by third party
Publication number Priority date Publication date Assignee Title
CN104460972A (en) * 2013-11-25 2015-03-25 安徽寰智信息科技股份有限公司 Human-computer interaction system based on Kinect
CN103677992B (en) * 2013-12-20 2017-02-22 深圳泰山在线科技有限公司 Method and system for switching page in motion sensing mode
CN104461524A (en) * 2014-11-27 2015-03-25 沈阳工业大学 Song requesting method based on Kinect
CN113768497A (en) * 2015-05-04 2021-12-10 原相科技股份有限公司 Action recognition system and method thereof
CN104834383A (en) * 2015-05-26 2015-08-12 联想(北京)有限公司 Input method and electronic device
CN104952289B (en) * 2015-06-16 2019-09-03 浙江师范大学 Intelligent motion-sensing teaching aid and method of use
CN105549747A (en) * 2016-01-29 2016-05-04 合肥工业大学 Wireless gesture interaction based specially-shaped particle type LED display system
CN106125928A (en) * 2016-06-24 2016-11-16 同济大学 Kinect-based PPT presentation aid system
CN107766842B (en) * 2017-11-10 2020-07-28 济南大学 Gesture recognition method and application thereof
CN108241434B (en) * 2018-01-03 2020-01-14 Oppo广东移动通信有限公司 Man-machine interaction method, device and medium based on depth of field information and mobile terminal
CN109739373B (en) * 2018-12-19 2022-03-01 重庆工业职业技术学院 Demonstration equipment control method and system based on motion trail
CN109857325A (en) * 2019-01-31 2019-06-07 北京字节跳动网络技术有限公司 Display interface switching method, electronic equipment and computer readable storage medium
CN109947247B (en) * 2019-03-14 2022-07-05 海南师范大学 Somatosensory interaction display system and method
CN112261395B (en) * 2020-10-22 2022-08-16 Nsi产品(香港)有限公司 Image replacement method and device, intelligent terminal and storage medium
CN113719810B (en) * 2021-06-07 2023-08-04 西安理工大学 Man-machine interaction lamp light device based on visual identification and intelligent control
CN113986066A (en) * 2021-10-27 2022-01-28 深圳市华胜软件技术有限公司 Intelligent mobile terminal, control method and intelligent mobile system

Patent Citations (4)

* Cited by examiner, † Cited by third party
Publication number Priority date Publication date Assignee Title
CN101943947A (en) * 2010-09-27 2011-01-12 鸿富锦精密工业(深圳)有限公司 Interactive display system
CN201945947U (en) * 2011-01-21 2011-08-24 中科芯集成电路股份有限公司 Multifunctional gesture interactive system
CN102854983A (en) * 2012-09-10 2013-01-02 中国电子科技集团公司第二十八研究所 Man-machine interaction method based on gesture recognition
CN102945079A (en) * 2012-11-16 2013-02-27 武汉大学 Intelligent recognition and control-based stereographic projection system and method

Also Published As

Publication number Publication date
CN103268153A (en) 2013-08-28

Similar Documents

Publication Publication Date Title
CN103268153B (en) Computer-vision-based human-computer interaction system and interaction method for presentation environments
CN201535853U (en) Interactive type sand table system
CN106598226B (en) Unmanned aerial vehicle human-computer interaction method based on binocular vision and deep learning
CN104410883B (en) Mobile wearable contactless interaction system and method
US20190340944A1 (en) Multimedia Interactive Teaching System and Method
RU2422878C1 (en) Method of controlling television using multimodal interface
CN102253713B (en) Three-dimensional stereoscopic image display system
CN106325517A (en) Target object trigger method and system and wearable equipment based on virtual reality
CN103853464B (en) Kinect-based railway hand signal identification method
Sood et al. AAWAAZ: A communication system for deaf and dumb
CN104571823A (en) Non-contact virtual human-computer interaction method based on a smart television
US20210072818A1 (en) Interaction method, device, system, electronic device and storage medium
CN103778808A (en) Multimedia platform for artistic designing majors
KR20140028096A (en) Apparatus and method for providing haptic on vitrual video
CN106125928A (en) Kinect-based PPT presentation aid system
CN108646578B (en) Medium-free aerial projection virtual picture and reality interaction method
CN209895305U (en) Gesture interaction system
KR101525011B1 (en) tangible virtual reality display control device based on NUI, and method thereof
CN203085064U (en) Virtual starry sky teaching device
CN113109943B (en) XR-based simulation multi-person interaction system
CN106933122A (en) Intelligent interaction method and system for train displays
CN202957861U (en) Interactive mobile phone
CN107831952A (en) A kind of Fei Jubian projects Integral electronic touch-control blank
CN112363666A (en) Window adjusting method and device for electronic whiteboard
CN112612358A (en) Multimodal natural interaction method between a human and a large screen based on visual recognition and voice recognition

Legal Events

Date Code Title Description
C06 Publication
PB01 Publication
C10 Entry into substantive examination
SE01 Entry into force of request for substantive examination
C14 Grant of patent or utility model
GR01 Patent grant
CF01 Termination of patent right due to non-payment of annual fee

Granted publication date: 20160706

Termination date: 20180531

CF01 Termination of patent right due to non-payment of annual fee