CN104238418A - Interactive reality system and method - Google Patents
- Publication number
- CN104238418A CN104238418A CN201410468689.0A CN201410468689A CN104238418A CN 104238418 A CN104238418 A CN 104238418A CN 201410468689 A CN201410468689 A CN 201410468689A CN 104238418 A CN104238418 A CN 104238418A
- Authority
- CN
- China
- Prior art keywords
- robot
- real world
- display screen
- computer
- touch display
- Prior art date
- Legal status (The legal status is an assumption and is not a legal conclusion. Google has not performed a legal analysis and makes no representation as to the accuracy of the status listed.)
- Pending
Abstract
The invention discloses an interactive reality system and method. The system consists of the real world (1), a camera (2), and a computer (3); the real world (1) contains robots and a scene. The camera (2) transmits images of the real world (1) to the computer (3) in real time through a WiFi module, and the images are displayed on the computer's multi-touch display screen. When an operator directly drags a robot image on the display screen with a finger, the computer (3) sends a movement command to the robot through the WiFi module, and the robot advances, retreats, turns, or stops as the finger drags. The operator can mark several robot images on the display screen of the computer (3); dragging any one of them makes all marked robots perform the same action, so that multiple robots can be remote-controlled at the same time. Compared with existing systems, robots can be remote-controlled more naturally, easily, and conveniently.
Description
Technical field
The present invention relates to a system and method for interaction between a human and the real world, in particular for interaction between a human and multiple robots through a human-computer interface, and belongs to the field of human-computer interaction.
Background technology
Most robots have a remote-control function. With a remote-control device, an operator can control a robot within visual range, or control a robot beyond visual range through a video feed. A remote-controlled robot is directly commanded by a human, which combines human intelligence with the robot's autonomous abilities so that tasks are completed efficiently and well; such robots remain the mainstream of future robotics. A remote-controlled robot system consists mainly of three parts: a remote-control device, a video monitoring device, and an autonomous robot. Some remote-control devices comprise a directional cross key and function keys; others comprise a joystick (or steering wheel) and function keys. The operator observes the site through the video monitoring device and sends instructions to the robot through the remote-control device, and the remote robot completes its task. In recent years touch-screen remote controls have appeared, for example iPad-controlled robots, model aircraft, and submarines: the iPad touch screen displays icons for the directional cross key (or steering wheel) and function keys, and the robot is controlled by touching these icons.
Although current remote-control devices are quite user-friendly, they still require the operator to have good remote-operation skills. In particular, controlling multiple robots requires several people operating several corresponding devices, which raises control problems such as synchronization, cooperation, and competition among the operators.
Summary of the invention
To address the above problems, the present invention proposes an interactive reality system (Interactive Reality System, IRS) and method.
The basic idea of the invention is to use a camera to establish the correspondence between a robot in the real world and its image on the computer's multi-touch display screen. By directly touching the robot image on the display screen, the operator remote-controls the robot in the real world; this mode of remote control is called Tele-Control by Touching Images (TCTI). Touch-image remote control does not mean touching control icons on the display screen, but touching the robot images themselves.
One object of the invention is to establish an interactive reality system: the operator touches the computer's multi-touch display screen with a finger and directly drags the robot images on the screen to make the robots advance, retreat, turn, stop, and so on; the operator can mark several robot images on the screen, and dragging any one of them drags the others along, remote-controlling multiple robots in the real world. Another object of the invention is to establish an interactive reality method: a camera transmits images of the real world to a computer some distance away, the images are displayed on the computer's multi-touch display screen, and by directly touching the images of the real world, people interact with and act on the real world.
The objects of the invention are achieved through the following technical solutions.
An interactive reality system, composed of three parts: the real world (1), a camera (2), and a computer (3). The real world (1) contains robots and a scene; the robots, the camera (2), and the computer (3) each include a WiFi module, and the computer (3) further includes a multi-touch display screen, a computing module, and a control module. The robots carry out their work tasks in the scene of the real world (1); the camera (2) acquires images of the real world (1) and transmits them to the computer (3) in real time over WiFi. The system is characterized in that the computing module of the computer (3) establishes the correspondence between the robots in the real world (1) and the robot images on the multi-touch display screen, displays the images carrying this correspondence on the screen, and recognizes the operator's touch actions on those images; the control module of the computer (3) converts each touch action into an instruction the corresponding robot can interpret and sends it over WiFi to that robot in the real world (1), so that the robot completes its work task according to the instruction.
An interactive reality method, characterized in that the camera (2) transmits images of the real world (1) to the computer (3) in real time over WiFi; the computing module of the computer (3) recognizes and locates the robots in the image in real time; the operator directly drags a robot image on the multi-touch display screen with a finger, and the corresponding robot performs the matching action in the real world (1). By repeating this cycle, the operator controls the robots in the real world (1) in real time through the multi-touch display screen so that they complete their work tasks.
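For illustration only, the drag-to-command step of this method can be sketched in Python. The command names, the dead zone, and the assumption that screen-up corresponds to robot-forward are choices made for this sketch, not details specified by the invention:

```python
import math

def drag_to_command(dx, dy, dead_zone=5.0):
    """Map a finger-drag vector on the touch screen (pixels; +x right,
    +y down) to a coarse remote-control command. Drags shorter than
    dead_zone count as 'stop'; screen-up is taken as robot-forward."""
    if math.hypot(dx, dy) < dead_zone:
        return "stop"
    if abs(dy) >= abs(dx):
        return "forward" if dy < 0 else "backward"
    return "turn_right" if dx > 0 else "turn_left"

def encode_command(robot_id, command):
    """Pack a command as a short ASCII datagram (e.g. b'3:forward')
    that could be sent to the robot's WiFi module over UDP."""
    return f"{robot_id}:{command}".encode("ascii")
```

In use, each touch-move event on a robot image would be converted with `drag_to_command` and the resulting datagram sent to that robot's network address.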
The method also includes establishing the correspondence between multiple robots in the real world (1) and the robot images on the multi-touch display screen. This correspondence can be established in either of the following two ways:
One: the control module of the computer (3) sends a specific-motion remote command over WiFi to one robot in the real world (1); the operator marks the robot performing that motion on the multi-touch display screen; the computing module of the computer (3) stores this robot's image for subsequent recognition and localization, completing the correspondence;
Two: the control module of the computer (3) sends a specific-motion remote command over WiFi to one robot in the real world (1); the computing module of the computer (3) automatically detects and marks the robot performing that motion and stores its image for subsequent recognition and localization, completing the correspondence.
The operator can mark several robot images on the multi-touch display screen; dragging any one of them makes all marked robots perform the same action in the real world (1), realizing remote control of multiple robots.
The operator can also mark the scene image of the real world (1) on the multi-touch display screen, designating specific points or regions. When the distance between a robot in the real world (1) and one of these points or regions falls below a threshold L, the control module automatically sends the relevant instruction to the corresponding robot, which then performs the appropriate action.
The present invention has the following advantages:
1. Intuitive remote control. The operator sees the robot images on the touch screen, drags them directly, and can immediately see whether the robots move as intended.
2. Natural remote control. Dragging a robot image with a finger follows natural conventions: dragging forward means advance, dragging left means turn left.
3. Easier, more convenient remote control of multiple robots. With the "group" function key, several robots form one group, and remote-controlling any one of them makes the whole group perform the same operation.
4. Virtual marks can be placed on the real world, providing information for autonomous robot operation. By drawing lines and circles on the image of the real world (1), obstacles, forbidden zones, or work areas can be set without placing physical objects in the scene, which improves efficiency. This marking method is especially effective in scenes that robots can reach but people cannot.
5. More versatile remote-control functions. With the hardware unchanged, functions can be changed simply by changing the software, so upgrading is easy and the cost is low.
Description of the drawings
Fig. 1 is a schematic diagram of the structure of the present invention.
Fig. 2 is a schematic diagram of an embodiment of the present invention.
Fig. 3 is a schematic diagram of the function-menu key on the multi-touch display screen in the embodiment.
Fig. 4 is a schematic diagram of remote control of a single robot in the embodiment.
Fig. 5 is a schematic diagram of remote control of multiple robots in the embodiment.
Fig. 6 is a schematic diagram of marking the scene on the multi-touch display screen in the embodiment.
Reference numerals: real world (1); camera (2); computer (3).
Embodiment
The working process of the present invention is described further below with reference to the drawings and embodiments.
An interactive reality system consists of three parts: the real world (1), a camera (2), and a computer (3), as shown in Fig. 1. The real world (1) contains robots and a scene; each robot has a WiFi module, perception and localization capabilities, and automatic obstacle avoidance. The camera (2) includes a WiFi module; the computer (3) includes a computing module, a WiFi module, a control module, and a multi-touch display screen, with ample computing and storage capacity. The robots in the real world (1), the camera (2), and the computer (3) are connected directly via WiFi. The camera (2) transmits images of the real world (1) to the computer (3) in real time over WiFi; the computing module of the computer (3) locates the robots in the image in real time, establishes the correspondence between the robots in the real world (1) and the robot images on the multi-touch display screen, and displays the images carrying this correspondence. When the operator touches a robot image on the display screen, the computing module of the computer (3) recognizes the touch action, and the control module generates a control instruction and sends it over WiFi to the corresponding robot, completing the remote control of that robot in the real world (1).
The camera (2) is mounted at a fixed position in the real world (1), acquires images of the real world (1), and transmits them to the computer (3) in real time over WiFi. Multiple fixed cameras can be installed in the real world (1) to capture images from several viewpoints; when multiple cameras (2) are used, the computer (3) needs a split-screen display, or a corresponding number of computers (3) can be used.
The computing module of the computer (3) detects, tracks, recognizes, and locates the robots in the images captured by the camera (2) in real time. This location information is put into correspondence with the multi-touch coordinate grid, which lets the computer sense precisely whether the operator is touching a given robot image. Different touch actions represent different remote commands; the natural drag action is chosen, so a robot moves along with the dragging finger, effectively following it. To stop a moving robot, the operator holds a finger on its image on the multi-touch display screen, and the robot in the real world (1) stops under the control of the control module of the computer (3).
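Deciding whether a touch falls on a tracked robot image, as described above, amounts to a nearest-neighbour hit test. A minimal sketch follows; the 30-pixel touch tolerance is an assumed value, not one given by the patent:

```python
import math

def hit_test(touch, robots, radius=30.0):
    """Return the id of the tracked robot whose image centre lies
    within `radius` pixels of the touch point, or None. `robots`
    maps robot id -> (x, y) screen position from the tracker."""
    tx, ty = touch
    best, best_d = None, radius
    for rid, (x, y) in robots.items():
        d = math.hypot(tx - x, ty - y)
        if d <= best_d:
            best, best_d = rid, d
    return best
```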
The operator can mark the image of the real world (1) on the multi-touch display screen, designating specific points or regions and assigning them specific meanings, such as a prohibited area, sandy ground, or a destination; no physical objects need to be placed in the real world (1). When the distance between a robot and one of these points or regions falls below a preset threshold L, the control module of the computer (3) automatically sends an instruction, and the corresponding robot performs the appropriate action. For example, when a robot approaches a prohibited area, the robot does not need to perceive it autonomously: the control module directly commands the robot to move away from the area. The operator then cannot drag this robot into the forbidden zone.
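The automatic intervention rule above, triggered when a robot comes within the threshold L of a marked point, can be sketched as follows; the returned command format is hypothetical:

```python
import math

def guard_command(robot_pos, marked_points, threshold_L):
    """If the robot is within threshold_L of any operator-marked
    point, return a command to back away from the nearest such point;
    otherwise return None (no automatic intervention)."""
    rx, ry = robot_pos
    nearest, nearest_d = None, threshold_L
    for (px, py) in marked_points:
        d = math.hypot(rx - px, ry - py)
        if d < nearest_d:
            nearest, nearest_d = (px, py), d
    if nearest is None:
        return None
    px, py = nearest
    # Retreat along the direction from the marked point to the robot.
    return ("move_away", rx - px, ry - py)
```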
The operator can also mark several robot images on the multi-touch display screen; after marking, dragging any one of them makes all marked robots perform the same action in the real world (1), realizing remote control of multiple robots.
The present invention also needs to establish the correspondence between the robots in the real world (1) and the robot images displayed by the computer (3). If there is only one robot, the robot in the image is simply that robot in the real world, and no correspondence needs to be established. But when the real world (1) contains multiple robots, the multi-touch display screen of the computer (3) shows multiple robot images, and the correspondence between the robots in the real world (1) and the robot images on the screen, called the remote-control correspondence, must be established. Once it exists, dragging a robot image on the multi-touch display screen lets the control module of the computer (3) send remote commands over WiFi to that specific robot, and the corresponding robot performs the corresponding action.
Two techniques for establishing the remote-control correspondence are given here.
The first is interactive correspondence. The control module of the computer (3) sends a specific-motion remote command over WiFi to one robot in the real world (1); the operator marks the robot performing that motion on the multi-touch display screen, completing the correspondence. This technique can also be called roll call: the robot that is called makes a move, the user marks it, and the computing module of the computer (3) stores the image as template information for subsequent robot recognition and localization.
The second is image-recognition correspondence. Labels are attached to the robots, for example combinations of different colors, much as football players wear numbers; the computer recognizes the labels and establishes the correspondence. Alternatively, an automatic roll-call technique can be used: the control module of the computer (3) automatically sends a specific-action command to one robot, and the computing module of the computer (3) automatically detects the robot performing that action, marks it, and stores its image as template information for subsequent recognition and localization.
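The automatic roll-call variant can be sketched by comparing each tracked blob's position before and after the specific-motion command is sent; the 5-pixel minimum displacement is an assumed value for this sketch:

```python
import math

def roll_call(before, after, min_motion=5.0):
    """Return the id of the tracked blob that moved most between two
    snapshots (dicts of id -> (x, y) screen position), provided its
    displacement exceeds min_motion pixels; otherwise None."""
    moved, best = None, min_motion
    for rid in before:
        d = math.hypot(after[rid][0] - before[rid][0],
                       after[rid][1] - before[rid][1])
        if d > best:
            moved, best = rid, d
    return moved
```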
The above is described further below with a specific embodiment.
Fig. 2 shows an embodiment of the present invention: the real world (1) contains 6 robots, divided into a white team and a black team of 3 robots each, facing off. A camera (2) is fixed in the real world (1); it transmits images of the real world (1) to the computer (3) in real time over WiFi, and the images are displayed directly on the multi-touch display screen.
The white and black robot teams are remote-controlled by two users acting as team leaders. In the figures, a black finger represents the black team's leader and a white finger represents the white team's leader.
The multi-touch display screen of the computer (3) has a function-menu key, as shown in Fig. 3. Pressing it expands a menu with 5 function keys: delete, group, obstacle, correspond, and confirm.
The system has two states: a setup state and a remote-control state. When the menu is expanded, the system is in the setup state, in which the remote-control correspondence can be established, obstacles can be set, robots can be grouped, and so on.
When the system starts, the operator presses the function-menu key to expand the menu, then presses "correspond" and uses the interactive correspondence technique to establish the correspondence between the 6 robots in the real world (1) and the 6 robot images on the multi-touch display screen of the computer (3). The correspondence can be re-established repeatedly. Pressing "confirm" in the menu retracts it, and the system enters the remote-control state.
Fig. 4 shows the white and black team leaders each touching their own team members' images with a finger. When a leader's finger drags the image of any robot, the corresponding robot moves with the finger and can advance, retreat, turn left, turn right, and so on, achieving remote control of the robot through the touch display screen.
To make several robots perform the same motion or move to the same position simultaneously, the leader presses the function-menu key to expand the menu, presses "group", and then circles those robot images on the multi-touch display screen to form a remote-control region. When the leader drags any robot image in the region, all robot images in the region are dragged along, so multiple robots are remote-controlled by touching robot images, as shown in Fig. 5.
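The "group" behaviour, where a drag on one robot image fans out to every robot in the same group, can be sketched as:

```python
def group_drag(dragged_id, groups, command):
    """Expand a drag on one robot into commands for every robot in
    the same group; an ungrouped robot receives the command alone.
    `groups` is a list of sets of robot ids."""
    for g in groups:
        if dragged_id in g:
            return {rid: command for rid in sorted(g)}
    return {dragged_id: command}
```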
The present invention can remote-control and configure not only the robots in the real world (1) but also the scene. The robots in the real world (1) have perception, localization, and obstacle-avoidance functions, and can also recognize and understand the scene, such as obstacles and specific targets, but this recognition and understanding have limitations, for example in recognition rate and computation time. In an interactive reality system, because the operator sees the robots and the scene of the real world (1) directly on the multi-touch display screen of the computer (3), the scene can be marked interactively with specific points and regions. No one needs to travel to the real-world scene to place physical markers, which effectively improves efficiency; this marking method is especially effective in scenes that robots can reach but people cannot.
Fig. 6 shows the inhibit lines and prohibited areas set in the scene of this embodiment. The operator presses the function-menu key to expand the menu, presses "obstacle", and then draws lines on the scene image on the multi-touch display screen. For example, if a robot moves on a road, lines can be drawn along the curb or the edge of a ditch to tell the robot that this is a curb or ditch it must not cross. Specific regions of the scene image can also be circled, such as the triangular region in Fig. 6, to indicate an obstacle or trap the robot must not enter, such as a pond or sandy ground. When a robot approaches these points or regions, the control module of the computer (3) automatically sends an instruction to the robot so that it performs the appropriate action; even if the operator makes a mistake, the robot cannot be dragged into these obstacles.
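Checking a drawn inhibit line comes down to an ordinary point-to-segment distance test; this sketch reuses the threshold L from the description, with all function names chosen for illustration:

```python
import math

def dist_to_segment(p, a, b):
    """Shortest distance from point p to segment a-b (2-D tuples)."""
    (px, py), (ax, ay), (bx, by) = p, a, b
    dx, dy = bx - ax, by - ay
    if dx == 0 and dy == 0:
        return math.hypot(px - ax, py - ay)
    # Project p onto the segment, clamped to its endpoints.
    t = max(0.0, min(1.0, ((px - ax) * dx + (py - ay) * dy) / (dx * dx + dy * dy)))
    return math.hypot(px - (ax + t * dx), py - (ay + t * dy))

def near_inhibit(p, segments, threshold_L):
    """True when the robot at p is within threshold_L of any drawn line."""
    return any(dist_to_segment(p, a, b) < threshold_L for a, b in segments)
```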
Those skilled in the art will understand that the functions the system can perform are not limited to those described in the embodiment; more functions can be configured through the function-menu key, for example designating one robot as a team leader, or making one robot the follower of another.
The specific description above further explains the purpose, technical solution, and beneficial effects of the invention. It should be understood that the foregoing are only specific embodiments of the invention, used to explain it, and are not intended to limit its scope of protection; any modification, equivalent replacement, improvement, and the like made within the spirit and principles of the invention shall fall within its scope of protection.
Claims (5)
1. An interactive reality system, composed of three parts: the real world (1), a camera (2), and a computer (3); the real world (1) contains robots and a scene; the robots, the camera (2), and the computer (3) each include a WiFi module, and the computer (3) further includes a multi-touch display screen, a computing module, and a control module; the robots carry out their work tasks in the scene of the real world (1); the camera (2) acquires images of the real world (1) and transmits them to the computer (3) in real time over WiFi; characterized in that the computing module of the computer (3) establishes the correspondence between the robots in the real world (1) and the robot images on the multi-touch display screen, displays the images carrying this correspondence on the screen, and recognizes the operator's touch actions on those images; the control module of the computer (3) converts each touch action into an instruction the corresponding robot can interpret and sends it over WiFi to that robot in the real world (1), making it complete its work task according to the instruction.
2. An interactive reality method, characterized in that the camera (2) transmits images of the real world (1) to the computer (3) in real time over WiFi; the computing module of the computer (3) recognizes and locates the robots in the image in real time; the operator directly drags a robot image on the multi-touch display screen with a finger, and the corresponding robot performs the matching action in the real world (1); by repeating this cycle, the operator controls the robots in the real world in real time through the multi-touch display screen so that they complete their work tasks.
3. The interactive reality method according to claim 2, characterized in that the method also includes establishing the correspondence between multiple robots in the real world (1) and the robot images on the multi-touch display screen, which can be done in either of the following two ways:
One: the control module of the computer (3) sends a specific-motion remote command over WiFi to one robot in the real world (1); the operator marks the image of the robot performing that motion on the multi-touch display screen; the computing module of the computer (3) stores this robot image for subsequent recognition and localization, completing the correspondence;
Two: the control module of the computer (3) sends a specific-motion remote command over WiFi to one robot in the real world (1); the computing module of the computer (3) automatically detects and marks the robot performing that motion and stores its image for subsequent recognition and localization, completing the correspondence.
4. The interactive reality method according to claim 2, characterized in that the operator can mark several robot images on the multi-touch display screen; dragging any one of them makes all marked robots perform the same action in the real world (1), realizing remote control of multiple robots.
5. The interactive reality method according to any one of claims 2-4, characterized in that the operator marks the scene image of the real world (1) on the multi-touch display screen, designating specific points or regions; when the distance between a robot in the real world (1) and one of these points or regions falls below a threshold L, the control module automatically sends the relevant instruction to the corresponding robot, and the robot performs the corresponding action.
Priority Applications (1)
Application Number | Priority Date | Filing Date | Title |
---|---|---|---|
CN201410468689.0A CN104238418A (en) | 2014-07-02 | 2014-09-15 | Interactive reality system and method |
Applications Claiming Priority (3)
Application Number | Priority Date | Filing Date | Title |
---|---|---|---|
CN201410312141.7 | 2014-07-02 | ||
CN201410312141 | 2014-07-02 | ||
CN201410468689.0A CN104238418A (en) | 2014-07-02 | 2014-09-15 | Interactive reality system and method |
Publications (1)
Publication Number | Publication Date |
---|---|
CN104238418A true CN104238418A (en) | 2014-12-24 |
Family ID: 52226734
Family Applications (1)
Application Number | Title | Priority Date | Filing Date |
---|---|---|---|
CN201410468689.0A Pending CN104238418A (en) | 2014-07-02 | 2014-09-15 | Interactive reality system and method |
Country Status (1)
Country | Link |
---|---|
CN (1) | CN104238418A (en) |
Cited By (4)
Publication number | Priority date | Publication date | Assignee | Title |
---|---|---|---|---|
CN105700481A (en) * | 2016-03-23 | 2016-06-22 | 北京光年无限科技有限公司 | Intelligent robot motion generation method and system |
CN106444784A (en) * | 2016-11-17 | 2017-02-22 | 北京酷思倍科技有限公司 | Virtual and real space crossing device for robot competition |
CN107351058A (en) * | 2017-06-08 | 2017-11-17 | 华南理工大学 | Robot teaching method based on augmented reality |
CN107666987A (en) * | 2015-07-02 | 2018-02-06 | 埃森哲环球服务有限公司 | Robotic process automates |
Citations (10)
Publication number | Priority date | Publication date | Assignee | Title |
---|---|---|---|---|
JP3841439B2 (en) * | 1996-07-24 | 2006-11-01 | ファナック株式会社 | Robot jog feed method |
CN101154110A (en) * | 2006-09-29 | 2008-04-02 | 三星电子株式会社 | Method, apparatus, and medium for controlling mobile device based on image of real space including the mobile device |
CN101995877A (en) * | 2009-08-31 | 2011-03-30 | 北京汉库机器人技术有限公司 | Wireless sensor network-based humanoid robot communication control method and system |
JP2012171024A (en) * | 2011-02-17 | 2012-09-10 | Japan Science & Technology Agency | Robot system |
US8386079B1 (en) * | 2011-10-28 | 2013-02-26 | Google Inc. | Systems and methods for determining semantic information associated with objects |
DE102012103030B3 (en) * | 2012-04-05 | 2013-05-23 | Reis Group Holding Gmbh & Co. Kg | Method for operating an industrial robot |
CN103149935A (en) * | 2013-03-12 | 2013-06-12 | 西北农林科技大学 | Remote wireless touch screen control system of mountain farming robot |
JP5213183B2 (en) * | 2009-07-14 | 2013-06-19 | 独立行政法人科学技術振興機構 | Robot control system and robot control program |
CN103235595A (en) * | 2013-04-27 | 2013-08-07 | 湖南科技大学 | Control system and control method of outdoor micro ground swarm robot |
CN103324192A (en) * | 2012-03-23 | 2013-09-25 | 苏州宝时得电动工具有限公司 | Boundary setting method and boundary setting system |
- 2014-09-15: application CN201410468689.0A filed in CN; published as CN104238418A; status Pending
Patent Citations (10)
Publication number | Priority date | Publication date | Assignee | Title |
---|---|---|---|---|
JP3841439B2 (en) * | 1996-07-24 | 2006-11-01 | ファナック株式会社 | Robot jog feed method |
CN101154110A (en) * | 2006-09-29 | 2008-04-02 | 三星电子株式会社 | Method, apparatus, and medium for controlling mobile device based on image of real space including the mobile device |
JP5213183B2 (en) * | 2009-07-14 | 2013-06-19 | 独立行政法人科学技術振興機構 | Robot control system and robot control program |
CN101995877A (en) * | 2009-08-31 | 2011-03-30 | 北京汉库机器人技术有限公司 | Wireless sensor network-based humanoid robot communication control method and system |
JP2012171024A (en) * | 2011-02-17 | 2012-09-10 | Japan Science & Technology Agency | Robot system |
US8386079B1 (en) * | 2011-10-28 | 2013-02-26 | Google Inc. | Systems and methods for determining semantic information associated with objects |
CN103324192A (en) * | 2012-03-23 | 2013-09-25 | 苏州宝时得电动工具有限公司 | Boundary setting method and boundary setting system |
DE102012103030B3 (en) * | 2012-04-05 | 2013-05-23 | Reis Group Holding Gmbh & Co. Kg | Method for operating an industrial robot |
CN103149935A (en) * | 2013-03-12 | 2013-06-12 | 西北农林科技大学 | Remote wireless touch screen control system of mountain farming robot |
CN103235595A (en) * | 2013-04-27 | 2013-08-07 | 湖南科技大学 | Control system and control method of outdoor micro ground swarm robot |
Non-Patent Citations (2)
Title |
---|
Zhang Guohua et al., "Analysis and design of interactive gestures based on multi-touch", Application Research of Computers (《计算机应用研究》) |
Huang Wuxin et al., "A network- and vision-based multi-robot research platform", Mechatronics (《机电一体化》) |
Cited By (6)
Publication number | Priority date | Publication date | Assignee | Title |
---|---|---|---|---|
CN107666987A (en) * | 2015-07-02 | 2018-02-06 | 埃森哲环球服务有限公司 | Robotic process automates |
CN107666987B (en) * | 2015-07-02 | 2020-10-16 | 埃森哲环球服务有限公司 | Robot process automation |
CN105700481A (en) * | 2016-03-23 | 2016-06-22 | 北京光年无限科技有限公司 | Intelligent robot motion generation method and system |
CN106444784A (en) * | 2016-11-17 | 2017-02-22 | 北京酷思倍科技有限公司 | Virtual and real space crossing device for robot competition |
CN106444784B (en) * | 2016-11-17 | 2023-09-29 | 北京酷思倍科技有限公司 | Virtual-real space crossing device for robot competition |
CN107351058A (en) * | 2017-06-08 | 2017-11-17 | 华南理工大学 | Robot teaching method based on augmented reality |
Similar Documents
Publication | Publication Date | Title |
---|---|---|
CN106200679B | Hybrid active control method for a single operator commanding multiple unmanned aerial vehicles, based on multi-modal natural interaction | |
Hashimoto et al. | Touchme: An augmented reality based remote robot manipulation | |
Krupke et al. | Comparison of multimodal heading and pointing gestures for co-located mixed reality human-robot interaction | |
CN102789327B (en) | Method for controlling mobile robot on basis of hand signals | |
EP2122597B1 (en) | Augmented reality-based system and method providing status and control of unmanned vehicles | |
CN102520723B (en) | Wheelchair indoor global video monitor navigation system based on suspended wireless transmission camera | |
CN105681747A (en) | Telepresence interaction wheelchair | |
Song et al. | GaFinC: Gaze and Finger Control interface for 3D model manipulation in CAD application | |
CN108161904A (en) | Robot on-line teaching device based on augmented reality, system, method, equipment | |
CN106681354B (en) | The flight control method and device of unmanned plane | |
CN104238418A (en) | Interactive reality system and method | |
CN104267904A (en) | Touch screen virtual unit control method and mobile terminal | |
CN107223199A (en) | Air navigation aid and equipment based on three-dimensional map | |
CN110497405B (en) | Force feedback man-machine cooperation anti-collision detection method and module for driving and controlling integrated control system | |
KR20110022057A (en) | Gesture-based control system for vehicle interfaces | |
CN106774872A (en) | Virtual reality system, virtual reality exchange method and device | |
CN102350700A (en) | Method for controlling robot based on visual sense | |
CN107122045A (en) | A kind of virtual man-machine teaching system and method based on mixed reality technology | |
CN204366968U | Multi-degree-of-freedom humanoid robot based on a three-dimensional motion-sensing camera | |
CN103257707A (en) | Three-dimensional roaming method utilizing eye gaze tracking and conventional mouse control device | |
CN103488296A (en) | Somatosensory interaction gesture control method and somatosensory interaction gesture control device | |
CN103853464A (en) | Kinect-based railway hand signal identification method | |
US20240103621A1 (en) | Head mounted display apparatus | |
CN106774824A (en) | Virtual reality exchange method and device | |
CN105979330A (en) | Somatosensory button location method and device |
Legal Events
Date | Code | Title | Description |
---|---|---|---|
C06 | Publication | ||
PB01 | Publication | ||
C10 | Entry into substantive examination | ||
SE01 | Entry into force of request for substantive examination | ||
WD01 | Invention patent application deemed withdrawn after publication | ||
WD01 | Invention patent application deemed withdrawn after publication |
Application publication date: 20141224 |