CN100389013C - Reconstruction of human emulated robot working scene based on multiple information integration - Google Patents
- Publication number
- CN100389013C (grant publication); CNB2005100599150A / CN200510059915A (application)
- Authority
- CN
- China
- Prior art keywords
- robot
- scene
- data
- model
- operator
- Prior art date
- Legal status (The legal status is an assumption and is not a legal conclusion. Google has not performed a legal analysis and makes no representation as to the accuracy of the status listed.)
- Expired - Fee Related
Abstract
The present invention relates to a humanoid-robot working scene: a real-time video image that combines the operator's control commands with feedback information. A model of the humanoid robot and a model of its operating environment at the work site are displayed in the scene in real time. The scene receives the position information of the humanoid robot's work site and the sensor information produced while the robot operates; this information drives the motion of the humanoid robot model and its environment model to display a real-time video image. At the same time, the scene runs a predictive simulation based on the commands issued by the operator, generating the robot's operating data and the position data of each model under ideal conditions. When real-time feedback information is missing or cannot be obtained, the data generated by the predictive simulation drives the models instead. The operator can change the viewing angle freely, achieving observation from any angle.
Description
Technical field:
The invention belongs to the field of robotics and is mainly used for three-dimensional reconstruction of a humanoid robot's working scene. It is applicable to teleoperation control of robots: it can display in real time a 3-D view of the humanoid robot and the objects in its working scene, providing an approximate telepresence view for the teleoperation of the humanoid robot.
Background art:
A humanoid robot is a robot with a human-like appearance that can imitate basic human motions. Teleoperation is an important technology in robot applications. Through a teleoperation platform, operators can monitor and control a distant robot to complete various tasks, so that a humanoid robot can work in place of humans in environments that humans cannot reach, or that endanger human health or life.
Image display of the operating environment is a key technology in teleoperation control. At present, two approaches are mainly used: on-site two-dimensional video images, and three-dimensional virtual scenes driven by the operator's commands.
In the on-site two-dimensional video approach, several cameras are installed at the robot's work site to film the robot and its surroundings. The video images are transmitted over a network to the operator's end and displayed on a computer. Such images show the real scene of the robot's operation and truly reflect its situation, but they have the following shortcomings: 1) they provide no three-dimensional information, so genuine three-dimensional positioning operations are difficult to realize; 2) the viewing angle is limited, since the cameras installed at the work site are fixed in place and cannot provide an all-round view; 3) latency is long, because video image data is large and, when network bandwidth is limited, transmission incurs considerable delay.
The basic principle of the three-dimensional virtual scene based on operator commands is as follows: for every command the operator sends to the robot, the robot is assumed to execute it correctly and produce the corresponding motion. To depict this motion, 3-D modeling software running on the operator's computer builds a kinematic model and a three-dimensional virtual image of the robot, producing a predictive simulation driven by the operator's commands. Such an image is a prediction of the robot's command-execution process and offers the operator a visual picture of the working scene. Moreover, because the scene is built with 3-D modeling software, the operator can change the viewing angle very easily. The shortcoming is that such a system offers the operator only a prediction and simulation of the robot and its work site; it cannot truly reflect the robot's actual operating situation.
Summary of the invention:
The present invention combines the operator's control commands, the detection information from the humanoid robot's own sensors, and environment-detection information to construct a three-dimensional scene of the operating environment.
The technical solution used in the present invention is:
The humanoid-robot working scene constructed by the present invention is a real-time video image that combines operating commands with feedback information. The scene displays in real time a model of the humanoid robot and a model of the environment at the robot's work site.
The scene receives the position information of the humanoid robot's work site and the sensor information produced during the robot's operation, and uses this information to drive the motion of the humanoid robot model and its environment model, displaying a real-time animated video image. At the same time, the scene receives the commands issued by the operator and runs a prediction, generating the robot's operating data and the position data of each model under ideal conditions. When real-time feedback information is missing or cannot be obtained, the data generated by the prediction drives the models.
The operator can change the viewing angle arbitrarily, achieving observation from any angle.
Its main technical scheme is:
Using 3-D modeling software, three-dimensional models are made of every type of object at the humanoid robot's work site; each model has the same shape features as the actual object. A data processing module matched to the models is built at the same time. The data processing module can receive multiple kinds of information and, after matching, drives the motion of the models in the working scene.
The models in the humanoid-robot working scene are mainly of two kinds: environment models and the humanoid robot model.
(1) 3-D modeling software is used to build the environment model of the robot's work site, whose structure is known. The environment model has the same shape features and positional relations as the actual objects, and supports viewing from any angle.
(2) 3-D modeling software is used to build the humanoid robot model. The model has the same geometric shape features and degree-of-freedom configuration as the robot and satisfies the kinematic constraints of the humanoid robot's multi-link mechanism. The model receives position data to locate itself in the scene, and receives joint-angle data for each degree of freedom to drive the angles between its links, expressing the motion of the humanoid robot.
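The link model described above can be illustrated with a minimal sketch: given relative joint angles, compute where each link of a serial chain ends up. This is a simplified planar (2-D) version written for illustration only; the patent's actual model is a full 3-D multi-link mechanism, and the function name and signature here are assumptions, not the patent's code.

```python
import math

def link_positions(base, link_lengths, joint_angles):
    """Return the 2-D endpoint of each link of a planar serial chain.

    base         -- (x, y) position of the chain's root
    link_lengths -- length of each link
    joint_angles -- relative angle of each joint, in radians
    """
    x, y = base
    heading = 0.0                 # accumulated absolute orientation
    points = []
    for length, angle in zip(link_lengths, joint_angles):
        heading += angle          # each joint angle is relative to the previous link
        x += length * math.cos(heading)
        y += length * math.sin(heading)
        points.append((x, y))
    return points
```

Driving the model then amounts to re-running this computation each time new joint-angle data arrives, so the rendered links track the robot's real posture.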
Position sensors are installed at the robot's work site to measure the positions of the robot and its work target in real time and output the measured three-dimensional coordinate data. Angle sensors are mounted at each joint of the robot's body, measuring in real time the relative angle between the links connected by each joint.
The data processing module mainly realizes the following functions:
(1) It receives the robot's position data and locates the robot model within the surrounding environment model of the work site.
(2) It receives the relative-angle data between the links connected by each of the robot's joints, matches them to the robot model, and drives the motion between the model's links to express the robot's motion.
(3) It receives the commands sent by the operator and interprets them. From each command it generates the ideal motion trajectory of the robot's operation, predicting the robot's operating data. The generated trajectories include the trajectory of the robot's joint-angle changes during command execution and the trajectory of the robot's position data. In the ideal case, these data are identical to the feedback data from the robot's work site.
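The prediction step in (3) can be sketched minimally as interpolating the joint angles from their current values to the values a command demands, producing one ideal joint-angle vector per simulation tick. Linear interpolation is an assumption made for illustration; the patent does not specify the trajectory-generation method, and the function name is hypothetical.

```python
def predict_joint_trajectory(current, target, steps):
    """Linearly interpolate joint angles from `current` to `target`.

    Returns `steps + 1` joint-angle vectors, one per simulation tick: the
    ideal trajectory assuming the robot executes the command exactly.
    """
    trajectory = []
    for i in range(steps + 1):
        t = i / steps                       # interpolation fraction in [0, 1]
        trajectory.append([c + t * (g - c) for c, g in zip(current, target)])
    return trajectory
```

Under ideal conditions each vector of this trajectory would match the angles later fed back from the joint sensors, which is what lets the scene substitute prediction for feedback during dropouts.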
While the scene is running, the data processing module under normal conditions drives the motion of the models using the data received in real time from the robot's work site. When that data is temporarily missing, the prediction data generated by simulating the operator's commands drives the motion of each model in the scene, keeping the models' motion continuous.
The beneficial effects of the invention are as follows:
1. Multiple kinds of information are fused to display an animated image of the robot's work site in real time. The models in the scene are driven by feedback data from the site; when feedback data is temporarily missing, prediction data derived from the commands is used for display, keeping the scene display continuous.
2. Full-angle viewing. The operator can change the observation angle arbitrarily to examine the details of the scene.
Description of drawings:
Fig. 1 is a working-principle diagram of the humanoid-robot working scene based on multi-information fusion.
The specific embodiment:
The course of work of the entire humanoid-robot operative scenario is as follows:
Step 1: the robot starts running and teleoperation control begins. The computer program is started and displays the scene model that has been built. Initialization data is used to determine the initial positions of the robot model and its work-target model, and the initial angles between the robot model's links. This step generates the initial interface of the scene.
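The initialization data mentioned in step 1 can be pictured as a small record holding the robot pose, the target position, and the initial joint angles. The field names and dictionary layout below are hypothetical, chosen only to make the sketch concrete; the patent does not specify a data format.

```python
from dataclasses import dataclass

@dataclass
class SceneState:
    """Minimal scene state: robot pose plus target position (assumed layout)."""
    robot_position: tuple    # (x, y, z) of the robot model in the scene
    target_position: tuple   # (x, y, z) of the work-target model
    joint_angles: list       # one relative angle per joint of the link model

def init_scene(init_data):
    """Build the scene's initial state from the initialization data (step 1)."""
    return SceneState(
        robot_position=tuple(init_data["robot_position"]),
        target_position=tuple(init_data["target_position"]),
        joint_angles=list(init_data["joint_angles"]),
    )
```

Every later step of the workflow then only updates this state, either from live sensor feedback or from the command prediction.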
Step 2: the scene's data processing module receives in real time the operating commands sent by the teleoperator, interprets them, and generates predicted trajectory data. The prediction data drives each model in the virtual scene to form the three-dimensional virtual scene. The scene generated in this step displays the ideal motion image of the robot executing the operator's commands.
Step 3: the robot's own sensors measure each joint angle in real time and transmit the data to the scene's data processing module through the teleoperation platform. The data processing module stops using the predicted joint-angle data and instead uses the true joint-angle data to drive the model, expressing the robot's own motion. At this point, the robot model in the three-dimensional virtual scene truly expresses the robot's running state at the work site.
Step 4: the environment-detection sensors at the robot's work site start running and obtain the position data of the robot and its work target. These data are transmitted to the scene's data processing module through the teleoperation platform, which uses them to position the robot model and its environment models. At this point, the three-dimensional scene truly displays the positional relations between the robot and its environment at the work site.
Using the robot's own sensor feedback and the environment position-sensor feedback together to drive the three-dimensional virtual scene of the robot model and its operating-environment models, the scene can truly express the robot's running state and its position in the environment. When feedback is temporarily missing, the scene's data processing module automatically switches to the trajectory data predicted from the commands to drive the models; once feedback data is obtained again, it switches back to the three-dimensional virtual scene driven by the true data.
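The automatic switching described above can be sketched as a per-tick data-source selector: live feedback takes priority, and during a dropout the command-prediction data is substituted so the models keep moving. This is an illustrative structure under assumed names, not the patent's implementation.

```python
class ModelDriver:
    """Per-tick choice of the data that drives the scene's models.

    Real feedback from the work site takes priority; during a temporary
    feedback dropout the command-prediction data is used, keeping the
    models' motion continuous (hypothetical structure for illustration).
    """

    def __init__(self):
        self.source = "predicted"   # no feedback has arrived yet

    def select(self, feedback, predicted):
        """Return the data for this tick: feedback if present, else prediction."""
        if feedback is not None:
            self.source = "live"    # true data: scene shows the real state
            return feedback
        self.source = "predicted"   # gap: fall back to the command prediction
        return predicted
```

Calling `select` once per display tick reproduces the behavior in the claim: the scene switches to prediction when real-time data is missing and back to true data as soon as feedback resumes.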
Claims (1)
1. A method for reconstructing the teleoperation working scene of a humanoid robot based on multi-information fusion, which enables the operator to monitor the actual operating situation of the real robot, characterized in that: the humanoid robot and an environment model of its work site are displayed in a real-time video image; the robot's own sensors measure each of the robot's joint angles in real time, and the environment-detection sensors at the robot's work site obtain the position data of the robot and its work target; these data drive the models to display a real-time video image that truly expresses the robot's running state and the positional relations between the robot and its environment at the work site; the operator can change the viewing angle arbitrarily to observe the details of the models; the scene can also receive the commands sent by the operator, interpret them, and generate prediction data; when real-time data is temporarily missing, the prediction data drives the motion of the models in the scene, and once feedback data is obtained again, the scene switches back to the three-dimensional virtual scene driven by the true data.
Priority Applications (1)
Application Number | Priority Date | Filing Date | Title |
---|---|---|---|
CNB2005100599150A CN100389013C (en) | 2005-04-01 | 2005-04-01 | Reconstruction of human emulated robot working scene based on multiple information integration |
Applications Claiming Priority (1)
Application Number | Priority Date | Filing Date | Title |
---|---|---|---|
CNB2005100599150A CN100389013C (en) | 2005-04-01 | 2005-04-01 | Reconstruction of human emulated robot working scene based on multiple information integration |
Publications (2)
Publication Number | Publication Date |
---|---|
CN1840298A CN1840298A (en) | 2006-10-04 |
CN100389013C true CN100389013C (en) | 2008-05-21 |
Family
ID=37029581
Family Applications (1)
Application Number | Title | Priority Date | Filing Date |
---|---|---|---|
CNB2005100599150A Expired - Fee Related CN100389013C (en) | 2005-04-01 | 2005-04-01 | Reconstruction of human emulated robot working scene based on multiple information integration |
Country Status (1)
Country | Link |
---|---|
CN (1) | CN100389013C (en) |
Families Citing this family (6)
Publication number | Priority date | Publication date | Assignee | Title |
---|---|---|---|---|
CN101434066B (en) * | 2008-10-20 | 2012-11-21 | 北京理工大学 | Method and platform for predicating teleoperation of robot |
CN101844353B (en) * | 2010-04-14 | 2011-08-10 | 华中科技大学 | Teleoperation task planning and simulation method for mechanical arm/dexterous hand system |
US9879994B2 (en) * | 2011-06-15 | 2018-01-30 | Trimble Inc. | Method of placing a total station in a building |
JP5892361B2 (en) * | 2011-08-02 | 2016-03-23 | ソニー株式会社 | Control device, control method, program, and robot control system |
CN107004298B (en) * | 2016-04-25 | 2020-11-10 | 深圳前海达闼云端智能科技有限公司 | Method and device for establishing three-dimensional model of robot and electronic equipment |
CN112388678B (en) * | 2020-11-04 | 2023-04-18 | 公安部第三研究所 | Behavior detection robot based on low-power-consumption pattern recognition technology |
Citations (8)
Publication number | Priority date | Publication date | Assignee | Title |
---|---|---|---|---|
JPH09300272A (en) * | 1996-05-14 | 1997-11-25 | Nippon Telegr & Teleph Corp <Ntt> | Robot control method and device thereof |
US5983166A (en) * | 1995-09-28 | 1999-11-09 | Komatsu Ltd. | Structure measurement system |
JP2000117672A (en) * | 1998-10-16 | 2000-04-25 | Xerox Corp | Dynamic reconstitution method and device for switch connection between module |
CN1309598A (en) * | 1998-07-15 | 2001-08-22 | Ce核电力有限公司 | Visual tube position verification system |
CN1343551A (en) * | 2000-09-21 | 2002-04-10 | 上海大学 | Hierarchical modular model for robot's visual sense |
CN1417006A (en) * | 2001-11-09 | 2003-05-14 | 中国科学院自动化研究所 | Vision controlling platform for opened industrial robot |
CN1472047A (en) * | 2003-06-26 | 2004-02-04 | 上海交通大学 | Open-type network robot universal control systems |
CN1573628A (en) * | 2003-05-29 | 2005-02-02 | 发那科株式会社 | Robot system |
- 2005-04-01: CN application CNB2005100599150A filed; granted as patent CN100389013C; status: not active (Expired - Fee Related)
Patent Citations (9)
Publication number | Priority date | Publication date | Assignee | Title |
---|---|---|---|---|
US5983166A (en) * | 1995-09-28 | 1999-11-09 | Komatsu Ltd. | Structure measurement system |
JPH09300272A (en) * | 1996-05-14 | 1997-11-25 | Nippon Telegr & Teleph Corp <Ntt> | Robot control method and device thereof |
CN1309598A (en) * | 1998-07-15 | 2001-08-22 | Ce核电力有限公司 | Visual tube position verification system |
JP2000117672A (en) * | 1998-10-16 | 2000-04-25 | Xerox Corp | Dynamic reconstitution method and device for switch connection between module |
US6233502B1 (en) * | 1998-10-16 | 2001-05-15 | Xerox Corporation | Fault tolerant connection system for transiently connectable modular elements |
CN1343551A (en) * | 2000-09-21 | 2002-04-10 | 上海大学 | Hierarchical modular model for robot's visual sense |
CN1417006A (en) * | 2001-11-09 | 2003-05-14 | 中国科学院自动化研究所 | Vision controlling platform for opened industrial robot |
CN1573628A (en) * | 2003-05-29 | 2005-02-02 | 发那科株式会社 | Robot system |
CN1472047A (en) * | 2003-06-26 | 2004-02-04 | 上海交通大学 | Open-type network robot universal control systems |
Non-Patent Citations (2)
Title |
---|
Research on a robot surgical simulation and training system based on virtual reality technology. Lü Hongbo, Wang Tianmiao, Liu Da, Hu Lei, Tang Zesheng, Shen Hao, Tian Zengmin. High Technology Letters, No. 11, 2001.
Also Published As
Publication number | Publication date |
---|---|
CN1840298A (en) | 2006-10-04 |
Similar Documents
Publication | Publication Date | Title |
---|---|---|
CN101493855B (en) | Real-time simulation system for under-driven double-feet walking robot | |
CN107671857B (en) | Three-dimensional simulation platform for operation demonstration and algorithm verification of service robot | |
CN102120325B (en) | Novel remote operation far-end robot control platform and method | |
CN100389013C (en) | Reconstruction of human emulated robot working scene based on multiple information integration | |
CN109434870A (en) | A kind of virtual reality operation system for robot livewire work | |
CN106527177A (en) | Multi-functional and one-stop type remote control design, the simulation system and method thereof | |
Corke et al. | ACME, a telerobotic active measurement facility | |
CN101587329A (en) | Robot predicting method and system | |
Morosi et al. | Coordinated control paradigm for hydraulic excavator with haptic device | |
CN108908298B (en) | Master-slave type spraying robot teaching system fusing virtual reality technology | |
CN110977981A (en) | Robot virtual reality synchronization system and synchronization method | |
Baier et al. | Distributed PC-based haptic, visual and acoustic telepresence system-experiments in virtual and remote environments | |
Feng et al. | Flexible virtual fixtures for human-excavator cooperative system | |
CN112894820A (en) | Flexible mechanical arm remote operation man-machine interaction device and system | |
CN109213306A (en) | A kind of robot remote control platform and its design method | |
CN206877277U (en) | A kind of virtual man-machine teaching system based on mixed reality technology | |
Kobayashi et al. | Overlay what humanoid robot perceives and thinks to the real-world by mixed reality system | |
CN104669233A (en) | Little finger force feedback device | |
Ryu et al. | Development of wearable haptic system for tangible studio to experience a virtual heritage alive | |
Wang | Improving human-machine interfaces for construction equipment operations with mixed and augmented reality | |
KR20060063187A (en) | Virtual manufacturing system using hand interface and method thereof | |
CN202079595U (en) | Novel control platform for tele-operation of remote robot | |
Foroughi et al. | Controlling servo motor angle by exploiting Kinect SDK | |
CN104669232A (en) | Middle finger force feedback device | |
Freund et al. | Projective Virtual Reality in space applications: A telerobotic ground station for a space mission |
Legal Events
Date | Code | Title | Description |
---|---|---|---|
C06 | Publication | ||
PB01 | Publication | ||
C10 | Entry into substantive examination | ||
SE01 | Entry into force of request for substantive examination | ||
C14 | Grant of patent or utility model | ||
GR01 | Patent grant | ||
C17 | Cessation of patent right | ||
CF01 | Termination of patent right due to non-payment of annual fee |
Granted publication date: 20080521 Termination date: 20110401 |