CN1840298A - Reconstruction of human emulated robot working scene based on multiple information integration - Google Patents
- Publication number
- CN1840298A (application CN200510059915A)
- Authority
- CN
- China
- Prior art keywords
- robot
- scene
- model
- data
- operator
- Prior art date
- Legal status (The legal status is an assumption and is not a legal conclusion. Google has not performed a legal analysis and makes no representation as to the accuracy of the status listed.)
- Granted
Landscapes
- Manipulator (AREA)
Abstract
The invention discloses a humanoid-robot working scene that combines real-time video imagery with operator control commands and feedback information. The scene displays a model of the humanoid robot at the work site together with a model of its operating environment. It receives position information from the robot's work site and sensing information produced during robot operation, and uses this information to drive the humanoid-robot model and environment model so that a real-time animated image is displayed. The scene also performs prediction on the commands the operator sends, generating the robot's operating data and the position data of every model under ideal conditions. When real-time feedback information is missing or cannot be obtained, the data generated by prediction simulation drives the models.
Description
Technical field:
The invention belongs to the field of robotics and is mainly used for three-dimensional reconstruction of a humanoid robot's working scene. It is applicable to teleoperation control of robots: it can display in real time a three-dimensional view of the humanoid robot and the objects in its working scene, providing near-realistic visual feedback for teleoperation of the humanoid robot.
Background technology:
A humanoid robot is a robot that has the appearance features of a human and can imitate basic human motions. Teleoperation is an important technology for robot applications. Through a teleoperation platform, operating personnel can monitor and control a remote robot to complete various tasks, so that a humanoid robot can replace humans in environments that people cannot reach, or that endanger human health or life.
Image display of the operating environment is a key technology of teleoperation control. At present, two approaches are mainly used: on-site two-dimensional video imagery, and a three-dimensional virtual scene driven by the operator's commands.
In the on-site two-dimensional video approach, several cameras are installed at the robot's work site to film the robot and its surroundings. The video is transmitted to the operator's end over a network and displayed on a computer. Such imagery is the real scene of the robot's operation and truly reflects its state, but it has the following shortcomings: 1) it provides no three-dimensional information, so operations requiring actual three-dimensional positioning are difficult to carry out; 2) the viewing angle is limited, because the cameras installed at the site are fixed in place and cannot provide an all-around view; 3) latency is long, because video image data volume is large and, with limited network bandwidth, transmission incurs considerable delay.
The basic principle of the command-driven three-dimensional virtual scene is as follows: for each command the operator sends to the robot, the robot is assumed to execute it correctly and produce the corresponding motion. To depict this motion, 3-D modeling software on the operator's computer builds a robot motion model and a three-dimensional virtual image, producing a predictive simulation based on the operator's instructions. Such imagery is a prediction of the robot's command-execution process and gives the operator a vivid picture of the working scene. Moreover, because it is built with 3-D modeling software, the operator can change the viewing angle easily. Its shortcoming is that what the system offers the operator is a prediction and simulation of the robot and its work site; it cannot truly reflect the robot's actual operating state.
Summary of the invention:
The present invention combines the operator's control commands, the humanoid robot's own sensor measurements, and environment measurements to construct a three-dimensional stereo scene of the operating environment.
The technical solution used in the present invention is:
The humanoid-robot working scene constructed by the present invention is a real-time video image that combines operator commands with feedback information. The scene displays in real time a model of the humanoid robot and a model of the environment at the robot's work site.
The scene receives position information from the humanoid robot's work site and sensing information produced during robot operation, and uses this information to drive the motion of the humanoid-robot model and its environment model, displaying a real-time animated image. At the same time, the scene receives the commands the operator sends and predicts their outcome, generating the robot's operating data and the position data of every model under ideal conditions. When real-time feedback is missing or cannot be obtained, the data generated by prediction drives the models.
The operator can change the viewpoint arbitrarily, achieving observation from any angle.
The main technical scheme is:
Using 3-D modeling software, three-dimensional models are made for each kind of object at the humanoid robot's work site; the models have the same shape features as the actual objects. A data processing module matched to the models is also built. The module can receive several kinds of information and, after matching processing, drives the motion of the models in the working scene.
The working scene mainly contains two kinds of model: the environment model and the humanoid-robot model.
(1) The environment model of the work site, whose structure is known, is made with 3-D modeling software. It has the same shape features and positional relations as the actual objects, and supports viewing from any angle.
(2) The humanoid-robot model is made with 3-D modeling software. It has the same outline geometry and degree-of-freedom layout as the robot, and satisfies the kinematic constraints of the humanoid robot's multi-link mechanism. The model receives position data to locate itself in the scene, and receives joint-angle data for each degree of freedom to drive the angles between its links, expressing the humanoid robot's motion.
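The joint-angle-driven link motion described in (2) can be sketched as accumulating relative joint angles along a linkage chain. This is a minimal planar (2-D) illustration under assumed names; the patent itself does not specify an implementation:

```python
import math

def link_endpoints(joint_angles, link_lengths, base=(0.0, 0.0)):
    """Accumulate relative joint angles along the chain and return the
    world-frame endpoint of each link (a planar simplification)."""
    x, y = base
    theta = 0.0
    points = []
    for angle, length in zip(joint_angles, link_lengths):
        theta += angle                      # relative joint angle -> absolute link angle
        x += length * math.cos(theta)
        y += length * math.sin(theta)
        points.append((x, y))
    return points

# Two unit links, both joints at 90 degrees: the chain goes up, then turns left.
pts = link_endpoints([math.pi / 2, math.pi / 2], [1.0, 1.0])
```

In the scene described above, the angles fed to such a chain would come either from the robot's joint sensors or from command prediction; the model's rendered pose follows the computed link positions.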
Position sensors are installed at the work site to measure in real time the positions of the robot and its operation targets, outputting three-dimensional coordinate data. Angle sensors are mounted in each joint of the robot's body to measure in real time the relative angle between the links connected by each joint.
The data processing module mainly implements the following functions:
(1) It receives the robot's position data and locates the robot model within the environment model of the work site.
(2) It receives the relative-angle data between the links connected by each joint of the robot, matches the data to the robot model, and drives the motion between the model's links to express the robot's motion.
(3) It receives the commands the operator sends and interprets them. From a command it generates the ideal motion trajectory of the robot's operation, predicting the robot's operating data. The trajectories generated include the variation of the robot's joint-angle data during command execution and the variation of the robot's position data. In the ideal case these data are identical to the feedback data from the work site.
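The ideal-trajectory prediction in item (3) can be sketched as interpolating joint angles from the current pose to the commanded goal. The linear profile and all names here are assumptions for illustration; the patent does not specify the interpolation scheme:

```python
def predict_trajectory(start_angles, goal_angles, steps):
    """Return a list of joint-angle vectors from start to goal, inclusive,
    as a stand-in for the 'ideal' operating data generated from a command."""
    traj = []
    for i in range(steps + 1):
        t = i / steps  # normalized progress through command execution
        traj.append([a + t * (b - a) for a, b in zip(start_angles, goal_angles)])
    return traj

# Predict a 2-joint motion from rest to the commanded pose in 4 steps.
traj = predict_trajectory([0.0, 0.0], [1.0, -0.5], 4)
```

Under ideal execution, the sampled feedback from the joint sensors would match such a predicted sequence, which is what lets the scene substitute prediction for feedback.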
While the scene is running, under normal conditions the data processing module drives the models with data received in real time from the robot's work site. When that data is temporarily missing, the prediction data generated by simulating the operator's commands drives the motion of each model in the scene, keeping the models' motion continuous.
The beneficial effects of the invention are:
1. Multiple kinds of information are fused to display an animated image of the robot's work site in real time. The models in the scene are driven by live feedback data; when feedback is temporarily missing, the display is driven by command-prediction data, keeping the display continuous.
2. The image can be viewed from any angle. The operator can change the viewing angle arbitrarily to observe details of the scene.
Description of drawings:
Fig. 1 is a working diagram of the humanoid-robot working scene based on multi-information fusion.
The specific embodiment:
The complete working process of the humanoid-robot working scene is as follows:
Step 1: the robot starts to run and teleoperation control begins. The computer program starts and displays the scene model that has been built. Initialization data determine the initial positions of the robot model and its target models, and the initial angles between the robot model's links. This step produces the scene's initial view.
Step 2: the scene's data processing module receives in real time the commands sent by the remote operator, interprets them, and generates predicted trajectory data. The prediction data drives each model in the virtual scene to form the three-dimensional virtual scene. The scene produced by this step shows the ideal motion of the robot executing the operator's commands.
Step 3: the robot's own sensors measure each joint angle in real time and transmit the data to the scene's data processing module through the teleoperation platform. The module stops using the predicted joint-angle data and switches to the true joint-angle data to drive the model, expressing the robot's own motion. At this point the robot model in the virtual scene truly expresses the robot's running state at the work site.
Step 4: the environment sensors at the work site start to run and obtain the position data of the robot and its operation targets. These data are transmitted to the scene's data processing module through the teleoperation platform, which uses them to position the robot model and the environment models. At this point the three-dimensional scene truly shows the positional relation between the robot and its environment at the work site.
Using the feedback from the robot's own sensors and from the environment position sensors together, the three-dimensional virtual scene of the robot model and its operating environment is driven so that it truly expresses the robot's running state and its position in the environment. When feedback is temporarily missing, the scene's data processing module automatically switches to the trajectory data predicted from commands to drive the models; once feedback is obtained again, it switches back to the virtual scene driven by true data.
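The switching rule above can be sketched as a per-field fallback: prefer live feedback, substitute the command-prediction value when a feed is temporarily missing. Data shapes and names are assumptions for illustration, not the patent's implementation:

```python
def select_drive_data(feedback, predicted):
    """Build one display frame: use live sensor feedback where present,
    and fall back to predicted data for any field that is missing."""
    frame = {}
    for key, predicted_value in predicted.items():
        live = feedback.get(key)
        frame[key] = live if live is not None else predicted_value
    return frame

# Joint feedback is live, but the environment position feed has dropped out.
feedback = {"joint_angles": [0.1, 0.2], "robot_pos": None}
predicted = {"joint_angles": [0.0, 0.0], "robot_pos": (1.0, 2.0, 0.0)}
frame = select_drive_data(feedback, predicted)
```

Because the substitution is per field, the scene stays continuous even when only some of the sensor feeds drop out, which matches the continuity property claimed above.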
Claims (3)
1. A three-dimensional working scene usable for teleoperation of a humanoid robot, which provides the operator with a video image for monitoring the robot's running state, characterized in that: it uses a real-time video image; the image shows the humanoid robot and the environment model of its work site; and the scene can receive several kinds of data from the robot's work site together with the operator's commands, using this information to drive the models and display a real-time video image.
2. The working scene according to claim 1, characterized in that: each model is built with 3-D modeling software and has the same shape features as the actual object; and the operator can change the viewing angle arbitrarily to observe details of the models.
3. The working scene according to claim 1, characterized in that: the scene receives site position data and robot operating data to drive the motion of the models in the scene; the scene receives the commands sent by the operator, interprets them, and generates prediction data; and when the site data is temporarily missing, the prediction data drives the motion of the models in the scene.
Priority Applications (1)
Application Number | Priority Date | Filing Date | Title |
---|---|---|---|
CNB2005100599150A CN100389013C (en) | 2005-04-01 | 2005-04-01 | Reconstruction of human emulated robot working scene based on multiple information integration |
Publications (2)
Publication Number | Publication Date |
---|---|
CN1840298A true CN1840298A (en) | 2006-10-04 |
CN100389013C CN100389013C (en) | 2008-05-21 |
Family
ID=37029581
Family Applications (1)
Application Number | Title | Priority Date | Filing Date |
---|---|---|---|
CNB2005100599150A Expired - Fee Related CN100389013C (en) | 2005-04-01 | 2005-04-01 | Reconstruction of human emulated robot working scene based on multiple information integration |
Country Status (1)
Country | Link |
---|---|
CN (1) | CN100389013C (en) |
Family Cites Families (8)
Publication number | Priority date | Publication date | Assignee | Title |
---|---|---|---|---|
JP3079186B2 (en) * | 1995-09-28 | 2000-08-21 | 株式会社小松製作所 | Structure measurement system |
JPH09300272A (en) * | 1996-05-14 | 1997-11-25 | Nippon Telegr & Teleph Corp <Ntt> | Robot control method and device thereof |
US6282461B1 (en) * | 1998-07-15 | 2001-08-28 | Westinghouse Electric Company Llc | Independent tube position verification system |
US6233502B1 (en) * | 1998-10-16 | 2001-05-15 | Xerox Corporation | Fault tolerant connection system for transiently connectable modular elements |
CN1343551A (en) * | 2000-09-21 | 2002-04-10 | 上海大学 | Hierarchical modular model for robot's visual sense |
CN1289270C (en) * | 2001-11-09 | 2006-12-13 | 中国科学院自动化研究所 | Vision controlling platform for opened industrial robot |
JP4167940B2 (en) * | 2003-05-29 | 2008-10-22 | ファナック株式会社 | Robot system |
CN1256224C (en) * | 2003-06-26 | 2006-05-17 | 上海交通大学 | Open-type network robot universal control systems |
- 2005-04-01: CN application CNB2005100599150A, patent CN100389013C (not active; Expired - Fee Related)
Cited By (9)
Publication number | Priority date | Publication date | Assignee | Title |
---|---|---|---|---|
CN101434066B (en) * | 2008-10-20 | 2012-11-21 | 北京理工大学 | Method and platform for predicating teleoperation of robot |
CN101844353A (en) * | 2010-04-14 | 2010-09-29 | 华中科技大学 | Teleoperation task planning and simulation method for mechanical arm/dexterous hand system |
CN101844353B (en) * | 2010-04-14 | 2011-08-10 | 华中科技大学 | Teleoperation task planning and simulation method for mechanical arm/dexterous hand system |
CN102867074A (en) * | 2011-06-15 | 2013-01-09 | 天宝导航有限公司 | Method of placing a total station in a building |
CN102867074B (en) * | 2011-06-15 | 2015-12-16 | 天宝导航有限公司 | The method of placing total station between floors |
CN103717358A (en) * | 2011-08-02 | 2014-04-09 | 索尼公司 | Control system, display control method, and non-transitory computer readable storage medium |
CN103717358B (en) * | 2011-08-02 | 2019-09-06 | 索尼公司 | Control system, display control method and non-transient computer readable storage medium |
WO2017185208A1 (en) * | 2016-04-25 | 2017-11-02 | 深圳前海达闼云端智能科技有限公司 | Method and device for establishing three-dimensional model of robot, and electronic device |
CN112388678A (en) * | 2020-11-04 | 2021-02-23 | 公安部第三研究所 | Behavior detection robot based on low-power-consumption pattern recognition technology |
Legal Events
Code | Title | Description
---|---|---
C06 | Publication |
PB01 | Publication |
C10 | Entry into substantive examination |
SE01 | Entry into force of request for substantive examination |
C14 | Grant of patent or utility model |
GR01 | Patent grant |
C17 | Cessation of patent right |
CF01 | Termination of patent right due to non-payment of annual fee | Granted publication date: 2008-05-21; termination date: 2011-04-01