WO2021114654A1 - Three-body intelligent system and detection robot - Google Patents

Three-body intelligent system and detection robot

Info

Publication number
WO2021114654A1
WO2021114654A1 (PCT/CN2020/101380; CN2020101380W)
Authority
WO
WIPO (PCT)
Prior art keywords
robot
virtual
detection
environment
module
Prior art date
Application number
PCT/CN2020/101380
Other languages
English (en)
French (fr)
Inventor
丁亮
高海波
袁野
刘岩
邓宗全
李树
刘振
Original Assignee
哈尔滨工业大学 (Harbin Institute of Technology)
Priority date (The priority date is an assumption and is not a legal conclusion. Google has not performed a legal analysis and makes no representation as to the accuracy of the date listed.)
Filing date
Publication date
Application filed by 哈尔滨工业大学 (Harbin Institute of Technology)
Priority to US17/026,342 (published as US12079005B2)
Publication of WO2021114654A1

Classifications

    • G05D1/00 Control of position, course, altitude or attitude of land, water, air or space vehicles, e.g. using automatic pilots (G PHYSICS; G05 CONTROLLING, REGULATING; G05D SYSTEMS FOR CONTROLLING OR REGULATING NON-ELECTRIC VARIABLES), with the following subclasses:
    • G05D1/0236 Control of position or course in two dimensions specially adapted to land vehicles, using optical position detecting means with optical markers or beacons in combination with a laser
    • G05D1/0044 Control associated with a remote control arrangement, by providing the operator with a computer-generated representation of the environment of the vehicle, e.g. virtual reality, maps
    • G05D1/0251 Control in two dimensions specially adapted to land vehicles, using a video camera with image processing means extracting 3D information from a plurality of images taken from different locations, e.g. stereo vision
    • B64G1/16 Cosmonautic vehicles; Extraterrestrial cars (B PERFORMING OPERATIONS, TRANSPORTING; B64 AIRCRAFT, AVIATION, COSMONAUTICS; B64G COSMONAUTICS, VEHICLES OR EQUIPMENT THEREFOR)
    • G05D1/0206 Control of position or course in two dimensions specially adapted to water vehicles
    • G05D1/0214 Control in two dimensions for land vehicles with means for defining a desired trajectory, in accordance with safety or protection criteria, e.g. avoiding hazardous areas
    • G05D1/0221 Control in two dimensions for land vehicles with means for defining a desired trajectory, involving a learning process
    • G05D1/0223 Control in two dimensions for land vehicles with means for defining a desired trajectory, involving speed control of the vehicle
    • G05D1/024 Control in two dimensions for land vehicles, using optical position detecting means with obstacle or wall sensors in combination with a laser
    • G05D1/0257 Control in two dimensions for land vehicles, using a radar
    • G05D1/0276 Control in two dimensions for land vehicles, using signals provided by a source external to the vehicle
    • G05D1/2435 Control system inputs; arrangements for determining position or orientation from signals occurring naturally in the environment, e.g. ambient optical, acoustic, gravitational or magnetic signals; extracting 3D information
    • G05D1/247 Arrangements for determining position or orientation using signals provided by artificial sources external to the vehicle, e.g. navigation beacons
    • G05D1/43 Control within particular dimensions; control of position or course in two dimensions
    • G06F3/011 Input arrangements for interaction between user and computer; arrangements for interaction with the human body, e.g. for user immersion in virtual reality (G06 COMPUTING; G06F ELECTRIC DIGITAL DATA PROCESSING)
    • G06N3/08 Computing arrangements based on biological models; neural networks; learning methods (G06N COMPUTING ARRANGEMENTS BASED ON SPECIFIC COMPUTATIONAL MODELS)

Definitions

  • The present invention relates to the technical field of robots, and in particular to a three-body intelligent system and a detection robot.
  • Detection robots are widely used in planetary exploration, deep-sea exploration, cave exploration, and other scientific exploration of environments unknown to humans, to meet the needs of scientific experiments and development.
  • The invention aims to solve, to a certain extent, the problem that during control of an existing detection robot the amount of information transmitted from all parties to the operator and related cooperating personnel is too large, so that they cannot effectively control the detection robot.
  • To that end, the present invention provides a three-body intelligent system for a detection robot, including:
  • a digital twin module, used to create a virtual detection environment and a virtual robot from the environment data of the detected environment acquired in real time by the detection robot and the robot data of the detection robot;
  • a virtual reality module, used to generate the process and result of the virtual robot executing control instructions in the virtual detection environment, according to the virtual detection environment, the virtual robot, and the control personnel's control instructions for the detection robot; and
  • a human-machine fusion module, used to transmit the control instructions and show the control personnel the process and result of the virtual robot executing them in the virtual detection environment, and, after obtaining the control personnel's feedback confirming the control instructions, to cause the detection robot to execute them.
  • The digital twin module can enable the detected environment and the virtual detection environment to map to each other; and/or the digital twin module can enable the detection robot and the virtual robot to map to each other.
  • The control instructions include: determining a scientific detection target of the detection robot, or determining a driving path of the detection robot.
  • The three-body intelligent system of the detection robot and the cloud platform map to each other.
  • The multiple cloud platforms include:
  • a digital cloud platform module, which maps to the digital twin module and the virtual reality module; and/or
  • a physical cloud platform module, which maps to the detection robot; and/or
  • a biological cloud platform module, which maps to the human-machine fusion module.
  • The digital twin module is used to create the virtual detection environment from the environment data of the detected environment acquired in real time by the detection robot together with preset data.
  • The human-machine fusion module includes VR glasses and/or a motion-sensing seat to show the control personnel the process of the virtual robot executing the control instructions in the virtual detection environment.
  • The detected environment is synchronized with the virtual detection environment in real time, and the detection robot is synchronized with the virtual robot in real time.
  • The present invention also provides a detection robot, which includes the three-body intelligent system of the detection robot.
  • The detection robot is a planet detection robot or a deep-sea detection robot.
  • The environment data of the detected environment acquired in real time by the detection robot, together with the robot data of the detection robot, are turned into a virtual detection environment and a virtual robot by the virtual intelligence master module, and the human-machine fusion module then shows the controller the situation of the virtual robot in the virtual detection environment.
  • The controller and related cooperating personnel can thereby feel the detected environment and the situation of the detection robot in that environment, so that the large amount of data and information is fully integrated and comprehensively yet vividly presented to the controller and related cooperating personnel, keeping the robot operators and cooperating personnel fully informed.
  • Control instructions are transmitted through the human-machine fusion module, and the process and result of the virtual robot executing them in the virtual detection environment are shown to the control personnel; the control personnel therefore accurately learn the execution process and result of a control instruction, dangers are avoided, and the control efficiency and control accuracy of the detection robot are improved.
  • Fig. 1 is a schematic flowchart of part A of the three-body intelligent system of a detection robot according to a specific embodiment of the present invention;
  • Fig. 2 is a schematic flowchart of part B of the three-body intelligent system of the detection robot according to the specific embodiment of the present invention;
  • Fig. 3 is a schematic flowchart of part C of the three-body intelligent system of the detection robot according to the specific embodiment of the present invention;
  • Fig. 4 is a schematic diagram of the mutual mapping between the detection robot and the cloud platforms according to the specific embodiment of the present invention.
  • The terms "first", "second", etc. are used for descriptive purposes only and cannot be understood as indicating or implying relative importance or implicitly indicating the number of the technical features referred to; thus, features qualified with "first", "second", etc. may explicitly or implicitly include at least one such feature.
  • The functional units described as systems, modules, etc. can be mechanical or electronic functional units performing physical functions, or computer program functional units running on computing devices.
  • Multiple functional units can be integrated into one physical device, each unit can be located in a separate physical device, or two or more units can be integrated into one physical device. If a functional unit is implemented as a computer program functional unit and sold or used as an independent product, it can also be stored in one or more computer-readable storage media.
  • The computer-readable medium may be a computer-readable signal medium or a computer-readable storage medium.
  • A non-transitory computer-readable medium may include any computer-readable medium except a transitorily propagating signal itself.
  • The computer-readable storage medium may be, for example, but not limited to, an electrical, magnetic, optical, electromagnetic, infrared, or semiconductor system, apparatus, or device, or any combination of the above. More specific examples (a non-exhaustive list) of computer-readable storage media include: an electrical connection with one or more wires, a portable computer disk, a hard disk, random access memory (RAM), read-only memory (ROM), erasable programmable read-only memory (EPROM or flash memory), optical fiber, portable compact disc read-only memory (CD-ROM), an optical storage device, a magnetic storage device, or any suitable combination of the above.
  • This embodiment provides a three-body intelligent system for a detection robot, including a digital twin module, a virtual reality module, and a human-machine fusion module.
  • The digital twin module is used to create a virtual detection environment and a virtual robot from the environment data of the detected environment acquired in real time by the detection robot and the robot data of the detection robot;
  • the virtual reality module is used to generate, from the virtual detection environment, the virtual robot, and the control personnel's control instructions for the detection robot, the process and result of the virtual robot executing the control instructions in the virtual detection environment;
  • and the human-machine fusion module is used to transmit the control instructions, show the control personnel the process and result of the virtual robot executing the control instructions in the virtual detection environment, and, after obtaining the control personnel's feedback confirming the control instructions, cause the detection robot to execute them.
  • It should be noted that the virtual detection environment can be created comprehensively from both the environment data of the detected environment acquired in real time by the detection robot and environment data obtained from prior observation and detection of the detected environment.
  • The three-body intelligent system of the detection robot also includes a virtual intelligence master module, which includes the digital twin module and the virtual reality module.
  • The virtual intelligence master module referred to in the present invention is the virtual digital model created during the design, development, research, and application of the robot system; it can imitate real objects and environments in reality, or conceived objects and environments in the imagination,
  • and its shape, material, texture, etc. bear a strong similarity to the imitated object, so that it can reflect the relevant characteristics of the imitated object.
  • Virtual digital aircraft, virtual digital planetary rovers, virtual digital scenes, and the like all belong to the category of digital bodies in the present invention.
  • The controller mentioned in this embodiment does not refer only to the person who controls the detection robot; it includes technical personnel who have a certain education, knowledge, and operating ability and can assign tasks to the robot, conduct human-computer interaction, and develop and operate the robot,
  • as well as non-technical personnel who can be served or assisted by technologies such as robots or virtual reality. Medical personnel, scientific researchers, technical workers, managers, the elderly, the sick, and the disabled all fall within the category of "person" in the present invention. That is to say, the control personnel include at least scientists and engineers; when the scientists and engineers are later combined with the biological cloud platform module, the intelligence of scientists and engineers is formed.
  • The environment data of the detected environment may include various environment data such as weather data and topography/landform data of the detected environment.
  • The detection robot can be an intelligent detection robot, capable of autonomous scientific detection and autonomous design and selection of driving routes.
  • Control instructions can refer to humans assisting the robot in making relevant decisions (for example, judging scientific targets, or, when the driving path is so complicated that the robot cannot solve it by itself, ground staff specifying the path for it), thereby reflecting the fusion of the detection robot with the controller and the assisting cooperating personnel.
  • The three-body intelligent system of the detection robot of this embodiment uses the digital twin module to turn the environment data of the detected environment acquired in real time by the detection robot and the robot data of the detection robot into a virtual detection environment and a virtual robot, and uses the virtual reality module to generate the process and result of the virtual robot executing the control instructions in the virtual detection environment.
  • The human-machine fusion module then shows the controller the virtual robot in the virtual detection environment, for example using VR technology and a motion-sensing seat, so that the control personnel and relevant cooperating personnel feel the detected environment and the situation of the detection robot in that environment; the large amount of data and information is thereby fully integrated and comprehensively yet vividly presented to the control personnel and relevant cooperating personnel, keeping them fully informed.
  • On this basis, control instructions are transmitted to the virtual intelligence master module through the human-machine fusion module, and the process and result of the virtual robot executing them in the virtual detection environment are shown to the control personnel, so that the control personnel accurately learn the execution process and result of the control instructions, dangers are avoided, and the control efficiency and accuracy of the detection robot are improved.
  • Preferably, the digital twin module can enable the detected environment and the virtual detection environment to map to each other; and/or the digital twin module can enable the detection robot and the virtual robot to map to each other.
  • The digital twin module of this embodiment is an organic combination of the detection robot and the virtual intelligence master module, a system combining the physical and the digital in which the two interact and coexist.
  • The digital twin module integrates multiple disciplines and multiple physical quantities, creates a virtual model of the physical entity in digital form, completes the mutual mapping between virtual space and real space, and can reflect the operating state of the detection robot's physical entity in the virtual detection environment.
  • The digital twin module can be used in the design, production, and testing phases of the detection robot. For example, in the testing phase, the robot data of the detection robot held in the virtual intelligence master module can be modified to probe the detection robot's reaction, and the overall performance of the detection robot can be improved through continuous iteration.
  • During actual use, remote monitoring and manipulation of the robot can be carried out through the digital twin module: the various state information obtained while the detection robot operates is mapped to the virtual robot in real time, and the effect of controlling the virtual robot is reflected on the real detection robot.
  • The digital twin module can realize mutual feedback of state between the real detection robot and the virtual robot.
  • The virtual robot automatically follows changes in the detection robot, and the detection robot can likewise adjust its motion state according to the instructions received by the virtual robot.
  • The digital twin module involves key technologies such as high-performance computing, advanced sensor acquisition, digital simulation, intelligent data analysis, and two-way information communication and control.
  • For the digital twin module, the planetary exploration robot transmits the detection information it acquires to the virtual intelligence master module, and the virtual intelligence master module responds accordingly based on that information.
  • When the virtual intelligence master module executes a control instruction, it also transmits the information to the real robot, which performs the related actions, thereby realizing real-time control of the detection robot.
  • The virtual reality module referred to in the present invention is an organic combination of humans and digital bodies, a system combining the digital and the biological in which the two interact and coexist.
  • The virtual reality module can also provide augmented reality and the like.
  • The control personnel can use the virtual reality module to perceive and interact with the virtual detection environment and the virtual robot in the virtual intelligence master module, which gives them the feeling and experience of interacting with the detection robot.
  • The virtual reality module involves key technologies such as dynamic environment modeling, real-time 3D graphics generation, stereoscopic display, and sensor technology.
  • The virtual reality module enables the controller and other cooperating personnel to interact with the virtual detection environment and the virtual robot; for example, using keyboard and mouse, VR glasses, a motion-sensing seat, etc., they can
  • control the virtual robot in the virtual digital environment (that is, the virtual detection environment) and receive the virtual robot's feedback.
  • Preferably, the three-body intelligent system of the detection robot and the cloud platforms map to each other.
  • The multiple cloud platforms include:
  • a digital cloud platform module, which maps to the virtual intelligence master module; and/or
  • a physical cloud platform module, which maps to the detection robot; and/or
  • a biological cloud platform module, which maps to the controller through the human-machine fusion module.
  • That is, the multiple cloud platforms include at least one of a digital cloud platform module, a physical cloud platform module, and a biological cloud platform module.
  • The detection robot and the virtual intelligence master module can offload intensive computations such as large-scale data processing, difficult motion planning, and multi-machine collaboration to the physical cloud platform module and the digital cloud platform module through network communication, which return the computation results or store the related data.
  • By making full use of the cloud platforms' powerful resource sharing and online-learning capabilities, the computation and storage loads of the detection robot and the virtual intelligence master module can be reduced, and their decision-making,
  • execution, computation, and storage capabilities can be expanded, freeing them from the constraints of their own bodies so that a series of complex problems can be solved more efficiently.
  • In addition, since the physical cloud platform module and the digital cloud platform module do not depend on the bodies of the detection robot and the virtual intelligence master module, they can carry out work such as online learning when no computing tasks are requested of them.
  • The working mode of the biological cloud platform module is slightly different from that of the digital and physical cloud platform modules.
  • The biological cloud platform module can store information about multiple controllers, and different controllers can exchange information directly with the biological cloud platform module and use its storage and computation functions.
  • The biological cloud platform module can also provide the corresponding controller's information to the detection robot or virtual intelligence master module when the controller interacts with them, so as to provide users with personalized services.
  • In summary, cloud platform mapping is added through the physical cloud platform module, the digital cloud platform module, and the biological cloud platform module; the three parts are interconnected and interoperable, which effectively improves the working efficiency of the entire system architecture and gives it more powerful functions.
  • Specifically, the biological cloud platform module can carry out shared thinking, historical experience, and collective wisdom, with a data volume no less than that of a database;
  • the physical cloud platform module can perform fault diagnosis, life evaluation, productivity-level assessment, and computational analysis, with a data volume larger than that of a database;
  • the digital cloud platform module can perform comparative analysis and prediction, and is forward-looking.
  • Preferably, the virtual intelligence master module creates the virtual detection environment from the environment data of the detected environment acquired in real time by the detection robot together with preset data.
  • Preferably, the human-machine fusion module includes VR glasses and/or a motion-sensing seat to show the control personnel the process of the virtual robot executing the control instructions in the virtual detection environment.
  • Preferably, the detected environment and the virtual detection environment are synchronized in real time, and the detection robot and the virtual robot are synchronized in real time.
  • This embodiment also provides a detection robot, which includes the aforementioned three-body intelligent system of the detection robot.
  • The detection robot is a planet detection robot or a deep-sea detection robot.
  • For a planet detection robot, the controllers can be engineers and scientists on the ground: they can set scientific detection targets for the robot, use the various data it collects to analyze the surface conditions of the planet, and give help when the robot is in trouble or unable to make a decision.
  • The planet detection robot can have functions such as autonomous navigation in non-rugged, complex environments, scientific instrument operation, and scientific target detection;
  • the virtual intelligence master module can be the virtual planet detection robot and its virtual environment in the virtual simulation software, and the two can interact virtually.
  • In the initial development stage of the planet detection robot, scientific researchers design the robot's body structure, dimensional configuration, electronic control system, and control algorithms through requirements analysis and scheme demonstration.
  • Establishing a high-fidelity virtual model allows algorithm feasibility to be verified in the virtual environment, reduces the risk brought by experimental uncertainty, and shortens the development cycle.
  • Based on the experimental and simulation results, the existing schemes are further optimized and improved.
  • Virtual reality equipment can be used to manipulate the virtual planet detection robot in the high-fidelity virtual environment, to practice specifying scientific targets and sending control instructions, and to train planet detection robot operators.
  • This stage makes full use of the human-machine fusion, virtual reality, and digital twin modules, which play an important role in key links such as robot body design, algorithm optimization, scheme verification, and simulation training.
  • After the planet detection robot reaches the planet's surface, its body information and environment information are transmitted to the ground via satellite communication; that is, the detection robot acquires in real time the environment data of the detected environment and the robot data of the detection robot, from which the virtual intelligence master module creates the virtual detection environment and the virtual robot.
  • The human-machine fusion module is used to show the controller the virtual robot in the virtual detection environment.
  • VR glasses, a motion-sensing seat, and similar equipment can let the controller feel the planet detection robot's motion state and surrounding environment information within the virtual detection environment, while the virtual robot is controlled through a mouse, keyboard, gamepad, and other devices.
  • The virtual robot is commanded to complete autonomous path planning and path tracking in the virtual detection environment, and its operating state there is evaluated; if the result is poor, the controller can intervene and give key waypoints or the complete path. This process can be repeated many times in order to find the best robot control strategy.
  • The confirmed best control instruction, that is, the instruction after confirmation by the controller, is uploaded via satellite communication to the control system of the planet detection robot body.
  • The detection robot then performs the related operations according to the instruction, and its state on the planet's surface is fed back to the virtual robot, updating the virtual robot's state; that is, the detected environment is synchronized with the virtual detection environment in real time and the detection robot with the virtual robot, so that the detection robot and the virtual robot have the same state.
  • When the planet detection robot has landed on the surface of the planet to be detected, it uses the sensing system composed of its various on-board sensors to perform multi-source information measurement (a camera measures image information; a lidar measures 3D point clouds to perceive distance and depth; motor encoders measure the wheels' angular velocities; an inertial measurement unit (IMU) measures the detection robot's attitude; six-axis force sensors measure the force state at each key position of the robot; and so on).
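  • For concreteness, one multi-source measurement bundle as enumerated above could be grouped as follows (an illustrative data layout of our own, not one defined by the patent):

```python
from dataclasses import dataclass
import numpy as np

@dataclass
class SensorFrame:
    """One multi-source measurement bundle (hypothetical field layout)."""
    image: np.ndarray        # camera image, H x W x 3
    point_cloud: np.ndarray  # lidar points, N x 3, for distance and depth
    wheel_omega: np.ndarray  # motor encoders: wheel angular velocities, rad/s
    attitude: np.ndarray     # IMU attitude, e.g. roll, pitch, yaw in rad
    wheel_wrench: np.ndarray # six-axis force/torque per wheel, wheels x 6
```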
  • While measuring, this information is transmitted to the ground-side virtual digital environment of the planet detection robot (that is, the virtual intelligence master module), where a high-fidelity virtual digital detection robot and a virtual digital planetary environment are established from the environment's images and point clouds and from the detection robot's speed, pose, and force state (that is, the virtual detection environment and the virtual robot are created from the environment data of the detected environment acquired in real time by the detection robot and the robot data of the detection robot).
  • The planet detection robot uses the relevant information measured by the sensing system to identify and estimate the planet's soil parameters, and then simultaneously localizes and builds a map containing the geometric and physical properties of the planet's surface (SLAM & characterization in Fig. 1); the information obtained here is also transmitted to the virtual digital body.
  • The current planet detection robot's intelligence is limited, and it cannot yet automatically plan related scientific tasks (to explain scientific tasks, for example: which rock has higher scientific value and needs its composition detected with a detection instrument? or where is the planetary soil more representative, so that samples should be taken back to Earth?).
  • Ground scientists, who have rich scientific research experience in the relevant fields, can judge scientific value based on experience, determine the scientific target to be explored, and decide which scientific detection instrument to use to detect the target. After the ground scientists confirm the target, the instruction is transmitted to the planet detection robot.
  • In Fig. 1, S is the wheel slip ratio of the intelligent detection robot;
  • Z is the wheel sinkage of the intelligent detection robot;
  • F is the wheel force information of the intelligent detection robot (the forces and moments in the x, y, and z directions, six quantities in total, called six-dimensional force information);
  • and w is the wheel angular velocity. From the wheel sinkage Z, the wheel slip ratio S, and the wheel force information F of the intelligent detection robot, the soil parameters can be calculated through the relevant terramechanics formulas.
  • The planet detection robot carries out the next step according to the instruction (navigating to the location of the target).
  • The navigation stage is divided into terrain traversability analysis, path planning, and path tracking.
  • Using the previously built map combined with the kinematic and dynamic information of its own structure, the planet detection robot judges which places on the map it can pass safely (excessive ground slope, too many ground obstacles, overly soft soil, and the like are all conditions the robot cannot pass), and uses these judgments to plan how to reach the designated target point (different planning criteria give different path results; shortest path and minimum energy are both possible criteria for planning the path). After the path is planned, path tracking is performed; a sketch of such traversability-gated planning is given below.
  • The planet detection robot follows the planned path to the target location to take samples or analyze the relevant scientific targets with scientific instruments. During navigation it may encounter very complicated ground conditions; in that case the planet detection robot cannot plan a path to the designated target location by itself and needs the assistance of ground staff, who perform the above traversability analysis, path planning, and path tracking in the virtual detection environment.
  • The instructions can be tested repeatedly and continuously verified; once feasible instructions are obtained, they are uploaded to the control system of the real detection robot, which is ordered to move according to them and reach the designated target point.

Landscapes

  • Engineering & Computer Science (AREA)
  • Physics & Mathematics (AREA)
  • General Physics & Mathematics (AREA)
  • Remote Sensing (AREA)
  • Radar, Positioning & Navigation (AREA)
  • Aviation & Aerospace Engineering (AREA)
  • Automation & Control Theory (AREA)
  • General Engineering & Computer Science (AREA)
  • Theoretical Computer Science (AREA)
  • Computer Vision & Pattern Recognition (AREA)
  • Electromagnetism (AREA)
  • Multimedia (AREA)
  • Human Computer Interaction (AREA)
  • Optics & Photonics (AREA)
  • Life Sciences & Earth Sciences (AREA)
  • Molecular Biology (AREA)
  • Computing Systems (AREA)
  • Mathematical Physics (AREA)
  • Software Systems (AREA)
  • General Health & Medical Sciences (AREA)
  • Evolutionary Computation (AREA)
  • Data Mining & Analysis (AREA)
  • Computational Linguistics (AREA)
  • Biophysics (AREA)
  • Biomedical Technology (AREA)
  • Artificial Intelligence (AREA)
  • Health & Medical Sciences (AREA)
  • Manipulator (AREA)

Abstract

A three-body intelligent system and detection robot, relating to the technical field of robots. The three-body intelligent system includes: a digital twin module for creating a virtual detection environment and a virtual robot from the environment data of the detected environment acquired in real time by the detection robot and the robot data of the detection robot; a virtual reality module for generating, from the virtual detection environment, the virtual robot, and the control personnel's control instructions for the detection robot, the process and result of the virtual robot executing the control instructions in the virtual detection environment; and a human-machine fusion module for showing the control personnel the process and result of the virtual robot executing the control instructions in the virtual detection environment, and causing the detection robot to execute the control instructions after obtaining the control personnel's feedback confirming them.

Description

A three-body intelligent system and detection robot
Technical Field
The present invention relates to the technical field of robots, and in particular to a three-body intelligent system and a detection robot.
Background Art
Detection robots are widely used in planetary exploration, deep-sea exploration, cave exploration, and other scientific exploration of environments unknown to humans, to meet the needs of scientific experiments and development.
At the same time, with continuous technological development, large numbers of advanced sensing devices are widely applied to detection robots to obtain more, and more comprehensive, on-site robot information. With the extensive application of computer information technology, the next control outcome of the detection robot can be predicted from the acquired on-site information, producing new information for the operators and related cooperating personnel. This also makes the amount of information excessive, so that the robot operators and related cooperating personnel cannot obtain all the effective information within a short time and effectively control the detection robot on that basis, especially for robots used in planetary exploration.
Summary of the Invention
The present invention aims to solve, to a certain extent, the problem that during control of an existing detection robot the amount of information transmitted from all parties to the operator and related cooperating personnel is too large for the operator and related cooperating personnel to effectively control the detection robot.
To solve the above problem, the present invention provides a three-body intelligent system for a detection robot, including:
a digital twin module for creating a virtual detection environment and a virtual robot from the environment data of the detected environment acquired in real time by the detection robot and the robot data of the detection robot;
a virtual reality module for generating, from the virtual detection environment, the virtual robot, and the control personnel's control instructions for the detection robot, the process and result of the virtual robot executing the control instructions in the virtual detection environment; and
a human-machine fusion module for transmitting the control instructions, showing the control personnel the process and result of the virtual robot executing the control instructions in the virtual detection environment, and, after obtaining the control personnel's feedback confirming the control instructions, causing the detection robot to execute the control instructions.
Further, the digital twin module can enable the detected environment and the virtual detection environment to map to each other; and/or, the digital twin module can enable the detection robot and the virtual robot to map to each other.
Further, the control instructions include: determining a scientific detection target of the detection robot, or determining a driving path of the detection robot.
Further, the three-body intelligent system of the detection robot and a cloud platform map to each other.
Further, there are multiple cloud platforms, and the multiple cloud platforms include:
a digital cloud platform module, which maps to the digital twin module and the virtual reality module respectively; and/or
a physical cloud platform module, which maps to the detection robot; and/or
a biological cloud platform module, which maps to the human-machine fusion module.
Further, the digital twin module is used to create the virtual detection environment from the environment data of the detected environment acquired in real time by the detection robot together with preset data.
Further, the human-machine fusion module includes VR glasses and/or a motion-sensing seat, to show the control personnel the process of the virtual robot executing the control instructions in the virtual detection environment.
Further, the detected environment and the virtual detection environment are synchronized in real time, and the detection robot and the virtual robot are synchronized in real time.
In addition, the present invention also provides a detection robot, which includes the above three-body intelligent system of the detection robot.
Further, the detection robot is a planet detection robot or a deep-sea detection robot.
In the three-body intelligent system of the detection robot of the present invention, the environment data of the detected environment acquired in real time by the detection robot and the robot data of the detection robot are turned into a virtual detection environment and a virtual robot by the virtual intelligence master module, and the human-machine fusion module then shows the control personnel the situation of the virtual robot in the virtual detection environment, for example using VR technology and a motion-sensing seat to let the control personnel and related cooperating personnel personally feel the detected environment and the situation of the detection robot in it. In this way the large amount of data and information is fully integrated and comprehensively yet vividly presented to the control personnel and related cooperating personnel, keeping the robot operators and cooperating personnel fully informed.
On this basis, control instructions are transmitted through the human-machine fusion module, and the process and result of the virtual robot executing the control instructions in the virtual detection environment are shown to the control personnel, so that the control personnel accurately learn the execution process and result of the control instructions, dangers are avoided, and the control efficiency and precision of the detection robot are improved.
Brief Description of the Drawings
Fig. 1 is a schematic flowchart of part A of the three-body intelligent system of a detection robot according to a specific embodiment of the present invention;
Fig. 2 is a schematic flowchart of part B of the three-body intelligent system of the detection robot according to the specific embodiment of the present invention;
Fig. 3 is a schematic flowchart of part C of the three-body intelligent system of the detection robot according to the specific embodiment of the present invention;
Fig. 4 is a schematic diagram of the mutual mapping between the detection robot and the cloud platforms according to the specific embodiment of the present invention.
Description of reference numerals:
1: first connection point; 2: second connection point; 3: third connection point; 4: fourth connection point; 5: fifth connection point.
Detailed Description of the Embodiments
To make the above objects, features, and advantages of the present invention easier to understand, specific embodiments of the present invention are described in detail below with reference to the accompanying drawings.
The terms "first", "second", etc. are used for descriptive purposes only and cannot be understood as indicating or implying relative importance or implicitly indicating the number of the technical features referred to. Thus, features qualified with "first", "second", etc. may explicitly or implicitly include at least one such feature.
Regarding the drawings of this embodiment, it should be noted that, in order to make the disclosure as complete as possible, the fully detailed flowchart of the three-body intelligent system of the detection robot would be too large for a single clear figure, so it is divided into three parts, A, B, and C. This division is not strictly limited; its main purpose is to present the detailed flow of the three-body intelligent system of the detection robot clearly.
Meanwhile, in Figs. 1 to 4, the first connection point 1, second connection point 2, third connection point 3, fourth connection point 4, and fifth connection point 5 serve only to link the detailed flows of the three-body intelligent system of the detection robot accurately to one another, ensuring that Figs. 1 to 4 connect to each other accurately.
It should be noted that, in the various embodiments of the present disclosure, the functional units described as systems, modules, etc. can be mechanical or electronic functional units performing physical functions, or computer program functional units running on computing devices. Multiple functional units can be integrated into one physical device, each unit can be located in a separate physical device, or two or more units can be integrated into one physical device. If a functional unit is implemented as a computer program functional unit and sold or used as an independent product, it can also be stored in one or more computer-readable storage media.
In general, the computer instructions implementing the computer program functional units of the present invention can be carried by any combination of one or more computer-readable media. A computer-readable medium may be a computer-readable signal medium or a computer-readable storage medium. A non-transitory computer-readable medium may include any computer-readable medium except a transitorily propagating signal itself.
A computer-readable storage medium may be, for example, but not limited to, an electrical, magnetic, optical, electromagnetic, infrared, or semiconductor system, apparatus, or device, or any combination of the above. More specific examples (a non-exhaustive list) of computer-readable storage media include: an electrical connection with one or more wires, a portable computer disk, a hard disk, random access memory (RAM), read-only memory (ROM), erasable programmable read-only memory (EPROM or flash memory), optical fiber, portable compact disc read-only memory (CD-ROM), an optical storage device, a magnetic storage device, or any suitable combination of the above.
Referring to Figs. 1 to 4, this embodiment provides a three-body intelligent system for a detection robot, including a digital twin module, a virtual reality module, and a human-machine fusion module. The digital twin module is used to create a virtual detection environment and a virtual robot from the environment data of the detected environment acquired in real time by the detection robot and the robot data of the detection robot; the virtual reality module is used to generate, from the virtual detection environment, the virtual robot, and the control personnel's control instructions for the detection robot, the process and result of the virtual robot executing the control instructions in the virtual detection environment; and the human-machine fusion module is used to transmit the control instructions, show the control personnel the process and result of the virtual robot executing the control instructions in the virtual detection environment, and, after obtaining the control personnel's feedback confirming the control instructions, cause the detection robot to execute the control instructions.
It should be noted that the virtual detection environment can be created comprehensively from both the environment data of the detected environment acquired in real time by the detection robot and environment data obtained from prior observation and detection of the detected environment.
In addition, the three-body intelligent system of the detection robot also includes a virtual intelligence master module, which includes the digital twin module and the virtual reality module.
The virtual intelligence master module referred to in the present invention is the virtual digital model created during the design, development, research, and application of the robot system. It can imitate real objects and environments in reality or conceived objects and environments in the imagination, and its shape, material, texture, etc. bear a strong similarity to the imitated object, so that it can reflect the relevant characteristics of the imitated object. Virtual digital aircraft, virtual digital planetary rovers, virtual digital scenes, and the like all belong to the category of digital bodies in the present invention.
In addition, the control personnel mentioned in this embodiment do not refer only to those who control the detection robot. They include technical personnel who have received a certain education, possess certain knowledge and operating abilities, and can assign tasks to the robot, conduct human-computer interaction, and develop and operate the robot, as well as non-technical personnel who can be served or assisted by technologies such as robots or virtual reality. Medical personnel, scientific researchers, technical workers, managers, the elderly, the sick, and the disabled all fall within the category of "person" in the present invention. That is to say, the control personnel include at least scientists and engineers; when the scientists and engineers are subsequently combined with the biological cloud platform module, the intelligence of scientists and engineers is formed.
In addition, the environment data of the detected environment may include various environment data such as weather data and topography/landform data of the detected environment.
In addition, the detection robot can be an intelligent detection robot, capable of autonomous scientific detection and autonomous design and selection of driving routes.
In addition, the control instructions can refer to humans assisting the robot in making relevant decisions (for example, judging scientific targets, or, when the driving path is so complicated that the robot cannot solve it by itself, ground staff specifying the path for it), thereby reflecting the mutual fusion of the detection robot with the control personnel and assisting cooperating personnel. The three-body intelligent system of the detection robot of this embodiment uses the digital twin module to turn the environment data of the detected environment acquired in real time by the detection robot and the robot data of the detection robot into a virtual detection environment and a virtual robot, uses the virtual reality module to generate the process and result of the virtual robot executing the control instructions in the virtual detection environment, and then uses the human-machine fusion module to show the control personnel the situation of the virtual robot in the virtual detection environment, for example using VR technology and a motion-sensing seat to let the control personnel and related cooperating personnel personally feel the detected environment and the situation of the detection robot in it. The large amount of data and information is thus fully integrated and comprehensively yet vividly presented to the control personnel and related cooperating personnel, keeping the robot operators and related cooperating personnel fully informed.
On this basis, the control instructions are transmitted to the virtual intelligence master module through the human-machine fusion module, and the process and result of the virtual robot executing the control instructions in the virtual detection environment are shown to the control personnel, so that the control personnel accurately learn the execution process and result of the control instructions, dangers are avoided, and the control efficiency and precision of the detection robot are improved.
Referring to Figs. 1 to 4, preferably, the digital twin module can enable the detected environment and the virtual detection environment to map to each other; and/or, the digital twin module can enable the detection robot and the virtual robot to map to each other.
The digital twin module of this embodiment is an organic combination of the detection robot and the virtual intelligence master module, a system combining the physical and the digital in which the two interact and coexist. The digital twin module integrates multiple disciplines and multiple physical quantities, creates a virtual model of the physical entity in digital form, completes the mutual mapping between virtual space and real space, and can reflect the operating state of the detection robot's physical entity in the virtual detection environment. In addition, the digital twin module can be used in the design, production, and testing stages of the detection robot; for example, in the testing stage, the robot data of the detection robot held in the virtual intelligence master module can be modified to probe the detection robot's reaction, and the overall performance of the detection robot can be improved through continuous iteration. In the actual use stage of the detection robot, remote monitoring and manipulation of the robot can be carried out through the digital twin module: the various state information obtained during the operation of the detection robot is mapped to the virtual robot in real time, and the effect of state control exerted on the virtual robot is reflected on the real detection robot. The digital twin module can realize mutual state feedback between the real detection robot and the virtual robot: the virtual robot automatically follows changes in the detection robot, and the detection robot can also adjust its motion state according to the instructions received by the virtual robot. The digital twin module involves key technologies such as high-performance computing, advanced sensor acquisition, digital simulation, intelligent data analysis, and two-way information communication and control.
In addition, for the digital twin module, the planet detection robot transmits the detection information it acquires to the virtual intelligence master module, and the virtual intelligence master module responds accordingly based on the detection information. When the virtual intelligence master module executes a control instruction, it also transmits the information to the real robot, and the real robot performs the related actions, thereby realizing real-time control of the detection robot.
The virtual reality module referred to in the present invention is an organic combination of humans and digital bodies, a system combining the digital and the biological in which the two interact and coexist. The virtual reality module can also provide augmented reality and the like. The control personnel can use the virtual reality module to perceive and interact with the virtual detection environment and the virtual robot in the virtual intelligence master module, which gives the control personnel the feeling and experience of interacting with the detection robot. The virtual reality module involves key technologies such as dynamic environment modeling, real-time 3D graphics generation, stereoscopic display, and sensor technology.
In addition, it should be noted that the virtual reality module enables the control personnel and other cooperating personnel to interact with the virtual detection environment and the virtual robot; for example, the control personnel and other cooperating personnel can use keyboard and mouse, VR glasses, a motion-sensing seat, etc. to control the virtual robot in the virtual digital environment (that is, the virtual detection environment) and receive the virtual robot's feedback.
Referring to Figs. 1 to 4, preferably, the three-body intelligent system of the detection robot and the cloud platforms map to each other.
Referring to Figs. 1 to 4, preferably, there are multiple cloud platforms, and the multiple cloud platforms include:
a digital cloud platform module, which maps to the virtual intelligence master module; and/or
a physical cloud platform module, which maps to the detection robot; and/or
a biological cloud platform module, which maps to the control personnel through the human-machine fusion module.
That is, the multiple cloud platforms include at least one of a digital cloud platform module, a physical cloud platform module, and a biological cloud platform module.
The detection robot and the virtual intelligence master module can offload intensive computations such as large-scale data processing, difficult motion planning, and multi-machine collaboration to the physical cloud platform module and the digital cloud platform module through network communication; after computation on the cloud platforms, the results are returned or the related data are stored. At the same time, by fully exploiting the cloud platforms' powerful resource sharing and online-learning capabilities, the computation and storage loads of the detection robot and the virtual intelligence master module can be reduced, and their decision-making, execution, computation, and storage capabilities can be expanded, freeing them from the constraints of their own bodies so that a series of complex problems can be solved more efficiently. Moreover, since the physical cloud platform module and the digital cloud platform module do not depend on the bodies of the detection robot and the virtual intelligence master module, they can carry out work such as online learning when no computing tasks are requested of them. The biological cloud platform module works slightly differently from the digital and physical cloud platform modules: it can store information about multiple control personnel, and different control personnel can exchange information directly with the biological cloud platform module and use its storage and computation functions. The biological cloud platform module can also, when control personnel interact with the detection robot or the virtual intelligence master module, provide the corresponding personnel's information to the detection robot or the virtual intelligence master module, offering users personalized services.
In summary, cloud platform mapping is added through the physical cloud platform module, the digital cloud platform module, and the biological cloud platform module; the three parts are interconnected and interoperable, which effectively improves the working efficiency of the entire system architecture and makes its functions more powerful.
Specifically, the biological cloud platform module can carry out shared thinking, historical experience, and collective wisdom, with a data volume no less than that of a database;
the physical cloud platform module can perform fault diagnosis, life evaluation, productivity-level assessment, and computational analysis, with a data volume larger than that of a database;
the digital cloud platform module can perform comparative analysis and prediction, and is forward-looking.
Referring to Figs. 1 to 4, preferably, the virtual intelligence master module creates the virtual detection environment from the environment data of the detected environment acquired in real time by the detection robot together with preset data.
Referring to Figs. 1 to 4, preferably, the human-machine fusion module includes VR glasses and/or a motion-sensing seat to show the control personnel the process of the virtual robot executing the control instructions in the virtual detection environment.
Referring to Figs. 1 to 4, preferably, the detected environment and the virtual detection environment are synchronized in real time, and the detection robot and the virtual robot are synchronized in real time.
In addition, this embodiment also provides a detection robot, which includes the aforementioned three-body intelligent system of the detection robot.
Referring to Figs. 1 to 4, preferably, the detection robot is a planet detection robot or a deep-sea detection robot.
To explain the three-body intelligent system of the detection robot and the detection robot of this embodiment clearly, a planet detection robot is taken as an example below; of course, deep-sea detection robots and other detection robots can also use this system architecture.
For a planet detection robot, the control personnel can be engineers and scientists on the ground, who can set scientific detection targets for the planet detection robot, use the various data collected by the planet detection robot to analyze the surface conditions of the planet, and give help when the planet detection robot is in trouble or unable to make a decision.
The planet detection robot can have functions such as autonomous navigation in non-rugged, complex environments, scientific instrument operation, and scientific target detection;
the virtual intelligence master module can be the virtual planet detection robot and its virtual environment in the virtual simulation software, and the two can interact virtually.
In addition, in the initial development stage of the planet detection robot, scientific researchers design the robot's body structure, dimensional configuration, electronic control system, control algorithms, and so on through requirements analysis and scheme demonstration. During design, the virtual intelligence master module can be used for 3D modeling, and virtual simulation software can build virtual models of the robot and its environment; related programs are developed, extensive basic experimental research is carried out in the real environment, and the real physical information obtained from the experiments is mapped into the virtual environment to build a high-fidelity virtual model. Algorithms can then be verified for feasibility in the virtual environment, reducing the risk brought by experimental uncertainty and shortening the development cycle. Based on the experimental and simulation results, the existing schemes are further optimized and improved. Meanwhile, virtual reality equipment can be used to manipulate the virtual planet detection robot in the high-fidelity virtual environment, to practice specifying scientific targets and sending control instructions, and to train planet detection robot operators. This stage makes full use of the human-machine fusion, virtual reality, and digital twin modules, which play an important role in key links such as planet detection robot body design, algorithm optimization, scheme verification, and simulation training. After the planet detection robot reaches the planet's surface, its body information and environment information are transmitted to the ground via satellite communication; that is, the detection robot acquires in real time the environment data of the detected environment and the robot data of the detection robot, from which the virtual intelligence master module creates the virtual detection environment and the virtual robot. At the same time, the human-machine fusion module shows the control personnel the situation of the virtual robot in the virtual detection environment; for example, VR glasses, a motion-sensing seat, and similar equipment let the control personnel feel the planet detection robot's motion state and surrounding environment information within the virtual detection environment, while they control the virtual robot with a mouse, keyboard, gamepad, and other devices. Through the human-machine fusion module the virtual robot is commanded to complete autonomous path planning and path tracking in the virtual detection environment, and its operating state there is evaluated; if the result is poor, the control personnel can intervene and give key waypoints or a complete path. This process can be repeated many times to find the best robot control strategy. The confirmed best control instruction, that is, the instruction after confirmation by the control personnel, is then uploaded via satellite communication to the control system of the planet detection robot body; the detection robot executes the related operations according to the instruction, and its state on the planet's surface is fed back to the virtual robot to update the virtual robot's state. In other words, the detected environment is synchronized with the virtual detection environment in real time and the detection robot with the virtual robot, so that their states agree. This stage makes full use of the respective advantages and characteristics of the human-machine fusion, virtual reality, and digital twin modules, not only enhancing the ground operators' intuitive perception but also playing an important role in analyzing the difficulty of the planet detection robot's unknown operations and verifying traversability schemes.
Referring to Figs. 1 to 4, after the planet detection robot lands on the surface of the planet to be detected, it uses the sensing system composed of its various on-board sensors to perform multi-source information measurement (a camera measures image information; a lidar measures 3D point clouds to perceive distance and depth; motor encoders measure the wheels' angular velocities; an inertial measurement unit (IMU) measures the detection robot's attitude; six-axis force sensors measure the force state at each key position of the robot; and so on). While measuring, this information is transmitted to the virtual digital environment of the planet detection robot on the ground (that is, the virtual intelligence master module), where a high-fidelity virtual digital detection robot and a virtual digital planetary environment are built from the environment's images and point clouds and from the detection robot's speed, pose, and force state (that is, the virtual detection environment and the virtual robot are created from the environment data of the detected environment acquired in real time by the detection robot and the robot data of the detection robot). The planet detection robot uses the information measured by the sensing system to identify and estimate the planet's soil parameters, then simultaneously localizes and builds a map containing the geometric and physical properties of the planet's surface (SLAM & characterization in Fig. 1); the information obtained here is also transmitted to the virtual digital body. After the planet detection robot completes the above work, scientific task planning and decision-making are required. The intelligence of current planet detection robots is limited, and they cannot yet automatically plan related scientific tasks (to explain scientific tasks, for example: which rock has higher scientific value and needs its composition detected with an instrument? where is the planetary soil more representative, so that samples should be taken back to Earth? and so on). Ground scientists must assist here: they have rich research experience in the relevant fields, can judge scientific value from experience, determine the scientific target to explore, and choose which scientific detection instrument to use on that target. After the ground scientists confirm the target, the instruction is transmitted to the planet detection robot.
In addition, in Fig. 1, S is the wheel slip ratio of the intelligent detection robot; Z is the wheel sinkage of the intelligent detection robot; F is the wheel force information of the intelligent detection robot (the forces and moments in the x, y, and z directions, six quantities in total, called six-dimensional force information); and w is the wheel angular velocity. From the wheel sinkage Z, the wheel slip ratio S, and the wheel force information F of the intelligent detection robot, the soil parameters can be calculated through the relevant terramechanics formulas.
The planet detection robot carries out the next step according to the instruction (reaching the target's location, i.e. navigation). The navigation stage is divided into terrain traversability analysis, path planning, and path tracking. Using the previously built map combined with the kinematic and dynamic information of its own structure, the planet detection robot judges which places on the map it can pass safely (excessive ground slope, too many ground obstacles, overly soft soil, and the like are all conditions the robot cannot pass), and uses these judgments to plan how to reach the designated target point (different planning criteria yield different path results; shortest path and minimum energy are both possible criteria). After the path is planned, path tracking is performed, and the planet detection robot follows the planned path to the target location to take samples or analyze the relevant scientific targets with scientific instruments. During navigation it may encounter very complicated ground conditions where it cannot plan a path to the designated target location by itself and needs the assistance of ground staff, who perform the above traversability analysis, path planning, and path tracking in the virtual detection environment; the instructions can be tested repeatedly and continuously verified, and once feasible instructions are obtained they are uploaded to the control system of the real detection robot, which is ordered to move accordingly and reach the designated target point. In summary, with this three-body intelligent system of the detection robot, the scientists and engineers in a planetary exploration mission become intelligence by combining with the biological cloud platform module, namely the intelligence of the control personnel; this intelligence, the intelligent detection robot, and the virtual intelligence master module have a clear division of tasks, forming the three-body intelligent system. The intelligent detection robot, the virtual intelligence master module, and the control personnel's intelligence then intersect with one another, forming the digital twin module, the virtual reality module, and the human-machine fusion module, which cooperate closely and merge organically to complete planetary exploration tasks reliably, efficiently, and intelligently; this is a typical application scenario of the three-body intelligent system.
Although the present disclosure is as described above, its scope of protection is not limited thereto. Those skilled in the art can make various changes and modifications without departing from the spirit and scope of the present disclosure, and such changes and modifications all fall within the scope of protection of the present invention.

Claims (10)

  1. A three-body intelligent system for a detection robot, comprising:
    a digital twin module for creating a virtual detection environment and a virtual robot from environment data of a detected environment acquired in real time by the detection robot and robot data of the detection robot;
    a virtual reality module for generating, from the virtual detection environment, the virtual robot, and control personnel's control instructions for the detection robot, the process and result of the virtual robot executing the control instructions in the virtual detection environment; and
    a human-machine fusion module for transmitting the control instructions, showing the control personnel the process and result of the virtual robot executing the control instructions in the virtual detection environment, and causing the detection robot to execute the control instructions after obtaining the control personnel's feedback confirming the control instructions;
    wherein the detection robot is an intelligent detection robot.
  2. The three-body intelligent system for a detection robot according to claim 1, wherein the digital twin module can enable the detected environment and the virtual detection environment to map to each other; and/or, the digital twin module can enable the detection robot and the virtual robot to map to each other.
  3. The three-body intelligent system for a detection robot according to claim 1, wherein the control instructions comprise: determining a scientific detection target of the detection robot, or determining a driving path of the detection robot.
  4. The three-body intelligent system for a detection robot according to claim 1, wherein the three-body intelligent system of the detection robot and a cloud platform map to each other.
  5. The three-body intelligent system for a detection robot according to claim 4, wherein the three-body intelligent system of the detection robot further comprises a virtual intelligence master module, the virtual intelligence master module comprising the digital twin module and the virtual reality module;
    there are multiple cloud platforms, and the multiple cloud platforms comprise:
    a digital cloud platform module, which maps to the virtual intelligence master module; and/or
    a physical cloud platform module, which maps to the detection robot; and/or
    a biological cloud platform module, which maps to the control personnel through the human-machine fusion module.
  6. The three-body intelligent system for a detection robot according to claim 1, wherein the digital twin module is used to create the virtual detection environment from the environment data of the detected environment acquired in real time by the detection robot together with preset data.
  7. The three-body intelligent system for a detection robot according to claim 1, wherein the human-machine fusion module comprises VR glasses and/or a motion-sensing seat to show the control personnel the process of the virtual robot executing the control instructions in the virtual detection environment.
  8. The three-body intelligent system for a detection robot according to claim 1, wherein the detected environment and the virtual detection environment are synchronized in real time, and the detection robot and the virtual robot are synchronized in real time.
  9. A detection robot, comprising the three-body intelligent system for a detection robot according to any one of claims 1 to 8.
  10. The detection robot according to claim 9, wherein the detection robot is an intelligent planet detection robot or an intelligent deep-sea detection robot.
PCT/CN2020/101380 2019-12-13 2020-07-10 Three-body intelligent system and detection robot WO2021114654A1 (zh)

Priority Applications (1)

Application Number Priority Date Filing Date Title
US17/026,342 US12079005B2 (en) 2019-12-13 2020-07-10 Three-layer intelligence system architecture and an exploration robot

Applications Claiming Priority (2)

Application Number Priority Date Filing Date Title
CN201911283281.5A CN110989605B (zh) 2019-12-13 Three-body intelligent system architecture and detection robot
CN201911283281.5 2019-12-13

Publications (1)

Publication Number Publication Date
WO2021114654A1 true WO2021114654A1 (zh) 2021-06-17

Family

ID=70093421

Family Applications (1)

Application Number Title Priority Date Filing Date
PCT/CN2020/101380 WO2021114654A1 (zh) Three-body intelligent system and detection robot

Country Status (3)

Country Link
US (1) US12079005B2 (zh)
CN (1) CN110989605B (zh)
WO (1) WO2021114654A1 (zh)

Families Citing this family (13)

* Cited by examiner, † Cited by third party
Publication number Priority date Publication date Assignee Title
CN110989605B (zh) 2019-12-13 2020-09-18 哈尔滨工业大学 Three-body intelligent system architecture and detection robot
US11393175B2 (en) 2020-02-06 2022-07-19 Network Documentation & Implementation Inc. Methods and systems for digital twin augmented reality replication of non-homogeneous elements in integrated environments
CN111650852B (zh) 2020-04-23 2021-10-26 中国电子科技集团公司第三十八研究所 Digital twin monitoring system for a tethered balloon
CN111654319B (zh) 2020-04-23 2022-03-15 中国电子科技集团公司第三十八研究所 Digital twin monitoring method for a tethered balloon system
CN111812299A (zh) 2020-07-17 2020-10-23 哈尔滨工业大学 Soil parameter identification method and device based on a wheeled robot, and storage medium
EP4257301A4 (en) 2020-11-12 2024-08-21 Yujin Robot Co Ltd FUNCTIONAL SAFETY SYSTEM FOR ROBOTS
CN113687718A (zh) 2021-08-20 2021-11-23 广东工业大学 A human-machine integrated digital twin system and construction method therefor
CN113959444A (zh) 2021-09-30 2022-01-21 达闼机器人有限公司 Navigation method, apparatus, and medium for an unmanned device, and unmanned device
CN115268646B (zh) 2022-08-02 2024-07-19 清华大学 A human-machine collaborative construction process sensing system, apparatus, analysis method, and medium
US20240078442A1 (en) 2022-09-07 2024-03-07 International Business Machines Corporation Self-development of resources in multi-machine environment
CN116382476B (zh) 2023-03-30 2023-10-13 哈尔滨工业大学 A wearable interaction system for human-machine collaborative operation on the lunar surface
CN116401623B (zh) 2023-04-19 2023-11-03 深圳墨影科技有限公司 A robot control algorithm fusion method
CN116740280A (zh) 2023-06-14 2023-09-12 山东科技大学 Virtual-reality-based unmanned operation system for underground tunnel excavation and blasting

Family Cites Families (22)

* Cited by examiner, † Cited by third party
Publication number Priority date Publication date Assignee Title
US20040134337A1 (en) * 2002-04-22 2004-07-15 Neal Solomon System, methods and apparatus for mobile software agents applied to mobile robotic vehicles
CN102306216A (zh) * 2011-08-10 2012-01-04 上海交通大学 Multi-rule simulation test system for a lunar rover
US9623561B2 (en) * 2012-10-10 2017-04-18 Kenneth Dean Stephens, Jr. Real time approximation for robotic space exploration
US20140320392A1 (en) * 2013-01-24 2014-10-30 University Of Washington Through Its Center For Commercialization Virtual Fixtures for Improved Performance in Human/Autonomous Manipulation Tasks
US10078712B2 (en) * 2014-01-14 2018-09-18 Energid Technologies Corporation Digital proxy simulation of robotic hardware
EP3287861A1 (en) * 2016-08-24 2018-02-28 Siemens Aktiengesellschaft Method for testing an autonomous system
US10877470B2 (en) * 2017-01-26 2020-12-29 Honeywell International Inc. Integrated digital twin for an industrial facility
US10452078B2 (en) * 2017-05-10 2019-10-22 General Electric Company Self-localized mobile sensor network for autonomous robotic inspection
EP3655826B1 (en) * 2017-07-17 2024-07-03 Johnson Controls Tyco IP Holdings LLP Systems and methods for agent based building simulation for optimal control
US20190385364A1 (en) * 2017-12-12 2019-12-19 John Joseph Method and system for associating relevant information with a point of interest on a virtual representation of a physical object created using digital input data
WO2019133792A1 (en) * 2017-12-30 2019-07-04 Infinite Kingdoms, LLC Smart entertainment technology attractions
JP7082416B2 (ja) 2018-05-24 2022-06-08 The Calany Holding S.à r.l. Bidirectional real-time 3D interactive manipulation of real-time 3D virtual objects within a real-time 3D virtual world representing the real world
CN108919765B (zh) 2018-07-20 2021-06-04 王德权 A digital-twin-based virtual commissioning and virtual monitoring method and system for an intelligent manufacturing factory
CN112672860B (zh) 2018-09-10 2024-04-09 发纳科美国公司 Robot calibration for AR and digital twin
CN111126735B (zh) 2018-11-01 2022-07-19 中国石油化工股份有限公司 A drilling digital twin system
US11584020B2 (en) * 2018-12-04 2023-02-21 Cloudminds Robotics Co., Ltd. Human augmented cloud-based robotics intelligence framework and associated methods
CN109571476A (zh) * 2018-12-14 2019-04-05 南京理工大学 Digital twin real-time operation control, monitoring, and precision compensation method for industrial robots
US11762369B2 (en) * 2019-02-06 2023-09-19 Sensory Robotics, Inc. Robotic control via a virtual world simulation
US20200259896A1 (en) * 2019-02-13 2020-08-13 Telefonaktiebolaget Lm Ericsson (Publ) Industrial Automation with 5G and Beyond
GB201906813D0 (en) * 2019-05-16 2019-06-26 Roboraca Ltd Metaverse
CN110320873A (zh) * 2019-07-05 2019-10-11 武汉魅客科技有限公司 A real-time three-dimensional presentation system based on a distributed sensor network
AU2020400985A1 (en) * 2019-12-10 2022-06-30 RemSense Pty Ltd An asset virtualisation system

Patent Citations (8)

* Cited by examiner, † Cited by third party
Publication number Priority date Publication date Assignee Title
CN103592854A (zh) * 2013-11-14 2014-02-19 哈尔滨工程大学 A synchronous virtual deduction device for observation tasks of an unmanned underwater vehicle
US9671777B1 (en) * 2016-06-21 2017-06-06 TruPhysics GmbH Training robots to execute actions in physics-based virtual environment
CN108388146A (zh) * 2018-02-01 2018-08-10 东南大学 A 3D assembly process design system based on cyber-physical fusion and its operating method
CN108427390A (zh) * 2018-04-16 2018-08-21 长安大学 A digital-twin-based workshop-level intelligent manufacturing system and configuration method therefor
CN110181519A (zh) * 2019-06-25 2019-08-30 广东希睿数字科技有限公司 Subway station door fault detection method and system based on a digital twin robot
CN110221546A (zh) * 2019-05-21 2019-09-10 武汉理工大学 A virtual-real fused test platform for ship intelligent control systems
CN110454290A (zh) * 2019-07-02 2019-11-15 北京航空航天大学 An automobile engine management and control method based on digital twin technology
CN110989605A (zh) * 2019-12-13 2020-04-10 哈尔滨工业大学 Three-body intelligent system architecture and detection robot

Non-Patent Citations (1)

* Cited by examiner, † Cited by third party
Title
WU YINGNIAN , YANG QI: "Visual Servo Grab System and Its Digital Twin System", COMPUTER INTEGRATED MANUFACTURING SYSTEMS, vol. 25, no. 6, 30 June 2019 (2019-06-30), pages 1528 - 1535, XP055820312, ISSN: 1006-5911, DOI: 10.13196/j.cims.2019.06.020 *

Cited By (1)

* Cited by examiner, † Cited by third party
Publication number Priority date Publication date Assignee Title
CN114935916A (zh) * 2022-06-02 2022-08-23 南京维拓科技股份有限公司 A method for realizing an industrial metaverse using Internet of Things and virtual reality technology

Also Published As

Publication number Publication date
US20220350341A1 (en) 2022-11-03
CN110989605A (zh) 2020-04-10
CN110989605B (zh) 2020-09-18
US12079005B2 (en) 2024-09-03

Similar Documents

Publication Publication Date Title
WO2021114654A1 (zh) Three-body intelligent system and detection robot
O'Kelly et al. F1/10: An open-source autonomous cyber-physical platform
US11745355B2 (en) Control device, control method, and non-transitory computer-readable storage medium
WO2019076044A1 (zh) 移动机器人局部运动规划方法、装置及计算机存储介质
Chen et al. A learning model for personalized adaptive cruise control
Xia et al. Sensory augmentation for subsea robot teleoperation
Al-Mashhadani et al. Autonomous exploring map and navigation for an agricultural robot
Do Quang et al. Mapping and navigation with four-wheeled omnidirectional mobile robot based on robot operating system
Li et al. A software framework for multi-agent control of multiple autonomous underwater vehicles for underwater mine counter-measures
Huang et al. Immersive virtual simulation system design for the guidance, navigation and control of unmanned surface vehicles
Básaca-Preciado et al. Intelligent transportation scheme for autonomous vehicle in smart campus
Kryuchkov et al. Simulation of the «cosmonaut-robot» system interaction on the lunar surface based on methods of machine vision and computer graphics
Mettler et al. Research infrastructure for interactive human-and autonomous guidance
Cichon et al. Robotic teleoperation: Mediated and supported by virtual testbeds
Chow et al. Learning human navigational skill for smart wheelchair in a static cluttered route
Zheng et al. Virtual Prototyping-Based Path Planning of Unmanned Aerial Vehicles for Building Exterior Inspection
Rossmann et al. The virtual testbed: Latest virtual reality technologies for space robotic applications
Baklouti et al. Remote control of mobile robot through 3D virtual reality environment
Ootsubo et al. Support system for slope shaping based on a teleoperated construction robot
Subhashini et al. Autonomous Navigation using Lidar Sensor in ROS and GAZEBO
Kiran et al. Design and development of autonomous mobile robot for mapping and navigation system
Chow et al. Learning human navigational skill for smart wheelchair
Truong et al. An integrative approach of social dynamic long short-term memory and deep reinforcement learning for socially aware robot navigation
McLeod Robust and Reliable Real-time Adaptive Motion Planning
Adetunji et al. Digital Twins Below the Surface: Enhancing Underwater Teleoperation

Legal Events

Date Code Title Description
121 Ep: the epo has been informed by wipo that ep was designated in this application

Ref document number: 20899511

Country of ref document: EP

Kind code of ref document: A1

NENP Non-entry into the national phase

Ref country code: DE

122 Ep: pct application non-entry in european phase

Ref document number: 20899511

Country of ref document: EP

Kind code of ref document: A1