WO2023051289A1 - Navigation method, apparatus, medium and unmanned device for unmanned devices - Google Patents

Navigation method, apparatus, medium and unmanned device for unmanned devices

Info

Publication number
WO2023051289A1
WO2023051289A1 · PCT/CN2022/119467 · WO 2023051289 A1
Authority
WO
WIPO (PCT)
Prior art keywords
environment
virtual
unmanned
map
digital twin
Prior art date
Application number
PCT/CN2022/119467
Other languages
English (en)
French (fr)
Inventor
黄晓庆
张站朝
马世奎
Original Assignee
达闼机器人股份有限公司
Priority date (The priority date is an assumption and is not a legal conclusion. Google has not performed a legal analysis and makes no representation as to the accuracy of the date listed.)
Filing date
Publication date
Application filed by 达闼机器人股份有限公司
Publication of WO2023051289A1


Classifications

    • G - PHYSICS
    • G01 - MEASURING; TESTING
    • G01C - MEASURING DISTANCES, LEVELS OR BEARINGS; SURVEYING; NAVIGATION; GYROSCOPIC INSTRUMENTS; PHOTOGRAMMETRY OR VIDEOGRAMMETRY
    • G01C21/00 - Navigation; Navigational instruments not provided for in groups G01C1/00 - G01C19/00
    • G01C21/20 - Instruments for performing navigational calculations

Definitions

  • the present disclosure relates to the field of unmanned device control and, in particular, to a navigation method, apparatus, medium, and unmanned device for unmanned devices.
  • for an unmanned device to move autonomously, it must be able to position itself accurately; in positioning and navigation technology, the quality of navigation map construction also directly affects the navigation path of the unmanned device.
  • in the related art, when an unmanned device arrives in a new environment, the unmanned device is typically controlled to perform a mobile scan of that environment, so as to collect environmental information during the scan and generate a navigation map that controls the device's subsequent movement. In this process, the unmanned device must plan paths from the map itself, which reduces its navigation accuracy and navigation efficiency.
  • the purpose of the present disclosure is to provide a high-precision navigation method, apparatus, medium, and unmanned device for unmanned devices.
  • a navigation method for unmanned devices, which is applied to an unmanned device, and the method includes:
  • the local environment map is generated from information obtained by the unmanned device scanning the target environment, or the local environment map is obtained by synchronizing the virtual environment map constructed by the cloud in the virtual twin environment corresponding to the target environment;
  • the local environment map is obtained by synchronizing the virtual environment map constructed by the cloud in the virtual twin environment corresponding to the target environment;
  • the method also includes:
  • the local environment map is updated according to the environmental information collected by the unmanned equipment.
  • the local environment map is obtained by synchronizing the virtual environment map constructed by the cloud in the virtual twin environment corresponding to the target environment; the virtual environment map is determined in the following manner:
  • the virtual device is controlled to traverse the virtual twin environment, and the feature information in the virtual twin environment is collected based on the virtual sensor in the virtual device, so as to generate the virtual environment map according to the feature information.
  • the virtual environment map includes a grid map and a visual feature map
  • the virtual sensor includes a virtual lidar and a virtual vision camera
  • the collecting feature information in the virtual twin environment based on the virtual sensor in the virtual device, so as to generate the virtual environment map according to the feature information includes:
  • the local environment map is generated from information obtained by scanning the target environment by the unmanned device; the local environment map includes a grid map and a visual feature map;
  • the local environment map is generated as follows:
  • the pose feature information and visual image feature information corresponding to the target environment are collected based on the visual camera set in the unmanned device, and the visual feature map is generated according to the pose feature information and the visual image feature information.
  • updating the local environment map according to the environmental information collected by the unmanned equipment includes:
  • controlling the unmanned equipment to collect the environmental information of the target environment according to a preset time interval
  • the environment information collected by the unmanned device at the mobile location is compared with the local environment map, and the local environment map is updated according to the comparison result.
  • the virtual environment map includes a grid map and a visual feature map
  • the unmanned device is positioned according to the environment information and the local environment map, and determining the moving position of the unmanned device includes:
  • the moving position is determined according to the first position and the second position.
  • determining the moving position according to the first position and the second position includes:
  • the method also includes:
  • determining the moving position according to the first position and the second position further includes:
  • the third position is determined as the moving position.
  • a navigation method for an unmanned device which is applied to a cloud, and the method includes:
  • controlling the virtual device to traverse the virtual twin environment, and collecting characteristic information in the virtual twin environment based on virtual sensors in the virtual device, so as to generate the virtual environment map according to the characteristic information;
  • a navigation device for unmanned equipment which is applied to the unmanned equipment, and the device includes:
  • a first acquiring module configured to acquire a local environment map of a target environment used for navigation, wherein the local environment map is generated from information obtained by scanning the target environment by the unmanned device, or the local environment map It is obtained by synchronizing the virtual environment map constructed by the cloud in the virtual twin environment corresponding to the target environment;
  • a first determination module configured to determine the initial position of the unmanned device based on the local environment map
  • the first sending module is used to determine the target position of the movement according to the received movement instruction, and to send the initial position and the target position to the cloud, so that the cloud can determine the virtual initial position and the virtual target position of the virtual device corresponding to the unmanned device in the virtual twin environment and, based on the virtual initial position and the virtual target position, determine the movement path of the virtual device, wherein the virtual device is a virtual device generated on the cloud that corresponds to the unmanned device in the virtual twin environment;
  • the control module is configured to control the movement of the unmanned device according to the movement path in response to receiving the movement path sent by the cloud.
  • a navigation device for unmanned equipment which is applied to the cloud, and the device includes:
  • the third acquisition module is used to acquire the environmental data information of the target environment used for navigation;
  • a reconstruction module configured to perform three-dimensional space reconstruction based on the environmental data information, and obtain a virtual twin environment corresponding to the target environment;
  • a generating module configured to generate a virtual device corresponding to the unmanned device in the target environment in the virtual twin environment
  • a collection module configured to control the virtual device to traverse the virtual twin environment, and collect feature information in the virtual twin environment based on virtual sensors in the virtual device, so as to generate the virtual twin environment according to the feature information. environment map;
  • the third determination module is configured to determine the virtual initial position and virtual target position of the virtual device corresponding to the unmanned device in the virtual twin environment in response to the received initial position and target position, and based on the virtual initial position position and the virtual target position, and determine the movement path of the virtual device;
  • the second sending module is configured to send the moving path to the unmanned device.
  • a computer program including computer readable code which, when run on a computing processing device, causes the computing processing device to execute the method described in the first aspect or the second aspect.
  • a non-transitory computer-readable storage medium on which the computer program proposed in the embodiment of the fifth aspect is stored; when the program is executed by a processor, the steps of any method of the first aspect or the second aspect are realized.
  • an unmanned device including:
  • a memory on which the computer program as proposed in the embodiment of the fifth aspect is stored
  • a processor configured to execute the computer program in the memory, so as to implement the steps of any one of the methods of the first aspect or the second aspect.
  • the unmanned device itself generates, or obtains from the cloud, the local environment map of the target environment used for navigation, so that the determined initial position of the unmanned device and the target position of the movement can be sent to the cloud; the cloud then determines the virtual initial position and the virtual target position of the virtual device corresponding to the unmanned device in the virtual twin environment and, based on the virtual initial position and the virtual target position, determines the movement path of the unmanned device; the unmanned device then, in response to receiving the movement path sent by the cloud, controls its movement according to that path.
  • the unmanned device can send its initial position and target position to the cloud, so that the cloud can generate a virtual device based on the virtual twin environment, and generate the moving path of the unmanned device based on the information of the virtual device.
  • on the one hand, planning in the cloud can effectively reduce the demands that path planning places on the performance of the unmanned device itself.
  • on the other hand, the movement path of the unmanned device can be displayed and monitored in real time, further ensuring the accuracy of unmanned device navigation and improving the user experience.
  • FIG. 1 is a flow chart of a navigation method for an unmanned device provided according to an embodiment of the present disclosure
  • FIG. 2 is a schematic diagram of a laser grid map provided according to an embodiment of the present disclosure
  • Fig. 3 is a schematic diagram of a visual feature map provided according to an embodiment of the present disclosure.
  • Fig. 4 is a block diagram of a navigation device for unmanned equipment provided according to an embodiment of the present disclosure
  • Fig. 5 is a block diagram of an unmanned device according to an exemplary embodiment
  • FIG. 6 is a schematic structural diagram of a computing processing device provided by an embodiment of the present disclosure.
  • Fig. 7 provides a schematic diagram of a storage unit for portable or fixed program codes for implementing the method according to the present disclosure according to an embodiment of the present disclosure.
  • FIG. 1 is a flow chart of a navigation method for an unmanned device provided according to an embodiment of the present disclosure. As shown in FIG. 1, the method may include:
  • step 11: the local environment map of the target environment used for navigation is obtained, wherein the local environment map is generated from information obtained by the unmanned device scanning the target environment, or the local environment map is obtained by synchronizing the virtual environment map constructed by the cloud in the virtual twin environment corresponding to the target environment.
  • the target environment may be an environment served by the unmanned device, such as a hotel, a campus, and the like, and the unmanned device may be a robot or an unmanned delivery device.
  • the target environment corresponding to the unmanned device can be preset, so that it can communicate with the cloud.
  • the unmanned device can be directly placed in the target environment, so that the unmanned device can be controlled to perform mobile scanning in the target environment, so as to obtain the local environment map.
  • the local environment map may be sent to the cloud to provide the environment map for the cloud.
  • the unmanned device may synchronize the environment map required for its positioning from the cloud, and save the environment map locally to obtain the local environment map.
  • only a single scan of the target environment is needed to generate the corresponding virtual twin environment in the cloud, and the environment map can then be constructed based on that virtual twin environment, thereby effectively reducing the number of times the map must be constructed.
  • step 12 the initial location of the unmanned device is determined based on the local environment map.
  • after the local environment map is obtained, the unmanned device can obtain the environmental features of its location, so as to perform positioning based on those features and the local environment map. For example, an environmental image can be captured by the visual camera installed on the unmanned device, image recognition can then be performed on the captured image to obtain the environmental image features of the device's location, and those features can be compared with the features in the local environment map, with the initial position determined from the matched features.
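To make the feature-comparison step above concrete, the following is a minimal, hedged sketch of initial positioning by image feature matching. It assumes the local environment map stores, for each known anchor pose, a set of ORB descriptors extracted when the map was built; names such as map_keyframes and estimate_initial_position are illustrative, not the patent's API.

```python
import cv2

orb = cv2.ORB_create(nfeatures=1000)
matcher = cv2.BFMatcher(cv2.NORM_HAMMING, crossCheck=True)

def estimate_initial_position(camera_image, map_keyframes):
    """Return the map pose whose stored features best match the live image.

    map_keyframes: iterable of (pose, descriptors) pairs from the local map.
    """
    _, live_desc = orb.detectAndCompute(camera_image, None)
    if live_desc is None:          # no usable features in the captured image
        return None, 0
    best_pose, best_score = None, 0
    for pose, map_desc in map_keyframes:
        matches = matcher.match(live_desc, map_desc)
        # Count strong matches as a simple "matching degree" score.
        score = sum(1 for m in matches if m.distance < 40)
        if score > best_score:
            best_pose, best_score = pose, score
    return best_pose, best_score
```

The pose returned with the highest matching degree plays the role of the initial position described above.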
  • step 13: the target position of the movement is determined according to the received movement instruction, and the initial position and the target position are sent to the cloud, so that the cloud can determine the virtual initial position and the virtual target position of the virtual device corresponding to the unmanned device in the virtual twin environment and, based on the virtual initial position and the virtual target position, determine the movement path of the virtual device, wherein the virtual device is a virtual device generated on the cloud that corresponds to the unmanned device in the virtual twin environment.
  • the movement instruction can be triggered through a preset APP interface, for example by entering the destination of the movement, i.e. the target position; alternatively, the current target environment can be displayed and the user can trigger the movement instruction by tapping a point in that environment, the selected location being the target position;
  • the movement instruction can also be triggered by voice input: the user says "Please bring me the book XX on the table", and the unmanned device receives the speech and determines, through speech recognition, that the position of "the book XX on the table" is the target position.
  • the unmanned device then sends its initial position and target position to the cloud so that the cloud can plan its path, with no computation required on the unmanned device side; this lowers the processing demands placed on the unmanned device and broadens the applicability of the navigation method.
  • after receiving the initial position and the target position, the cloud can generate a virtual device corresponding to the unmanned device in the virtual twin environment, then map the initial position and the target position into the virtual twin environment to obtain the virtual initial position and virtual target position of the virtual device, so that the path of the virtual device from the virtual initial position to the virtual target position can be determined using a path planning method commonly used in this field; the movement path of the virtual device in the virtual twin environment is the same as the movement path of the unmanned device in the target environment.
  • the movement path may be determined based on a path selection requirement preset by the user; for example, the path selection requirement may be the shortest path, the shortest time, or the least energy consumption by the unmanned device, which is not limited in the present disclosure.
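As an illustration of the cloud-side planning step, the sketch below runs A* over the twin environment's grid map. The patent only says a path planning method commonly used in this field is applied; A* with a shortest-path cost is one such method and is shown here purely under that assumption.

```python
import heapq

def astar(grid, start, goal):
    """grid: 2D list, 0 = free, 1 = obstacle; start/goal: (row, col) tuples."""
    def h(p):  # Manhattan-distance heuristic, admissible on a 4-connected grid
        return abs(p[0] - goal[0]) + abs(p[1] - goal[1])

    open_set = [(h(start), 0, start, None)]   # entries are (f, g, cell, parent)
    came_from, g_cost = {}, {start: 0}
    while open_set:
        _, g, cur, parent = heapq.heappop(open_set)
        if cur in came_from:                  # already expanded more cheaply
            continue
        came_from[cur] = parent
        if cur == goal:                       # reconstruct the movement path
            path = []
            while cur is not None:
                path.append(cur)
                cur = came_from[cur]
            return path[::-1]
        for dr, dc in ((1, 0), (-1, 0), (0, 1), (0, -1)):
            nxt = (cur[0] + dr, cur[1] + dc)
            if (0 <= nxt[0] < len(grid) and 0 <= nxt[1] < len(grid[0])
                    and grid[nxt[0]][nxt[1]] == 0
                    and g + 1 < g_cost.get(nxt, float("inf"))):
                g_cost[nxt] = g + 1
                heapq.heappush(open_set, (g + 1 + h(nxt), g + 1, nxt, cur))
    return None  # no passable route between the two positions
```

Swapping the cost term would realize the other preset requirements, e.g. weighting cells by estimated traversal time or energy.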
  • step 14 in response to receiving the movement path sent by the cloud, the movement of the unmanned device is controlled according to the movement path.
  • the unmanned device can be controlled to move along the movement path.
  • the position of the unmanned device can be determined at preset intervals, so that the movement path of the unmanned device can be monitored and corrected in real time.
  • the unmanned device itself generates, or obtains from the cloud, the local environment map of the target environment used for navigation, so that the determined initial position of the unmanned device and the target position of the movement can be sent to the cloud; the cloud determines the virtual initial position and the virtual target position of the virtual device corresponding to the unmanned device in the virtual twin environment and, based on the virtual initial position and the virtual target position, determines the movement path of the unmanned device; the unmanned device then, in response to receiving the movement path sent by the cloud, controls its movement according to that path.
  • the unmanned device can send its initial position and target position to the cloud, so that the cloud can generate a virtual device based on the virtual twin environment, and generate the moving path of the unmanned device based on the information of the virtual device.
  • on the one hand, planning in the cloud can effectively reduce the demands that path planning places on the performance of the unmanned device itself.
  • on the other hand, based on the virtual twin environment and the virtual device, the movement path of the unmanned device can be displayed and monitored in real time, further ensuring the accuracy of unmanned device navigation and improving the user experience.
  • the local environment map is obtained by synchronizing the virtual environment map constructed by the cloud in the virtual twin environment corresponding to the target environment;
  • the method also includes:
  • the local environment map is updated according to the environmental information collected by the unmanned equipment.
  • the environment map built on the cloud is constructed from a virtual twin environment obtained by virtualizing the real target environment, and changes in the placement of some objects in the real environment are difficult to map into the virtual twin environment in real time, so the local environment map may deviate from the real target environment. Therefore, in this embodiment, while the unmanned device is being controlled to move, it can collect environmental information about the environment it passes through: for example, the lidar sensor installed on the unmanned device can collect laser point cloud data of the environment, and a 3D depth camera can provide visual image data, so that the collected information can be compared with the local environment map and the map updated, improving the fit between the local environment map and the unmanned device.
  • the local environment map is obtained by synchronizing the virtual environment map constructed by the cloud in the virtual twin environment corresponding to the target environment; the virtual environment map is determined in the following manner:
  • the environmental data information can be feature data obtained by collecting information about the target environment with lidar and 3D vision tools (sensors such as a multi-line laser and an IMU (Inertial Measurement Unit)); the feature data is then post-processed to delete repeated information and to integrate the feature data corresponding to the same position, forming the 3D dense point cloud data of the real target environment, after which 3D spatial rendering and reconstruction can be performed based on the 3D dense point cloud data to obtain the virtual twin environment.
  • the point cloud can be stitched from the original images collected by a monocular camera, the corresponding depth maps, and the corresponding camera poses to generate a three-dimensional dense point cloud map; for example, an MVS (Multiple View Stereo, dense reconstruction) algorithm can compute, pixel by pixel, the 3D point corresponding to each pixel in an image, yielding a dense 3D point cloud of the object surfaces in the image.
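A minimal sketch of the stitching principle described above, assuming a pinhole camera with intrinsics fx, fy, cx, cy and a 4x4 camera-to-world pose per frame; it back-projects one depth map into world-frame points, and accumulating such per-frame clouds over all poses yields the dense map. This illustrates the idea only, not the exact MVS pipeline.

```python
import numpy as np

def depth_to_world_points(depth, fx, fy, cx, cy, pose):
    """depth: HxW array in metres; pose: 4x4 camera-to-world transform.

    Returns an Nx3 array of world-frame points for the valid depth pixels.
    """
    h, w = depth.shape
    u, v = np.meshgrid(np.arange(w), np.arange(h))
    z = depth.ravel()
    valid = z > 0                       # skip pixels with no depth reading
    x = (u.ravel() - cx) * z / fx       # back-project through the pinhole model
    y = (v.ravel() - cy) * z / fy
    pts_cam = np.stack([x, y, z, np.ones_like(z)], axis=1)[valid]
    return (pose @ pts_cam.T).T[:, :3]  # camera frame -> world frame
```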
  • a 3D model of the target scene, scanned and uploaded by an existing unmanned device, can also be uploaded to the cloud; this 3D model can be a rendered 3D layout rendering, from which a 3D digital twin simulation environment can be formed.
  • the twin simulation environment can be constructed based on existing digital twin technology, which is not limited in the present disclosure.
  • a physical unmanned device can be used to scan the target environment, and the collected raw 3D point cloud data and RGB image data can be uploaded to the cloud, where 3D reconstruction and semantic segmentation of the environment scene can be performed based on the 3D point cloud and RGB image information to form a digital twin environment corresponding to the target environment; the semantic segmentation model can be pre-trained as a neural network, so as to realize image-based semantic segmentation and 3D reconstruction.
  • a virtual device corresponding to the unmanned device is generated in the virtual twin environment.
  • the virtual device may be generated based on the multiple sensors installed on the unmanned device, so that the generated virtual device has the same information collection sensors as the unmanned device.
  • a virtual device identical to the unmanned device can thus be generated in the virtual twin environment. If the virtual device is a twin unmanned device, such as a digital twin device generated with digital twin technology, the twin device can simulate the physical sensor parameters of the physical unmanned device; that is, the sensors of the twin device are made to approximate those of the physical unmanned device as closely as possible, to improve the accuracy of information collection.
  • the virtual device is controlled to traverse the virtual twin environment, and the feature information in the virtual twin environment is collected based on the virtual sensor in the virtual device, so as to generate the virtual environment map according to the feature information.
  • the virtual device can be controlled to move in the virtual twin environment to collect feature information of the virtual twin environment.
  • through its virtual multi-sensors, the virtual device can obtain feature information such as the lidar's range, angular resolution, and scanning frequency, as well as the intrinsic parameters of the 3D visual camera, so that the virtual environment map can be generated based on the feature information collected by the virtual device.
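One way a virtual lidar could sample the twin environment is sketched below: rays are cast from the virtual device's pose at the configured angular resolution and range, and the first hit per ray is recorded. scene.raycast is a hypothetical call standing in for whatever query the cloud's twin engine actually exposes.

```python
import math

def virtual_lidar_scan(scene, pose, max_range=20.0, angular_res_deg=0.5):
    """Return a list of (angle, distance) returns for one simulated sweep."""
    x, y, yaw = pose
    returns = []
    for i in range(int(360 / angular_res_deg)):
        angle = yaw + math.radians(i * angular_res_deg)
        # Hypothetical twin-engine query: first surface hit along the ray.
        hit = scene.raycast(origin=(x, y), direction=angle, max_range=max_range)
        if hit is not None:
            returns.append((angle, hit.distance))
    return returns
```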
  • the virtual twin environment can be obtained by generating a digital twin of the real target environment, and the virtual device can likewise be obtained by virtualizing the physical unmanned device, after which the virtual device can be controlled to scan the virtual twin environment for mapping. On the one hand, there is no need to control the physical unmanned device to perform a mobile scan, which improves the efficiency of environmental scanning; on the other hand, the environment map of the unmanned devices does not require each unmanned device to scan and collect environmental information itself, which broadens the scope in which this method can be used and provides the devices with high-precision environment maps, improving the accuracy and efficiency of their navigation.
  • the virtual environment map includes a grid map and a visual feature map
  • the virtual sensor includes a virtual lidar and a virtual vision camera
  • the exemplary implementation of collecting feature information in the virtual twin environment based on the virtual sensor in the virtual device to generate the virtual environment map according to the feature information is as follows, and this step may include:
  • the SLAM mapping algorithm can be used to construct a map based on the obtained laser point cloud feature information to obtain a laser grid map (GridMap) for positioning, as shown in FIG. 2 .
  • the grid map is essentially a bitmap image, and each "pixel" in the bitmap represents the probability distribution of obstacles at the corresponding point in the actual target environment, so the possibly passable parts of the target environment can be determined from the grid map. As shown in Figure 2, the greater the probability that an obstacle is present, the darker the color: white parts indicate areas without obstacles, i.e. the passable parts, while black parts indicate areas where obstacles are present, i.e. the impassable parts.
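A small sketch of reading such a grid map, assuming each cell stores an obstacle probability in [0, 1] as described: thresholding separates passable (light) cells from impassable (dark) ones. The threshold value is an assumption made for illustration.

```python
import numpy as np

def passable_mask(grid_probs, occupied_threshold=0.65):
    """grid_probs: HxW array of obstacle probabilities in [0, 1].

    Returns a boolean mask that is True where the cell is passable.
    """
    return grid_probs < occupied_threshold

# Example: a 3x3 map whose centre cell almost certainly holds an obstacle.
demo = np.array([[0.1, 0.1, 0.1],
                 [0.1, 0.9, 0.1],
                 [0.1, 0.1, 0.1]])
print(passable_mask(demo))  # centre cell is False, all others True
```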
  • the visual feature map can be determined from the visual images collected by the 3D visual camera based on a vSLAM real-time mapping algorithm. For example, according to the anchor points and pose feature information recorded during the virtual device's collection, the visual image corresponding to each anchor point and its pose feature information are obtained, and feature points are extracted from the visual image to obtain per-anchor visual features; feature stitching is then performed based on each anchor point and its pose feature information, and the resulting overall visual feature map is as shown in FIG. 3, where each dot is a feature point of the determined visual feature map.
  • a feature point extraction model may be pre-trained, for example as a neural network model, and the collected visual images may then be input into the feature point extraction model to obtain the feature points.
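A hedged sketch of assembling the visual feature map: for each anchor pose the (virtual) device visits, feature points are extracted from the captured image and stored with that pose, so the per-anchor features can later be stitched into the overall map of FIG. 3. ORB stands in here for whichever extractor, or pre-trained neural model, is actually used.

```python
import cv2

orb = cv2.ORB_create(nfeatures=500)

def build_visual_feature_map(keyframes):
    """keyframes: iterable of (pose, image); returns the stitched feature map."""
    feature_map = []
    for pose, image in keyframes:
        keypoints, descriptors = orb.detectAndCompute(image, None)
        feature_map.append({"pose": pose,            # anchor point and pose
                            "keypoints": keypoints,  # extracted feature points
                            "descriptors": descriptors})
    return feature_map
```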
  • the information collected by the virtual device in the virtual twin environment can be used to generate both a grid map and a visual feature map: the grid map represents the obstacle situation in the target environment, so the passable parts of the target environment can be determined, while the visual feature map captures the feature points of each part of the target environment, enabling feature comparison to determine an object's position. Combining the grid map and the visual feature map therefore allows positions in the target environment to be determined accurately, providing precise data support for determining the position to which the unmanned device moves toward the target object, improving the accuracy and efficiency of navigation, and improving the user experience.
  • the local environment map is generated from information obtained by scanning the target environment by the unmanned device; the local environment map includes a grid map and a visual feature map;
  • the local environment map is generated as follows:
  • the pose feature information and visual image feature information corresponding to the target environment are collected based on the visual camera set in the unmanned device, and the visual feature map is generated according to the pose feature information and the visual image feature information.
  • the target environment can be scanned directly with the components installed on the unmanned device itself, so that the actual environment information of the target environment is obtained, ensuring that the information used to build the local environment map matches both the actual environment and the unmanned device; this provides accurate data support for subsequent positioning based on the local environment map, improves the accuracy of the determined unmanned device position, and thus ensures the accuracy and effectiveness of unmanned device navigation.
  • this step may include: during the movement of the unmanned device, controlling the unmanned device to collect the environmental information of the target environment at a preset time interval.
  • the environment information may be a visual image in the target environment, for example, it may be photographed based on a visual camera installed on the unmanned device, so as to obtain an environmental image of its current location.
  • the environment information may be obstacle information in the target environment.
  • the obstacle information can be collected by the lidar installed on the unmanned device, so as to obtain the obstacle information at its current location.
  • the unmanned device is positioned according to the environment information and the local environment map, and the moving position of the unmanned device is determined.
  • the mobile position is the real-time position of the unmanned device during the movement.
  • the features in the local environment map that match the collected environmental information, together with their matching degrees, can be obtained, and the position corresponding to the feature with the highest matching degree is determined as the moving position.
  • the environment information collected by the unmanned device at the mobile location is compared with the local environment map, and the local environment map is updated according to the comparison result.
  • after the current location of the unmanned device is determined, the image within the unmanned device's field of view can further be obtained from the local environment map according to the device's current heading and compared with the features of the real-time environmental information collected by the unmanned device at that location. If the two are consistent, the local environment map is kept unchanged; if they are inconsistent, the features at the corresponding location in the local environment map are updated with the features of the real-time environmental information collected there, ensuring that the local environment map stays consistent with the features of the current real target environment. The local environment map can thus be updated during the movement of the unmanned device, further improving the accuracy of the local environment map while saving operations on the unmanned device, and providing more accurate data support for the device's subsequent navigation.
  • the local environment map can be sent to the cloud, so that the cloud can update the virtual environment map based on the updated local environment map.
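A minimal sketch of the update rule just described, assuming the local map is keyed by a coarse, discretised position and that a feature-similarity function is available; both are illustrative assumptions rather than the patent's data structures.

```python
def update_local_map(local_map, position, observed, similarity, min_sim=0.9):
    """local_map: dict keyed by a discretised position; values are features."""
    cell = (round(position[0], 1), round(position[1], 1))  # coarse position key
    stored = local_map.get(cell)
    if stored is None or similarity(stored, observed) < min_sim:
        local_map[cell] = observed  # environment changed here: update the map
```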
  • the virtual environment map includes a grid map and a visual feature map
  • an exemplary implementation of positioning the unmanned device according to the environmental information and the local environment map and determining the moving position of the device is as follows; this step may include:
  • a second position is determined according to the visual image information and the visual feature map.
  • feature extraction can be carried out on the visual image information collected by the unmanned device at its current position to obtain real-time feature points; the real-time feature points are compared with the feature points in the visual feature map, and the position of the feature points in the visual feature map with the highest matching degree is determined as the second position.
  • the moving position is determined according to the first position and the second position.
  • when the unmanned device is positioned in real time during its movement, the grid map and the visual feature map can be combined for positioning; this multi-angle positioning ensures the accuracy of the device's moving position and allows that position to be grasped in real time, which facilitates control of the device's movement path, so that the local path of the unmanned device can be controlled to follow the planned movement path and the accuracy and efficiency of unmanned device navigation are ensured.
  • determining the moving position according to the first position and the second position may include:
  • the matching degrees obtained when determining the first position and the second position can be used as their respective position confidences: when the position confidence corresponding to the first position is greater than that of the second position, the first position may be used directly as the moving position; otherwise, the second position may be used directly as the moving position. Therefore, when the position of the unmanned device is determined in real time based on the grid map and the visual feature map, the more accurate of the two estimates is taken as the device's moving position, and combining positioning from the two maps further improves the accuracy of the unmanned device's moving position, providing data support for controlling the unmanned device to move smoothly to the target position.
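The selection rule above reduces to a single comparison; the sketch below spells it out, taking each estimate's map matching degree as its position confidence.

```python
def select_moving_position(first_pos, first_conf, second_pos, second_conf):
    """first_*: lidar/grid-map estimate; second_*: visual-feature estimate."""
    return first_pos if first_conf > second_conf else second_pos
```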
  • the method may also include:
  • the acceleration information and the rotation angle information can be collected by an inertial sensor installed on the unmanned device.
  • the local movement path of the unmanned device can be determined from the acceleration information and the rotation angle information, so that the current position predicted from the last moving position, i.e. the third position, can be determined based on the last moving position, the local movement path, and the local environment map.
  • Another exemplary implementation of determining the mobile position according to the first position and the second position is as follows, and this step may also include:
  • the third position is determined as the moving position.
  • the third position is the current position predicted from the last moving position. If the distance between the determined moving position and the third position is greater than the distance threshold, the current position that has just been determined deviates too much from the position predicted from the last moving position; in that case the third position, i.e. the prediction from the last moving position, may be determined as the moving position.
  • through the above technical solution, the continuity of positioning during the unmanned device's movement can be improved to a certain extent, in keeping with the device's actual movement route, so that navigation accuracy can be improved to a certain extent, the navigated movement path matches the actual environment, and the accuracy of movement control and the user experience are also improved.
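The fallback described above can be sketched as one dead-reckoning step plus a distance gate. The constant-interval planar integration below is an assumption made for illustration; a real implementation would integrate the full IMU readings.

```python
import math

def predict_third_position(last_pos, heading, accel, ang_vel, speed, dt):
    """One planar dead-reckoning step from IMU readings.

    Returns the predicted (x, y) third position plus updated heading and speed.
    """
    heading += ang_vel * dt
    speed += accel * dt
    x = last_pos[0] + speed * dt * math.cos(heading)
    y = last_pos[1] + speed * dt * math.sin(heading)
    return (x, y), heading, speed

def resolve_moving_position(map_estimate, third_position, distance_threshold):
    """Keep the map-based fix unless it jumps too far from the prediction."""
    if math.dist(map_estimate, third_position) > distance_threshold:
        return third_position  # implausible jump: trust the odometry prediction
    return map_estimate
```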
  • the present disclosure also provides a navigation method for unmanned equipment, the method is applied to the cloud, and the method includes:
  • controlling the virtual device to traverse the virtual twin environment, and collecting characteristic information in the virtual twin environment based on virtual sensors in the virtual device, so as to generate the virtual environment map according to the characteristic information;
  • through the above technical solution, a virtual environment map suitable for the navigation of every unmanned device in the target environment can be constructed in the cloud, realizing cloud sharing of the environment map: each unmanned device newly added to the target environment need not rebuild the environment map, which saves the data each device would need for map construction and can improve the accuracy of the navigation map to a certain extent; at the same time it facilitates unified management of the environment map and improves the accuracy and comprehensiveness of unmanned device navigation.
  • moreover, the virtual device is generated in the cloud based on the virtual twin environment, and the movement path of the unmanned device is generated from the virtual device's information. On the one hand, this effectively reduces the performance demands that path planning places on the unmanned device itself; on the other hand, the movement path of the unmanned device can be displayed and monitored in real time, further ensuring the accuracy of unmanned device navigation and improving the user experience.
  • the present disclosure also provides a navigation device for unmanned equipment, as shown in FIG. 4 , applied to unmanned equipment, the device 10 includes:
  • the first acquisition module 100 is configured to acquire a local environment map of the target environment used for navigation, wherein the local environment map is generated by information obtained by scanning the target environment by the unmanned device, or the local environment The map is obtained by synchronizing the virtual environment map constructed by the cloud in the virtual twin environment corresponding to the target environment;
  • the first determination module 200 is configured to determine the initial position of the unmanned device based on the local environment map
  • the first sending module 300 is configured to determine the moving target position according to the received moving instruction, and send the initial position and the target position to the cloud, so that the cloud can determine that the virtual device corresponding to the unmanned device is at a virtual initial position and a virtual target position in the virtual twin environment, and based on the virtual initial position and the virtual target position, determine the movement path of the virtual device, wherein the virtual device is generated on the cloud, a virtual device corresponding to the unmanned device in the virtual twin environment;
  • the control module 400 is configured to control the movement of the unmanned device according to the movement path in response to receiving the movement path sent by the cloud.
  • the local environment map is obtained by synchronizing the virtual environment map constructed by the cloud in the virtual twin environment corresponding to the target environment;
  • the device also includes:
  • An updating module configured to update the local environment map according to the environmental information collected by the unmanned equipment during the process of controlling the movement of the unmanned equipment.
  • the local environment map is obtained by synchronizing the virtual environment map constructed by the cloud in the virtual twin environment corresponding to the target environment;
  • the virtual environment map is determined by a composition module, and the composition module includes:
  • the first acquisition submodule is used to acquire the environmental data information of the target environment
  • the reconstruction sub-module is used to perform three-dimensional space reconstruction based on the environmental data information, and obtain a virtual twin environment corresponding to the target environment;
  • the first generation submodule is used to generate a virtual device corresponding to the unmanned device in the virtual twin environment
  • the second generating submodule is configured to control the virtual device to traverse the virtual twin environment, and collect feature information in the virtual twin environment based on the virtual sensor in the virtual device, so as to generate the Virtual environment map.
  • the virtual environment map includes a grid map and a visual feature map
  • the virtual sensor includes a virtual lidar and a virtual vision camera
  • the second generating submodule includes:
  • the third generation sub-module is used to collect laser point cloud feature information corresponding to the virtual twin environment based on the virtual lidar, and generate the raster image according to the laser point cloud feature information;
  • the fourth generation sub-module is used to collect pose feature information and visual image feature information corresponding to the virtual twin environment based on the virtual vision camera, and generate the Visual feature map.
  • the local environment map is generated from information obtained by scanning the target environment by the unmanned device; the local environment map includes a grid map and a visual feature map;
  • the local environment map is generated as follows:
  • the pose feature information and visual image feature information corresponding to the target environment are collected based on the visual camera set in the unmanned device, and the visual feature map is generated according to the pose feature information and the visual image feature information.
  • the update module includes:
  • the acquisition sub-module is used to control the unmanned equipment to collect the environmental information of the target environment according to a preset time interval during the movement of the unmanned equipment;
  • the first determining submodule is used to locate the unmanned device according to the environmental information and the local environment map, and determine the moving position of the unmanned device;
  • the updating submodule is configured to compare the environmental information collected by the unmanned device at the mobile position with the local environmental map, and update the local environmental map according to the comparison result.
  • the virtual environment map includes a grid map and a visual feature map
  • the first determining submodule includes:
  • the second acquisition sub-module is used to acquire laser point cloud information and visual image information corresponding to the location of the unmanned device
  • the second determining submodule is used to determine the first position according to the laser point cloud information and the grid map;
  • a third determining submodule configured to determine a second position according to the visual image information and the visual feature map
  • a fourth determining submodule configured to determine the moving position according to the first position and the second position.
  • the fourth determination submodule includes:
  • the fifth determination sub-module is configured to determine the position confidences respectively corresponding to the first position and the second position, and to determine the position with the higher position confidence as the moving position.
  • the device also includes:
  • the second acquisition module is used to acquire the acceleration information and rotation angle information of the unmanned device
  • the second determination module is configured to determine a third position according to the last mobile position, the acceleration information, the rotation angle information and the local environment map;
  • the fourth determining submodule also includes:
  • a sixth determining submodule configured to determine the third location as the moving location in a case where the determined distance between the moving location and the third location is greater than a distance threshold.
  • the present disclosure also provides a navigation device for unmanned equipment, which is applied to the cloud, and the device includes:
  • the third acquisition module is used to acquire the environmental data information of the target environment used for navigation;
  • a reconstruction module configured to perform three-dimensional space reconstruction based on the environmental data information, and obtain a virtual twin environment corresponding to the target environment;
  • a generating module configured to generate a virtual device corresponding to the unmanned device in the target environment in the virtual twin environment
  • a collection module configured to control the virtual device to traverse the virtual twin environment, and collect feature information in the virtual twin environment based on virtual sensors in the virtual device, so as to generate the virtual twin environment according to the feature information. environment map;
  • the third determination module is configured to determine the virtual initial position and virtual target position of the virtual device corresponding to the unmanned device in the virtual twin environment in response to the received initial position and target position, and based on the virtual initial position location and virtual target location, determining the moving path of the virtual device;
  • the second sending module is configured to send the moving path to the unmanned device.
  • Fig. 5 is a block diagram of an unmanned device 700 according to an exemplary embodiment.
  • the unmanned device 700 may include: a processor 701 and a memory 702 .
  • the unmanned device 700 may also include one or more of a multimedia component 703 , an input/output (I/O) interface 704 , and a communication component 705 .
  • the processor 701 is used to control the overall operation of the unmanned device 700, so as to complete all or part of the steps in the above-mentioned navigation method for the unmanned device.
  • the memory 702 is used to store various types of data to support the operation of the unmanned device 700; such data may include instructions for any application or method operated on the unmanned device 700, as well as application-related data such as contact data, sent and received messages, pictures, audio, video, and so on.
  • the memory 702 can be realized by any type of volatile or non-volatile memory device or a combination thereof, such as Static Random Access Memory (SRAM), Electrically Erasable Programmable Read-Only Memory (EEPROM), Erasable Programmable Read-Only Memory (EPROM), Programmable Read-Only Memory (PROM), Read-Only Memory (ROM), magnetic memory, flash memory, magnetic disk, or optical disk.
  • Multimedia components 703 may include screen and audio components.
  • the screen can be, for example, a touch screen, and the audio component is used for outputting and/or inputting audio signals.
  • an audio component may include a microphone for receiving external audio signals.
  • the received audio signal may be further stored in memory 702 or sent via communication component 705 .
  • the audio component also includes at least one speaker for outputting audio signals.
  • the I/O interface 704 provides an interface between the processor 701 and other interface modules, such as a keyboard, a mouse, or buttons; the buttons can be virtual buttons or physical buttons.
  • the communication component 705 is used for wired or wireless communication between the unmanned device 700 and other devices.
  • the wireless communication may be, for example, Wi-Fi, Bluetooth, Near Field Communication (NFC for short), 2G, 3G, 4G, NB-IoT, eMTC, other 5G technologies, or a combination of one or more of them, which is not limited here. Accordingly, the corresponding communication component 705 may include a Wi-Fi module, a Bluetooth module, an NFC module, and the like.
  • in an exemplary embodiment, the unmanned device 700 can be implemented by one or more Application Specific Integrated Circuits (ASICs), Digital Signal Processors (DSPs), Digital Signal Processing Devices (DSPDs), Programmable Logic Devices (PLDs), Field Programmable Gate Arrays (FPGAs), controllers, microcontrollers, microprocessors, or other electronic components, for executing the above-mentioned navigation method for an unmanned device.
  • a computer-readable storage medium including program instructions.
  • the program instructions are executed by a processor, the steps of the above-mentioned navigation method for unmanned equipment are realized.
  • the computer-readable storage medium can be the above-mentioned memory 702 including program instructions, and the above-mentioned program instructions can be executed by the processor 701 of the unmanned device 700 to complete the above-mentioned navigation method for the unmanned device.
  • the present disclosure also proposes a computing processing device, including:
  • one or more processors; when the computer readable code is executed by the one or more processors, the computing processing device executes the aforementioned navigation method for unmanned devices.
  • the present disclosure also proposes a computer program, including computer readable codes, and when the computer readable codes run on a computing processing device, cause the computing processing device to execute the aforementioned navigation method.
  • the computer-readable storage medium proposed in the present disclosure stores the aforementioned computer program therein.
  • FIG. 6 is a schematic structural diagram of a computing processing device provided by an embodiment of the present disclosure.
  • the computing processing device typically includes a processor 1110 and a computer program product or computer readable medium in the form of memory 1130 .
  • Memory 1130 may be electronic memory such as flash memory, EEPROM (Electrically Erasable Programmable Read Only Memory), EPROM, hard disk, or ROM.
  • the memory 1130 has a storage space 1150 for program code 1151 for performing any method steps in the methods described above.
  • the storage space 1150 for program codes may include respective program codes 1151 for respectively implementing various steps in the above methods. These program codes can be read from or written into one or more computer program products.
  • These computer program products comprise program code carriers such as hard disks, compact disks (CDs), memory cards or floppy disks.
  • Such a computer program product is typically a portable or fixed storage unit as shown in FIG. 7 .
  • the storage unit may have storage segments, storage spaces, etc. arranged similarly to the memory 1130 in the computing processing device of FIG. 6 .
  • the program code can, for example, be compressed in a suitable form.
  • the storage unit includes computer readable code 1151', i.e. code readable by a processor such as the processor 1110, which, when executed by a computing processing device, causes that device to perform the various steps of the methods described above.

Landscapes

  • Engineering & Computer Science (AREA)
  • Radar, Positioning & Navigation (AREA)
  • Remote Sensing (AREA)
  • Automation & Control Theory (AREA)
  • Physics & Mathematics (AREA)
  • General Physics & Mathematics (AREA)
  • Control Of Position, Course, Altitude, Or Attitude Of Moving Bodies (AREA)
  • Navigation (AREA)

Abstract

A navigation method for an unmanned device (700), an apparatus (10), a medium, and an unmanned device (700). The navigation method for the unmanned device (700) includes: obtaining a local environment map of a target environment used for navigation (11); determining, based on the local environment map, the initial position of the unmanned device (700) (12); determining the target position of a movement according to a received movement instruction, and sending the initial position and the target position to the cloud, so that the cloud determines the virtual initial position and virtual target position of a virtual device corresponding to the unmanned device (700) in a virtual twin environment and, based on the virtual initial position and the virtual target position, determines the movement path of the virtual device (13), the virtual device being a virtual device generated in the cloud that corresponds to the unmanned device (700) in the virtual twin environment; and, in response to receiving the movement path sent by the cloud, controlling the unmanned device (700) to move according to the movement path (14), so as to improve the accuracy of navigation and path planning of the unmanned device (700).

Description

Navigation method, apparatus, medium, and unmanned device for unmanned devices
CROSS-REFERENCE TO RELATED APPLICATIONS
The present disclosure claims priority to the Chinese patent application No. 202111162555.2, filed with the China Patent Office on September 30, 2021 and entitled "Navigation method, apparatus, medium, and unmanned device for unmanned devices", the entire contents of which are incorporated herein by reference.
TECHNICAL FIELD
The present disclosure relates to the field of unmanned device control and, in particular, to a navigation method, apparatus, medium, and unmanned device for unmanned devices.
BACKGROUND
For an unmanned device to move autonomously, it must be able to position itself accurately; in positioning and navigation technology, the quality of navigation map construction also directly affects the navigation path of the unmanned device.
In the related art, when an unmanned device arrives in a new environment, the unmanned device is typically controlled to perform a mobile scan of that environment, so as to collect environmental information during the scan and generate a navigation map that controls the device's subsequent movement. In this process, the unmanned device must plan paths from the map itself, which reduces its navigation accuracy and navigation efficiency.
SUMMARY
The purpose of the present disclosure is to provide a high-precision navigation method, apparatus, medium, and unmanned device for unmanned devices.
To achieve the above purpose, according to a first aspect of the present disclosure, a navigation method for unmanned devices is provided, applied to an unmanned device, the method including:
obtaining a local environment map of a target environment used for navigation, wherein the local environment map is generated from information obtained by the unmanned device scanning the target environment, or the local environment map is obtained by synchronizing a virtual environment map constructed by the cloud in a virtual twin environment corresponding to the target environment;
determining, based on the local environment map, the initial position of the unmanned device;
determining the target position of a movement according to a received movement instruction, and sending the initial position and the target position to the cloud, so that the cloud determines the virtual initial position and virtual target position of a virtual device corresponding to the unmanned device in the virtual twin environment and, based on the virtual initial position and the virtual target position, determines the movement path of the virtual device, wherein the virtual device is a virtual device generated in the cloud that corresponds to the unmanned device in the virtual twin environment;
in response to receiving the movement path sent by the cloud, controlling the unmanned device to move according to the movement path.
Optionally, the local environment map is obtained by synchronizing the virtual environment map constructed by the cloud in the virtual twin environment corresponding to the target environment;
the method further includes:
during the process of controlling the movement of the unmanned device, updating the local environment map according to the environmental information collected by the unmanned device.
Optionally, the local environment map is obtained by synchronizing the virtual environment map constructed by the cloud in the virtual twin environment corresponding to the target environment; the virtual environment map is determined in the following manner:
obtaining environmental data information of the target environment;
performing three-dimensional space reconstruction based on the environmental data information to obtain the virtual twin environment corresponding to the target environment;
generating, in the virtual twin environment, the virtual device corresponding to the unmanned device;
controlling the virtual device to traverse the virtual twin environment, and collecting feature information in the virtual twin environment based on the virtual sensors in the virtual device, so as to generate the virtual environment map according to the feature information.
Optionally, the virtual environment map includes a grid map and a visual feature map, and the virtual sensors include a virtual lidar and a virtual visual camera;
collecting feature information in the virtual twin environment based on the virtual sensors in the virtual device so as to generate the virtual environment map according to the feature information includes:
collecting, based on the virtual lidar, laser point cloud feature information corresponding to the virtual twin environment, and generating the grid map according to the laser point cloud feature information;
collecting, based on the virtual visual camera, pose feature information and visual image feature information corresponding to the virtual twin environment, and generating the visual feature map according to the pose feature information and the visual image feature information.
Optionally, the local environment map is generated from information obtained by the unmanned device scanning the target environment; the local environment map includes a grid map and a visual feature map;
the local environment map is generated as follows:
collecting, based on the lidar installed in the unmanned device, laser point cloud feature information corresponding to the target environment, and generating the grid map according to the laser point cloud feature information;
collecting, based on the visual camera installed in the unmanned device, pose feature information and visual image feature information corresponding to the target environment, and generating the visual feature map according to the pose feature information and the visual image feature information.
Optionally, updating the local environment map according to the environmental information collected by the unmanned device during the process of controlling the movement of the unmanned device includes:
during the movement of the unmanned device, controlling the unmanned device to collect environmental information of the target environment at a preset time interval;
positioning the unmanned device according to the environmental information and the local environment map, and determining the moving position of the unmanned device;
comparing the environmental information collected by the unmanned device at the moving position with the local environment map, and updating the local environment map according to the comparison result.
Optionally, the virtual environment map includes a grid map and a visual feature map, and positioning the unmanned device according to the environmental information and the local environment map and determining the moving position of the unmanned device includes:
obtaining laser point cloud information and visual image information corresponding to the position of the unmanned device;
determining a first position according to the laser point cloud information and the grid map;
determining a second position according to the visual image information and the visual feature map;
determining the moving position according to the first position and the second position.
Optionally, determining the moving position according to the first position and the second position includes:
determining the position confidences respectively corresponding to the first position and the second position, and determining the position with the higher position confidence as the moving position.
Optionally, the method further includes:
obtaining acceleration information and rotation angle information of the unmanned device;
determining a third position according to the last moving position, the acceleration information, the rotation angle information, and the local environment map;
determining the moving position according to the first position and the second position further includes:
in a case where the distance between the determined moving position and the third position is greater than a distance threshold, determining the third position as the moving position.
According to a second aspect of the present disclosure, a navigation method for unmanned devices is provided, applied to the cloud, the method including:
obtaining environmental data information of a target environment used for navigation;
performing three-dimensional space reconstruction based on the environmental data information to obtain a virtual twin environment corresponding to the target environment;
generating, in the virtual twin environment, a virtual device corresponding to an unmanned device in the target environment;
controlling the virtual device to traverse the virtual twin environment, and collecting feature information in the virtual twin environment based on the virtual sensors in the virtual device, so as to generate the virtual environment map according to the feature information;
in response to a received initial position and target position, determining the virtual initial position and virtual target position of the virtual device corresponding to the unmanned device in the virtual twin environment, and determining the movement path of the virtual device based on the virtual initial position and the virtual target position;
sending the movement path to the unmanned device.
According to a third aspect of the present disclosure, a navigation apparatus for unmanned devices is provided, applied to the unmanned device, the apparatus including:
a first obtaining module, configured to obtain a local environment map of a target environment used for navigation, wherein the local environment map is generated from information obtained by the unmanned device scanning the target environment, or the local environment map is obtained by synchronizing a virtual environment map constructed by the cloud in a virtual twin environment corresponding to the target environment;
a first determining module, configured to determine, based on the local environment map, the initial position of the unmanned device;
a first sending module, configured to determine the target position of a movement according to a received movement instruction, and to send the initial position and the target position to the cloud, so that the cloud determines the virtual initial position and virtual target position of a virtual device corresponding to the unmanned device in the virtual twin environment and, based on the virtual initial position and the virtual target position, determines the movement path of the virtual device, wherein the virtual device is a virtual device generated in the cloud that corresponds to the unmanned device in the virtual twin environment;
a control module, configured to control, in response to receiving the movement path sent by the cloud, the movement of the unmanned device according to the movement path.
According to a fourth aspect of the present disclosure, a navigation apparatus for unmanned devices is provided, applied to the cloud, the apparatus including:
a third obtaining module, configured to obtain environmental data information of a target environment used for navigation;
a reconstruction module, configured to perform three-dimensional space reconstruction based on the environmental data information to obtain a virtual twin environment corresponding to the target environment;
a generating module, configured to generate, in the virtual twin environment, a virtual device corresponding to an unmanned device in the target environment;
a collection module, configured to control the virtual device to traverse the virtual twin environment and to collect feature information in the virtual twin environment based on the virtual sensors in the virtual device, so as to generate the virtual environment map according to the feature information;
a third determining module, configured to determine, in response to a received initial position and target position, the virtual initial position and virtual target position of the virtual device corresponding to the unmanned device in the virtual twin environment, and to determine the movement path of the virtual device based on the virtual initial position and the virtual target position;
a second sending module, configured to send the movement path to the unmanned device.
According to a fifth aspect of the present disclosure, a computer program is provided, including computer readable code which, when run on a computing processing device, causes the computing processing device to execute the method described in the first aspect or the second aspect.
According to a sixth aspect of the present disclosure, a non-transitory computer-readable storage medium is provided, on which the computer program proposed in the embodiment of the fifth aspect is stored; when the program is executed by a processor, the steps of any method of the first aspect or the second aspect are realized.
According to a seventh aspect of the present disclosure, an unmanned device is provided, including:
a memory on which the computer program proposed in the embodiment of the fifth aspect is stored;
a processor, configured to execute the computer program in the memory, so as to implement the steps of any method of the first aspect or the second aspect.
In the above technical solution, the unmanned device itself generates, or obtains from the cloud, the local environment map of the target environment used for navigation, so that the determined initial position of the unmanned device and the target position of the movement can be sent to the cloud; the cloud determines the virtual initial position and virtual target position of the corresponding virtual device in the virtual twin environment and, based on them, determines the movement path of the unmanned device; the unmanned device then, in response to receiving the movement path sent by the cloud, controls its movement according to that path. Thus, through the above technical solution, the unmanned device can send its initial position and target position to the cloud so that the cloud generates a virtual device based on the virtual twin environment and generates the unmanned device's movement path from that virtual device's information. On the one hand, this effectively reduces the performance demands that path planning places on the unmanned device itself; on the other hand, based on the virtual twin environment and the virtual device, the movement path of the unmanned device can be displayed and monitored in real time, further ensuring navigation accuracy and improving the user experience.
本公开的其他特征和优点将在随后的具体实施方式部分予以详细说明。
Brief Description of the Drawings
The accompanying drawings are provided for a further understanding of the present disclosure and constitute a part of the specification; together with the following detailed description, they serve to explain the present disclosure, but do not limit the present disclosure. In the drawings:
Fig. 1 is a flowchart of a navigation method for an unmanned device according to an embodiment of the present disclosure;
Fig. 2 is a schematic diagram of a laser grid map according to an embodiment of the present disclosure;
Fig. 3 is a schematic diagram of a visual feature map according to an embodiment of the present disclosure;
Fig. 4 is a block diagram of a navigation apparatus for an unmanned device according to an embodiment of the present disclosure;
Fig. 5 is a block diagram of an unmanned device according to an exemplary embodiment;
Fig. 6 is a schematic structural diagram of a computing processing device according to an embodiment of the present disclosure;
Fig. 7 is a schematic diagram of a portable or fixed storage unit for program code implementing the method according to the present disclosure, provided by an embodiment of the present disclosure.
Detailed Description
Specific embodiments of the present disclosure are described in detail below with reference to the accompanying drawings. It should be understood that the specific embodiments described here are only intended to illustrate and explain the present disclosure and are not intended to limit it.
As shown in Fig. 1, which is a flowchart of a navigation method for an unmanned device according to an embodiment of the present disclosure, the method may comprise:
In step 11, a local environment map of the target environment for navigation is acquired, wherein the local environment map is generated from information obtained by the unmanned device scanning the target environment, or is obtained by synchronizing a virtual environment map constructed by the cloud in a virtual twin environment corresponding to the target environment.
For example, the target environment may be the environment the unmanned device serves, such as a hotel or a campus, and the unmanned device may be a robot, an unmanned delivery device, or the like. In this embodiment, the target environment corresponding to the unmanned device can be set in advance, so that the device can communicate with the cloud.
As one example, the unmanned device can be placed directly in the target environment and controlled to perform a moving scan there, thereby obtaining the local environment map. After the local environment map is created locally, it can be sent to the cloud so as to provide the cloud with an environment map.
As another example, the unmanned device can synchronize from the cloud the environment map it needs for localization and save it locally to obtain the local environment map. In this embodiment, when a map is created for an environment, it is not necessary for every unmanned device to scan the environment and build its own map; the target environment only needs to be scanned once to generate the corresponding virtual twin environment at the cloud, and the environment map can then be constructed based on that virtual twin environment, effectively reducing the number of mapping passes.
In step 12, the initial position at which the unmanned device is located is determined based on the local environment map.
For example, after the local environment map is obtained, the environmental features of the device's current location can be obtained through the unmanned device, and localization can be performed by matching those features against the local environment map. For instance, environment images can be captured with a vision camera mounted on the unmanned device, image recognition can then be performed on the captured images to obtain the environmental image features of the device's location, those image features can be compared with the features in the local environment map, and the initial position can be determined from the matched features.
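A minimal sketch of such feature-matching initial localization, assuming the local environment map stores ORB descriptors per keyframe pose; the `keyframes` structure, the Hamming-distance threshold and the `min_matches` parameter are illustrative choices, not fixed by the disclosure:

```python
import numpy as np
import cv2

def localize_initial(query_desc, keyframes, min_matches=20):
    """Return the pose of the map keyframe that best matches the query descriptors.

    keyframes: list of (pose, descriptors) pairs, where pose is (x, y, yaw)
    and descriptors is a uint8 array of ORB descriptors stored in the local
    environment map. Both names are illustrative, not from the disclosure.
    """
    matcher = cv2.BFMatcher(cv2.NORM_HAMMING, crossCheck=True)
    best_pose, best_score = None, 0
    for pose, map_desc in keyframes:
        matches = matcher.match(query_desc, map_desc)
        # Count only strong matches (small Hamming distance).
        score = sum(1 for m in matches if m.distance < 40)
        if score > best_score and score >= min_matches:
            best_pose, best_score = pose, score
    return best_pose

# Toy usage with synthetic ORB-like descriptors (32-byte binary vectors).
rng = np.random.default_rng(0)
map_desc = rng.integers(0, 256, (500, 32), dtype=np.uint8)
keyframes = [((1.0, 2.0, 0.0), map_desc)]
print(localize_initial(map_desc[:100], keyframes))   # -> (1.0, 2.0, 0.0)
```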
In step 13, the target position of the movement is determined according to a received movement instruction, and the initial position and the target position are sent to the cloud, so that the cloud determines the virtual initial position and virtual target position, in the virtual twin environment, of the virtual device corresponding to the unmanned device, and determines the movement path of the virtual device based on the virtual initial position and the virtual target position, wherein the virtual device is a virtual device generated at the cloud and corresponding to the unmanned device in the virtual twin environment.
For example, the movement instruction may be triggered through a preset app interface, e.g. by entering the destination of the movement, i.e. the target position; the current target environment may also be displayed, and the user can trigger the movement instruction by tapping a point in the target environment, the selected location being the target position. As another example, the movement instruction may be triggered by voice input: if the user says "please bring me book XX on the table", the unmanned device receives the speech and determines, through speech recognition, that the location of "book XX on the table" is the target position.
Afterwards, the unmanned device sends its initial position and target position to the cloud so that the cloud performs path planning for it; no such computation is needed on the device side, which lowers the processing requirements on the unmanned device and widens the applicability of the navigation method. For example, after receiving the initial position and the target position, the cloud can generate the virtual device corresponding to the unmanned device in the virtual twin environment, then map the initial position and the target position into the virtual twin environment respectively to obtain the virtual initial position and the virtual target position of the virtual device, and determine the path of the virtual device from the virtual initial position to the virtual target position based on path planning methods commonly used in the art; the movement path of the virtual device in the virtual twin environment is the same as the movement path of the unmanned device in the target environment. For example, the movement path may be determined based on a path selection requirement preset by the user, such as shortest path, shortest time, or minimum energy consumption of the unmanned device, which is not limited by the present disclosure.
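One of the path planning methods "commonly used in the art" that the cloud could apply between the virtual initial and virtual target cells is A* search over the occupancy grid; the sketch below is a generic textbook implementation offered under that assumption, not the specific planner of this disclosure:

```python
import heapq

def astar(grid, start, goal):
    """A* over a 2D occupancy grid: 0 = free cell, 1 = obstacle.

    start/goal are (row, col) cells; returns the cell path or None.
    """
    h = lambda c: abs(c[0] - goal[0]) + abs(c[1] - goal[1])  # Manhattan heuristic
    open_set = [(h(start), 0, start, None)]
    came_from, g_cost = {}, {start: 0}
    while open_set:
        _, g, cell, parent = heapq.heappop(open_set)
        if cell in came_from:          # already expanded with a better cost
            continue
        came_from[cell] = parent
        if cell == goal:               # reconstruct the path back to start
            path = []
            while cell is not None:
                path.append(cell)
                cell = came_from[cell]
            return path[::-1]
        r, c = cell
        for nxt in ((r + 1, c), (r - 1, c), (r, c + 1), (r, c - 1)):
            nr, nc = nxt
            if 0 <= nr < len(grid) and 0 <= nc < len(grid[0]) and grid[nr][nc] == 0:
                ng = g + 1
                if ng < g_cost.get(nxt, float("inf")):
                    g_cost[nxt] = ng
                    heapq.heappush(open_set, (ng + h(nxt), ng, nxt, cell))
    return None                        # no traversable route exists

grid = [[0, 0, 0, 0],
        [1, 1, 0, 1],
        [0, 0, 0, 0]]
print(astar(grid, (0, 0), (2, 0)))     # routes around the wall in row 1
```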
In step 14, in response to receiving the movement path sent by the cloud, the unmanned device is controlled to move according to the movement path.
For example, the unmanned device can be controlled to move as indicated by the movement path determined by the cloud. As an example, while the unmanned device is controlled to move along the movement path, its position can be determined at preset intervals, so that the movement path of the unmanned device can be monitored and corrected in real time.
Thus, in the above technical solution, the unmanned device generates by itself, or acquires from the cloud, a local environment map of the target environment for navigation, so that the determined initial position of the unmanned device and the target position of the movement can be sent to the cloud, whereupon the cloud determines the virtual initial position and virtual target position, in the virtual twin environment, of the virtual device corresponding to the unmanned device, and determines the movement path of the unmanned device based on the virtual initial position and the virtual target position; thereafter, in response to receiving the movement path sent by the cloud, the unmanned device controls its movement according to the movement path. Through the above technical solution, the unmanned device can send its initial position and target position to the cloud, so that the cloud generates a virtual device based on the virtual twin environment and generates the movement path of the unmanned device based on the information of that virtual device. On the one hand this effectively lowers the performance requirements that path planning places on the unmanned device itself; at the same time, based on the virtual twin environment and the virtual device, the movement path of the unmanned device can be displayed and monitored in real time, further ensuring the accuracy of the navigation of the unmanned device and improving the user experience.
In a possible embodiment, the local environment map is obtained by synchronizing the virtual environment map constructed by the cloud in the virtual twin environment corresponding to the target environment;
the method further comprises:
in the process of controlling the unmanned device to move, updating the local environment map according to the environmental information collected by the unmanned device.
Here, the environment map constructed at the cloud is built from a virtual twin environment obtained by virtualizing the real target environment, and changes in the placement of some objects in the real environment are difficult to map into the virtual twin environment in real time, so the local environment map may deviate from the real target environment. Therefore, in this embodiment, while the unmanned device is controlled to move, it can collect environmental information about the environment it passes through; for example, laser point cloud data of the environment can be collected by a lidar sensor mounted on the unmanned device, and visual image data can be obtained from a 3D depth camera, so that the collected information can be compared with the local environment map and the local environment map updated accordingly, improving the fit between the local environment map and the unmanned device.
Thus, through the above technical solution, a virtual environment map suitable for navigating every unmanned device in the target environment can be constructed at the cloud, achieving cloud sharing of the environment map; each unmanned device newly added to the target environment does not need to rebuild the environment map, which saves the amount of data each device requires for map construction and can improve the accuracy of the navigation map to some extent; it also facilitates unified management of the environment map and improves the accuracy and completeness of unmanned device navigation. Moreover, each unmanned device's local environment map can be updated with the information it collects while moving, further improving the fit between the local environment map and the device, improving the precision of navigation and path planning, and extending the applicability of the navigation method.
In a possible embodiment, the local environment map is obtained by synchronizing the virtual environment map constructed by the cloud in the virtual twin environment corresponding to the target environment; the virtual environment map is determined in the following manner:
acquiring environment data information of the target environment;
performing three-dimensional spatial reconstruction based on the environment data information to obtain the virtual twin environment corresponding to the target environment.
Here, the environment data information may be feature data obtained by collecting information about the target environment with a lidar and 3D vision tools (sensors such as a multi-line laser and an IMU (Inertial Measurement Unit)); the feature data is then post-processed to remove duplicate information, feature data corresponding to the same location is merged to form dense 3D point cloud data of the real target environment, and three-dimensional rendering and reconstruction can then be performed on the dense 3D point cloud data to obtain the virtual twin environment.
For example, a dense three-dimensional point cloud map can be generated by stitching point clouds from data such as the raw images collected by a monocular camera, the corresponding depth maps, and the corresponding camera poses; an MVS (Multiple View Stereo, dense reconstruction) algorithm can compute, pixel by pixel, the 3D point corresponding to each pixel of an image, yielding a dense 3D point cloud of the object surfaces in the image. The specific calculation is not repeated here.
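The per-pixel lifting step on which such dense reconstruction relies can be sketched as follows, assuming a pinhole camera model with known intrinsics `K` and camera-to-world pose `T_wc` (illustrative names, not from the disclosure):

```python
import numpy as np

def backproject(depth, K, T_wc):
    """Lift every pixel of a depth image to a 3D point in the world frame.

    depth: (H, W) metric depth map; K: 3x3 camera intrinsics; T_wc: 4x4
    camera-to-world pose.
    """
    H, W = depth.shape
    u, v = np.meshgrid(np.arange(W), np.arange(H))
    z = depth.ravel()
    # Pixel -> normalized camera ray, scaled by the measured depth.
    x = (u.ravel() - K[0, 2]) * z / K[0, 0]
    y = (v.ravel() - K[1, 2]) * z / K[1, 1]
    pts_cam = np.stack([x, y, z, np.ones_like(z)])   # homogeneous, 4 x N
    pts_world = (T_wc @ pts_cam)[:3].T               # N x 3
    return pts_world[z > 0]                          # drop invalid pixels

K = np.array([[525.0, 0, 320.0], [0, 525.0, 240.0], [0, 0, 1]])
depth = np.full((480, 640), 2.0)                     # synthetic 2 m plane
cloud = backproject(depth, K, np.eye(4))
print(cloud.shape)                                   # (307200, 3)
```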
As another example, a 3D model of the target scene scanned by an existing unmanned device can be uploaded to the cloud; this 3D model may be a rendered 3D layout rendering, from which a simulated 3D digital twin environment can be formed, where the simulation can be constructed based on existing digital twin techniques, which the present disclosure does not limit.
As another example, a physical unmanned device can scan the target environment and upload the collected raw 3D point cloud data and RGB image data to the cloud; at the cloud, the environment can be 3D-reconstructed from the 3D point cloud and RGB image information, and semantic segmentation of the scene can be performed on the reconstructed 3D scene to form the digital twin environment corresponding to the target environment, where a semantic segmentation model can be pre-trained with a neural network, enabling image-based semantic segmentation and 3D reconstruction.
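A hedged sketch of the image-based semantic segmentation step, using an off-the-shelf DeepLabV3 network as a stand-in for whatever pre-trained model the cloud would actually deploy (`weights=None` keeps the example self-contained and offline; in practice the pre-trained weights described above would be loaded):

```python
import torch
from torchvision.models.segmentation import deeplabv3_resnet50

# Build the network; a real pipeline would load the weights produced by the
# pre-training described in the text rather than start from scratch.
model = deeplabv3_resnet50(weights=None, num_classes=21).eval()

# One RGB frame uploaded by the physical device, as a normalized tensor.
frame = torch.rand(1, 3, 240, 320)
with torch.no_grad():
    logits = model(frame)["out"]      # (1, num_classes, H, W)
labels = logits.argmax(dim=1)          # per-pixel semantic class ids
print(labels.shape)                    # torch.Size([1, 240, 320])
```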
The virtual device corresponding to the unmanned device is then generated in the virtual twin environment. As an example, virtual counterparts of the multiple sensors provided on the unmanned device can be generated, producing a virtual device with the same information-collecting sensors as the unmanned device. As another example, a virtual device identical to the unmanned device can be generated in the virtual twin environment, e.g. a twin unmanned device such as a digital twin device generated by digital twinning; the twin unmanned device can simulate the physical sensor parameters of the physical unmanned device, i.e. the twin device is virtually equipped with a virtual lidar, a virtual 3D camera and so on, so that it obtains the same collection parameters as the physical unmanned device, making the simulated behavior of the twin device's virtual sensors as close as possible to the physical device and improving the accuracy of information collection.
Afterwards, the virtual device is controlled to traverse the virtual twin environment, and feature information in the virtual twin environment is collected based on the virtual sensors of the virtual device, so as to generate the virtual environment map according to the feature information.
For example, the virtual device can be controlled to move through the virtual twin environment to collect its feature information; through its virtual multi-sensors the virtual device can obtain feature information such as the range, angular resolution and scanning frequency of the lidar, as well as the internal parameters of the 3D vision camera, so that the virtual environment map is generated based on the feature information collected by the virtual device.
Thus, through the above technical solution, the real target environment can be digitally twinned to obtain a virtual twin environment, the physical unmanned device can likewise be virtualized to obtain a virtual device, and the virtual device can then be controlled to scan the virtual twin environment for mapping. On the one hand, there is no need to control the physical unmanned device to perform a moving scan, which improves scanning efficiency; on the other hand, only one scan of the environment is needed to map the target environment and obtain an environment map suitable for multiple unmanned devices, so that not every device has to scan and collect environmental information, widening the applicability of the method. Meanwhile, the precision of the collected information is no longer limited by the sensor precision of the unmanned device, so a high-precision environment map can be provided to lower-precision unmanned devices, improving their navigation accuracy and efficiency.
In a possible embodiment, the virtual environment map comprises a grid map and a visual feature map, and the virtual sensors comprise a virtual lidar and a virtual vision camera;
an exemplary implementation of collecting the feature information in the virtual twin environment based on the virtual sensors of the virtual device, so as to generate the virtual environment map according to the feature information, is as follows; this step may comprise:
collecting laser point cloud feature information corresponding to the virtual twin environment based on the virtual lidar, and generating the grid map according to the laser point cloud feature information.
As an example, a SLAM mapping algorithm can build the map from the obtained laser point cloud feature information to obtain a laser grid map (GridMap) for localization, as shown in Fig. 2. A grid map is essentially a bitmap in which each "pixel" represents the probability distribution of an obstacle existing at that location in the real target environment, so the traversable portion of the target environment can be determined from the grid map. As shown in Fig. 2, the greater the probability of an obstacle, the darker the color: the white portions represent areas without obstacles, i.e. the traversable portions, while the black portions represent areas with obstacles, i.e. the non-traversable portions.
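The bitmap nature of the grid map can be illustrated as below, where each cell holds an obstacle probability and traversability falls out of a simple threshold; the 5 cm resolution, map size and threshold are assumed values:

```python
import numpy as np

RES = 0.05          # grid resolution: 5 cm per cell (assumed value)
SIZE = 200          # 200 x 200 cells -> a 10 m x 10 m map

def rasterize(points, origin=(0.0, 0.0)):
    """Mark grid cells hit by 2D laser endpoints as occupied.

    points: (N, 2) laser endpoints in map coordinates. Values near 1 mean an
    obstacle is likely; values near 0 mean the cell is traversable.
    """
    grid = np.zeros((SIZE, SIZE))
    idx = ((points - origin) / RES).astype(int)
    idx = idx[(idx >= 0).all(axis=1) & (idx < SIZE).all(axis=1)]
    grid[idx[:, 1], idx[:, 0]] = 1.0   # occupied with probability ~1
    return grid

scan = np.array([[1.0, 1.0], [1.05, 1.0], [9.9, 9.9]])
grid = rasterize(scan)
free = grid < 0.5                       # traversable cells for planning
print(int(grid.sum()), int(free.sum()))
```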
Pose feature information and visual image feature information corresponding to the virtual twin environment are collected based on the virtual vision camera, and the visual feature map is generated according to the pose feature information and the visual image feature information.
As another example, the visual feature map (FeatureMap) can be determined from the visual images collected by the 3D vision camera using a real-time vSLAM mapping algorithm. For example, according to the localization points and pose feature information recorded during the virtual device's collection, the visual images corresponding to those localization points and poses can be obtained, feature points can be extracted from those visual images to obtain per-view visual feature maps, and the features can then be stitched according to the localization points and pose feature information; the resulting overall visual feature map is shown in Fig. 3, where each point is a determined feature point of the visual feature map. A feature point extraction model, which may be a neural network model, can be pre-trained, and the visual images can be fed into it to obtain the feature map.
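A sketch of the feature-stitching step, assuming planar (x, y, yaw) poses and an illustrative per-keyframe storage format; a real vSLAM pipeline would maintain full 6-DoF poses and considerably more bookkeeping:

```python
import numpy as np

def build_feature_map(keyframes):
    """Stitch per-keyframe features into one global visual feature map.

    keyframes: list of (pose, local_points, descriptors), where pose is
    (x, y, yaw), local_points is (N, 2) feature positions in the camera frame
    and descriptors holds their binary descriptors -- all illustrative
    structures, since the disclosure does not fix a storage format.
    """
    world_pts, descs = [], []
    for (x, y, yaw), pts, desc in keyframes:
        c, s = np.cos(yaw), np.sin(yaw)
        R = np.array([[c, -s], [s, c]])
        world_pts.append(pts @ R.T + np.array([x, y]))  # local -> world frame
        descs.append(desc)
    return np.vstack(world_pts), np.vstack(descs)

rng = np.random.default_rng(1)
kf = [((0, 0, 0.0), rng.normal(size=(5, 2)), rng.integers(0, 256, (5, 32))),
      ((2, 1, np.pi / 2), rng.normal(size=(5, 2)), rng.integers(0, 256, (5, 32)))]
pts, desc = build_feature_map(kf)
print(pts.shape, desc.shape)            # (10, 2) (10, 32)
```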
Thus, through the above technical solution, a grid map and a visual feature map can be generated from the information collected by the virtual device in the virtual twin environment. The grid map represents the obstacle situation in the target environment and thereby determines the traversable paths, while the visual feature map determines the feature points of the various parts of the target environment, so that feature comparison can locate a given object. Combining the grid map and the visual feature map, the position of a target object in the target environment can be determined accurately, and accurate data support is provided for determining how the unmanned device moves to the target object's position, improving the accuracy and efficiency of navigation and the user experience.
In a possible embodiment, the local environment map is generated from information obtained by the unmanned device scanning the target environment; the local environment map comprises a grid map and a visual feature map;
the local environment map is generated in the following manner:
collecting laser point cloud feature information corresponding to the target environment based on the lidar provided in the unmanned device, and generating the grid map according to the laser point cloud feature information;
collecting pose feature information and visual image feature information corresponding to the target environment based on the vision camera provided in the unmanned device, and generating the visual feature map according to the pose feature information and the visual image feature information.
The specific ways of generating the grid map and the visual feature map have been described in detail above and are not repeated here. In this embodiment, the target environment can be scanned directly with the components provided on the unmanned device itself, so that the actual environmental information of the target environment is obtained, ensuring that the information used to build the local environment map matches both the actual environment and the unmanned device, providing accurate data support for subsequent localization based on the local environment map, improving the accuracy of the determined localization of the unmanned device, and thus ensuring the accuracy and effectiveness of its navigation.
In a possible embodiment, an exemplary implementation of updating the local environment map according to the environmental information collected by the unmanned device in the process of controlling the unmanned device to move is as follows; this step may comprise:
in the process of the unmanned device moving, controlling the unmanned device to collect environmental information of the target environment at preset time intervals.
The environmental information may be visual images of the target environment; for example, a vision camera mounted on the unmanned device can capture an image of the environment at the device's current location. As another example, the environmental information may be obstacle information in the target environment; for example, a lidar mounted on the unmanned device can perform monitoring to obtain the obstacle information at the device's current location.
The unmanned device is located according to the environmental information and the local environment map, and the moving position of the unmanned device is determined.
Here, the moving position is the real-time position of the unmanned device while it moves. As a possible embodiment, the collected environmental information can be compared with the features in the local environment map to obtain the matching features together with their matching degrees, and the position of the feature with the highest matching degree is determined as the moving position.
The environmental information collected by the unmanned device at the moving position is compared with the local environment map, and the local environment map is updated according to the comparison result.
For example, changes in the placement of items in the target environment may cause the virtual environment map to deviate from the real environment. After the current position of the unmanned device is determined, the image within the device's field of view in the local environment map can be obtained based on the device's current heading and compared with the features of the real-time environmental information collected at that position. If the two are consistent, the local environment map is kept unchanged; if not, the features at the corresponding location in the local environment map are updated with the features of the real-time environmental information collected at that position, ensuring that the local environment map remains consistent with the current real target environment. The local environment map can thus be updated while the unmanned device moves, further improving the accuracy of the map while saving operations of the unmanned device, and providing more accurate data support for the device's subsequent navigation.
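A minimal sketch of this compare-then-update rule on the occupancy bitmap, with the field-of-view region reduced to an axis-aligned patch and the consistency test reduced to a mean absolute difference with an assumed tolerance:

```python
import numpy as np

def update_patch(local_map, live_patch, r0, c0, tol=0.2):
    """Overwrite the map region in the robot's field of view when it drifts.

    local_map: stored occupancy bitmap; live_patch: freshly observed occupancy
    for the same region, anchored at cell (r0, c0). If stored and observed
    patches agree (mean absolute difference <= tol), the map is kept;
    otherwise the observed values replace the stale ones.
    """
    h, w = live_patch.shape
    stored = local_map[r0:r0 + h, c0:c0 + w]
    if np.abs(stored - live_patch).mean() > tol:
        local_map[r0:r0 + h, c0:c0 + w] = live_patch  # e.g. furniture moved
        return True                                    # map was updated
    return False

m = np.zeros((10, 10))
print(update_patch(m, np.ones((3, 3)), 4, 4))          # True: big mismatch
print(m[4:7, 4:7].sum())                               # 9.0
```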
In a possible embodiment, after the local environment map is updated, it can be sent to the cloud so that the cloud updates the virtual environment map based on the updated local environment map.
In a possible embodiment, the virtual environment map comprises a grid map and a visual feature map, and an exemplary implementation of locating the unmanned device according to the environmental information and the local environment map and determining the moving position of the unmanned device is as follows; this step may comprise:
acquiring the laser point cloud information and visual image information corresponding to the position at which the unmanned device is located, the ways of acquiring which have been described in detail above and are not repeated here;
determining a first position according to the laser point cloud information and the grid map, where the laser point cloud information collected by the unmanned device at its current position can be compared with the features of each cell in the grid map to determine the cells matching the laser point cloud information, and the position of the cell with the highest matching degree is determined as the first position.
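A brute-force sketch of this grid-map matching, scoring each candidate pose by the fraction of transformed laser endpoints that land on occupied cells; the candidate set, the resolution and the scoring rule are illustrative simplifications of a real scan matcher:

```python
import numpy as np

def match_scan(grid, scan, candidates, res=0.05):
    """Score candidate (x, y, yaw) poses against the occupancy grid.

    grid: occupancy bitmap; scan: (N, 2) laser endpoints in the robot frame.
    Returns the best pose and its match score (usable as a confidence).
    """
    best, best_score = None, -1.0
    for x, y, yaw in candidates:
        c, s = np.cos(yaw), np.sin(yaw)
        pts = scan @ np.array([[c, s], [-s, c]]) + np.array([x, y])  # -> map frame
        idx = np.round(pts / res).astype(int)
        ok = (idx >= 0).all(axis=1) & (idx < grid.shape[0]).all(axis=1)
        score = grid[idx[ok, 1], idx[ok, 0]].mean() if ok.any() else 0.0
        if score > best_score:
            best, best_score = (x, y, yaw), score
    return best, best_score

grid = np.zeros((100, 100)); grid[40, 40:60] = 1.0     # a wall at y = 2 m
scan = np.stack([np.linspace(2.0, 2.9, 10), np.zeros(10)], axis=1)
cands = [(0.0, 2.0, 0.0), (1.0, 1.0, 0.0)]
print(match_scan(grid, scan, cands))                   # first pose wins
```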
A second position is determined according to the visual image information and the visual feature map. Similarly, feature extraction can be performed on the visual image information collected by the unmanned device at its current position to obtain real-time feature points, which are compared with the feature points in the visual feature map to determine the matching feature points, and the position of the feature point with the highest matching degree is determined as the second position.
The moving position is determined according to the first position and the second position.
Thus, through the above technical solution, when the unmanned device is located in real time while moving, both the grid map and the visual feature map can be used for localization; this multi-angle localization ensures the accuracy of the device's moving position, allows the moving position to be tracked in real time, and facilitates control of the movement path, so that the device can be locally path-controlled to follow the movement path, ensuring the accuracy and efficiency of navigation.
In a possible embodiment, the determining the moving position according to the first position and the second position may comprise:
determining the position confidences respectively corresponding to the first position and the second position, and determining the position with the larger position confidence as the moving position.
As an example, the matching degrees obtained when the first position and the second position were determined can be used as their respective position confidences: when the confidence of the first position is greater than that of the second position, the first position can be taken directly as the moving position, and when it is smaller, the second position can be taken directly as the moving position. Thus, when the unmanned device's position is determined in real time based on the grid map and the visual feature map, the more accurate of the two can be taken as the device's moving position; combining the two maps for localization further improves the accuracy of the moving position and provides data support for controlling the device to move smoothly to the target position.
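The selection rule itself is a one-liner once the match scores are carried along as confidences; breaking ties in favour of the laser fix below is an assumption, since the disclosure only specifies that the larger confidence wins:

```python
def fuse(laser_pos, laser_conf, visual_pos, visual_conf):
    """Keep whichever of the two position estimates is trusted more.

    The confidences are the match scores produced when each position was
    determined (e.g. the grid-map and feature-map match degrees). The >=
    tie-break toward the laser fix is an illustrative choice.
    """
    return laser_pos if laser_conf >= visual_conf else visual_pos

print(fuse((1.0, 2.0), 0.9, (1.1, 2.2), 0.6))   # -> (1.0, 2.0)
```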
In a possible embodiment, the method may further comprise:
acquiring acceleration information and rotation angle information of the unmanned device, which can be collected by an inertial sensor mounted on the unmanned device;
determining a third position according to the previous moving position, the acceleration information, the rotation angle information and the local environment map.
Here, since the movement path of the unmanned device is continuous, a local movement path of the device can be determined from the acceleration information and rotation angle information while it moves, so that the current position predicted from the previous moving position, i.e. the third position, can be determined based on the previous moving position, the local movement path and the local environment map.
Another exemplary implementation of determining the moving position according to the first position and the second position is as follows; this step may further comprise:
in a case where the distance between the determined moving position and the third position is greater than a distance threshold, determining the third position as the moving position.
Here, the third position is the current position predicted from the previous moving position. If the distance between the determined moving position and the third position is greater than the distance threshold, the currently determined position deviates considerably from the position predicted from the previous moving position; in this case, the position predicted from the previous moving position, i.e. the third position, can be determined as the moving position. Thus, the above technical solution can to a certain extent improve the continuity of localization while the unmanned device moves, conforming to the device's actual route; this improves navigation accuracy to a certain extent, makes the navigated movement path match the actual environment, improves the accuracy of machine movement control, and improves the user experience.
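A sketch of this third-position fallback, with a constant-acceleration dead-reckoning model standing in for whatever inertial integration a real system would use, and an assumed 0.5 m distance threshold:

```python
import numpy as np

def dead_reckon(prev_pose, prev_vel, accel, yaw_rate, dt):
    """Predict the next pose from the last one using IMU readings.

    prev_pose = (x, y, yaw); accel is forward acceleration and yaw_rate the
    angular velocity reported by the inertial sensor (illustrative model).
    """
    x, y, yaw = prev_pose
    v = prev_vel + accel * dt
    yaw = yaw + yaw_rate * dt
    x += v * np.cos(yaw) * dt
    y += v * np.sin(yaw) * dt
    return (x, y, yaw), v

def pick_position(fused_pos, predicted_pose, threshold=0.5):
    """Fall back to the IMU prediction when the map-based fix jumps too far."""
    dx = fused_pos[0] - predicted_pose[0]
    dy = fused_pos[1] - predicted_pose[1]
    if np.hypot(dx, dy) > threshold:
        return predicted_pose[:2]      # the third position wins
    return fused_pos

pred, v = dead_reckon((0.0, 0.0, 0.0), 0.5, 0.1, 0.0, 1.0)
print(pick_position((5.0, 5.0), pred))  # jump detected -> IMU prediction
```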
Based on the same inventive concept, the present disclosure also provides a navigation method for an unmanned device, applied to a cloud, the method comprising:
acquiring environment data information of a target environment for navigation;
performing three-dimensional spatial reconstruction based on the environment data information to obtain a virtual twin environment corresponding to the target environment;
generating, in the virtual twin environment, a virtual device corresponding to the unmanned device in the target environment;
controlling the virtual device to traverse the virtual twin environment, and collecting feature information in the virtual twin environment based on the virtual sensors of the virtual device, so as to generate the virtual environment map according to the feature information;
in response to a received initial position and target position, determining the virtual initial position and virtual target position, in the virtual twin environment, of the virtual device corresponding to the unmanned device, and determining the movement path of the virtual device based on the virtual initial position and the virtual target position;
sending the movement path to the unmanned device.
The specific implementation of the above steps has been described in detail above and is not repeated here. Thus, through the above technical solution, a virtual environment map suitable for navigating every unmanned device in the target environment can be constructed at the cloud, achieving cloud sharing of the environment map; each unmanned device newly added to the target environment does not need to rebuild the environment map, which saves the amount of data each device requires for map construction and can improve the accuracy of the navigation map to some extent; it also facilitates unified management of the environment map and improves the accuracy and completeness of unmanned device navigation. Moreover, the cloud generates a virtual device based on the virtual twin environment and generates the unmanned device's movement path based on the information of that virtual device; on the one hand this effectively lowers the performance requirements that path planning places on the unmanned device itself, while the virtual twin environment and virtual device allow the device's movement path to be displayed and monitored in real time, further ensuring navigation accuracy and improving the user experience.
The present disclosure also provides a navigation apparatus for an unmanned device; as shown in Fig. 4, applied to the unmanned device, the apparatus 10 comprises:
a first acquisition module 100 configured to acquire a local environment map of a target environment for navigation, wherein the local environment map is generated from information obtained by the unmanned device scanning the target environment, or is obtained by synchronizing a virtual environment map constructed by the cloud in a virtual twin environment corresponding to the target environment;
a first determination module 200 configured to determine, based on the local environment map, the initial position at which the unmanned device is located;
a first sending module 300 configured to determine the target position of the movement according to a received movement instruction and send the initial position and the target position to the cloud, so that the cloud determines the virtual initial position and virtual target position, in the virtual twin environment, of the virtual device corresponding to the unmanned device, and determines the movement path of the virtual device based on the virtual initial position and the virtual target position, wherein the virtual device is a virtual device generated at the cloud and corresponding to the unmanned device in the virtual twin environment;
a control module 400 configured to control, in response to receiving the movement path sent by the cloud, the unmanned device to move according to the movement path.
Optionally, the local environment map is obtained by synchronizing the virtual environment map constructed by the cloud in the virtual twin environment corresponding to the target environment;
the apparatus further comprises:
an update module configured to update, in the process of controlling the unmanned device to move, the local environment map according to the environmental information collected by the unmanned device.
Optionally, the local environment map is obtained by synchronizing the virtual environment map constructed by the cloud in the virtual twin environment corresponding to the target environment; the virtual environment map is determined by a mapping module, the mapping module comprising:
a first acquisition submodule configured to acquire environment data information of the target environment;
a reconstruction submodule configured to perform three-dimensional spatial reconstruction based on the environment data information to obtain the virtual twin environment corresponding to the target environment;
a first generation submodule configured to generate, in the virtual twin environment, the virtual device corresponding to the unmanned device;
a second generation submodule configured to control the virtual device to traverse the virtual twin environment and collect feature information in the virtual twin environment based on the virtual sensors of the virtual device, so as to generate the virtual environment map according to the feature information.
Optionally, the virtual environment map comprises a grid map and a visual feature map, and the virtual sensors comprise a virtual lidar and a virtual vision camera;
the second generation submodule comprises:
a third generation submodule configured to collect laser point cloud feature information corresponding to the virtual twin environment based on the virtual lidar and generate the grid map according to the laser point cloud feature information;
a fourth generation submodule configured to collect pose feature information and visual image feature information corresponding to the virtual twin environment based on the virtual vision camera and generate the visual feature map according to the pose feature information and the visual image feature information.
Optionally, the local environment map is generated from information obtained by the unmanned device scanning the target environment; the local environment map comprises a grid map and a visual feature map;
the local environment map is generated in the following manner:
collecting laser point cloud feature information corresponding to the target environment based on the lidar provided in the unmanned device, and generating the grid map according to the laser point cloud feature information;
collecting pose feature information and visual image feature information corresponding to the target environment based on the vision camera provided in the unmanned device, and generating the visual feature map according to the pose feature information and the visual image feature information.
Optionally, the update module comprises:
a collection submodule configured to control the unmanned device, in the process of the unmanned device moving, to collect environmental information of the target environment at preset time intervals;
a first determination submodule configured to locate the unmanned device according to the environmental information and the local environment map and determine the moving position of the unmanned device;
an update submodule configured to compare the environmental information collected by the unmanned device at the moving position with the local environment map and update the local environment map according to the comparison result.
Optionally, the virtual environment map comprises a grid map and a visual feature map, and the first determination submodule comprises:
a second acquisition submodule configured to acquire the laser point cloud information and visual image information corresponding to the position at which the unmanned device is located;
a second determination submodule configured to determine a first position according to the laser point cloud information and the grid map;
a third determination submodule configured to determine a second position according to the visual image information and the visual feature map;
a fourth determination submodule configured to determine the moving position according to the first position and the second position.
Optionally, the fourth determination submodule comprises:
a fifth determination submodule configured to determine the position confidences respectively corresponding to the first position and the second position and determine the position with the larger position confidence as the moving position.
Optionally, the apparatus further comprises:
a second acquisition module configured to acquire the acceleration information and rotation angle information of the unmanned device;
a second determination module configured to determine a third position according to the previous moving position, the acceleration information, the rotation angle information and the local environment map;
the fourth determination submodule further comprises:
a sixth determination submodule configured to determine, in a case where the distance between the determined moving position and the third position is greater than a distance threshold, the third position as the moving position.
The present disclosure also provides a navigation apparatus for an unmanned device, applied to a cloud, the apparatus comprising:
a third acquisition module configured to acquire environment data information of a target environment for navigation;
a reconstruction module configured to perform three-dimensional spatial reconstruction based on the environment data information to obtain a virtual twin environment corresponding to the target environment;
a generation module configured to generate, in the virtual twin environment, a virtual device corresponding to the unmanned device in the target environment;
a collection module configured to control the virtual device to traverse the virtual twin environment and collect feature information in the virtual twin environment based on the virtual sensors of the virtual device, so as to generate the virtual environment map according to the feature information;
a third determination module configured to determine, in response to a received initial position and target position, the virtual initial position and virtual target position, in the virtual twin environment, of the virtual device corresponding to the unmanned device, and determine the movement path of the virtual device based on the virtual initial position and the virtual target position;
a second sending module configured to send the movement path to the unmanned device.
With regard to the apparatuses in the above embodiments, the specific manner in which each module performs its operations has been described in detail in the embodiments of the method and will not be elaborated here.
Fig. 5 is a block diagram of an unmanned device 700 according to an exemplary embodiment. As shown in Fig. 5, the unmanned device 700 may comprise a processor 701 and a memory 702, and may further comprise one or more of a multimedia component 703, an input/output (I/O) interface 704, and a communication component 705.
The processor 701 controls the overall operation of the unmanned device 700 to complete all or some of the steps of the above navigation method for an unmanned device. The memory 702 stores various types of data to support operation on the unmanned device 700; such data may include, for example, instructions for any application or method operated on the device, as well as application-related data such as contact data, sent and received messages, pictures, audio, video, and so on. The memory 702 may be implemented by any type of volatile or non-volatile storage device or a combination thereof, such as static random access memory (SRAM), electrically erasable programmable read-only memory (EEPROM), erasable programmable read-only memory (EPROM), programmable read-only memory (PROM), read-only memory (ROM), magnetic memory, flash memory, magnetic disk or optical disk. The multimedia component 703 may include a screen and an audio component, where the screen may be, for example, a touch screen, and the audio component is used to output and/or input audio signals. For example, the audio component may include a microphone for receiving external audio signals; a received audio signal may be further stored in the memory 702 or sent through the communication component 705. The audio component also includes at least one speaker for outputting audio signals. The I/O interface 704 provides an interface between the processor 701 and other interface modules such as a keyboard, a mouse, or buttons, which may be virtual or physical. The communication component 705 is used for wired or wireless communication between the unmanned device 700 and other devices. Wireless communication may be, for example, Wi-Fi, Bluetooth, Near Field Communication (NFC), 2G, 3G, 4G, NB-IOT, eMTC, or other 5G, or a combination of one or more of them, which is not limited here; accordingly, the communication component 705 may include a Wi-Fi module, a Bluetooth module, an NFC module, and so on.
In an exemplary embodiment, the unmanned device 700 may be implemented by one or more application-specific integrated circuits (ASIC), digital signal processors (DSP), digital signal processing devices (DSPD), programmable logic devices (PLD), field programmable gate arrays (FPGA), controllers, microcontrollers, microprocessors or other electronic components, for performing the above navigation method for an unmanned device.
In another exemplary embodiment, a computer-readable storage medium comprising program instructions is also provided, the program instructions implementing the steps of the above navigation method for an unmanned device when executed by a processor. For example, the computer-readable storage medium may be the above memory 702 comprising program instructions, which can be executed by the processor 701 of the unmanned device 700 to complete the above navigation method for an unmanned device.
To implement the above embodiments, the present disclosure also proposes a computing processing device, comprising:
a memory in which computer-readable code is stored; and
one or more processors, wherein when the computer-readable code is executed by the one or more processors, the computing processing device performs the aforementioned navigation method for an unmanned device.
To implement the above embodiments, the present disclosure also proposes a computer program comprising computer-readable code which, when run on a computing processing device, causes the computing processing device to perform the aforementioned navigation method for an unmanned device.
The computer-readable storage medium proposed by the present disclosure stores the aforementioned computer program.
Fig. 6 is a schematic structural diagram of a computing processing device according to an embodiment of the present disclosure. The computing processing device typically comprises a processor 1110 and a computer program product or computer-readable medium in the form of a memory 1130. The memory 1130 may be an electronic memory such as flash memory, EEPROM (electrically erasable programmable read-only memory), EPROM, hard disk or ROM. The memory 1130 has a storage space 1150 for program code 1151 for performing any of the method steps of the above methods. For example, the storage space 1150 for program code may comprise individual program codes 1151 for implementing the various steps of the above methods respectively. The program code can be read from or written into one or more computer program products, which comprise program code carriers such as hard disks, compact discs (CD), memory cards or floppy disks. Such a computer program product is typically a portable or fixed storage unit as shown in Fig. 7. The storage unit may have storage segments, storage spaces, etc. arranged similarly to the memory 1130 in the computing processing device of Fig. 6. The program code may, for example, be compressed in a suitable form. Typically, the storage unit comprises computer-readable code 1151', i.e. code readable by a processor such as 1110, which, when run by a server, causes the server to perform the various steps of the methods described above.
The preferred embodiments of the present disclosure have been described in detail above with reference to the accompanying drawings; however, the present disclosure is not limited to the specific details of the above embodiments. Within the scope of the technical concept of the present disclosure, various simple variations can be made to the technical solution of the present disclosure, and these simple variations all fall within the protection scope of the present disclosure.
It should also be noted that the specific technical features described in the above specific embodiments can be combined in any suitable manner without contradiction. To avoid unnecessary repetition, the present disclosure does not further describe the various possible combinations.
In addition, the various embodiments of the present disclosure can also be combined arbitrarily; as long as such combinations do not violate the idea of the present disclosure, they should likewise be regarded as content disclosed by the present disclosure.

Claims (15)

  1. A navigation method for an unmanned device, applied to the unmanned device, the method comprising:
    acquiring a local environment map of a target environment for navigation, wherein the local environment map is generated from information obtained by the unmanned device scanning the target environment, or the local environment map is obtained by synchronizing a virtual environment map constructed by a cloud in a digital twin environment corresponding to the target environment;
    determining, based on the local environment map, an initial position at which the unmanned device is located;
    determining a target position of the movement according to a received movement instruction, and sending the initial position and the target position to the cloud, so that the cloud determines a virtual initial position and a virtual target position, in the digital twin environment, of a digital twin device corresponding to the unmanned device, and determines a movement path of the digital twin device based on the virtual initial position and the virtual target position, wherein the digital twin device is a virtual device generated at the cloud and corresponding to the unmanned device in the digital twin environment;
    in response to receiving the movement path sent by the cloud, controlling the unmanned device to move according to the movement path.
  2. The method according to claim 1, wherein the local environment map is obtained by synchronizing the virtual environment map constructed by the cloud in the digital twin environment corresponding to the target environment;
    the method further comprises:
    in the process of controlling the unmanned device to move, updating the local environment map according to environmental information collected by the unmanned device.
  3. The method according to claim 1 or 2, wherein the local environment map is obtained by synchronizing the virtual environment map constructed by the cloud in the digital twin environment corresponding to the target environment; the virtual environment map is determined in the following manner:
    acquiring environment data information of the target environment;
    performing three-dimensional spatial reconstruction based on the environment data information to obtain the digital twin environment corresponding to the target environment;
    generating, in the digital twin environment, the digital twin device corresponding to the unmanned device;
    controlling the digital twin device to traverse the digital twin environment, and collecting feature information in the digital twin environment based on virtual sensors of the digital twin device, so as to generate the virtual environment map according to the feature information.
  4. The method according to claim 3, wherein the virtual environment map comprises a grid map and a visual feature map, and the virtual sensors comprise a virtual lidar and a virtual vision camera;
    the collecting feature information in the digital twin environment based on the virtual sensors of the digital twin device, so as to generate the virtual environment map according to the feature information, comprises:
    collecting laser point cloud feature information corresponding to the digital twin environment based on the virtual lidar, and generating the grid map according to the laser point cloud feature information;
    collecting pose feature information and visual image feature information corresponding to the digital twin environment based on the virtual vision camera, and generating the visual feature map according to the pose feature information and the visual image feature information.
  5. The method according to any one of claims 1-4, wherein the local environment map is generated from information obtained by the unmanned device scanning the target environment; the local environment map comprises a grid map and a visual feature map;
    the local environment map is generated in the following manner:
    collecting laser point cloud feature information corresponding to the target environment based on a lidar provided in the unmanned device, and generating the grid map according to the laser point cloud feature information;
    collecting pose feature information and visual image feature information corresponding to the target environment based on a vision camera provided in the unmanned device, and generating the visual feature map according to the pose feature information and the visual image feature information.
  6. The method according to claim 2, wherein the updating the local environment map according to the environmental information collected by the unmanned device in the process of controlling the unmanned device to move comprises:
    in the process of the unmanned device moving, controlling the unmanned device to collect environmental information of the target environment at preset time intervals;
    locating the unmanned device according to the environmental information and the local environment map, and determining a moving position of the unmanned device;
    comparing the environmental information collected by the unmanned device at the moving position with the local environment map, and updating the local environment map according to the comparison result.
  7. The method according to any one of claims 1-6, wherein the virtual environment map comprises a grid map and a visual feature map, and the locating the unmanned device according to the environmental information and the local environment map and determining the moving position of the unmanned device comprises:
    acquiring laser point cloud information and visual image information corresponding to the position at which the unmanned device is located;
    determining a first position according to the laser point cloud information and the grid map;
    determining a second position according to the visual image information and the visual feature map;
    determining the moving position according to the first position and the second position.
  8. The method according to claim 7, wherein the determining the moving position according to the first position and the second position comprises:
    determining position confidences respectively corresponding to the first position and the second position, and determining the position with the larger position confidence as the moving position.
  9. The method according to claim 7 or 8, further comprising:
    acquiring acceleration information and rotation angle information of the unmanned device;
    determining a third position according to a previous moving position, the acceleration information, the rotation angle information and the local environment map;
    the determining the moving position according to the first position and the second position further comprises:
    in a case where the distance between the determined moving position and the third position is greater than a distance threshold, determining the third position as the moving position.
  10. A navigation method for an unmanned device, applied to a cloud, the method comprising:
    acquiring environment data information of a target environment for navigation;
    performing three-dimensional spatial reconstruction based on the environment data information to obtain a digital twin environment corresponding to the target environment;
    generating, in the digital twin environment, a digital twin device corresponding to the unmanned device in the target environment;
    controlling the digital twin device to traverse the digital twin environment, and collecting feature information in the digital twin environment based on virtual sensors of the digital twin device, so as to generate the virtual environment map according to the feature information;
    in response to a received initial position and target position, determining a virtual initial position and a virtual target position, in the digital twin environment, of the digital twin device corresponding to the unmanned device, and determining a movement path of the digital twin device based on the virtual initial position and the virtual target position;
    sending the movement path to the unmanned device.
  11. A navigation apparatus for an unmanned device, applied to the unmanned device, the apparatus comprising:
    a first acquisition module configured to acquire a local environment map of a target environment for navigation, wherein the local environment map is generated from information obtained by the unmanned device scanning the target environment, or the local environment map is obtained by synchronizing a virtual environment map constructed by a cloud in a digital twin environment corresponding to the target environment;
    a first determination module configured to determine, based on the local environment map, an initial position at which the unmanned device is located;
    a first sending module configured to determine a target position of the movement according to a received movement instruction and send the initial position and the target position to the cloud, so that the cloud determines a virtual initial position and a virtual target position, in the digital twin environment, of a digital twin device corresponding to the unmanned device, and determines a movement path of the digital twin device based on the virtual initial position and the virtual target position, wherein the digital twin device is a virtual device generated at the cloud and corresponding to the unmanned device in the digital twin environment;
    a control module configured to control, in response to receiving the movement path sent by the cloud, the unmanned device to move according to the movement path.
  12. A navigation apparatus for an unmanned device, applied to a cloud, the apparatus comprising:
    a third acquisition module configured to acquire environment data information of a target environment for navigation;
    a reconstruction module configured to perform three-dimensional spatial reconstruction based on the environment data information to obtain a digital twin environment corresponding to the target environment;
    a generation module configured to generate, in the digital twin environment, a digital twin device corresponding to the unmanned device in the target environment;
    a collection module configured to control the digital twin device to traverse the digital twin environment and collect feature information in the digital twin environment based on virtual sensors of the digital twin device, so as to generate the virtual environment map according to the feature information;
    a third determination module configured to determine, in response to a received initial position and target position, a virtual initial position and a virtual target position, in the digital twin environment, of the digital twin device corresponding to the unmanned device, and determine a movement path of the digital twin device based on the virtual initial position and the virtual target position;
    a second sending module configured to send the movement path to the unmanned device.
  13. A computer program, comprising computer-readable code which, when run on a computing processing device, causes the computing processing device to perform the method according to any one of claims 1-10.
  14. A non-transitory computer-readable storage medium on which the computer program according to claim 13 is stored, wherein the program, when executed by a processor, implements the steps of the method according to any one of claims 1-10.
  15. An unmanned device, comprising:
    a memory on which the computer program according to claim 13 is stored;
    a processor configured to execute the computer program in the memory to implement the steps of the method according to any one of claims 1-10.