CN116045961A - Navigation method, navigation device, electronic equipment and storage medium - Google Patents

Navigation method, navigation device, electronic equipment and storage medium

Info

Publication number
CN116045961A
Authority
CN
China
Prior art keywords
navigation
dimensional map
map
information
repositioning
Prior art date
Legal status (The legal status is an assumption and is not a legal conclusion. Google has not performed a legal analysis and makes no representation as to the accuracy of the status listed.)
Pending
Application number
CN202211714011.7A
Other languages
Chinese (zh)
Inventor
李龙喜
黄志鑫
韩赛飞
Current Assignee (The listed assignees may be inaccurate. Google has not performed a legal analysis and makes no representation or warranty as to the accuracy of the list.)
Guangzhou Xaircraft Technology Co Ltd
Original Assignee
Guangzhou Xaircraft Technology Co Ltd
Priority date (The priority date is an assumption and is not a legal conclusion. Google has not performed a legal analysis and makes no representation as to the accuracy of the date listed.)
Filing date
Publication date
Application filed by Guangzhou Xaircraft Technology Co Ltd filed Critical Guangzhou Xaircraft Technology Co Ltd
Priority to CN202211714011.7A priority Critical patent/CN116045961A/en
Publication of CN116045961A publication Critical patent/CN116045961A/en
Pending legal-status Critical Current

Classifications

    • G: PHYSICS
    • G01: MEASURING; TESTING
    • G01C: MEASURING DISTANCES, LEVELS OR BEARINGS; SURVEYING; NAVIGATION; GYROSCOPIC INSTRUMENTS; PHOTOGRAMMETRY OR VIDEOGRAMMETRY
    • G01C21/00: Navigation; Navigational instruments not provided for in groups G01C1/00 - G01C19/00
    • G01C21/005: Navigation; Navigational instruments not provided for in groups G01C1/00 - G01C19/00 with correlation of navigation data from several sources, e.g. map or contour matching
    • G01C21/10: Navigation; Navigational instruments not provided for in groups G01C1/00 - G01C19/00 by using measurements of speed or acceleration
    • G01C21/12: Navigation; Navigational instruments not provided for in groups G01C1/00 - G01C19/00 by using measurements of speed or acceleration executed aboard the object being navigated; Dead reckoning
    • G01C21/16: Navigation; Navigational instruments not provided for in groups G01C1/00 - G01C19/00 by using measurements of speed or acceleration executed aboard the object being navigated; Dead reckoning by integrating acceleration or speed, i.e. inertial navigation
    • G01C21/165: Navigation; Navigational instruments not provided for in groups G01C1/00 - G01C19/00 by using measurements of speed or acceleration executed aboard the object being navigated; Dead reckoning by integrating acceleration or speed, i.e. inertial navigation combined with non-inertial navigation instruments
    • G01C21/1656: Navigation; Navigational instruments not provided for in groups G01C1/00 - G01C19/00 by using measurements of speed or acceleration executed aboard the object being navigated; Dead reckoning by integrating acceleration or speed, i.e. inertial navigation combined with non-inertial navigation instruments with passive imaging devices, e.g. cameras
    • G01C21/20: Instruments for performing navigational calculations

Landscapes

  • Engineering & Computer Science (AREA)
  • Radar, Positioning & Navigation (AREA)
  • Remote Sensing (AREA)
  • Automation & Control Theory (AREA)
  • Physics & Mathematics (AREA)
  • General Physics & Mathematics (AREA)
  • Navigation (AREA)

Abstract

The embodiments of the present application relate to the technical field of navigation, and provide a navigation method, a navigation device, electronic equipment, and a storage medium.

Description

Navigation method, navigation device, electronic equipment and storage medium
Technical Field
The embodiment of the application relates to the technical field of navigation, in particular to a navigation method, a navigation device, electronic equipment and a storage medium.
Background
Currently, fixed-point navigation techniques can be broadly divided into outdoor fixed-point navigation and indoor fixed-point navigation. Outdoor fixed-point navigation mainly relies on satellite positioning, that is, fixed-point navigation is achieved by means of satellite signals, which makes it extremely difficult in areas without satellite coverage. Indoor fixed-point navigation mainly adopts SLAM (Simultaneous Localization and Mapping) technology, which includes laser SLAM based on a lidar and visual SLAM based on a visual odometer (for example, a camera). Laser SLAM is hard to popularize on a large scale because of its high equipment requirements, while visual SLAM places overly harsh demands on the scene and suffers large positioning deviations under weak texture, complex illumination, and similar conditions.
Therefore, in the prior art, a single system cannot be used both indoors and outdoors to achieve high-precision fixed-point navigation.
Disclosure of Invention
An object of the embodiments of the present application is to provide a navigation method, a navigation device, an electronic device, and a storage medium, which are used to achieve high-precision fixed-point navigation both indoors and outdoors with a single system.
In order to achieve the above purpose, the technical solution adopted in the embodiment of the present application is as follows:
in a first aspect, an embodiment of the present application provides a navigation method, where the method includes:
acquiring a three-dimensional map of a target scene and navigation path points in the three-dimensional map, wherein the three-dimensional map is obtained by mapping the target scene through a visual inertial odometer of a carrier;
repositioning the carrier according to the three-dimensional map so that pose information obtained by the visual inertial odometer is in the same coordinate system as the three-dimensional map;
and navigating the carrier according to the pose information obtained by the visual inertial odometer, the three-dimensional map and the navigation path point.
Optionally, the step of navigating the carrier according to the pose information obtained by the visual inertial odometer, the three-dimensional map and the navigation path point includes:
acquiring running information of the carrier relative to the initialization completion time or the last positioning fusion time, based on images acquired by the visual inertial odometer in real time;
when the running information meets the set condition, determining a matching map area from a three-dimensional map according to the current pose information obtained by the visual inertial odometer;
positioning and fusing the current pose information and repositioning pose information obtained by repositioning the matching map area to obtain fused positioning information;
and navigating the carrier according to the fusion positioning information, the three-dimensional map and the navigation path point.
Optionally, the step of performing positioning fusion processing on the current pose information and the repositioning pose information obtained by repositioning the matching map area to obtain fused positioning information includes:
determining a distance between the current pose information and the repositioning pose information;
if the distance is smaller than a preset distance threshold, the current pose information is used as the fusion positioning information;
and if the distance is greater than or equal to the preset distance threshold, a matching result of the image acquired by the visual inertial odometer and the three-dimensional map is obtained, and the fusion positioning information is determined according to the matching result.
Optionally, the three-dimensional map includes a plurality of three-dimensional points and key frame information and pose information associated with each of the three-dimensional points;
the step of obtaining the matching result of the image acquired by the visual inertial odometer and the three-dimensional map comprises the following steps:
determining target pose information matched with the current pose information from the three-dimensional map;
acquiring target key frame information associated with the target pose information;
and determining the number of matched feature points and the average re-projection error between the image currently acquired by the visual inertial odometer and the target key frame information, so as to obtain the matching result.
Optionally, the matching result includes a matching feature point number and an average re-projection error;
the step of determining the fusion positioning information according to the matching result comprises the following steps:
if the number of the matched feature points is larger than a set number threshold and the average re-projection error is smaller than the initial average re-projection error, the repositioning pose information is used as the fusion positioning information;
if the number of the matched feature points is not larger than the set number threshold, or the average re-projection error is not smaller than the initial average re-projection error, the current pose information is used as the fusion positioning information;
wherein the initial average re-projection error is obtained by matching the acquired image with the three-dimensional map when the visual inertial odometer is initialized.
Optionally, the running information includes at least one of a running time length, a running distance, and a turning curvature;
the setting conditions include at least one of the following: the running time length is greater than a set time threshold, the running distance is greater than a set distance threshold, and the turning curvature is greater than a set curvature threshold.
Optionally, the three-dimensional map and the navigation path points in the three-dimensional map are obtained by:
acquiring image data and inertial measurement data acquired by the visual inertial odometer aiming at the target scene;
generating a three-dimensional map of the target scene and corresponding trajectory information based on the image data and the inertial measurement data;
the three-dimensional map is obtained by carrying out three-dimensional reconstruction on all key frames in the image data; the track information comprises pose coordinates of each key frame relative to a first key frame;
and converting the track information into an equipment matrix coordinate system to obtain a map building path, and extracting the navigation path point from the map building path.
Optionally, the mapping path includes a plurality of mapping path points;
the step of extracting the navigation path point from the map-building path comprises the following steps:
taking the starting point of the map building path as a navigation path point, and calculating the curvature of a path segment where the navigation path point is located;
if the curvature is smaller than a set threshold value, taking the map building path point at a first set distance behind the navigation path point as a new navigation path point;
if the curvature is greater than or equal to a set threshold, taking the map-building path point at a second set distance behind the navigation path point as a new navigation path point, wherein the first set distance is greater than the second set distance;
and replacing the navigation path point with the new navigation path point, and returning to the step of calculating the curvature of the path segment where the navigation path point is located, until the map-building path is traversed, and taking the map-building path end point as the last navigation path point.
Optionally, the step of repositioning the carrier according to the three-dimensional map includes:
acquiring an image through the visual inertial odometer, and matching the image with the three-dimensional map to obtain a matching result;
if the matching result meets the preset constraint condition, determining that the repositioning is successful;
if the matching result does not meet the constraint condition, determining that repositioning fails, and continuing to acquire images through the visual inertial odometer to match until repositioning is successful.
Optionally, the step of repositioning the carrier according to the three-dimensional map includes:
acquiring a reference image containing calibration information through the visual inertial odometer;
and determining feature points matched with the calibration information from the three-dimensional map according to the reference image, and repositioning the carrier based on the positions of the feature points in the three-dimensional map.
In a second aspect, embodiments of the present application further provide a navigation device, the device including:
the acquisition module is used for acquiring a three-dimensional map of a target scene and navigation path points in the three-dimensional map, wherein the three-dimensional map is obtained by mapping the target scene through a visual inertial odometer of a carrier;
the repositioning module is used for repositioning the carrier according to the three-dimensional map so that pose information obtained through the visual inertial odometer is in the same coordinate system as the three-dimensional map;
and the navigation module is used for navigating the carrier according to the pose information obtained by the visual inertial odometer, the three-dimensional map and the navigation path point.
In a third aspect, an embodiment of the present application further provides an electronic device, including a processor and a memory, where the memory is configured to store a program, and the processor is configured to implement the navigation method in the first aspect when the program is executed.
In a fourth aspect, embodiments of the present application further provide a computer readable storage medium having stored thereon a computer program which, when executed by a processor, implements the navigation method in the first aspect described above.
Compared with the prior art, the navigation method, the navigation device, the electronic device, and the storage medium provided by the embodiments of the present application map a target scene through the visual inertial odometer of a carrier and generate navigation path points. The carrier is repositioned according to the three-dimensional map so that pose information obtained through the visual inertial odometer is in the same coordinate system as the three-dimensional map. After repositioning against the three-dimensional map succeeds, pose information is obtained from the visual inertial odometer, and navigation is performed by combining it with the three-dimensional map and the navigation path points. A single system can thus be used for fixed-point navigation both indoors and outdoors; meanwhile, since the visual inertial odometer fuses inertial measurement data and image data for mapping and navigation, the accuracy of fixed-point navigation is improved.
Drawings
Fig. 1 shows a flow chart of a navigation method according to an embodiment of the present application.
Fig. 2 illustrates an example diagram of extracting navigation path points from a mapping path provided by an embodiment of the present application.
Fig. 3 shows an exemplary diagram for navigating a carrier provided in an embodiment of the present application.
Fig. 4 is a schematic flow chart of step S103 in fig. 1.
Fig. 5 shows a block schematic diagram of a navigation device according to an embodiment of the present application.
Fig. 6 shows a block schematic diagram of an electronic device according to an embodiment of the present application.
Reference numerals: 100-navigation device; 101-acquisition module; 102-repositioning module; 103-navigation module; 10-electronic device; 11-processor; 12-memory; 13-bus.
Detailed Description
The technical solutions in the embodiments of the present application will be clearly and completely described below with reference to the drawings in the embodiments of the present application.
In the prior art, outdoor fixed-point navigation mainly adopts a satellite positioning navigation system. In areas with good RTK (Real Time Kinematic) signals, such a system can stably achieve good accuracy, but in areas with weak or no RTK signals, fixed-point navigation through a satellite positioning navigation system becomes extremely difficult. In other words, the satellite positioning navigation system depends heavily on RTK signals, and the signal strength directly affects navigation and operation accuracy.
Indoor fixed-point navigation mainly adopts laser SLAM and visual SLAM. Laser SLAM is hard to popularize on a large scale because of the high equipment requirements of a lidar, while visual SLAM places overly harsh demands on the scene and suffers large positioning deviations under weak texture, complex illumination, and similar conditions.
To solve the above technical problems, the embodiments of the present application adopt a visual inertial odometer for both mapping and navigation: the visual inertial odometer is used to build the map and generate navigation path points, and after repositioning against the three-dimensional map succeeds, pose information is obtained from the visual inertial odometer and navigation is performed in combination with the three-dimensional map and the navigation path points. A single system can thus be used for fixed-point navigation both indoors and outdoors; meanwhile, because the visual inertial odometer fuses inertial measurement data and image data, the accuracy of fixed-point navigation can be improved. A detailed description follows.
The navigation technology provided by the embodiments of the present application can be applied to an electronic device. The electronic device may be an unmanned vehicle, an autonomous vehicle, an unmanned ship, or the like; it may also be an unmanned aerial vehicle (for example, an agricultural, forestry, or aerial-survey unmanned aerial vehicle) or a robot (e.g., a sweeping robot). The user may select different devices according to the actual application scene, which is not limited in the embodiments of the present application. The following examples use an unmanned vehicle for illustration.
The electronic device in the embodiments of the present application may be provided with a VIO (Visual-Inertial Odometry, visual inertial odometer). That is, the electronic device may include: a carrier, e.g., an unmanned vehicle body, a robot body, etc.; a VIO mounted on the carrier; and a control module communicatively connected to the VIO and configured to execute the navigation method provided by the embodiments of the present application.
VIO is an algorithm that fuses data from a camera and an IMU (Inertial Measurement Unit) to compute the device's position in space, where the camera may be a binocular camera, a depth camera, etc., which is not limited in the embodiments of the present application. VIO technology integrates visual positioning and inertial navigation positioning; it has the advantages of a high update rate, small drift, and high robustness, has received wide attention and research in the industry, and is increasingly applied in positioning and navigation systems.
In the embodiments of the present application, when the VIO is installed on the carrier, a display can assist installation and angle adjustment to ensure that the camera can capture a clear view of the area ahead. The display may be a separate display communicatively connected to the electronic device, or the display screen of another device communicatively connected to the electronic device, for example, the display screen of the user's mobile terminal (e.g., a personal computer, a smartphone, a tablet computer, etc.), which is not limited in the embodiments of the present application.
In one possible application scenario, a user may interact with an electronic device through an input device and an output device to implement mapping and navigation of a target scenario. The input device may be a device used by a user to input instructions, such as one or more of a keyboard, a mouse, a touch screen, and the like. The output device may be a device that outputs various information (e.g., images, etc.) externally to the electronic device, such as a display, etc.
In another possible application scenario, a user may also interact with the electronic device through a mobile terminal. The mobile terminal may be communicatively connected to the electronic device through a network, and an APP (application program) may be installed on the mobile terminal, through which the user can input instructions, view images, and so on.
Embodiments of the present application will be described in detail below with reference to the accompanying drawings.
Referring to fig. 1, fig. 1 is a flow chart illustrating a navigation method according to an embodiment of the present application. The navigation method is applied to the electronic equipment and can comprise the following steps:
s101, acquiring a three-dimensional map of a target scene and navigation path points in the three-dimensional map, wherein the three-dimensional map is obtained by mapping the target scene through a visual inertial odometer of a carrier.
In this embodiment, the target scene may be any outdoor or indoor scene requiring fixed-point navigation, for example, an outdoor area requiring an unmanned vehicle for transportation (such as a garden), or an indoor area requiring cleaning by a cleaning robot (such as an office), etc.
For a target scene, when the unmanned vehicle is in the scene for the first time, the user can send an instruction to the electronic device through the input device or the APP to specify the starting point of the map-building path. The unmanned vehicle then enters the mapping mode, builds a three-dimensional map of the target scene through the VIO, and generates navigation path points in the three-dimensional map. After mapping is completed, the unmanned vehicle stores the three-dimensional map and the navigation path points, based on which fixed-point navigation can subsequently be performed.
The process of creating a map to generate a three-dimensional map and navigation path points is described below.
In one possible implementation, the three-dimensional map and the navigation path points in the three-dimensional map are obtained by the following steps S1 to S3:
s1, aiming at a target scene, acquiring image data and inertial measurement data acquired by a visual inertial odometer.
And S2, generating a three-dimensional map of the target scene and corresponding track information based on the image data and the inertial measurement data.
In this embodiment, after entering the mapping mode, image data and inertial measurement data of the target scene are acquired in real time through the VIO and fed into the visual inertial odometry framework, which generates the three-dimensional map of the target scene and the corresponding track information based on the image data and the inertial measurement data.
Optionally, the process of generating the three-dimensional map and the corresponding trajectory information using the visual inertial odometry framework may include:
1. the VIO is initialized to generate a first key frame and pose information corresponding to the first key frame, where the pose information may include the position and pose at which the unmanned vehicle captured the first key frame.
2. entering the visual inertial odometry framework, and generating a key frame sequence based on the image data and inertial measurement data acquired by the VIO in real time, wherein the key frame sequence comprises all key frames in the image data and pose information corresponding to each key frame;
3. performing three-dimensional reconstruction based on the key frame sequence to generate a three-dimensional map, wherein the three-dimensional map comprises a plurality of three-dimensional points, and key frame information and pose information associated with each three-dimensional point;
4. performing feature extraction and matching based on the key frame sequence, and generating track information corresponding to the three-dimensional map, wherein the track information comprises pose coordinates of each key frame relative to the first key frame.
It should be noted that generating the three-dimensional map and the corresponding track information through the visual inertial odometry framework follows an existing algorithmic framework; the above process is only a brief description of the general logic, and the specific calculation and optimization steps are not repeated here.
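For illustration, the following is a minimal sketch of the data structures implied by the above steps. All class and field names here are assumptions for exposition, not the patent's actual data model; a production SLAM system would use its own map representation.

```python
# Illustrative only: names and fields are assumptions, not the patent's data model.
from dataclasses import dataclass, field
from typing import List
import numpy as np

@dataclass
class Keyframe:
    descriptors: np.ndarray    # feature descriptors extracted from the keyframe image
    pose: np.ndarray           # 4x4 pose of the keyframe relative to the first keyframe

@dataclass
class MapPoint:
    position: np.ndarray       # 3D coordinates of the point
    keyframes: List[Keyframe]  # keyframe information associated with this point
    poses: List[np.ndarray]    # pose information associated with this point

@dataclass
class ThreeDMap:
    points: List[MapPoint] = field(default_factory=list)
    # Track information: pose coordinates of each keyframe relative to the first.
    trajectory: List[np.ndarray] = field(default_factory=list)
```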
And S3, converting the track information into an equipment matrix coordinate system to obtain a map building path, and extracting navigation path points from the map building path.
In this embodiment, after the three-dimensional map and the corresponding track information are generated, the track information is converted into the device matrix coordinate system to obtain the map-building path, so that the subsequent control module can perform fixed-point navigation in a unified coordinate system. Optionally, taking the unmanned vehicle as an example, the origin of the device matrix coordinate system may be the center point of the vehicle body, with the x-axis pointing to the right of the vehicle body and the y-axis pointing forward.
In this embodiment, since the control module of the subsequent unmanned vehicle needs to perform fixed-point navigation according to the map-building path, in order to improve the efficiency of fixed-point navigation, it is also necessary to extract the navigation path points from the map-building path.
Alternatively, the map creating path may include a plurality of map creating path points, and the process of extracting the navigation path points from the map creating path may include:
Taking the starting point of the map building path as a navigation path point, and calculating the curvature of a path segment where the navigation path point is located;
if the curvature is smaller than the set threshold value, taking the map building path point at the first set distance behind the navigation path point as a new navigation path point;
if the curvature is greater than or equal to a set threshold value, taking the map building path point at a second set distance behind the navigation path point as a new navigation path point, wherein the first set distance is greater than the second set distance;
and replacing the navigation path point by using the new navigation path point, returning to the step of executing the calculation of the curvature of the path section where the navigation path point is located until the map path is traversed, and taking the map path end point as the last navigation path point.
That is, the navigation path points may be extracted by combining the distance between the map-forming path points and the path segment curvature, the distance may be represented by s, and the path segment curvature may be represented by v.
Referring to fig. 2, the map-building path includes a start point, an end point, and a plurality of map-building path points between them, shown as small black dots in fig. 2. When extracting navigation path points, the start point of the map-building path is first taken as the first navigation path point, and the curvature v of the path segment where it lies is calculated from the first navigation path point and several map-building path points behind it. Then, when the curvature v is small (for example, v < 0.01), a map-building path point far from the first navigation path point (for example, 3-5 m) is taken as the next navigation path point; when the curvature v is large (for example, v ≥ 0.01), a map-building path point near the first navigation path point (for example, 0.3-0.5 m) is taken as the next navigation path point. The above process is repeated until the map-building path is traversed, and the end point of the map-building path is taken as the last navigation path point; the navigation path points are shown as large black dots in fig. 2.
Optionally, the curvature setting threshold may be 0.01-0.02, the first setting distance may be 3-5 m, the second setting distance may be 0.3-0.5 m, and the specific value may be flexibly set by the user according to the actual requirement, which is not limited in this embodiment of the present application.
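To make the extraction logic concrete, here is a runnable sketch using the example values above (curvature threshold 0.01, far step within 3-5 m, near step within 0.3-0.5 m). The curvature estimator is an assumption, since the patent does not specify how the path-segment curvature is computed.

```python
import math

def seg_len(a, b):
    return math.hypot(b[0] - a[0], b[1] - a[1])

def curvature_at(path, i, window=5):
    """Assumed estimator: heading change per unit arc length over the next few points."""
    j = min(i + window, len(path) - 1)
    if j - i < 2:
        return 0.0
    h1 = math.atan2(path[i + 1][1] - path[i][1], path[i + 1][0] - path[i][0])
    h2 = math.atan2(path[j][1] - path[j - 1][1], path[j][0] - path[j - 1][0])
    dh = math.atan2(math.sin(h2 - h1), math.cos(h2 - h1))  # wrap to [-pi, pi]
    arc = sum(seg_len(path[k], path[k + 1]) for k in range(i, j))
    return abs(dh) / arc if arc > 0 else 0.0

def advance(path, i, dist):
    """Index of the first map-building path point at least `dist` metres beyond point i."""
    d = 0.0
    while i < len(path) - 1 and d < dist:
        d += seg_len(path[i], path[i + 1])
        i += 1
    return i

def extract_waypoints(path, curv_thresh=0.01, far=4.0, near=0.4):
    """Start point first, end point last; step size depends on local curvature."""
    waypoints = [path[0]]
    i = 0
    while i < len(path) - 1:
        step = far if curvature_at(path, i) < curv_thresh else near
        i = advance(path, i, step)
        waypoints.append(path[i])
    waypoints[-1] = path[-1]  # the map-building path end point is the last waypoint
    return waypoints
```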
S102, repositioning the carrier according to the three-dimensional map so that pose information obtained through the visual inertial odometer is in the same coordinate system as the three-dimensional map.
In this embodiment, after the unmanned vehicle finishes mapping, or when it needs fixed-point navigation (for example, when transporting fruit in an orchard), it can return to a repositioning starting point and enter the repositioning mode, in which the carrier is repositioned according to the three-dimensional map. After the carrier is repositioned successfully, pose information obtained through the visual inertial odometer is in the same coordinate system as the three-dimensional map.
Optionally, the repositioning starting point may be a map-building path starting point, or may be any point on the map-building path, and after the repositioning is successful, the unmanned vehicle may start the operation from the repositioning starting point.
In some possible scenarios, the unmanned vehicle may not need to start the operation from the repositioning starting point, but rather from some designated location point. For example, when the unmanned vehicle runs out of power or material during operation, it needs to stop the operation, return to the replenishment point to recharge or restock, and then return to the breakpoint, that is, the designated location point, to continue the operation; the designated location point may be set by the user through the input device or the APP. In this scenario, the unmanned vehicle first returns to the repositioning starting point for repositioning and, after repositioning succeeds, travels from the repositioning starting point to the designated location point and starts the operation from there.
The process of repositioning the carrier according to the three-dimensional map in step S102 will be described in detail.
In one possible implementation, step S102 may include S1021-S1023.
S1021, acquiring an image through a visual inertial odometer, and matching the image with a three-dimensional map to obtain a matching result.
In this embodiment, after entering the repositioning mode, images and inertial measurement data of the target scene are acquired in real time through the VIO, and each image is matched against the three-dimensional map using a PnP (Perspective-n-Point) algorithm to obtain a matching result. The matching process may include:
firstly, current pose information of an unmanned vehicle is obtained through VIO;
then, determining specific key frame information associated with the pose information from the three-dimensional map;
and then, according to the specific key frame information and the image currently acquired by the VIO, calculating the number of matched feature points and the average re-projection error between them to obtain the matching result.
S1022, if the matching result meets the preset constraint condition, determining that the repositioning is successful.
S1023, if the matching result does not meet the constraint condition, determining that repositioning fails, and continuing to acquire images through the visual inertial odometer to match until repositioning is successful.
In this embodiment, the constraint conditions may include: the number of the matched feature points is larger than a set number threshold and the average re-projection error is smaller than the initial average re-projection error. The initial average re-projection error is obtained by matching an image acquired by the VIO with a three-dimensional map during the VIO initialization, wherein the VIO initialization refers to the initialization when entering a repositioning mode, and is not the initialization when entering a mapping mode.
That is, if the number of matched feature points is greater than a set number threshold (e.g., 50) and the average re-projection error is smaller than the initial average re-projection error, repositioning is determined to be successful, and the navigation mode is entered; otherwise, repositioning is determined to have failed, and images continue to be acquired through the VIO for matching, that is, the repositioning mode keeps cycling until repositioning succeeds.
It should be noted that the constraint condition for determining whether repositioning is successful may be flexibly adjusted by the user according to actual requirements. For example, some scenes have higher accuracy requirements for repositioning, and the constraint condition may be made strict (e.g., a higher set number threshold, a smaller initial average re-projection error, etc.) to achieve accurate repositioning; other scenes do not demand high repositioning accuracy but require repositioning to be completed quickly, so the constraint condition may be relaxed (e.g., a lower set number threshold, a larger initial average re-projection error, etc.) to achieve rapid repositioning. The embodiments of the present application do not impose any limitation on this.
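The repositioning loop described in S1021 to S1023 can be condensed into a sketch like the following. Here `vio`, `map3d`, and `match_pnp` are hypothetical interfaces standing in for the visual inertial odometer, the three-dimensional map, and the PnP-based matching; the threshold of 50 matched feature points is the example value from the text.

```python
def relocalize(vio, map3d, match_pnp, min_matches=50):
    """Cycle the repositioning mode until the constraint is satisfied (sketch)."""
    initial_err = vio.initial_avg_reprojection_error  # measured at VIO initialization
    while True:
        image = vio.capture_image()
        pose = vio.current_pose()
        keyframe = map3d.keyframe_for_pose(pose)  # keyframe info associated with the pose
        n_matches, avg_err = match_pnp(image, keyframe)
        # Constraint: enough matched feature points AND an average re-projection
        # error smaller than the one measured at initialization.
        if n_matches > min_matches and avg_err < initial_err:
            return pose  # repositioning successful; enter navigation mode
```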
In another possible implementation, as described above, some scenarios may require repositioning to be completed quickly. If the repositioning mode keeps cycling, it may take too long and consume energy, which is unfavorable for operation efficiency and degrades the user experience. For such scenarios, besides adjusting the constraint condition, repositioning may be assisted by calibration information to achieve rapid repositioning. Thus, step S102 may include S102a to S102b.
S102a, acquiring a reference image containing calibration information through the visual inertial odometer.
S102b, determining feature points matched with the calibration information from the three-dimensional map according to the reference image, and repositioning the carrier based on the positions of the feature points in the three-dimensional map.
In this embodiment, in order to improve operation efficiency and achieve rapid repositioning, calibration boards from which features are easy to extract (such as a checkerboard) may be placed at the repositioning starting point and at key positions that may become repositioning starting points. Meanwhile, the calibration boards must remain stationary between the mapping mode and the repositioning mode, so that the same feature points can be extracted to assist rapid repositioning.
In this embodiment, the calibration information may be information on the calibration board that facilitates feature extraction, such as a checkerboard pattern. A reference image containing the calibration information is acquired through the VIO, along with the pose information of the unmanned vehicle at the time of acquisition, from which the associated key frame information is obtained from the three-dimensional map. Then, feature points matched with the calibration information are determined from that key frame information according to the reference image. Thereafter, the carrier is repositioned based on the positions of the feature points in the three-dimensional map; for example, if the distance between the position of a feature point in the three-dimensional map and the position of the feature point determined from the reference image is within a set distance (for example, 0.3 m), repositioning is determined to be successful.
It should be noted that the rapid repositioning of steps S102a to S102b may be adopted from the start, or only after repositioning has failed multiple times; the user may choose flexibly according to the actual situation, and the embodiments of the present application do not impose any limitation on this.
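A sketch of how such calibration-assisted repositioning could be checked is shown below. Every interface here (corner detection, per-point feature matching, triangulation) is an assumption layered on the text; only the 0.3 m agreement distance is the example value given above.

```python
import numpy as np

def relocalize_with_board(vio, map3d, detect_corners, dist_thresh=0.3):
    """Calibration-board-assisted repositioning check (sketch; interfaces assumed)."""
    ref_image = vio.capture_image()  # reference image containing the calibration board
    keyframe = map3d.keyframe_for_pose(vio.current_pose())
    agreed = 0
    for corner in detect_corners(ref_image):     # calibration-information feature points
        p_map = keyframe.match_feature(corner)   # position of the matched point in the map
        p_obs = vio.triangulate(corner)          # position estimated from the reference image
        if p_map is not None and np.linalg.norm(p_map - p_obs) < dist_thresh:
            agreed += 1
    return agreed > 0  # repositioning succeeds when matched points agree within 0.3 m
```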
And S103, navigating the carrier according to pose information, the three-dimensional map and navigation path points obtained by the visual inertial odometer.
In this embodiment, after the carrier is successfully repositioned, the pose information obtained by the visual inertial odometer is in the same coordinate system as the three-dimensional map, and then a navigation mode is entered, in which the pose information of the carrier is obtained in real time by the VIO, and the control module navigates the carrier according to the pose information of the carrier, the three-dimensional map and the navigation path point.
Referring to fig. 3, suppose the unmanned vehicle is traveling along the current straight path when it enters the navigation mode. Real-time pose information of the unmanned vehicle, including the real-time positioning point, is obtained through the VIO; then, based on the three-dimensional map, an adjusted planned path between the real-time positioning point and the navigation path point is generated, and the unmanned vehicle is controlled to travel along it. By repeating this cycle, the driving track of the unmanned vehicle gets closer and closer to the map-building path, and finally follows it.
While navigating according to the scheme introduced in step S103, whether navigation has finished is detected in real time; if not, navigation continues according to the scheme, otherwise navigation ends. Here, the end of navigation means that the unmanned vehicle has completed the current task, and the detection may be: checking whether the distance between the real-time positioning point of the unmanned vehicle and the set operation end point is smaller than a preset value (for example, 0.1 m); if so, the control module ends the navigation task; otherwise, navigation continues.
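The navigation-mode loop and end-of-navigation test just described might be sketched as follows, with `plan_path` and `follow_step` as assumed planner and controller interfaces; the 0.1 m end-point tolerance is the example value from the text.

```python
import math

def dist(a, b):
    return math.hypot(a[0] - b[0], a[1] - b[1])

def navigate(vio, map3d, waypoints, end_point, plan_path, follow_step, eps=0.1):
    """Navigation-mode loop (sketch; all interfaces are assumptions)."""
    for wp in waypoints:
        fix = vio.current_pose()  # real-time positioning point
        while dist(fix, wp) >= eps:
            if dist(fix, end_point) < eps:
                return  # within 0.1 m of the set operation end point: task ends
            path = plan_path(map3d, fix, wp)  # adjusted planned path to the waypoint
            follow_step(path)                 # drive one control step along it
            fix = vio.current_pose()
```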
In some possible scenarios, the unmanned vehicle may be unable to complete the current task and must stop the operation midway, for example, when it runs out of power or material during operation. In this case, the user may send an end instruction to the unmanned vehicle through the input device or the APP, or the unmanned vehicle stops the operation automatically, returns to the replenishment point to recharge or restock, and then returns to the breakpoint position to continue the operation; that is, the unmanned vehicle performs repositioning and navigation again until navigation ends.
The above describes the process in which the control module navigates the carrier according to the pose information of the carrier, the three-dimensional map, and the navigation path points. In addition to this mode, the three-dimensional map can also be used for repositioning in the navigation mode, and the VIO positioning information and the repositioning information can be fused to further improve positioning accuracy.
Thus, referring to fig. 4 on the basis of fig. 1, step S103 may include S1031 to S1034.
S1031, based on the image acquired by the visual inertial odometer in real time, acquiring the running information of the carrier relative to the initialization completion time or the last positioning fusion time.
In this embodiment, the initialization completion time refers to the time at which the VIO completes initialization for the current fixed-point navigation session. The last positioning fusion time refers to the time at which the VIO positioning information and the repositioning information were last fused.
Alternatively, the running information may include at least one of a running time length, a running distance, and a turning curvature, which is not limited in any way by the embodiment of the present application.
Optionally, the running time length can be calculated from the acquisition time of the image currently acquired by the VIO and the initialization completion time or the last positioning fusion time; the running distance can be calculated from the image currently acquired by the VIO and the image acquired at the initialization completion time or at the last positioning fusion time; and the turning curvature can be calculated from the same feature point observed in the image currently acquired by the VIO and in the preceding frames.
S1032, when the running information meets the set condition, a matching map area is determined from the three-dimensional map according to the current pose information obtained by the visual inertial odometer.
In this embodiment, the setting conditions may include at least one of the following: the running time length is greater than a set time threshold, the running distance is greater than a set distance threshold, and the turning curvature is greater than a set curvature threshold.
Optionally, the set time threshold may be 5s, the set distance threshold may be 3m, the set curvature threshold may be 0.01, and the specific value may be flexibly set by the user according to the actual requirement, which is not limited in any way in the embodiment of the present application.
That is, if any one of the running time length being greater than the set time threshold (for example, 5 s), the running distance being greater than the set distance threshold (for example, 3 m), and the turning curvature being greater than the set curvature threshold (for example, 0.01) is established, it is determined that the running information satisfies the set condition.
In this embodiment, when the running information satisfies the setting condition, a three-dimensional point corresponding to the current pose information is determined from the three-dimensional map according to the current pose information obtained by the VIO, and a map area within a set distance range (for example, 1 m) of the three-dimensional point is obtained as a matching map area.
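The trigger condition and the map-area lookup can be summarized as in the sketch below; the thresholds (5 s, 3 m, 0.01, 1 m) are the example values from the text, and the map interface is an assumption.

```python
import math

def fusion_triggered(run_time, run_dist, turn_curv,
                     t_max=5.0, d_max=3.0, c_max=0.01):
    # Set condition: ANY one of the three running-information checks suffices.
    return run_time > t_max or run_dist > d_max or turn_curv > c_max

def matching_map_area(map3d, current_pose, radius=1.0):
    """3D points within `radius` metres of the point matching the current pose (sketch)."""
    center = map3d.point_for_pose(current_pose)  # assumed lookup into the 3D map
    return [p for p in map3d.points
            if math.dist(tuple(p.position), tuple(center)) <= radius]
```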
S1033, carrying out positioning fusion processing on the current pose information and the repositioning pose information obtained by repositioning the matched map area, and obtaining fusion positioning information.
In this embodiment, after the matching map area is determined from the three-dimensional map, repositioning pose information can be obtained by repositioning against the matching map area. That is, the current position point of the unmanned vehicle is obtained through the VIO, and the pose information associated with that position point is obtained from the matching map area, which is the repositioning pose information.
Optionally, the process of performing the positioning fusion processing on the current pose information and the repositioning pose information in step S1033 may include S10331 to S10333.
S10331, determining the distance between the current pose information and the repositioning pose information.
S10332, if the distance is smaller than the preset distance threshold, the current pose information is used as fusion positioning information.
S10333, if the distance is greater than or equal to a preset distance threshold, obtaining a matching result of the image acquired by the visual inertial odometer and the three-dimensional map, and determining fusion positioning information according to the matching result.
Optionally, the preset distance threshold may be 0.3m, and the specific value may be flexibly set by the user according to the actual requirement, which is not limited in the embodiment of the present application.
That is, if the distance between the current pose information and the repositioning pose information is smaller than a preset distance threshold (e.g., 0.3 m), it is indicated that the VIO positioning accuracy is high, and the current pose information is directly used as the fusion positioning information. If the distance between the current pose information and the repositioning pose information is greater than or equal to a preset distance threshold (e.g., 0.3 m), the VIO positioning accuracy is lower, and the fusion positioning information needs to be determined by combining a three-dimensional map.
In one possible implementation, the process of obtaining the matching result of the image acquired by the visual inertial odometer and the three-dimensional map in step S10333 may include S11 to S13.
S11, determining target pose information matched with the current pose information from the three-dimensional map.
S12, acquiring target key frame information associated with the target pose information.
And S13, determining the number of matching feature points and the average re-projection error between the currently acquired image of the visual inertial odometer and the target key frame information, and obtaining a matching result, namely, the matching result comprises the number of matching feature points and the average re-projection error.
It should be noted that the manner of calculating the number of matched feature points and the average re-projection error between the image currently acquired by the VIO and the target key frame information follows existing techniques, and is not described in detail in the embodiments of the present application.
In one possible implementation, the process of determining the fused positioning information according to the matching result in step S10333 may include S21 to S22.
And S21, if the number of the matched feature points is larger than a set number threshold and the average re-projection error is smaller than the initial average re-projection error, the repositioning pose information is used as fusion positioning information.
And S22, if the number of the matched feature points is not greater than a set number threshold, or the average re-projection error is not less than the initial average re-projection error, taking the current pose information as fusion positioning information.
The initial average re-projection error is obtained by matching the acquired image with the three-dimensional map when the visual inertial odometer is initialized. Here, initialization refers to the initialization of the VIO for the current fixed-point navigation session.
Optionally, the set number threshold may be 50, and the specific value may be flexibly set by the user according to the actual requirement, which is not limited in the embodiment of the present application.
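Steps S10331 to S10333 together with S21 to S22 amount to the decision rule sketched below, using the example thresholds (0.3 m, 50 matches). Treating the pose distance as a plain Euclidean distance between position vectors is an assumption.

```python
import math

def fuse_localization(current_pose, reloc_pose, match_result,
                      initial_avg_err, d_thresh=0.3, n_thresh=50):
    """Positioning-fusion decision (sketch). `match_result` is the
    (matched feature point count, average re-projection error) pair
    computed against the target keyframe as in S11 to S13."""
    if math.dist(current_pose, reloc_pose) < d_thresh:
        return current_pose  # VIO accuracy is high enough; use the current pose
    n_matched, avg_err = match_result
    if n_matched > n_thresh and avg_err < initial_avg_err:
        return reloc_pose    # map-based repositioning is trustworthy
    return current_pose      # otherwise fall back to the VIO pose
```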
And S1034, navigating the carrier according to the fusion positioning information, the three-dimensional map and the navigation path point.
In this embodiment, the fused positioning information obtained through steps S1031 to S1033 results from the positioning fusion of pose information obtained by the VIO and repositioning pose information obtained from partial map information. This can overcome the shortcoming that VIO positioning accuracy is low in some scenes (for example, under complex illumination), further improving positioning accuracy.
Optionally, after the VIO positioning information and the repositioning information are fused to obtain the fused positioning information, the control module navigates the carrier according to the fused positioning information, the three-dimensional map, and the navigation path points: an adjusted planned path between the fused positioning information and the navigation path point is generated based on the three-dimensional map, and the unmanned vehicle is controlled to travel along it, so that its driving track gets closer and closer to the map-building path and finally follows it.
Compared with the prior art, the embodiment of the application has the following beneficial effects:
First, the visual inertial odometer is used both to build the map and to generate the navigation path points, and navigation is performed by combining the three-dimensional map and the navigation path points, so that a single system can perform fixed-point navigation both indoors and outdoors; meanwhile, because the visual inertial odometer fuses inertial measurement data and image data, the accuracy of fixed-point navigation can be improved.
Second, the pose information obtained by the VIO and the repositioning pose information obtained from partial map information are fused, which can overcome the shortcoming of low VIO positioning accuracy in certain scenes (for example, under complex illumination) and further improve positioning accuracy.
Third, repositioning through calibration information, whether adopted from the start or after repositioning has failed multiple times, achieves rapid repositioning and can improve operation efficiency.
In order to perform the corresponding steps in the above-described method embodiments and in each possible implementation, an implementation of a navigation device is given below.
Referring to fig. 5, fig. 5 is a block diagram of a navigation device 100 according to an embodiment of the disclosure. The navigation device 100 is applied to an electronic apparatus, and includes: an acquisition module 101, a repositioning module 102 and a navigation module 103.
The acquisition module 101 is configured to acquire a three-dimensional map of the target scene and navigation path points in the three-dimensional map, where the three-dimensional map is obtained by mapping the target scene by using a visual inertial odometer of the carrier.
And the repositioning module 102 is used for repositioning the carrier according to the three-dimensional map so that pose information obtained by the visual inertial odometer is in the same coordinate system with the three-dimensional map.
And the navigation module 103 is used for navigating the carrier according to the pose information, the three-dimensional map and the navigation path points obtained by the visual inertial odometer.
In one possible implementation, the relocation module 102 is specifically configured to:
acquiring an image through a visual inertial odometer, and matching the image with a three-dimensional map to obtain a matching result;
if the matching result meets the preset constraint condition, determining that the repositioning is successful;
if the matching result does not meet the constraint condition, determining that repositioning fails, and continuously collecting images through the visual inertial odometer to match until repositioning is successful.
In another possible implementation, the relocation module 102 is specifically configured to:
acquiring a reference image containing calibration information through a visual inertial odometer;
And determining characteristic points matched with the calibration information from the three-dimensional image according to the reference image, and repositioning the carrier based on the positions of the characteristic points in the three-dimensional image.
Optionally, the navigation module 103 is specifically configured to:
based on the image acquired by the visual inertial odometer in real time, acquiring the running information of the carrier relative to the initialization completion time or the last positioning fusion time;
when the running information meets the set conditions, determining a matching map area from the three-dimensional map according to the current pose information obtained by the visual inertial odometer;
positioning and fusing the current pose information and repositioning pose information obtained by repositioning the matched map area to obtain fused positioning information;
and navigating the carrier according to the fusion positioning information, the three-dimensional map and the navigation path points.
Optionally, the manner in which the navigation module 103 performs positioning fusion processing on the current pose information and the repositioning pose information obtained by repositioning the matching map area to obtain fused positioning information includes:
determining a distance between the current pose information and the repositioning pose information;
if the distance is smaller than the preset distance threshold, the current pose information is used as fusion positioning information;
And if the distance is greater than or equal to the preset distance threshold, a matching result of the image acquired by the visual inertial odometer and the three-dimensional map is obtained, and fusion positioning information is determined according to the matching result.
Optionally, the three-dimensional map includes a plurality of three-dimensional points, key frame information and pose information associated with each three-dimensional point;
the manner in which the navigation module 103 obtains the matching result of the image acquired by the visual inertial odometer and the three-dimensional map includes:
determining target pose information matched with the current pose information from the three-dimensional map;
acquiring target key frame information associated with target pose information;
and determining the number of matching feature points and average re-projection errors between the currently acquired image of the visual inertial odometer and the target key frame information, and obtaining a matching result.
Optionally, the matching result includes a number of matching feature points and an average reprojection error, and the navigation module 103 performs a manner of determining the fused positioning information according to the matching result, including:
if the number of the matched feature points is larger than a set number threshold and the average re-projection error is smaller than the initial average re-projection error, the repositioning pose information is used as fusion positioning information;
if the number of the matched feature points is not greater than a set number threshold, or the average re-projection error is not less than the initial average re-projection error, the current pose information is used as fusion positioning information;
The initial average re-projection error is obtained by matching the acquired image with the three-dimensional map when the visual inertial odometer is initialized.
Optionally, the running information includes at least one of a running duration, a running distance, and a turning curvature;
the setting conditions include at least one of the following: the running time length is greater than a set time threshold, the running distance is greater than a set distance threshold, and the turning curvature is greater than a set curvature threshold.
It will be clear to those skilled in the art that, for convenience and brevity of description, reference may be made to the corresponding process in the foregoing method embodiment for the specific working process of the navigation device 100 described above, which is not repeated herein.
Referring to fig. 6, fig. 6 is a block diagram of an electronic device 10 according to an embodiment of the disclosure. The electronic device 10 includes a processor 11, a memory 12, and a bus 13, and the processor 11 is connected to the memory 12 through the bus 13.
The memory 12 is used for storing a program, such as the navigation device 100 shown in fig. 5. The navigation device 100 includes at least one software functional module that may be stored in the memory 12 in the form of software or firmware, and the processor 11 executes the program after receiving an execution instruction so as to implement the navigation method disclosed in the foregoing embodiments.
The memory 12 may include high-speed random access memory (Random Access Memory, RAM) and may also include non-volatile memory (NVM).
The processor 11 may be an integrated circuit chip with signal processing capabilities. In implementation, the steps of the above method may be performed by integrated logic circuits of hardware in the processor 11 or by instructions in the form of software. The processor 11 may be a general-purpose processor, including a central processing unit (Central Processing Unit, CPU), a micro control unit (Microcontroller Unit, MCU), a complex programmable logic device (Complex Programmable Logic Device, CPLD), a field programmable gate array (Field Programmable Gate Array, FPGA), an embedded ARM processor, and the like.
The present embodiment also provides a computer readable storage medium having a computer program stored thereon, which when executed by the processor 11 implements the navigation method disclosed in the above embodiment.
In summary, the embodiments of the present application provide a navigation method, a navigation device, an electronic device, and a storage medium. A visual inertial odometer is used both to build the map and to navigate: the three-dimensional map and the navigation path points are generated through the visual inertial odometer, and after the carrier is successfully repositioned against the three-dimensional map, pose information obtained by the visual inertial odometer is combined with the three-dimensional map and the navigation path points for navigation. A single system can therefore perform fixed-point navigation both indoors and outdoors, and because the visual inertial odometer fuses inertial measurement data with image data, the accuracy of fixed-point navigation is also improved.
The foregoing description covers only preferred embodiments of the present application and is not intended to limit it; those skilled in the art may make various modifications and variations. Any modification, equivalent replacement, improvement, or the like made within the spirit and principles of the present application shall be included in its protection scope.

Claims (13)

1. A method of navigation, the method comprising:
acquiring a three-dimensional map of a target scene and navigation path points in the three-dimensional map, wherein the three-dimensional map is obtained by mapping the target scene through a visual inertial odometer of a carrier;
repositioning the carrier according to the three-dimensional map so that pose information obtained by the visual inertial odometer is in the same coordinate system as the three-dimensional map;
and navigating the carrier according to the pose information obtained by the visual inertial odometer, the three-dimensional map and the navigation path point.
2. The method of claim 1, wherein the step of navigating the carrier according to the pose information obtained by the visual inertial odometer, the three-dimensional map and the navigation path points comprises:
acquiring running information of the carrier relative to the initialization completion time or the last positioning fusion time based on the image acquired by the visual inertial odometer in real time;
when the running information meets the set condition, determining a matching map area from a three-dimensional map according to the current pose information obtained by the visual inertial odometer;
positioning and fusing the current pose information and repositioning pose information obtained by repositioning the matching map area to obtain fused positioning information;
and navigating the carrier according to the fusion positioning information, the three-dimensional map and the navigation path point.
3. The method of claim 2, wherein the step of performing a positioning fusion process on the current pose information and the repositioning pose information obtained by repositioning the matching map region to obtain fused positioning information includes:
determining a distance between the current pose information and the repositioning pose information;
if the distance is smaller than a preset distance threshold, the current pose information is used as the fusion positioning information;
and if the distance is greater than or equal to the preset distance threshold, obtaining a matching result of the image acquired by the visual inertial odometer and the three-dimensional map, and determining the fusion positioning information according to the matching result.
4. The method of claim 3, wherein the three-dimensional map comprises a plurality of three-dimensional points and keyframe information and pose information associated with each of the three-dimensional points;
the step of obtaining the matching result of the image acquired by the visual inertial odometer and the three-dimensional map comprises the following steps:
determining target pose information matched with the current pose information from the three-dimensional map;
acquiring target key frame information associated with the target pose information;
and determining the number of matched feature points and the average re-projection error between the image currently acquired by the visual inertial odometer and the target key frame information, to obtain the matching result.
5. The method of claim 3, wherein the matching result includes a number of matching feature points and an average re-projection error;
the step of determining the fusion positioning information according to the matching result comprises the following steps:
if the number of the matched feature points is larger than a set number threshold and the average re-projection error is smaller than the initial average re-projection error, the repositioning pose information is used as the fusion positioning information;
if the number of the matched feature points is not larger than the set number threshold, or the average re-projection error is not smaller than the initial average re-projection error, the current pose information is used as the fusion positioning information;
The initial average re-projection error is obtained by matching the acquired image with the three-dimensional map when the visual inertial odometer is initialized.
6. The method of claim 2, wherein the operational information includes at least one of an operational duration, an operational distance, and a turning curvature;
the set condition includes at least one of the following: the running duration is greater than a set time threshold, the running distance is greater than a set distance threshold, and the turning curvature is greater than a set curvature threshold.
7. The method of claim 1, wherein the three-dimensional map and navigation path points in the three-dimensional map are obtained by:
acquiring image data and inertial measurement data acquired by the visual inertial odometer aiming at the target scene;
generating a three-dimensional map of the target scene and corresponding trajectory information based on the image data and the inertial measurement data;
the three-dimensional map is obtained by carrying out three-dimensional reconstruction on all key frames in the image data; the track information comprises pose coordinates of each key frame relative to a first key frame;
and converting the track information into an equipment matrix coordinate system to obtain a map-building path, and extracting the navigation path point from the map-building path.
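For illustration only, the coordinate conversion in this claim could look as follows, assuming each keyframe pose is a 4x4 homogeneous transform relative to the first key frame and T_device is the equipment matrix (its exact construction is not spelled out here).

```python
import numpy as np

def trajectory_to_device_frame(keyframe_poses, T_device):
    """Convert keyframe poses into the device coordinate system,
    returning the positions that form the map-building path."""
    path = []
    for T_kf in keyframe_poses:
        T = T_device @ T_kf       # pose expressed in the device frame
        path.append(T[:3, 3])     # keep the translation as a path point
    return np.asarray(path)
```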
8. The method of claim 7, wherein the mapping path comprises a plurality of mapping path points;
the step of extracting the navigation path point from the map-building path comprises the following steps:
taking the starting point of the map building path as a navigation path point, and calculating the curvature of a path segment where the navigation path point is located;
if the curvature is smaller than a set threshold, taking the map-building path point at a first set distance after the navigation path point along the path as a new navigation path point;
if the curvature is greater than or equal to the set threshold, taking the map-building path point at a second set distance after the navigation path point along the path as a new navigation path point, wherein the first set distance is greater than the second set distance;
and replacing the navigation path point with the new navigation path point, and returning to the step of calculating the curvature of the path segment where the navigation path point is located, until the map-building path is traversed and the end point of the map-building path is taken as the last navigation path point.
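A minimal sketch of this curvature-adaptive sampling follows; the Menger-curvature estimate and the distance values are assumptions made for illustration, not the claimed parameters.

```python
import numpy as np

def extract_waypoints(path, d_far=5.0, d_near=2.0, curv_thresh=0.2):
    """Sample navigation path points from a map-building path of shape (N, 3):
    low-curvature segments are sampled sparsely (d_far), curved ones densely."""
    def curvature_at(i):
        # Discrete (Menger) curvature from three consecutive path points.
        if i == 0 or i >= len(path) - 1:
            return 0.0
        a, b, c = path[i - 1], path[i], path[i + 1]
        ab = np.linalg.norm(b - a)
        bc = np.linalg.norm(c - b)
        ca = np.linalg.norm(a - c)
        area = 0.5 * np.linalg.norm(np.cross(b - a, c - a))
        denom = ab * bc * ca
        return 4.0 * area / denom if denom > 1e-9 else 0.0

    # Cumulative arc length lets us step ahead by a metric distance.
    seg = np.linalg.norm(np.diff(path, axis=0), axis=1)
    s = np.concatenate([[0.0], np.cumsum(seg)])
    waypoints, i = [0], 0                       # start point is a waypoint
    while i < len(path) - 1:
        step = d_far if curvature_at(i) < curv_thresh else d_near
        i = min(int(np.searchsorted(s, s[i] + step)), len(path) - 1)
        waypoints.append(i)                     # last index is the path end
    return path[waypoints]
```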
9. The method of claim 1, wherein repositioning the carrier according to the three-dimensional map comprises:
acquiring an image through the visual inertial odometer, and matching the image with the three-dimensional map to obtain a matching result;
if the matching result meets the preset constraint condition, determining that the repositioning is successful;
if the matching result does not meet the constraint condition, determining that repositioning fails, and continuing to acquire images through the visual inertial odometer for matching until repositioning succeeds.
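A minimal retry loop for this relocalization step might read as follows; vio.next_image() and map3d.match() are hypothetical interfaces, and the constraint thresholds are illustrative.

```python
def relocalize_until_success(vio, map3d, min_matches=50, max_err=2.0):
    """Match newly acquired images against the 3D map until the matching
    result satisfies the preset constraint condition."""
    while True:
        image = vio.next_image()
        n_matches, avg_err, pose = map3d.match(image)
        if n_matches >= min_matches and avg_err <= max_err:
            return pose   # repositioning succeeded
        # Otherwise repositioning failed for this frame; try the next image.
```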
10. The method of claim 1, wherein repositioning the carrier according to the three-dimensional map comprises:
acquiring a reference image containing calibration information through the visual inertial odometer;
and determining feature points matched with the calibration information from the three-dimensional map according to the reference image, and repositioning the carrier based on the positions of the feature points in the three-dimensional map.
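One way such repositioning could be realized, assuming the matched feature points are available both as positions in the map and as measurements in the carrier frame, is a rigid (Kabsch) alignment; this solver is an assumption, as the claim does not specify one.

```python
import numpy as np

def reposition_from_markers(map_pts, obs_pts):
    """Recover the carrier pose from at least three non-collinear matched
    points: map_pts (N, 3) in the three-dimensional map, obs_pts (N, 3)
    the same points measured in the carrier frame."""
    mu_m, mu_o = map_pts.mean(0), obs_pts.mean(0)
    H = (obs_pts - mu_o).T @ (map_pts - mu_m)   # cross-covariance
    U, _, Vt = np.linalg.svd(H)
    D = np.diag([1.0, 1.0, np.sign(np.linalg.det(Vt.T @ U.T))])
    R = Vt.T @ D @ U.T          # rotation: carrier frame -> map frame
    t = mu_m - R @ mu_o         # carrier origin expressed in the map
    return R, t
```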
11. A navigation device, the device comprising:
the acquisition module is used for acquiring a three-dimensional map of a target scene and navigation path points in the three-dimensional map, wherein the three-dimensional map is obtained by mapping the target scene through a visual inertial odometer of a carrier;
the repositioning module is used for repositioning the carrier according to the three-dimensional map, so that pose information obtained through the visual inertial odometer is in the same coordinate system as the three-dimensional map;
and the navigation module is used for navigating the carrier according to the pose information obtained by the visual inertial odometer, the three-dimensional map and the navigation path point.
12. An electronic device comprising a processor and a memory, the memory for storing a program, the processor for implementing the navigation method of any of claims 1-10 when the program is executed.
13. A computer-readable storage medium, characterized in that a computer program is stored thereon, which computer program, when being executed by a processor, implements the navigation method of any of claims 1-10.
CN202211714011.7A 2022-12-29 2022-12-29 Navigation method, navigation device, electronic equipment and storage medium Pending CN116045961A (en)

Priority Applications (1)

Application Number Priority Date Filing Date Title
CN202211714011.7A CN116045961A (en) 2022-12-29 2022-12-29 Navigation method, navigation device, electronic equipment and storage medium

Applications Claiming Priority (1)

Application Number Priority Date Filing Date Title
CN202211714011.7A CN116045961A (en) 2022-12-29 2022-12-29 Navigation method, navigation device, electronic equipment and storage medium

Publications (1)

Publication Number Publication Date
CN116045961A true CN116045961A (en) 2023-05-02

Family

ID=86117616

Family Applications (1)

Application Number Title Priority Date Filing Date
CN202211714011.7A Pending CN116045961A (en) 2022-12-29 2022-12-29 Navigation method, navigation device, electronic equipment and storage medium

Country Status (1)

Country Link
CN (1) CN116045961A (en)


Legal Events

Date Code Title Description
PB01 Publication
SE01 Entry into force of request for substantive examination