Disclosure of Invention
The inventors have found that related SLAM systems rely on data from a single sensor, such as laser-based GMapping, ORB-SLAM based on monocular vision (ORB: Oriented FAST and Rotated BRIEF, a feature point extraction and description algorithm), and RGBD-SLAM based on RGBD (Red, Green, Blue and Depth) cameras. When the operating environment meets certain conditions, such a system can complete map construction and positioning tasks well. For example, for a two-dimensional map construction task in a relatively static indoor scene, a robot equipped with a sufficiently accurate lidar can largely perform the task with an existing SLAM system such as GMapping. However, when the SLAM system needs to operate for a long time, and especially when the scene changes during operation, for example when the illumination conditions or the openness of the environment change, these systems cannot handle the scene change. In practical applications, scene changes are very common. Therefore, mapping across multiple scenes is an important aspect of moving SLAM from theory to practical application.
Some embodiments of the present disclosure aim to improve the adaptability of mapping to the environment.
According to some aspects of the present disclosure, a map construction method is provided, including: determining the environment type of the environment where the unmanned device is located, wherein the environment type comprises an indoor bright environment, an outdoor bright environment, or a dark environment; constructing a map by adopting an associated map construction mode according to the environment type; and when the environment type changes, switching the map construction mode used and calibrating the maps before and after switching.
Optionally, the map construction mode includes at least two of the following: under the condition that the environment type is an indoor bright environment, a map is constructed according to depth data detected by a depth camera; under the condition that the environment type is an outdoor bright environment, a map is constructed according to image data detected by a visual detector; and under the condition that the environment type is a dark environment, a map is constructed by adopting the distance data detected by the laser detector.
Optionally, calibrating the maps before and after switching comprises: unifying the maps constructed before and after switching into the same coordinate system according to the continuity of the pose of the unmanned device at the switching moment.
Optionally, calibrating the maps before and after switching comprises: acquiring the final pose, in a global coordinate system, of the unmanned device in the map construction mode stopped at the switching moment, wherein the global coordinate system takes the position of the unmanned device at startup as its coordinate origin; acquiring the initial pose, in a local coordinate system, of the unmanned device in the map construction mode started at the switching moment; determining a coordinate system transformation matrix that transforms the initial pose into the final pose according to the final pose and the initial pose; and unifying the map constructed after switching into the global coordinate system according to the coordinate system transformation matrix.
Optionally, determining a coordinate system transformation matrix for transforming the initial pose into the final pose according to the final pose and the initial pose comprises:
according to the formula

T_global = T_conversion · T_local

a coordinate transformation matrix T_conversion is determined, wherein T_global is the pose of the unmanned device in the global coordinate system and T_local is the pose of the unmanned device in the local coordinate system. The coordinate transformation matrix T_conversion comprises a rotation matrix R_conversion and a translation vector t_conversion; according to the formula

c_global = R_conversion · c_local + t_conversion

the rotation matrix R_conversion and the translation vector t_conversion are determined, wherein c_global is the coordinates of the unmanned device in the global coordinate system and c_local is the coordinates of the unmanned device in the local coordinate system.
Optionally, calibrating the maps before and after switching further comprises: when the unmanned device enters an outdoor bright environment, determining a scale factor of the image data detected by the visual detector according to the coordinates of the unmanned device in the map construction mode stopped at the switching moment; and constructing a map in a local coordinate system according to the scale factor and the image data.
Optionally, the map construction method further includes: if the environment type when the unmanned device is started is an outdoor bright environment, determining a scale factor of the image data according to detection data of a distance sensor; and constructing a map in a local coordinate system according to the scale factor and the image data.
Optionally, the map construction method further includes: if the environment type when the unmanned device is started is an outdoor bright environment, constructing a map in a local coordinate system according to the image data; when an environment switch occurs, determining a scale factor of the image data according to the coordinates of the unmanned device in the map construction mode started at the switching moment; and correcting, by the scale factor, the scale of the map constructed from the image data to generate a map in a global coordinate system, thereby unifying the maps constructed before and after switching into the global coordinate system.
With this method, when the environment changes, the system switches in time to an associated map construction mode to construct the map. The method fully accounts for map discontinuities caused by differences in the scale and position of the detectors, and performs map calibration so that the maps before and after switching are connected, improving the adaptability of map construction to environmental changes.
According to other aspects of the present disclosure, a map building apparatus is provided, including: an environment determination module configured to determine an environment type of an environment in which the unmanned device is located, the environment type including an indoor bright environment, an outdoor bright environment, or a dark environment; the map building module is configured to build a map by adopting an associated map building mode according to the environment type; the switching module is configured to switch the used map construction mode when the environment type changes; a calibration module configured to calibrate the map before and after switching.
Optionally, the map construction mode includes at least two of the following: under the condition that the environment type is an indoor bright environment, a map is constructed according to depth data detected by a depth camera; under the condition that the environment type is an outdoor bright environment, a map is constructed according to image data detected by a visual detector; and under the condition that the environment type is a dark environment, a map is constructed by adopting the distance data detected by the laser detector.
Optionally, the calibration module is configured to unify the maps constructed before and after switching into the same coordinate system according to the continuity of the pose of the unmanned device at the switching moment.
Optionally, the calibration module is configured to: acquire the final pose, in a global coordinate system, of the unmanned device in the map construction mode stopped at the switching moment, wherein the global coordinate system takes the position of the unmanned device at startup as its coordinate origin; acquire the initial pose, in a local coordinate system, of the unmanned device in the map construction mode started at the switching moment; determine a coordinate system transformation matrix that transforms the initial pose into the final pose according to the final pose and the initial pose; and unify the map constructed after switching into the global coordinate system according to the coordinate system transformation matrix.
Optionally, determining a coordinate system transformation matrix that transforms the initial pose into the final pose according to the final pose and the initial pose comprises: according to the formula

T_global = T_conversion · T_local

determining a coordinate transformation matrix T_conversion, wherein T_global is the pose of the unmanned device in the global coordinate system and T_local is the pose of the unmanned device in the local coordinate system; the coordinate transformation matrix T_conversion comprises a rotation matrix R_conversion and a translation vector t_conversion, and according to the formula

c_global = R_conversion · c_local + t_conversion

determining the rotation matrix R_conversion and the translation vector t_conversion, wherein c_global is the coordinates of the unmanned device in the global coordinate system and c_local is the coordinates of the unmanned device in the local coordinate system.
Optionally, the calibration module is further configured to: when the unmanned device enters an outdoor bright environment, determine a scale factor of the image data detected by the visual detector according to the coordinates of the unmanned device in the map construction mode stopped at the switching moment; and construct a map in a local coordinate system according to the scale factor and the image data.
Optionally, the map building module is further configured to determine a scale factor of the image data according to the detection data of the distance sensor if the environment type when the unmanned device is started is an outdoor bright environment, and build a map in the local coordinate system according to the scale factor and the image data.
Optionally, the calibration module is further configured to: if the environment type when the unmanned device is started is an outdoor bright environment, construct a map in a local coordinate system according to the image data; when an environment switch occurs, determine a scale factor of the image data according to the coordinates of the unmanned device in the map construction mode started at the switching moment; and correct, by the scale factor, the scale of the map constructed from the image data to generate a map in a global coordinate system, thereby unifying the maps constructed before and after switching into the global coordinate system.
According to still further aspects of the present disclosure, a map construction apparatus is provided, including: a memory; and a processor coupled to the memory, the processor configured to perform any of the above map construction methods based on instructions stored in the memory.
The apparatus can switch in time to an associated map construction mode to construct a map when the environment changes, fully accounts for map discontinuities caused by differences in the scale and position of the detectors, and performs map calibration so that the maps before and after switching are connected, improving the adaptability of map construction to environmental changes.
According to still further aspects of the present disclosure, a computer-readable storage medium is proposed, on which computer program instructions are stored, the instructions, when executed by a processor, implementing the steps of any of the above map construction methods.
By executing the instructions on the computer-readable storage medium, the map can be constructed by switching in time to an associated map construction mode when the environment changes; map discontinuities caused by differences in the scale and position of the detectors are fully accounted for, and map calibration is performed to connect the maps before and after switching, improving the adaptability of map construction to environmental changes.
Further, according to some aspects of the present disclosure, there is provided an unmanned device, comprising: an environment sensor configured to detect the environment in which the unmanned device is located; at least two of a laser detector, a depth camera, or a visual detector, configured to provide detection data for constructing a map; and a map construction apparatus of any of the above.
The unmanned device can detect the environment in which it is located and switch in time to an associated map construction mode to construct a map when the environment changes; it fully accounts for map discontinuities caused by differences in the scale and position of the detectors and performs map calibration so that the maps before and after switching are connected, improving the adaptability of map construction to environmental changes.
Detailed Description
The technical solution of the present disclosure is described in further detail below with reference to the accompanying drawings and embodiments.
A flow diagram of some embodiments of a mapping method of the present disclosure is shown in fig. 1.
In step 101, the environment type of the environment in which the unmanned device is located is determined. In some embodiments, environment types can be divided according to the openness and illumination level of the environment, for example into an indoor bright environment, an outdoor bright environment, and a dark environment, and the environment in which the unmanned device is located is determined by sensors. In some embodiments, the environment type may be determined by a photosensitive sensing device in conjunction with a distance detection device.
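As an illustration, the photosensitive-plus-distance-sensor determination can be sketched as a simple threshold rule. This is a minimal sketch, not the disclosed classifier; the function name and the two threshold values (`LUX_DARK_THRESHOLD`, `RANGE_OPEN_THRESHOLD`) are hypothetical and would depend on the actual sensors:

```python
from enum import Enum

class EnvType(Enum):
    INDOOR_BRIGHT = "indoor_bright"
    OUTDOOR_BRIGHT = "outdoor_bright"
    DARK = "dark"

# Hypothetical thresholds; real values depend on the sensors used.
LUX_DARK_THRESHOLD = 10.0    # below this light level, treat the scene as dark
RANGE_OPEN_THRESHOLD = 20.0  # median range (m) above this suggests open/outdoor space

def classify_environment(lux, range_readings):
    """Classify the environment from a light reading and a list of distance readings."""
    if lux < LUX_DARK_THRESHOLD:
        return EnvType.DARK
    # Median of the range readings as a rough measure of how open the space is.
    median_range = sorted(range_readings)[len(range_readings) // 2]
    if median_range > RANGE_OPEN_THRESHOLD:
        return EnvType.OUTDOOR_BRIGHT
    return EnvType.INDOOR_BRIGHT
```

In practice, such raw threshold decisions would be smoothed over time (as the disclosure later does with Bayesian filtering) to avoid flickering between modes.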
In step 102, a map is constructed according to the environment type by adopting an associated map construction mode.
Because the laser detector suffers interference under strong light and is better suited to dark environments, a map is constructed using the distance data detected by the laser detector when the environment type is a dark environment, for example by using GMapping.
Under stronger light, detection is better performed with a camera. Commonly used cameras include cameras that capture two-dimensional images and cameras with a depth detection function (e.g., binocular cameras, 3D cameras, RGBD cameras, Kinect cameras, etc.).
Since a depth camera can directly acquire depth information and can obtain distance data more directly and accurately in an indoor environment, a map is constructed according to the depth data detected by the depth camera when the environment type is an indoor bright environment, for example by using RGBD-SLAM.
In an outdoor environment, the space is too open and the depth detection capability of the depth camera is limited, so it is more suitable to construct a map through image processing with a camera that captures two-dimensional images, for example by using ORB-SLAM.
In step 103, when the environment type changes, the map construction mode used is switched and the maps before and after switching are calibrated. In some embodiments, since the positions of the detectors are not exactly the same in different map construction modes, and different map construction modes have their own coordinate systems, the maps before and after switching need to be calibrated to achieve seamless docking of the maps after switching. In some embodiments, the maps constructed before and after switching can be unified into the same coordinate system according to the continuity of the pose of the unmanned device at the switching moment.
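The switch-and-calibrate step can be sketched as a small dispatcher. The mode names and the `SLAM_MODE_FOR_ENV` mapping below are illustrative assumptions, not identifiers from the disclosure:

```python
# Hypothetical mapping from environment type to the mapping backend used.
SLAM_MODE_FOR_ENV = {
    "dark": "lidar_gmapping",      # laser distance data (GMapping-style)
    "indoor_bright": "rgbd_slam",  # depth-camera data
    "outdoor_bright": "orb_slam",  # monocular image data
}

def on_environment_change(current_mode, new_env, calibrate):
    """Switch the mapping mode when the environment type changes.

    calibrate(old_mode, new_mode) is a callback that unifies the maps
    built before and after the switch into one coordinate system.
    """
    new_mode = SLAM_MODE_FOR_ENV[new_env]
    if new_mode != current_mode:
        calibrate(current_mode, new_mode)
    return new_mode
```

The calibration callback is where the pose-continuity transform described in fig. 2 would be computed.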
With this method, when the environment changes, the system switches in time to an associated map construction mode to construct the map; map discontinuities caused by differences in the scale and position of the detectors are fully accounted for, and map calibration is performed so that the maps before and after switching are connected, improving the adaptability of map construction to environmental changes.
In some embodiments, an environment classification algorithm for robot environment detection, based on a convolutional neural network with Bayesian filtering optimization, can be adopted to achieve efficient and accurate detection of the three environment types: indoor bright, outdoor bright, and dark. The environment detection model obtains an image of the current environment, inputs it into an environment classifier based on a convolutional neural network to obtain an environment classification result, and finally adds temporal and spatial correlation information between consecutive images to the classification result through Bayesian filtering, improving the stability and accuracy of the classification result.
In this way, the accuracy of determining the environment type can be improved, the influence of errors reduced, repeated switching of the map construction mode avoided, the amount of computation reduced, and the accuracy of map construction improved.
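The Bayesian filtering step, which adds temporal correlation to the per-frame classifier outputs, can be sketched as a discrete recursive Bayes update over the three environment classes. The sticky transition model (a hypothetical 0.9 self-transition probability, encoding that the environment rarely changes between consecutive frames) is an illustrative assumption:

```python
ENVS = ["indoor_bright", "outdoor_bright", "dark"]

# Hypothetical "sticky" transition model: high probability of staying
# in the same environment from one frame to the next.
STAY = 0.9
TRANS = [[STAY if i == j else (1 - STAY) / 2 for j in range(3)]
         for i in range(3)]

def bayes_update(prior, likelihood):
    """One Bayesian-filter step over the environment belief.

    prior: belief over ENVS from the previous frame.
    likelihood: per-frame CNN classifier output (softmax scores) over ENVS.
    Predict with the transition model, weight by the likelihood, normalize.
    """
    predicted = [sum(TRANS[i][j] * prior[i] for i in range(3))
                 for j in range(3)]
    posterior = [p * l for p, l in zip(predicted, likelihood)]
    z = sum(posterior)
    return [p / z for p in posterior]
```

Because the belief must accumulate evidence over several frames before it flips, a single misclassified frame no longer triggers a mode switch, which is exactly the repeated-switching problem this step avoids.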
A flow diagram of some embodiments of map calibration in a mapping method of the present disclosure is shown in fig. 2.
In step 201, the final pose, in the global coordinate system, of the unmanned device in the map construction mode stopped at the switching moment is acquired; the global coordinate system takes the position of the unmanned device at startup as its coordinate origin.
In some embodiments, the local coordinate system used at startup may serve as the global coordinate system; in other embodiments, the local coordinate system used at startup may first be calibrated in scale so that it is consistent with real dimensions, and then used as the global coordinate system.
In step 202, the initial pose, in the local coordinate system, of the unmanned device in the map construction mode started at the switching moment is acquired.
Due to the continuity of the state of the unmanned device, the final pose in the map construction mode stopped at the switching moment is the same as, or very close to, the initial pose in the map construction mode started; therefore, the transformation relation between the coordinate systems before and after switching can be obtained by taking the pose at the switching moment as a reference.
In step 203, a coordinate system transformation matrix for transforming the initial pose to the final pose is determined according to the final pose and the initial pose.
In some embodiments, the final pose is expressed in global coordinates and the initial pose is expressed in the local coordinates after switching. According to the formula

T_global = T_conversion · T_local

the coordinate transformation matrix T_conversion can be determined, wherein T_global is the pose of the unmanned device in the global coordinate system and T_local is the pose of the unmanned device in the local coordinate system. The coordinate transformation matrix T_conversion comprises a rotation matrix R_conversion and a translation vector t_conversion; according to the formula

c_global = R_conversion · c_local + t_conversion

the rotation matrix R_conversion and the translation vector t_conversion can be determined, wherein c_global is the coordinates of the unmanned device in the global coordinate system and c_local is the coordinates of the unmanned device in the local coordinate system.
In step 204, the map constructed after switching is unified into the global coordinate system according to the coordinate system transformation matrix. In some embodiments, the pose of the unmanned device recorded in coordinates of the local coordinate system may be converted into the global coordinate system gradually as the map is constructed, or the local coordinate system after switching may be corrected with the obtained coordinate system transformation matrix so that map construction is performed directly in global coordinates.
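Under the pose-continuity assumption, T_conversion can be recovered as T_global · T_local⁻¹ and then applied to every local map point. A minimal planar (SE(2)) sketch in pure Python follows; the function names are hypothetical, and a real system would use full SE(3) poses:

```python
import math

def se2(x, y, theta):
    """Homogeneous 3x3 matrix for a planar pose (x, y, heading theta)."""
    c, s = math.cos(theta), math.sin(theta)
    return [[c, -s, x], [s, c, y], [0.0, 0.0, 1.0]]

def matmul(a, b):
    return [[sum(a[i][k] * b[k][j] for k in range(3)) for j in range(3)]
            for i in range(3)]

def se2_inv(t):
    """Closed-form inverse of an SE(2) matrix: rotation R^T, translation -R^T·t."""
    c, s = t[0][0], t[1][0]
    x, y = t[0][2], t[1][2]
    return [[c, s, -(c * x + s * y)],
            [-s, c, s * x - c * y],
            [0.0, 0.0, 1.0]]

def conversion_matrix(t_global_final, t_local_initial):
    """T_conversion such that T_global = T_conversion · T_local, using the
    pose continuity at the switching moment (final pose ~= initial pose)."""
    return matmul(t_global_final, se2_inv(t_local_initial))

def to_global(t_conversion, c_local):
    """Map a local point into the global frame: c_global = R·c_local + t."""
    x, y = c_local
    r = t_conversion
    return (r[0][0] * x + r[0][1] * y + r[0][2],
            r[1][0] * x + r[1][1] * y + r[1][2])
```

Applying `to_global` to each stored map point (or composing `conversion_matrix` with each subsequent local pose) implements the gradual conversion described in step 204.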
By the method, the maps constructed before and after switching can be unified under the global coordinate system by utilizing the continuity of the pose of the unmanned equipment at the switching moment, so that the continuous maps are constructed under the global coordinate system, and the unmanned equipment can be positioned more accurately.
In some embodiments, when a map is constructed using visual images, the scale of the visual image is ambiguous, which affects the scale of the constructed map. A flow chart of further embodiments of map calibration in the mapping method of the present disclosure is shown in fig. 3.
In step 301, it is determined whether to switch from a dark environment or an indoor bright environment to an outdoor bright environment. If the switched target is an outdoor bright environment, it is determined that a map is to be constructed using image data detected by the visual detector, and step 302 is performed.
In step 302, the scale factor of the image data detected by the visual detector is determined based on the coordinates of the unmanned device in the map construction mode stopped at the switching moment. In some embodiments, if a map was constructed from the distance data detected by the laser detector before switching, the detection distance is determined according to the detection result of the laser detector; if a map was constructed from the depth data detected by the depth camera before switching, the detection distance is determined according to the distance detection result of the depth camera. The detected distance may be the height of the unmanned device, its distance from an obstacle, or a z-axis coordinate. By using this distance to calibrate the height of the unmanned device, or its distance from the obstacle, in the image data detected by the visual detector, the scale factor of the image data can be obtained.
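The scale-factor determination in step 302, and its application to the visual map, can be sketched as a simple ratio between a metric distance (from laser or depth data at the switching moment) and the same distance expressed in the unscaled visual map's units. The function names are hypothetical:

```python
def scale_factor(metric_distance, visual_distance):
    """Scale factor relating the unscaled visual-SLAM map to metric units.

    metric_distance: a distance known in real units (e.g. height above
        ground from the laser detector or depth camera at the switch).
    visual_distance: the same distance as measured in the visual map.
    """
    return metric_distance / visual_distance

def rescale_point(point, s):
    """Apply the scale factor to a map point so the local visual map
    acquires an absolute scale consistent with the real situation."""
    return tuple(s * coord for coord in point)
```

Rescaling every point and pose of the visual map by the same factor gives the local coordinate system the absolute scale required before it can be unified into the global coordinate system.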
In step 303, a map in the local coordinate system is constructed from the scale factor and the image data, so that the local coordinate system has an absolute scale consistent with the real situation.
In step 304, the map constructed after switching is unified into a global coordinate system.
In this way, the inability of visual images to directly provide distance information can be compensated, a uniform scale before and after switching is achieved, and the accuracy and continuity of map construction are improved.
In some embodiments, if the start-up environment of the drone is an outdoor bright environment, the scale factor cannot be directly acquired from the visual image. In some embodiments, the distance data can be detected by using the distance sensor, and then the scale factor is obtained, so that the size of the constructed map in the local coordinate system conforms to the real situation as much as possible, and the map with a more accurate scale can be constructed even under the condition that the unmanned equipment does not perform environment switching.
In some embodiments, when the starting environment of the unmanned device is an outdoor bright environment and the device later switches to a dark environment or an indoor bright environment, the continuity of the pose of the unmanned device at the switching moment can be used to correct the scale of the global coordinate system before switching according to the distance information obtained by the map construction mode after switching, making the scale of the global coordinate system more accurate; the map construction mode after switching is then unified to construct the map in the global coordinate system. In some embodiments, the map previously constructed in the outdoor bright environment can also be corrected according to the scale factor of the corrected global coordinate system, improving the accuracy of the map at each stage.
A schematic diagram of some embodiments of the mapping apparatus of the present disclosure is shown in fig. 4.
The environment determination module 401 can determine the type of environment in which the unmanned device is located. In some embodiments, environment types can be divided according to the openness and illumination level of the environment, for example into an indoor bright environment, an outdoor bright environment, and a dark environment, and the environment in which the unmanned device is located is determined by sensors. In some embodiments, the environment type may be determined by a photosensitive sensing device in conjunction with a distance detection device.
The map construction module 402 can construct a map according to the environment type using an associated map construction mode. Because the laser detector can be disturbed under strong illumination, it is better suited to dark environments; therefore, the distance data detected by the laser detector is used to construct a map when the environment type is a dark environment. Since a depth camera can directly acquire depth information and can obtain distance data more directly and accurately in an indoor environment, a map is constructed according to the depth data detected by the depth camera when the environment type is an indoor bright environment. In an outdoor environment, the space is too open and the detection capability of the depth camera is limited, so it is more suitable to construct a map through image processing with a camera that captures two-dimensional images.
The switching module 403 can switch the map construction mode used when the environment type changes, including: constructing a map using the distance data detected by the laser detector in a dark environment; constructing a map according to the image data detected by the visual detector in an outdoor bright environment; and constructing a map from the depth data detected by the depth camera in an indoor bright environment. When the type of environment in which the unmanned device is located changes, the corresponding map construction mode is switched to.
The calibration module 404 can calibrate the maps before and after the map construction mode is switched when the environment type changes. Because the positions of the detectors used by different map construction modes are not exactly the same, and different map construction modes have their own coordinate systems, the maps before and after switching need to be calibrated to achieve seamless docking of the maps after switching. In some embodiments, the maps constructed before and after switching can be unified into the same coordinate system according to the continuity of the pose of the unmanned device at the switching moment.
The apparatus can switch in time to an associated map construction mode to construct a map when the environment changes, fully accounts for map discontinuities caused by differences in the scale and position of the detectors, and performs map calibration so that the maps before and after switching are connected, improving the adaptability of map construction to environmental changes.
In some embodiments, the environment determination module 401 includes a sensing data acquisition unit, a convolutional neural network unit, and a Bayesian filtering optimization unit, and can achieve efficient and accurate detection of the three environment types (indoor bright, outdoor bright, and dark) by using an environment classification algorithm for robot environment detection based on a convolutional neural network with Bayesian filtering optimization. The environment determination module 401 obtains an image of the current environment through the sensing data acquisition unit, inputs it into the environment classifier of the convolutional neural network unit to obtain an environment classification result, and finally adds temporal and spatial correlation information between consecutive images to the classification result through the Bayesian filtering unit, improving the stability and accuracy of the classification result.
The apparatus can improve the accuracy of determining the environment type, reduce the influence of errors, avoid repeated switching of the map construction mode, reduce the amount of computation, and improve the accuracy of map construction.
In some embodiments, the calibration module 404 can obtain a final pose of the drone in the global coordinate system in the mapping mode that was stopped at the switch time, and obtain an initial pose of the drone in the local coordinate system in the mapping mode that was turned on at the switch time.
Due to the continuity of the state of the unmanned device, the final pose in the map construction mode stopped at the switching moment is the same as, or very close to, the initial pose in the map construction mode started; therefore, the transformation relation between the coordinate systems before and after switching can be obtained by taking the pose at the switching moment as a reference.
In some embodiments, a coordinate system transformation matrix that transforms the initial pose into the final pose may be determined from the final pose and the initial pose described above. For example, the final pose is expressed in global coordinates and the initial pose in the local coordinates after switching; according to the formula
T_global = T_conversion · T_local

a coordinate transformation matrix T_conversion is determined, wherein T_global is the pose of the unmanned device in the global coordinate system and T_local is the pose of the unmanned device in the local coordinate system. The coordinate transformation matrix T_conversion comprises a rotation matrix R_conversion and a translation vector t_conversion; according to the formula

c_global = R_conversion · c_local + t_conversion

the rotation matrix R_conversion and the translation vector t_conversion are determined, wherein c_global is the coordinates of the unmanned device in the global coordinate system and c_local is the coordinates of the unmanned device in the local coordinate system.
The map constructed after switching is unified into the global coordinate system according to the coordinate system transformation matrix. In some embodiments, the pose of the unmanned device recorded in coordinates of the local coordinate system may be converted into the global coordinate system gradually as the map is constructed, or the local coordinate system after switching may be corrected with the coordinate system transformation matrix obtained above so that map construction is performed directly in global coordinates.
The device can unify maps constructed before and after switching to a global coordinate system by using the continuity of the pose of the unmanned equipment at the switching moment, so that a continuous map is constructed in the global coordinate system, and the unmanned equipment can be positioned more accurately.
In some embodiments, when the target mode of the switch constructs a map using image data detected by the visual detector, the scale of the visual image is ambiguous. The calibration module 404 therefore first determines the scale factor of the image data detected by the visual detector according to the coordinates of the unmanned device in the map construction mode stopped at the switching moment, then constructs a map in the local coordinate system according to the scale factor and the image data so that the local coordinate system has an absolute scale consistent with the real situation, and unifies the map constructed after switching into the global coordinate system. The apparatus can thus compensate for the inability of visual images to directly provide distance information, achieve a uniform scale before and after switching, and improve the accuracy and continuity of map construction.
In some embodiments, when the starting environment of the unmanned device is an outdoor bright environment, the map construction module may detect distance data with the distance sensor and thereby obtain a scale factor, so that the size of the map constructed in the local coordinate system conforms as closely as possible to the real situation; even when the unmanned device never switches environments, a more accurate map can thus be constructed.
In some embodiments, when the starting environment of the unmanned device is an outdoor bright environment and the environment determination module determines that the device has switched to a dark environment or an indoor bright environment, the calibration module may use the continuity of the pose of the unmanned device at the switching moment to correct the scale of the global coordinate system established before switching, according to the distance information obtained by the map construction mode after switching. This makes the scale of the global coordinate system more accurate, and the map construction after switching is unified into that global coordinate system. In some embodiments, the calibration module may further correct the map constructed in the outdoor bright environment according to the scale factor of the rectified global coordinate system, thereby improving the accuracy of the map at each stage.
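One possible form of this retroactive correction is sketched below (a hypothetical illustration under the assumptions that map points are stored as coordinate arrays and that the switching-time pose is held fixed as the anchor, preserving pose continuity):

```python
import numpy as np

def rescale_global_map(points, anchor, scale_correction):
    """Retroactively correct the scale of a map built during the
    outdoor (monocular) stage once metric distance data becomes
    available after switching. Points are rescaled about the anchor
    pose at the switching moment so that the pose remains continuous."""
    points = np.asarray(points, dtype=float)
    return anchor + scale_correction * (points - anchor)
```

Applying the correction factor about the switching-time anchor leaves the anchor itself unchanged while bringing all previously mapped points to the corrected metric scale.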
A schematic structural diagram of an embodiment of the map building apparatus of the present disclosure is shown in fig. 5. The map building apparatus comprises a memory 501 and a processor 502. The memory 501 may be a magnetic disk, flash memory, or any other non-volatile storage medium, and is used for storing instructions for the embodiments of the map construction method described above. The processor 502 is coupled to the memory 501 and may be implemented as one or more integrated circuits, such as a microprocessor or microcontroller. The processor 502 is configured to execute the instructions stored in the memory, thereby improving the adaptability of map construction to environmental changes.
In some embodiments, as shown in fig. 6, the map building apparatus 600 includes a memory 601 and a processor 602, with the processor 602 coupled to the memory 601 by a bus 603. The map building apparatus 600 may also be connected to an external storage 605 via a storage interface 604 for accessing external data, and may be connected to a network or another computer system (not shown) via a network interface 606. These components are not described in further detail here.
In this embodiment, the data instructions are stored in the memory, and the instructions are processed by the processor, so that the adaptability of the map construction to the environmental changes can be improved.
In another embodiment, a computer-readable storage medium has stored thereon computer program instructions which, when executed by a processor, implement the steps of the method in the corresponding embodiment of the mapping method. As will be appreciated by one skilled in the art, embodiments of the present disclosure may be provided as a method, apparatus, or computer program product. Accordingly, the present disclosure may take the form of an entirely hardware embodiment, an entirely software embodiment or an embodiment combining software and hardware aspects. Furthermore, the present disclosure may take the form of a computer program product embodied on one or more computer-usable non-transitory storage media (including, but not limited to, disk storage, CD-ROM, optical storage, and the like) having computer-usable program code embodied therein.
A schematic diagram of some embodiments of the unmanned device of the present disclosure is shown in fig. 7. The map building apparatus 70 may be any of the map building apparatuses described above. The laser detector 72 can provide detection data for map construction in a dark environment, the depth camera 73 can provide detection data for map construction in an indoor bright environment, and the vision detector 74 can provide detection data for map construction in an outdoor bright environment.
The environment sensor 71 can determine the type of environment in which the unmanned device is located according to the openness and illumination level of the environment. In some embodiments, the environment types may include indoor bright environments, outdoor bright environments, and dark environments. In some embodiments, the environment sensor 71 may include a light-sensitive sensing device and a distance detection device; in some embodiments, the distance detection device may be the laser detector 72 or the depth camera 73, and the light-sensitive sensing device may be the vision detector 74. In other embodiments, the environment sensor 71 may be an image acquisition device that captures images of the environment where the unmanned device is located and distinguishes the three environment types efficiently and accurately by using a robot-oriented scene classification algorithm based on a convolutional neural network and optimized with Bayesian filtering. In some embodiments, the vision detector 74 may be kept continuously active, providing detection data for map construction in outdoor bright environments and providing a data basis for environment type determination in the other environment types, thereby reducing the number of devices that need to be mounted on the unmanned device.
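A minimal illustrative sketch of such a rule-based determination is shown below (the thresholds, measures, and function name are assumptions for illustration only; an actual implementation may instead use the CNN-based scene classifier mentioned above):

```python
def classify_environment(light_level, openness,
                         light_thresh=50.0, open_thresh=10.0):
    """Classify the environment type from an illuminance reading
    (e.g. lux, from the light-sensitive sensing device) and an
    openness measure (e.g. mean detected range in metres, from the
    distance detection device). Thresholds are illustrative."""
    if light_level < light_thresh:
        return "dark"            # laser detector provides map data
    if openness > open_thresh:
        return "outdoor_bright"  # vision detector provides map data
    return "indoor_bright"       # depth camera provides map data
```

The returned label then selects the corresponding map construction mode; a change in the label triggers the switching and calibration steps described above.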
The unmanned device can detect the environment in which it is located and, when the environment changes, switch in time to the corresponding map construction mode. By accounting for the map discontinuity caused by inconsistent detector scales and positions, it performs map calibration to connect the maps before and after switching, improving the adaptability of map construction to environmental changes.
The present disclosure is described with reference to flowchart illustrations and/or block diagrams of methods, apparatus (systems) and computer program products according to embodiments of the disclosure. It will be understood that each flow and/or block of the flow diagrams and/or block diagrams, and combinations of flows and/or blocks in the flow diagrams and/or block diagrams, can be implemented by computer program instructions. These computer program instructions may be provided to a processor of a general purpose computer, special purpose computer, embedded processor, or other programmable data processing apparatus to produce a machine, such that the instructions, which execute via the processor of the computer or other programmable data processing apparatus, create means for implementing the functions specified in the flowchart flow or flows and/or block diagram block or blocks.
These computer program instructions may also be stored in a computer-readable memory that can direct a computer or other programmable data processing apparatus to function in a particular manner, such that the instructions stored in the computer-readable memory produce an article of manufacture including instruction means which implement the function specified in the flowchart flow or flows and/or block diagram block or blocks.
These computer program instructions may also be loaded onto a computer or other programmable data processing apparatus to cause a series of operational steps to be performed on the computer or other programmable apparatus to produce a computer implemented process such that the instructions which execute on the computer or other programmable apparatus provide steps for implementing the functions specified in the flowchart flow or flows and/or block diagram block or blocks.
Thus far, the present disclosure has been described in detail. Some details that are well known in the art have not been described in order to avoid obscuring the concepts of the present disclosure. It will be fully apparent to those skilled in the art from the foregoing description how to practice the presently disclosed embodiments.
The methods and apparatus of the present disclosure may be implemented in a number of ways. For example, the methods and apparatus of the present disclosure may be implemented by software, hardware, firmware, or any combination of software, hardware, and firmware. The above-described order for the steps of the method is for illustration only, and the steps of the method of the present disclosure are not limited to the order specifically described above unless specifically stated otherwise. Further, in some embodiments, the present disclosure may also be embodied as programs recorded in a recording medium, the programs including machine-readable instructions for implementing the methods according to the present disclosure. Thus, the present disclosure also covers a recording medium storing a program for executing the method according to the present disclosure.
Finally, it should be noted that: the above examples are intended only to illustrate the technical solutions of the present disclosure and not to limit them; although the present disclosure has been described in detail with reference to preferred embodiments, those of ordinary skill in the art will understand that: modifications to the specific embodiments of the disclosure or equivalent substitutions for parts of the technical features may still be made; all such modifications are intended to be included within the scope of the claims of this disclosure without departing from the spirit thereof.