CN113465614A - Unmanned aerial vehicle and method and device for generating a navigation map thereof - Google Patents

Unmanned aerial vehicle and method and device for generating a navigation map thereof

Info

Publication number
CN113465614A
Authority
CN
China
Prior art keywords
obstacle
unmanned aerial
aerial vehicle
points
navigation map
Prior art date
Legal status
Granted
Application number
CN202010244115.0A
Other languages
Chinese (zh)
Other versions
CN113465614B (en)
Inventor
陈鹏旭
庞勃
郭彦杰
Current Assignee
Beijing Sankuai Online Technology Co Ltd
Original Assignee
Beijing Sankuai Online Technology Co Ltd
Priority date
Filing date
Publication date
Application filed by Beijing Sankuai Online Technology Co Ltd filed Critical Beijing Sankuai Online Technology Co Ltd
Priority to CN202010244115.0A priority Critical patent/CN113465614B/en
Publication of CN113465614A publication Critical patent/CN113465614A/en
Application granted granted Critical
Publication of CN113465614B publication Critical patent/CN113465614B/en
Active legal-status Critical Current
Anticipated expiration legal-status Critical

Classifications

    • G PHYSICS
    • G01 MEASURING; TESTING
    • G01C MEASURING DISTANCES, LEVELS OR BEARINGS; SURVEYING; NAVIGATION; GYROSCOPIC INSTRUMENTS; PHOTOGRAMMETRY OR VIDEOGRAMMETRY
    • G01C21/00 Navigation; Navigational instruments not provided for in groups G01C1/00 - G01C19/00
    • G01C21/26 Navigation; Navigational instruments not provided for in groups G01C1/00 - G01C19/00 specially adapted for navigation in a road network
    • G01C21/28 Navigation; Navigational instruments not provided for in groups G01C1/00 - G01C19/00 specially adapted for navigation in a road network with correlation of data from several navigational instruments
    • G01C21/30 Map- or contour-matching
    • G01C21/32 Structuring or formatting of map data
    • Y GENERAL TAGGING OF NEW TECHNOLOGICAL DEVELOPMENTS; GENERAL TAGGING OF CROSS-SECTIONAL TECHNOLOGIES SPANNING OVER SEVERAL SECTIONS OF THE IPC; TECHNICAL SUBJECTS COVERED BY FORMER USPC CROSS-REFERENCE ART COLLECTIONS [XRACs] AND DIGESTS
    • Y02 TECHNOLOGIES OR APPLICATIONS FOR MITIGATION OR ADAPTATION AGAINST CLIMATE CHANGE
    • Y02T CLIMATE CHANGE MITIGATION TECHNOLOGIES RELATED TO TRANSPORTATION
    • Y02T10/00 Road transport of goods or passengers
    • Y02T10/10 Internal combustion engine [ICE] based vehicles
    • Y02T10/40 Engine management systems

Abstract

The application discloses an unmanned aerial vehicle and a method and device for generating its navigation map. The method comprises: acquiring obstacle information; determining the obstacle points and non-obstacle points contained in the visible space of the unmanned aerial vehicle according to the obstacle information; and determining the obstacle occupation probability of each block in the unmanned aerial vehicle navigation map according to the obstacle points and the non-obstacle points. The benefit of this application is that, by adopting a probability-update calculation, the algorithm is simple and fast, fluctuation of obstacle detection results and erroneous flight-path planning caused by false detections are largely avoided, and the application scenarios of the unmanned aerial vehicle are significantly expanded.

Description

Unmanned aerial vehicle and method and device for generating a navigation map thereof
Technical Field
The application relates to the technical field of unmanned aerial vehicles, in particular to an unmanned aerial vehicle and a method and a device for generating a navigation map of the unmanned aerial vehicle.
Background
At present, unmanned aerial vehicles are increasingly used for services such as take-out delivery and express parcel drop-off. During flight, an unmanned aerial vehicle often encounters obstacles such as high-rise buildings and trees, and it must plan its flight path according to an environment map so as to avoid the obstacles and reach the destination. The environment map may take many different forms depending on the type of sensor data, the application scenario, and so on. In one prior-art method, a global map is read in advance and the environment map is generated from it; with this method, the unmanned aerial vehicle cannot avoid obstacles well when the environment information changes. Another method generates the environment map by constructing a local three-dimensional octree map in real time; this involves a large data volume, is limited by the hardware resources of the unmanned aerial vehicle, covers only a small map range, and leaves the unmanned aerial vehicle prone to getting stuck in local dead ends.
Disclosure of Invention
In view of the above, the present application is proposed to provide a method and device for generating a navigation map of a drone, and a drone, that overcome or at least partially solve the above problems.
According to an aspect of the application, a method for generating a navigation map of an unmanned aerial vehicle is provided, and the method includes:
acquiring obstacle information;
determining obstacle points and non-obstacle points contained in the visible space of the unmanned aerial vehicle according to the obstacle information;
and determining the obstacle occupation probability of each block in the unmanned aerial vehicle navigation map according to the obstacle points and the non-obstacle points.
Optionally, in the method, acquiring the obstacle information includes:
determining an initial obstacle point according to spatial information detected by an unmanned aerial vehicle sensor;
and determining the coordinates of the initial obstacle point in a three-dimensional space coordinate system according to the pose of the unmanned aerial vehicle.
Optionally, in the above method, the visual space is obtained by projecting the field of view of the drone into the three-dimensional space coordinate system;
determining the obstacle points and non-obstacle points contained in the visual space of the unmanned aerial vehicle according to the obstacle information comprises:
dividing a visual space into a plurality of angle spaces according to a preset angle resolution by taking a viewpoint as a base point;
if the angle space does not contain the initial obstacle point, all points in the angle space are used as non-obstacle points;
and if the angle space contains the initial obstacle point, taking the initial obstacle point closest to the viewpoint coordinate as an obstacle point, and taking all points between the obstacle point and the viewpoint in the angle space as non-obstacle points.
Optionally, in the above method, determining the obstacle occupation probability of each block in the unmanned aerial vehicle navigation map according to the obstacle points and the non-obstacle points includes:
and respectively projecting the obstacle points and the non-obstacle points into the unmanned aerial vehicle navigation map, adding a first weight value to the obstacle occupation probability of each block in which a projection of an obstacle point falls, and subtracting a second weight value from the obstacle occupation probability of each block in which a projection of a non-obstacle point falls.
Optionally, in the above method, the method further includes:
counting the three-dimensional coordinate information of the obstacle points projected into a target block within a preset time period;
and determining the two-dimensional coordinates of the obstacle of the target block from the weighted mean of the counted three-dimensional coordinate information, and taking the maximum height among the counted three-dimensional coordinate information as the obstacle height attribute of the target block.
Optionally, in the above method, the method further includes:
and clearing the obstacle height attribute of any block whose obstacle occupation probability is not greater than 0.
Optionally, in the above method, the unmanned aerial vehicle navigation map is a local planning map related to the position of the unmanned aerial vehicle; the method further comprises the following steps:
and under the condition that the distance between the position coordinate of the unmanned aerial vehicle and the center-point coordinate of the unmanned aerial vehicle navigation map is greater than a preset threshold value, translating the unmanned aerial vehicle navigation map so that the position coordinate of the unmanned aerial vehicle coincides with the center-point coordinate of the map.
According to another aspect of the application, a device for generating a navigation map of a drone is provided, the device comprising:
an acquisition unit configured to acquire obstacle information;
the execution unit is used for determining the obstacle points and non-obstacle points contained in the visible space of the unmanned aerial vehicle according to the obstacle information, and for determining the obstacle occupation probability of each block in the unmanned aerial vehicle navigation map according to the obstacle points and the non-obstacle points.
Optionally, in the apparatus, the obtaining unit is configured to determine an initial obstacle point according to spatial information detected by the unmanned aerial vehicle sensor, and to determine the coordinates of the initial obstacle point in the three-dimensional space coordinate system according to the pose of the unmanned aerial vehicle.
Optionally, in the above apparatus, the visible space is obtained by projecting the field of view of the drone into the three-dimensional space coordinate system; the execution unit is used for dividing the visible space into a plurality of angle spaces, with the viewpoint as the base point, according to a preset angle resolution; if an angle space does not contain an initial obstacle point, all points in that angle space are taken as non-obstacle points; and if an angle space contains initial obstacle points, the initial obstacle point closest to the viewpoint is taken as an obstacle point, and all points between that obstacle point and the viewpoint in the angle space are taken as non-obstacle points.
Optionally, in the above apparatus, the unmanned aerial vehicle navigation map is a two-dimensional map, where the execution unit is configured to project the obstacle point and the non-obstacle point into the unmanned aerial vehicle navigation map respectively, increase a first weight value for the obstacle occupation probability of the block where the projection point of the obstacle point is located, and decrease a second weight value for the obstacle occupation probability of the block where the projection point of the non-obstacle point is located.
Optionally, in the apparatus, the execution unit is configured to count the three-dimensional coordinate information of the obstacle points projected into the target block within a preset time period, to determine the two-dimensional coordinates of the obstacle of the target block from the weighted mean of the counted three-dimensional coordinate information, and to take the maximum height among the counted three-dimensional coordinate information as the obstacle height attribute of the target block.
Optionally, in the apparatus, the execution unit is configured to clear the obstacle height attribute of the block with the obstacle occupation probability not greater than 0.
Optionally, in the apparatus, the unmanned aerial vehicle navigation map is a local planning map related to the position of the unmanned aerial vehicle; and the execution unit is also used for translating the unmanned aerial vehicle navigation map under the condition that the distance between the unmanned aerial vehicle position coordinate and the central point coordinate of the unmanned aerial vehicle navigation map is greater than a preset threshold value, so that the unmanned aerial vehicle position coordinate is consistent with the central point coordinate of the unmanned aerial vehicle navigation map.
According to yet another aspect of the present application, there is provided a drone, comprising: a processor; and a memory arranged to store computer-executable instructions that, when executed, cause the processor to perform the method according to any one of the above.
According to yet another aspect of the application, a computer-readable storage medium is provided, storing one or more programs which, when executed by a processor, implement the method according to any one of the above.
According to the technical scheme, obstacle information is acquired; obstacle points and non-obstacle points contained in the visible space of the unmanned aerial vehicle are determined according to the obstacle information; and the obstacle occupation probability of each block in the unmanned aerial vehicle navigation map is determined according to the obstacle points and the non-obstacle points. The benefit of this application is that, by adopting a probability-update calculation, the algorithm is simple and fast, fluctuation of obstacle detection results and erroneous flight-path planning caused by false detections are largely avoided, and the application scenarios of the unmanned aerial vehicle are significantly expanded.
The foregoing is only an overview of the technical solutions of the present application. In order that the technical means of the present application may be understood more clearly and implemented according to the content of the specification, and in order to make the above and other objects, features, and advantages of the present application more readily apparent, specific embodiments of the application are set forth below.
Drawings
Various other advantages and benefits will become apparent to those of ordinary skill in the art upon reading the following detailed description of the preferred embodiments. The drawings are only for purposes of illustrating the preferred embodiments and are not to be construed as limiting the application. Also, like reference numerals are used to refer to like parts throughout the drawings. In the drawings:
fig. 1 shows a flow diagram of a method of generating a drone navigation map according to one embodiment of the present application;
fig. 2 shows a flow diagram of a method of generating a drone navigation map according to another embodiment of the present application;
fig. 3 shows a schematic structural diagram of an apparatus for generating a navigation map of a drone according to an embodiment of the present application;
figure 4 shows a schematic structural diagram of a drone according to one embodiment of the present application;
FIG. 5 shows a schematic structural diagram of a computer-readable storage medium according to an embodiment of the present application.
Detailed Description
Exemplary embodiments of the present application will be described in more detail below with reference to the accompanying drawings. While exemplary embodiments of the present application are shown in the drawings, it should be understood that the present application may be embodied in various forms and should not be limited to the embodiments set forth herein. Rather, these embodiments are provided so that this disclosure will be thorough and complete, and will fully convey the scope of the disclosure to those skilled in the art.
Fig. 1 shows a schematic flow diagram of a method for generating a drone navigation map according to an embodiment of the present application, the method including:
step S110, obstacle information is acquired.
Using unmanned aerial vehicles for take-out or express delivery can save a great deal of manpower and significantly improve delivery efficiency. However, an unmanned aerial vehicle often encounters obstacles such as high buildings and trees during flight; particularly in complex scenes, it needs to plan a flight path according to a navigation map so as to avoid the obstacles and fly to the destination. The navigation map generated in this embodiment may be a map that is continuously built and maintained during flight from the obstacle information detected by the sensors of the unmanned aerial vehicle.
The obstacle information is acquired by the sensors of the unmanned aerial vehicle, using any one or a combination of prior-art technologies, including but not limited to radar detection and photogrammetry, where radar detection includes but is not limited to lidar and millimeter-wave radar, and photogrammetry includes but is not limited to the depth camera method, the binocular camera method, and the multi-view camera method.
Radar detection is a technique that detects a target using electromagnetic waves: the radar emits electromagnetic waves toward the target and receives their echoes, thereby obtaining information about the detected target; it is the most common means of detection. The frequency of the radar is not limited here. Lidar uses a laser beam; compared with an ordinary microwave radar, its operating frequency is much higher, and it offers high resolution, good concealment, strong resistance to active interference, good low-altitude detection performance, small volume, and light weight. Millimeter-wave radar operates in the millimeter-wave band and combines the advantages of microwave radar and photoelectric radar, offering a small antenna aperture, a narrow beam, a large bandwidth, high Doppler frequency, and good anti-stealth performance, and can serve as a preferred option.
Photogrammetry refers to processing captured digital images with image-processing techniques to obtain information about the measured object. Photogrammetry techniques include, but are not limited to, the depth camera method, the binocular camera method, and the multi-view camera method. Compared with a traditional camera, a depth camera adds depth measurement, so it can perceive the surrounding environment and its changes more conveniently and accurately; depth cameras are mainly applied to three-dimensional modeling, autonomous driving, robot navigation, face unlocking on mobile phones, motion-sensing games, and the like. Depth cameras mainly include binocular depth cameras, structured-light depth cameras, and ToF (Time of Flight) depth cameras. The binocular and multi-view camera methods use multiple images from different viewing angles, solve for depth from parallax, and can train a model for estimating image depth, such as a Gaussian Markov random field model, by means of machine learning. Photogrammetry is powerful, easy to operate, accurate, intelligent, and easy to carry and move; it has good vibration resistance, operates without contact, and supports highly automated evaluation and dynamic measurement; it interfaces with computer-aided design and analysis software so that results are obtained quickly; and it is little affected by temperature, making it suitable for measuring products with complex shapes in environments with large temperature variations.
The manner of detecting obstacles is not limited in the present application, and the above techniques may be used alone or in combination. For example, combining millimeter-wave radar with a depth camera can compensate for radar's relatively poor detection of non-metallic objects.
In this embodiment, the obstacle information includes, but is not limited to, the distance, distance change rate or radial velocity, orientation, and height of the obstacle relative to the unmanned aerial vehicle, as well as the curved-surface structure, external dimensions, and relative position of the object.
And step S120, determining barrier points and non-barrier points contained in the visual space of the unmanned aerial vehicle according to the barrier information.
The visible space of the drone may be the maximum range of airspace that the sensors of the drone can detect, or may be a predetermined range of airspace relative to the drone within the maximum range of airspace that can be detected.
The maximum range of airspace that the unmanned aerial vehicle can detect can be determined from its field of view (FOV). In this embodiment, the field angle can be used to describe the field of view of the unmanned aerial vehicle and hence its visible space: with the sensor as the vertex, the angle formed by the two edges of the maximum range through which an image of the target to be detected can pass through the lens is called the field angle. The size of the field angle determines the field of view of the sensor; the larger the field angle, the larger the field of view, and thus the larger the visible space of the unmanned aerial vehicle. In general, the unmanned aerial vehicle detects obstacle information in real time during flight; if the position of an obstacle relative to the unmanned aerial vehicle is outside this angular range, the obstacle cannot be captured by the lens.
The airspace within a predetermined range relative to the unmanned aerial vehicle may be a part of the maximum detectable airspace; for example, a distance may be preset, and the portion of the maximum detectable airspace lying within that distance of the unmanned aerial vehicle is taken as its visible space.
In this application, the unmanned aerial vehicle generates the navigation map from obstacle information obtained within its visible space; obstacles outside the visible space are therefore not considered. Determining the obstacle points and non-obstacle points contained in the visible space of the unmanned aerial vehicle according to the obstacle information means marking the points in the visible space as either obstacle points or non-obstacle points.
The visible space of the unmanned aerial vehicle can be regarded as a set of infinitely many points, and the representation of a point is not limited in this application; Cartesian-coordinate form and vector form are both possible. Within the visible space, points at which obstacles exist are marked as obstacle points, and points where no obstacle exists are marked as non-obstacle points; for example, the points may be marked in binary, with obstacle points marked 0 and non-obstacle points marked 1.
And step S130, determining the occupation probability of the obstacles in each block in the unmanned aerial vehicle navigation map according to the obstacle points and the non-obstacle points.
As mentioned above, after the obstacle points and non-obstacle points are obtained, the navigation map may be generated from them. The base map may be generated by one prior-art technique or a combination of several; for example, the obstacle points and non-obstacle points determine which positions in the navigation map contain obstacles and which do not, and the navigation map is then divided into obstacle areas and non-obstacle areas.
In this embodiment, multiple detection results may be combined, using a probability-update calculation, to determine the final probability that a point is an obstacle point and the corresponding decision. For example, if a certain point is detected 5 times, judged to be an obstacle point 3 times and a non-obstacle point 2 times, then by the basic principles of probability theory the probability that it is an obstacle point is 60%, and it is judged to be an obstacle point.
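To make this counting idea concrete, the following minimal Python sketch reproduces the worked example; the function name and the 0.5 decision threshold are illustrative assumptions, not prescribed by the patent.

```python
# Minimal sketch of the majority-vote probability described above.
# Names are illustrative, not taken from the patent.

def obstacle_probability(detections):
    """detections: one boolean per scan, True = point seen as an obstacle."""
    if not detections:
        return 0.0
    return sum(detections) / len(detections)

# The example from the text: 5 scans, 3 positive -> 0.6 -> obstacle point.
p = obstacle_probability([True, True, True, False, False])
print(p, p > 0.5)  # 0.6 True
```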
Furthermore, to improve the accuracy of the navigation map, it may be divided into a plurality of blocks, which compose the navigation map in a certain order and with certain connection relationships. The division may use any one or more prior-art techniques, such as the cell decomposition method: briefly, a line perpendicular to the X-axis of the absolute coordinate system is swept from the left boundary of the local planning map to the right boundary, blocks are generated by detecting changes in the connectivity of the scan line, and after division the navigation map consists of the resulting blocks.
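As a rough illustration of a block-divided map, the sketch below uses a plain uniform grid rather than the cell-decomposition method named above, purely for brevity; the map size and block size are assumed values.

```python
import numpy as np

# Hypothetical parameters: a 200 m x 200 m local map with 1 m blocks.
MAP_SIZE_M, CELL_M = 200.0, 1.0
N = int(MAP_SIZE_M / CELL_M)

# One value per block: its obstacle occupation probability (0 = unknown).
occupancy = np.zeros((N, N), dtype=np.float32)

def world_to_block(x, y, origin_x=0.0, origin_y=0.0):
    """Map world (x, y) to a (row, col) block index, or None if off-map."""
    col = int((x - origin_x) / CELL_M)
    row = int((y - origin_y) / CELL_M)
    return (row, col) if 0 <= row < N and 0 <= col < N else None
```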
As the method shown in fig. 1 illustrates, the probability-update calculation is simple and fast, largely avoids fluctuation in obstacle detection results and erroneous flight-path planning caused by false detections, and significantly expands the application scenarios of the unmanned aerial vehicle.
In one embodiment of the present application, in the above method, the acquiring obstacle information includes: determining an initial obstacle point according to spatial information detected by an unmanned aerial vehicle sensor; and determining the coordinates of the initial barrier point in the three-dimensional space coordinate system according to the pose of the unmanned aerial vehicle.
Spatial information is information that reflects the spatial distribution characteristics of geographic entities. Geography reveals the laws of regional spatial distribution and change through the acquisition, perception, processing, analysis, and synthesis of spatial information. Spatial information is conveyed by carriers such as images and maps. Graphics are the main form for representing spatial information, and a geographic entity can be described with basic graphic elements such as points, lines, and surfaces.
An initial obstacle point is an obstacle point determined from the unmanned aerial vehicle's raw obstacle detection result, before any probability update. Its coordinates in the three-dimensional space coordinate system are then determined according to the pose of the unmanned aerial vehicle.
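A minimal sketch of this pose transformation, assuming the pose is given as a position plus Z-Y-X Euler angles (the patent does not fix a convention; quaternions would serve equally well):

```python
import numpy as np

def body_to_world(points_body, position, yaw, pitch, roll):
    """Rotate sensor-frame points by the drone attitude and translate by
    its position to obtain world-frame coordinates."""
    cy, sy = np.cos(yaw), np.sin(yaw)
    cp, sp = np.cos(pitch), np.sin(pitch)
    cr, sr = np.cos(roll), np.sin(roll)
    Rz = np.array([[cy, -sy, 0.0], [sy, cy, 0.0], [0.0, 0.0, 1.0]])
    Ry = np.array([[cp, 0.0, sp], [0.0, 1.0, 0.0], [-sp, 0.0, cp]])
    Rx = np.array([[1.0, 0.0, 0.0], [0.0, cr, -sr], [0.0, sr, cr]])
    R = Rz @ Ry @ Rx                      # combined attitude rotation
    return np.asarray(points_body) @ R.T + np.asarray(position)

# A detection 10 m ahead of a drone at (5, 2, 30) yawed 90 degrees lands
# at world coordinates (5, 12, 30).
print(body_to_world([[10.0, 0.0, 0.0]], (5.0, 2.0, 30.0), np.pi / 2, 0, 0))
```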
Three-dimensional coordinates intuitively represent the position of a point relative to the origin of the coordinate system and facilitate subsequent calculation. They include Cartesian, cylindrical, and spherical coordinates; since the Cartesian representation is the simplest, Cartesian coordinates are recommended as the preferred scheme.
In an embodiment of the present application, in the above method, the visual space is obtained by projecting the field of view of the drone into a three-dimensional space coordinate system; determining the obstacle points and non-obstacle points contained in the visual space of the unmanned aerial vehicle according to the obstacle information comprises: dividing a visual space into a plurality of angle spaces according to a preset angle resolution by taking a viewpoint as a base point; if the angle space does not contain the initial obstacle point, all points in the angle space are used as non-obstacle points; and if the angle space contains the initial obstacle point, taking the initial obstacle point closest to the viewpoint coordinate as an obstacle point, and taking all points between the obstacle point and the viewpoint in the angle space as non-obstacle points.
The field of view of the unmanned aerial vehicle is projected into the three-dimensional space coordinate system to obtain its visible space, so that every point in the visible space lies in that coordinate system and can be represented by three-dimensional coordinates.
With the viewpoint as the base point, the visible space is divided into a plurality of angle spaces according to a preset angular resolution, where the viewpoint can be taken as the point at which the unmanned aerial vehicle's sensor is located. A concrete division process is: determine the boundary of the local planning map from the field of view of the unmanned aerial vehicle, specifically from the parameters describing the field angle, whose two edges are two boundaries of the local planning map; then divide the field angle equally according to a preset resolution, for example 50 PPD, where PPD (pixels per degree) is the angular or spatial resolution, i.e., the number of pixels within each 1-degree included angle of the field angle. The field angle is thus divided into a number of equal small angles whose sides are the equal-division lines of the field angle, and the local planning map is partitioned along these lines into a plurality of angle spaces.
Whether an obstacle exists in each angle space is then checked. If no initial obstacle point exists in an angle space, all points in that angle space are marked as non-obstacle points. If initial obstacle points exist in an angle space, the initial obstacle point closest to the viewpoint is marked as an obstacle point, all points between that obstacle point and the viewpoint are taken as non-obstacle points, and points beyond the closest initial obstacle point, on the side away from the viewpoint, are left unprocessed.
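The sketch below illustrates this per-angle-space classification for a horizontal slice; the bin count, field of view, and sensor range are assumed values, not taken from the patent.

```python
import numpy as np

def classify(points, viewpoint, fov_deg=90.0, bins=64, max_range=100.0):
    """Split the field of view into `bins` angle spaces; per bin keep only
    the initial obstacle point nearest the viewpoint, and record how far
    out the bin is known to be obstacle-free."""
    pts = np.asarray(points, dtype=float)
    rel = pts - np.asarray(viewpoint, dtype=float)
    azimuth = np.degrees(np.arctan2(rel[:, 1], rel[:, 0]))
    dist = np.linalg.norm(rel, axis=1)
    half, width = fov_deg / 2.0, fov_deg / bins
    nearest = {}                          # bin index -> (point, distance)
    for p, a, d in zip(pts, azimuth, dist):
        if abs(a) > half:
            continue                      # outside the field of view
        b = min(int((a + half) / width), bins - 1)
        if b not in nearest or d < nearest[b][1]:
            nearest[b] = (p, d)           # keep only the closest detection
    obstacle_points = [nearest[b][0] for b in nearest]
    # Space nearer than the kept obstacle (or out to max_range in empty
    # bins) is free; space beyond an obstacle is occluded and left alone.
    free_depth = [nearest[b][1] if b in nearest else max_range
                  for b in range(bins)]
    return obstacle_points, free_depth
```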
With the segmentation method of this embodiment, only the obstacle closest to the viewpoint is retained when marking obstacles, and all points in an angle space can be marked quickly; the algorithm is extremely simple, the calculation cost is low, the computation time is short, and the efficiency is high.
In an embodiment of the application, in the method, the unmanned aerial vehicle navigation map is a two-dimensional map, and determining the obstacle occupation probability of each block of the map according to the obstacle points and the non-obstacle points includes: respectively projecting the obstacle points and the non-obstacle points into the unmanned aerial vehicle navigation map, adding a first weight value to the obstacle occupation probability of each block in which a projection of an obstacle point falls, and subtracting a second weight value from the obstacle occupation probability of each block in which a projection of a non-obstacle point falls.
In this embodiment, the unmanned aerial vehicle navigation map is a two-dimensional map. When the unmanned aerial vehicle detects and stores obstacle information, the height of an obstacle is stored as an attribute of that obstacle and can be retrieved directly whenever it is needed, without re-detection. Processing obstacle information in this way makes full use of the environment information while keeping the data volume as small as possible, which greatly reduces the hardware requirements for the unmanned aerial vehicle to generate a navigation map autonomously and expands its application scenarios.
When the navigation map generated by the unmanned aerial vehicle is a two-dimensional map, the obstacle occupation probability of each block is determined by projecting the obstacle points and non-obstacle points into the navigation map. The projection is in fact a conversion from three-dimensional information to a two-dimensional plane, intended to establish a one-to-one correspondence between the points and points on the plane; any prior-art projection may be used, such as conformal (equal-angle) projection, equal-area projection, or an arbitrary projection. After projection, every obstacle point and non-obstacle point has a corresponding projection point on the navigation map, and the obstacle occupation probability of the block containing each projection point is increased or decreased according to the nature of the point: when the projection point comes from an obstacle point, a first weight value, for example 0.05%, is added to the block's obstacle occupation probability; when it comes from a non-obstacle point, a second weight value, for example 0.05%, is subtracted. The initial obstacle occupation probability of every block in the navigation map is zero, and the final occupation probability of a block is obtained after these weight increments and decrements.
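A sketch of this update rule, assuming the 0.05% weights from the example, a simple orthographic projection (dropping the height coordinate), and clamping bounds that the patent does not specify:

```python
W_OBSTACLE = 0.0005   # first weight value, added on an obstacle hit
W_FREE = 0.0005       # second weight value, subtracted on a free sighting

def update_occupancy(occupancy, obstacle_pts, free_pts, to_block):
    """Project each 3D point onto the 2D map by dropping z, then nudge the
    containing block's obstacle occupation probability up or down."""
    for x, y, _z in obstacle_pts:
        blk = to_block(x, y)
        if blk is not None:
            occupancy[blk] = min(1.0, occupancy[blk] + W_OBSTACLE)
    for x, y, _z in free_pts:
        blk = to_block(x, y)
        if blk is not None:
            occupancy[blk] = max(-1.0, occupancy[blk] - W_FREE)
    return occupancy
```

Here `to_block` is any world-to-block index mapping, such as the grid helper sketched earlier.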
With this probability-update scheme, the obstacle occupation probability of each block can be obtained simply and quickly.
In an embodiment of the application, the above method further comprises: counting the three-dimensional coordinate information of the obstacle points projected into a target block within a preset time period; and determining the two-dimensional coordinates of the obstacle of the target block from the weighted mean of the counted three-dimensional coordinate information, and taking the maximum height among the counted three-dimensional coordinate information as the obstacle height attribute of the target block.
Let a certain block in the visible space of the unmanned aerial vehicle be the target block, for example a block whose obstacle occupation probability is greater than zero. The three-dimensional coordinates of the obstacle points projected into the target block within a preset time period, such as 50 ms, are collected, and their weighted mean is calculated: each value is multiplied by its corresponding weight, the products are summed, and the sum is divided by the sum of the weights.
The weighted mean is determined not only by the magnitude of each value but also by how many times, or how frequently, each value occurs; because the number of occurrences balances each value's contribution to the mean, it is called the weight. The higher the resulting weighted value for a point, the more likely that point is an obstacle point. The weighted mean of all the collected three-dimensional coordinate information is then computed.
After the two-dimensional coordinates of the obstacle of the target block are obtained, height information in all three-dimensional coordinate information is counted, and the maximum height is selected as the obstacle height attribute of the target block.
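A sketch of this statistic, with uniform weights assumed where the patent leaves the weighting scheme open:

```python
import numpy as np

def block_summary(points, weights=None):
    """points: (N, 3) obstacle points that projected into one target block
    during the statistics window. Returns the weighted-mean (x, y) used as
    the block's obstacle coordinate and the maximum z used as its obstacle
    height attribute."""
    pts = np.asarray(points, dtype=float)
    w = np.ones(len(pts)) if weights is None else np.asarray(weights, float)
    xy = (pts[:, :2] * w[:, None]).sum(axis=0) / w.sum()
    return tuple(xy), pts[:, 2].max()

coords, height = block_summary([[3.0, 4.0, 12.0], [3.2, 4.1, 15.5]])
print(coords, height)   # ~(3.1, 4.05) 15.5
```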
In an embodiment of the application, the above method further comprises: clearing the obstacle height attribute of any block whose obstacle occupation probability is not greater than 0.
As mentioned above, if after the statistics the obstacle occupation probability of a block is not greater than 0, that is, the probability that an obstacle exists in the block is very low, the obstacle height attribute of that block is cleared; this frees space for caching data or for calculation and increases calculation speed.
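With the map stored as arrays, this clean-up reduces to one masked assignment; the array names are hypothetical:

```python
import numpy as np

def clear_stale_heights(occupancy, heights):
    """Zero the cached height attribute wherever the obstacle occupation
    probability is not greater than 0."""
    heights[occupancy <= 0.0] = 0.0
    return heights

occupancy = np.array([[0.2, -0.1], [0.0, 0.6]])
heights = np.array([[15.5, 8.0], [3.0, 22.0]])
print(clear_stale_heights(occupancy, heights))  # [[15.5  0.] [ 0.  22.]]
```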
In one embodiment of the present application, in the above method, the drone navigation map is a local planning map related to the location of the drone; the method further comprises: translating the unmanned aerial vehicle navigation map, under the condition that the distance between the position coordinate of the unmanned aerial vehicle and the center-point coordinate of the navigation map is greater than a preset threshold value, so that the position coordinate of the unmanned aerial vehicle coincides with the center-point coordinate of the navigation map.
The navigation map in this embodiment is a local map centered on the unmanned aerial vehicle, with a range from tens of meters to hundreds of meters depending on the processing capability of the hardware. Because it is a local map, when the position of the unmanned aerial vehicle changes substantially the map must be moved so that the position coordinate of the unmanned aerial vehicle coincides with the coordinate of the center point of the navigation map.
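A sketch of this recentering, scrolling the grid by a whole number of blocks and marking newly exposed blocks unknown; the cell size and threshold are assumed parameters:

```python
import numpy as np

def recenter(occupancy, center_xy, drone_xy, cell_m=1.0, threshold_m=10.0):
    """Once the drone drifts more than threshold_m from the map center,
    scroll the grid so the center snaps back onto the drone; blocks that
    scroll in from outside the old map start as unknown (0)."""
    dx = drone_xy[0] - center_xy[0]
    dy = drone_xy[1] - center_xy[1]
    if np.hypot(dx, dy) <= threshold_m:
        return occupancy, center_xy          # still close enough
    sc, sr = int(round(dx / cell_m)), int(round(dy / cell_m))
    rows, cols = occupancy.shape
    shifted = np.zeros_like(occupancy)
    shifted[max(0, -sr):rows - max(0, sr), max(0, -sc):cols - max(0, sc)] = \
        occupancy[max(0, sr):rows - max(0, -sr), max(0, sc):cols - max(0, -sc)]
    new_center = (center_xy[0] + sc * cell_m, center_xy[1] + sr * cell_m)
    return shifted, new_center
```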
The above embodiments may be implemented individually or in combination, and fig. 2 shows a flowchart of a method for generating a navigation map of a drone according to another embodiment of the present application.
First, an initial obstacle point is determined according to the spatial information detected by the unmanned aerial vehicle sensor, and its coordinates in the three-dimensional space coordinate system are determined according to the pose of the unmanned aerial vehicle. The field of view of the unmanned aerial vehicle is projected into the three-dimensional space coordinate system to obtain its visible space, and the visible space is divided into a plurality of angle spaces according to a preset angular resolution with the viewpoint as the base point.
Whether an initial obstacle point exists in each angle space is detected. If not, all points in the angle space are taken as non-obstacle points; if so, the initial obstacle point closest to the viewpoint is taken as an obstacle point, and all points between that obstacle point and the viewpoint in the angle space are taken as non-obstacle points, yielding the obstacle-point and non-obstacle-point information.
When the unmanned aerial vehicle navigation map is a two-dimensional map, the obstacle points and non-obstacle points are respectively projected into it; a first weight value is added to the obstacle occupation probability of each block containing the projection of an obstacle point, and a second weight value is subtracted from the obstacle occupation probability of each block containing the projection of a non-obstacle point, giving the obstacle occupation probability of each block.
Whether the obstacle occupation probability of each block is greater than zero is then judged. If the obstacle occupation probability of a block is greater than zero, the block is set as a target block; otherwise, it is judged that no obstacle exists there, and the obstacle height attribute of the block is cleared.
After the target block is determined, the three-dimensional coordinate information of the obstacle points projected into it within the preset time period is counted; the two-dimensional coordinates of the obstacle of the target block are determined from the weighted mean of that coordinate information, the maximum height among the coordinate information is taken as the obstacle height attribute of the target block, and the navigation map is finally generated from the two-dimensional obstacle coordinates and the corresponding height attributes.
Fig. 3 is a schematic structural diagram of a generation apparatus of a drone navigation map according to an embodiment of the present application, and as shown in fig. 3, the generation apparatus 300 of the drone navigation map includes:
an obtaining unit 310 is configured to obtain obstacle information.
Using unmanned aerial vehicles for take-out or express delivery can save a great deal of manpower and significantly improve delivery efficiency. However, an unmanned aerial vehicle often encounters obstacles such as high buildings and trees during flight; particularly in complex scenes, it needs to plan a flight path according to a navigation map so as to avoid the obstacles and fly to the destination. The navigation map generated in this embodiment may be a map that is continuously built and maintained during flight from the obstacle information detected by the sensors of the unmanned aerial vehicle.
The obstacle information is acquired by the sensors of the unmanned aerial vehicle, using any one or a combination of prior-art technologies, including but not limited to radar detection and photogrammetry, where radar detection includes but is not limited to lidar and millimeter-wave radar, and photogrammetry includes but is not limited to the depth camera method, the binocular camera method, and the multi-view camera method.
Radar detection is a technique that detects a target using electromagnetic waves: the radar emits electromagnetic waves toward the target and receives their echoes, thereby obtaining information about the detected target; it is the most common means of detection. The frequency of the radar is not limited here. Lidar uses a laser beam; compared with an ordinary microwave radar, its operating frequency is much higher, and it offers high resolution, good concealment, strong resistance to active interference, good low-altitude detection performance, small volume, and light weight. Millimeter-wave radar operates in the millimeter-wave band and combines the advantages of microwave radar and photoelectric radar, offering a small antenna aperture, a narrow beam, a large bandwidth, high Doppler frequency, and good anti-stealth performance, and can serve as a preferred option.
Photogrammetry refers to processing captured digital images with image-processing techniques to obtain information about the measured object. Photogrammetry techniques include, but are not limited to, the depth camera method, the binocular camera method, and the multi-view camera method. Compared with a traditional camera, a depth camera adds depth measurement, so it can perceive the surrounding environment and its changes more conveniently and accurately; depth cameras are mainly applied to three-dimensional modeling, autonomous driving, robot navigation, face unlocking on mobile phones, motion-sensing games, and the like. Depth cameras mainly include binocular depth cameras, structured-light depth cameras, and ToF (Time of Flight) depth cameras. The binocular and multi-view camera methods use multiple images from different viewing angles, solve for depth from parallax, and can train a model for estimating image depth, such as a Gaussian Markov random field model, by means of machine learning. Photogrammetry is powerful, easy to operate, accurate, intelligent, and easy to carry and move; it has good vibration resistance, operates without contact, and supports highly automated evaluation and dynamic measurement; it interfaces with computer-aided design and analysis software so that results are obtained quickly; and it is little affected by temperature, making it suitable for measuring products with complex shapes in environments with large temperature variations.
The manner of detecting obstacles is not limited in the present application, and the above techniques may be used alone or in combination. For example, combining millimeter-wave radar with a depth camera can compensate for radar's relatively poor detection of non-metallic objects.
In this embodiment, the obstacle information includes, but is not limited to, the distance, distance change rate or radial velocity, orientation, and height of the obstacle relative to the unmanned aerial vehicle, as well as the curved-surface structure, external dimensions, and relative position of the object.
The execution unit 320 is configured to determine the obstacle points and non-obstacle points contained in the visible space of the unmanned aerial vehicle according to the obstacle information, and to determine the obstacle occupation probability of each block in the unmanned aerial vehicle navigation map according to the obstacle points and the non-obstacle points.
The visible space of the drone may be the maximum range of airspace that the sensors of the drone can detect, or may be a predetermined range of airspace relative to the drone within the maximum range of airspace that can be detected.
The maximum range of airspace that the unmanned aerial vehicle can detect can be determined from its field of view (FOV). In this embodiment, the field angle can be used to describe the field of view of the unmanned aerial vehicle and hence its visible space: with the sensor as the vertex, the angle formed by the two edges of the maximum range through which an image of the target to be detected can pass through the lens is called the field angle. The size of the field angle determines the field of view of the sensor; the larger the field angle, the larger the field of view, and thus the larger the visible space of the unmanned aerial vehicle. In general, the unmanned aerial vehicle detects obstacle information in real time during flight; if the position of an obstacle relative to the unmanned aerial vehicle is outside this angular range, the obstacle cannot be captured by the lens. In this application, the unmanned aerial vehicle generates the navigation map from obstacle information obtained within its visible space; obstacles outside the visible space are therefore not considered.
The airspace within a predetermined range relative to the unmanned aerial vehicle may be a part of the maximum detectable airspace; for example, a distance may be preset, and the portion of the maximum detectable airspace lying within that distance of the unmanned aerial vehicle is taken as its visible space. Determining the obstacle points and non-obstacle points contained in the visible space of the unmanned aerial vehicle according to the obstacle information means marking the points in the visible space as either obstacle points or non-obstacle points.
The visible space of the unmanned aerial vehicle can be regarded as a set of infinitely many points, and the representation of a point is not limited in this application; Cartesian-coordinate form and vector form are both possible. Within the visible space, points at which obstacles exist are marked as obstacle points, and points where no obstacle exists are marked as non-obstacle points; for example, the points may be marked in binary, with obstacle points marked 0 and non-obstacle points marked 1.
As mentioned above, after the obstacle points and non-obstacle points are obtained, the navigation map may be generated from them. The base map may be generated by one prior-art technique or a combination of several; for example, the obstacle points and non-obstacle points determine which positions in the navigation map contain obstacles and which do not, and the navigation map is then divided into obstacle areas and non-obstacle areas.
In this embodiment, multiple detection results may be combined, using a probability-update calculation, to determine the final probability that a point is an obstacle point and the corresponding decision. For example, if a certain point is detected 5 times, judged to be an obstacle point 3 times and a non-obstacle point 2 times, then by the basic principles of probability theory the probability that it is an obstacle point is 60%, and it is judged to be an obstacle point.
Furthermore, to improve the accuracy of the navigation map, it may be divided into a plurality of blocks, which compose the navigation map in a certain order and with certain connection relationships. The division may use any one or more prior-art techniques, such as the cell decomposition method: briefly, a line perpendicular to the X-axis of the absolute coordinate system is swept from the left boundary of the local planning map to the right boundary, blocks are generated by detecting changes in the connectivity of the scan line, and after division the navigation map consists of the resulting blocks.
In an embodiment of the present application, in the above apparatus, the obtaining unit 310 is configured to determine an initial obstacle point according to spatial information detected by the unmanned aerial vehicle sensor, and to determine the coordinates of the initial obstacle point in the three-dimensional space coordinate system according to the pose of the unmanned aerial vehicle.
In an embodiment of the present application, in the above apparatus, the visual space is obtained by projecting the field of view of the drone into the three-dimensional space coordinate system; the execution unit 320 is configured to divide the visible space into a plurality of angle spaces according to a preset angle resolution with the viewpoint as a base point; if the angle space does not contain the initial obstacle point, all points in the angle space are used as non-obstacle points; and if the angle space contains the initial obstacle point, taking the initial obstacle point closest to the viewpoint coordinate as an obstacle point, and taking all points between the obstacle point and the viewpoint in the angle space as non-obstacle points.
In an embodiment of the application, in the above apparatus, the unmanned aerial vehicle navigation map is a two-dimensional map, wherein the executing unit 320 is configured to project the obstacle point and the non-obstacle point into the unmanned aerial vehicle navigation map respectively, increase a first weight value for the obstacle occupation probability of the block where the projection point of the obstacle point is located, and decrease a second weight value for the obstacle occupation probability of the block where the projection point of the non-obstacle point is located.
In an embodiment of the present application, in the above apparatus, the executing unit 320 is further configured to count the three-dimensional coordinate information of the obstacle points projected into the target block within a preset time period, to determine the two-dimensional coordinates of the obstacle of the target block from the weighted mean of the counted three-dimensional coordinate information, and to take the maximum height among the counted three-dimensional coordinate information as the obstacle height attribute of the target block.
In an embodiment of the present application, in the above apparatus, the executing unit 320 is further configured to clear the obstacle height attribute of the block with the obstacle occupation probability not greater than 0.
In one embodiment of the present application, in the above apparatus, the drone navigation map is a local planning map related to the location of the drone; the executing unit 320 is further configured to translate the unmanned aerial vehicle navigation map under the condition that the distance between the unmanned aerial vehicle position coordinate and the central point coordinate of the unmanned aerial vehicle navigation map is greater than a preset threshold value, so that the unmanned aerial vehicle position coordinate is consistent with the central point coordinate of the unmanned aerial vehicle navigation map.
It should be noted that the devices for generating a drone navigation map in the foregoing embodiments may respectively be used to execute the methods for generating a drone navigation map in the foregoing embodiments, so the details are not repeated here one by one.
According to the technical scheme, obstacle information is acquired; obstacle points and non-obstacle points contained in the visible space of the unmanned aerial vehicle are determined according to the obstacle information; and the obstacle occupation probability of each block in the unmanned aerial vehicle navigation map is determined according to the obstacle points and the non-obstacle points. The benefit of this application is that, by adopting a probability-update calculation, the algorithm is simple and fast, fluctuation of obstacle detection results and erroneous flight-path planning caused by false detections are largely avoided, and the application scenarios of the unmanned aerial vehicle are significantly expanded.
It should be noted that:
the algorithms and displays presented herein are not inherently related to any particular computer, virtual machine, or other apparatus. Various general purpose devices may be used with the teachings herein. The required structure for constructing such a device will be apparent from the description above. In addition, this application is not directed to any particular programming language. It will be appreciated that a variety of programming languages may be used to implement the teachings of the present application as described herein, and any descriptions of specific languages are provided above to disclose the best modes of the present application.
In the description provided herein, numerous specific details are set forth. However, it is understood that embodiments of the application may be practiced without these specific details. In some instances, well-known methods, structures and techniques have not been shown in detail in order not to obscure an understanding of this description.
Similarly, it should be appreciated that in the foregoing description of exemplary embodiments of the application, various features of the application are sometimes grouped together in a single embodiment, figure, or description thereof for the purpose of streamlining the disclosure and aiding in the understanding of one or more of the various inventive aspects. This method of disclosure, however, should not be interpreted as reflecting an intention that the claimed application requires more features than are expressly recited in each claim. Rather, as the following claims reflect, inventive aspects lie in less than all features of a single foregoing disclosed embodiment. Thus, the claims following the detailed description are hereby expressly incorporated into this detailed description, with each claim standing on its own as a separate embodiment of this application.
Those skilled in the art will appreciate that the modules in the device in an embodiment may be adaptively changed and disposed in one or more devices different from the embodiment. The modules or units or components of the embodiments may be combined into one module or unit or component, and furthermore they may be divided into a plurality of sub-modules or sub-units or sub-components. All of the features disclosed in this specification (including any accompanying claims, abstract and drawings), and all of the processes or elements of any method or apparatus so disclosed, may be combined in any combination, except combinations where at least some of such features and/or processes or elements are mutually exclusive. Each feature disclosed in this specification (including any accompanying claims, abstract and drawings) may be replaced by alternative features serving the same, equivalent or similar purpose, unless expressly stated otherwise.
Furthermore, those skilled in the art will appreciate that although some embodiments described herein include certain features that are included in other embodiments but not others, combinations of features of different embodiments are meant to be within the scope of the application and to form different embodiments. For example, in the following claims, any of the claimed embodiments may be used in any combination.
The various component embodiments of the present application may be implemented in hardware, or in software modules running on one or more processors, or in a combination thereof. Those skilled in the art will appreciate that a microprocessor or Digital Signal Processor (DSP) may be used in practice to implement some or all of the functions of some or all of the components in the generation apparatus of the drone navigation map according to embodiments of the present application. The present application may also be embodied as apparatus or device programs (e.g., computer programs and computer program products) for performing a portion or all of the methods described herein. Such programs implementing the present application may be stored on a computer readable medium or may be in the form of one or more signals. Such a signal may be downloaded from an internet website or provided on a carrier signal or in any other form.
For example, FIG. 4 shows a schematic structural diagram of a drone according to one embodiment of the present application. The drone 400 includes a processor 410 and a memory 420 arranged to store computer-executable instructions (computer readable program code). The memory 420 may be an electronic memory such as a flash memory, an EEPROM (electrically erasable programmable read-only memory), an EPROM, a hard disk, or a ROM. The memory 420 has a storage space 430 storing computer readable program code 431 for performing any of the method steps described above. For example, the storage space 430 may include respective pieces of computer readable program code 431 for implementing the various steps of the above method. The computer readable program code 431 can be read from or written to one or more computer program products, which comprise a program code carrier such as a hard disk, a compact disc (CD), a memory card, or a floppy disk. Such a computer program product is typically a computer readable storage medium, such as the one shown in FIG. 5. FIG. 5 shows a schematic diagram of a computer readable storage medium according to an embodiment of the present application. The computer readable storage medium 500 stores computer readable program code 431 for performing the steps of the method according to the present application, which is readable by the processor 410 of the drone 400. When the computer readable program code 431 is executed by the drone 400, the drone 400 is caused to perform the steps of the method described above; in particular, the computer readable program code 431 stored by the computer readable storage medium may perform the method shown in any of the embodiments described above. The computer readable program code 431 may be compressed in a suitable form.
It should be noted that the above-mentioned embodiments illustrate rather than limit the application, and that those skilled in the art will be able to design alternative embodiments without departing from the scope of the appended claims. In the claims, any reference signs placed between parentheses shall not be construed as limiting the claim. The word "comprising" does not exclude the presence of elements or steps not listed in a claim. The word "a" or "an" preceding an element does not exclude the presence of a plurality of such elements. The application may be implemented by means of hardware comprising several distinct elements, and by means of a suitably programmed computer. In the unit claims enumerating several means, several of these means may be embodied by one and the same item of hardware. The use of the words first, second, third, and so on does not indicate any ordering; these words may be interpreted as names.

Claims (10)

1. A method for generating an unmanned aerial vehicle navigation map, characterized by comprising the following steps:
acquiring obstacle information;
determining obstacle points and non-obstacle points contained in a visual space of the unmanned aerial vehicle according to the obstacle information;
and determining the occupation probability of the obstacles in each block in the unmanned aerial vehicle navigation map according to the obstacle points and the non-obstacle points.
2. The method of claim 1, wherein the acquiring obstacle information comprises:
determining an initial obstacle point according to spatial information detected by an unmanned aerial vehicle sensor;
and determining the coordinates of the initial barrier point in the three-dimensional space coordinate system according to the pose of the unmanned aerial vehicle.
3. The method of claim 2, wherein the visual space is derived from projecting a field of view of a drone into the three-dimensional space coordinate system;
the determining of the obstacle points and the non-obstacle points contained in the visual space of the unmanned aerial vehicle according to the obstacle information includes:
dividing the visual space into a plurality of angle spaces according to a preset angle resolution by taking the viewpoint as a base point;
if the angle space does not contain any initial obstacle point, taking all points in the angle space as non-obstacle points;
and if the angle space contains the initial obstacle point, taking the initial obstacle point closest to the viewpoint coordinate as an obstacle point, and taking all points between the obstacle point and the viewpoint in the angle space as non-obstacle points.
4. The method of claim 1, wherein the drone navigation map is a two-dimensional map, and wherein the determining the obstacle occupation probability of each block in the drone navigation map according to the obstacle points and the non-obstacle points comprises:
respectively projecting the obstacle points and the non-obstacle points into the unmanned aerial vehicle navigation map, increasing the obstacle occupation probability of the block where the projection point of an obstacle point is located by a first weight value, and decreasing the obstacle occupation probability of the block where the projection point of a non-obstacle point is located by a second weight value.
5. The method of claim 4, wherein the method further comprises:
counting three-dimensional coordinate information of the obstacle points projected into a target block within a preset time period;
and determining the two-dimensional coordinates of the obstacle of the target block according to the weighted average of the statistical three-dimensional coordinate information, and taking the maximum height in the statistical three-dimensional coordinate information as the obstacle height attribute of the target block.
6. The method of claim 5, wherein the method further comprises:
and clearing the obstacle height attribute of the block with the obstacle occupation probability not greater than 0.
7. The method of any of claims 1-6, wherein the drone navigation map is a local planning map related to a location of a drone; the method further comprises the following steps:
under the condition that the distance between the unmanned aerial vehicle position coordinate and the central point coordinate of the unmanned aerial vehicle navigation map is greater than a preset threshold value, the unmanned aerial vehicle navigation map is translated to enable the unmanned aerial vehicle position coordinate to be consistent with the central point coordinate of the unmanned aerial vehicle navigation map.
8. An apparatus for generating an unmanned aerial vehicle navigation map, characterized by comprising:
an acquisition unit configured to acquire obstacle information;
an execution unit, configured to determine obstacle points and non-obstacle points contained in the visual space of the unmanned aerial vehicle according to the obstacle information, and to determine the obstacle occupation probability of each block in the unmanned aerial vehicle navigation map according to the obstacle points and the non-obstacle points.
9. A drone, wherein the drone includes: a processor; and a memory arranged to store computer-executable instructions that, when executed, cause the processor to perform the method of any one of claims 1-7.
10. A computer readable storage medium, wherein the computer readable storage medium stores one or more programs which, when executed by a processor, implement the method of any of claims 1-7.
CN202010244115.0A 2020-03-31 2020-03-31 Unmanned aerial vehicle and generation method and device of navigation map thereof Active CN113465614B (en)

Priority Applications (1)

Application Number Priority Date Filing Date Title
CN202010244115.0A CN113465614B (en) 2020-03-31 2020-03-31 Unmanned aerial vehicle and generation method and device of navigation map thereof

Publications (2)

Publication Number Publication Date
CN113465614A 2021-10-01
CN113465614B CN113465614B (en) 2023-04-18

Family

ID=77865532

Family Applications (1)

Application Number Title Priority Date Filing Date
CN202010244115.0A Active CN113465614B (en) 2020-03-31 2020-03-31 Unmanned aerial vehicle and generation method and device of navigation map thereof

Country Status (1)

Country Link
CN (1) CN113465614B (en)

Similar Documents

Publication Publication Date Title
US11455565B2 (en) Augmenting real sensor recordings with simulated sensor data
US10260892B2 (en) Data structure of environment map, environment map preparing system and method, and environment map updating system and method
AU2016327918B2 (en) Unmanned aerial vehicle depth image acquisition method, device and unmanned aerial vehicle
JP6239664B2 (en) Ambient environment estimation apparatus and ambient environment estimation method
CN111192295B (en) Target detection and tracking method, apparatus, and computer-readable storage medium
CN103424112B (en) A kind of motion carrier vision navigation method auxiliary based on laser plane
Schmid et al. Dynamic level of detail 3d occupancy grids for automotive use
CN111275816B (en) Method for acquiring point cloud data and related equipment
CN113378760A (en) Training target detection model and method and device for detecting target
CN111257882B (en) Data fusion method and device, unmanned equipment and readable storage medium
CN112631266A (en) Method and device for mobile robot to sense obstacle information
CN113111513B (en) Sensor configuration scheme determining method and device, computer equipment and storage medium
CN109903367A (en) Construct the method, apparatus and computer readable storage medium of map
CN113465614B (en) Unmanned aerial vehicle and generation method and device of navigation map thereof
CN112405526A (en) Robot positioning method and device, equipment and storage medium
CN114170499A (en) Target detection method, tracking method, device, visual sensor and medium
CN111553342A (en) Visual positioning method and device, computer equipment and storage medium
CN113448340B (en) Unmanned aerial vehicle path planning method and device, unmanned aerial vehicle and storage medium
Díaz-Vilariño et al. From point clouds to 3D isovists in indoor environments
CN116863325A (en) Method for multiple target detection and related product
KR20220000331A (en) Apparatus and Method for Creating Indoor Spatial Structure Map through Dynamic Object Filtering
US20240087094A1 (en) Systems And Methods For Combining Multiple Depth Maps
CN111414848B (en) Full-class 3D obstacle detection method, system and medium
CN112130569B (en) Ultrasonic wave range setting method and system
CN117213463A (en) Grid map updating method, device and storage medium based on point cloud detection

Legal Events

Date Code Title Description
PB01 Publication
SE01 Entry into force of request for substantive examination
GR01 Patent grant