CN113126640B - Obstacle detection method and device for unmanned aerial vehicle, unmanned aerial vehicle and storage medium
- Publication number: CN113126640B (application number CN201911422360.XA)
- Authority: CN (China)
- Prior art keywords: distance, unmanned aerial vehicle, determining, reference interval
- Legal status: Active (the legal status is an assumption and is not a legal conclusion; Google has not performed a legal analysis and makes no representation as to the accuracy of the status listed)
Classifications
- G05D1/0808 — Control of attitude, i.e. control of roll, pitch, or yaw, specially adapted for aircraft (G: Physics; G05: Controlling, Regulating; G05D: Systems for controlling or regulating non-electric variables; G05D1/00: Control of position, course, altitude or attitude of land, water, air or space vehicles, e.g. using automatic pilots)
- G05D1/101 — Simultaneous control of position or course in three dimensions, specially adapted for aircraft
Abstract
The application discloses an obstacle detection method and device for an unmanned aerial vehicle, the unmanned aerial vehicle and a storage medium. The method comprises the following steps: determining an illumination state according to image data; under the condition that the illumination state is abnormal, acquiring multi-frame radar point cloud data detected in a first time period; predicting a reference interval of the distance between the unmanned aerial vehicle and the nearest obstacle in a specified direction according to the multi-frame radar point cloud data; and determining the distance between the unmanned aerial vehicle and the nearest obstacle in the specified direction according to the reference interval. The technical scheme has the advantages that, when the illumination state is determined to be abnormal according to the image data, the method can quickly switch to an obstacle detection mode based on radar point cloud data, and the distance is determined from multi-frame rather than single-frame radar point cloud data, effectively achieving accurate obstacle detection in scenes with complex illumination.
Description
Technical Field
The application relates to the field of unmanned aerial vehicles, and in particular to an obstacle detection method and device for an unmanned aerial vehicle, an unmanned aerial vehicle and a storage medium.
Background
In recent years, unmanned aerial vehicles have flown indoors more and more frequently, and large indoor venues are often characterized by obvious illumination changes and complicated overhead obstacle structures. For example, fig. 5 shows an upward view shot indoors by an unmanned aerial vehicle; it can be seen that the steel beam structure above the unmanned aerial vehicle is complicated, and there is a strong light source.
Among the obstacle detection means commonly used for unmanned aerial vehicles, traditional infrared ranging is easily influenced by illumination and is not suitable for scenes where the illumination is too strong or too dark; conventional multi-view vision, when the illumination changes obviously, loses one or more frames of ranging data due to the influence of exposure adjustment.
Therefore, there is a need for an improved obstacle detection scheme for drones.
Disclosure of Invention
In view of the above, the present application is proposed in order to provide an obstacle detection method, apparatus, drone and storage medium for a drone that overcome or at least partially solve the above problems.
According to an aspect of the present application, there is provided an obstacle detection method for a drone, including:
determining an illumination state according to the image data;
under the condition of abnormal illumination state, acquiring multi-frame radar point cloud data detected in a first time period;
predicting a reference interval of the distance between the unmanned aerial vehicle and the nearest obstacle in a specified direction according to the multi-frame radar point cloud data;
and determining the distance between the unmanned aerial vehicle and the nearest obstacle in the specified direction according to the reference interval.
Optionally, the determining the illumination state from the image data includes:
generating a gray level histogram of a gray level image, determining a gray level change interval according to the gray level histogram, and determining an illumination state according to the gray level change interval and a gray level threshold;
and/or,
generating a depth histogram of the depth image, determining the number of effective detection points according to the depth histogram, and determining the illumination state according to the number of effective detection points and an effective threshold value.
Optionally, the gray level threshold includes an over-bright threshold and an over-dark threshold, and the determining the illumination state according to the gray level change interval and the gray level threshold includes:
if the lower limit value of the gray level change interval is larger than the over-bright threshold, the illumination state is an over-bright abnormality;
if the upper limit value of the gray level change interval is smaller than the over-dark threshold, the illumination state is an over-dark abnormality;
the method further comprises the following steps:
and correspondingly adjusting the exposure value and/or the gain of the camera for acquiring the gray-scale image according to the determined illumination state.
Optionally, the generating the depth histogram of the depth image includes:
and generating a depth histogram of the distance between the unmanned aerial vehicle and the obstacle in the specified direction based on the attitude and the depth image of the unmanned aerial vehicle.
Optionally, the predicting, according to the multi-frame radar point cloud data, a reference interval of a distance between the drone and a nearest obstacle in a specified direction includes:
and respectively determining the minimum value of the distance between the unmanned aerial vehicle and the nearest obstacle in the designated direction according to each frame of point cloud, and taking the minimum interval containing a plurality of the minimum values as the reference interval.
Optionally, the determining, according to the reference interval, a distance between the drone and a nearest obstacle in a specified direction includes:
taking the upper limit value of the reference interval as the distance between the unmanned aerial vehicle and the nearest obstacle in the designated direction;
or,
determining a distance predicted value according to a linear relation formed by the distances between the unmanned aerial vehicle and the nearest obstacle in the specified direction at each moment in a second time period; if the distance predicted value is larger than the upper limit value of the reference interval, taking the upper limit value of the reference interval as the distance between the unmanned aerial vehicle and the nearest obstacle in the specified direction; if the distance predicted value is smaller than the lower limit value of the reference interval, taking the lower limit value of the reference interval as the distance between the unmanned aerial vehicle and the nearest obstacle in the specified direction; and if the distance predicted value falls within the reference interval, taking the distance predicted value as the distance between the unmanned aerial vehicle and the nearest obstacle in the specified direction.
Optionally, the method further comprises:
determining the safety distance of the unmanned aerial vehicle in the specified direction according to the distance between the unmanned aerial vehicle and the nearest obstacle in the specified direction, so that the unmanned aerial vehicle flies within the safety distance.
According to another aspect of the application, there is provided an obstacle detection device for a drone, comprising:
the illumination state unit is used for determining an illumination state according to the image data;
the acquisition unit is used for acquiring multi-frame radar point cloud data detected in a first time period under the condition that the illumination state is abnormal;
the prediction unit is used for predicting a reference interval of the distance between the unmanned aerial vehicle and the nearest obstacle in the specified direction according to the multi-frame radar point cloud data;
and the determining unit is used for determining the distance between the unmanned aerial vehicle and the nearest obstacle in the specified direction according to the reference interval.
Optionally, the illumination state unit is configured to generate a gray level histogram of a gray level image, determine a gray level change interval according to the gray level histogram, and determine an illumination state according to the gray level change interval and a gray level threshold; and/or generating a depth histogram of the depth image, determining the number of effective detection points according to the depth histogram, and determining the illumination state according to the number of effective detection points and an effective threshold value.
Optionally, the grayscale threshold includes an over-bright threshold and an over-dark threshold, and the illumination state unit is configured to determine that the illumination state is an over-bright abnormality if the lower limit value of the grayscale change interval is greater than the over-bright threshold, and an over-dark abnormality if the upper limit value of the grayscale change interval is smaller than the over-dark threshold;
the device also includes:
and the camera adjusting unit is used for correspondingly adjusting the exposure value and/or the gain of the camera for acquiring the gray-scale image according to the determined illumination state.
Optionally, the illumination status unit is configured to generate a depth histogram of a distance between the drone and an obstacle in the specified direction based on the pose and the depth image of the drone.
Optionally, the prediction unit is configured to determine a minimum value of a distance between the unmanned aerial vehicle and a nearest obstacle in the designated direction according to each frame of point cloud, and use a minimum interval including a plurality of the minimum values as the reference interval.
Optionally, the determining unit is configured to use an upper limit value of the reference interval as a distance between the unmanned aerial vehicle and a nearest obstacle in a specified direction; or determining a distance predicted value according to a linear relation formed by the distance between the unmanned aerial vehicle at each moment and the nearest obstacle in the specified direction in the second time period; if the predicted distance value is larger than the upper limit value of the reference interval, taking the upper limit value of the reference interval as the distance between the unmanned aerial vehicle and the nearest obstacle in the designated direction; if the predicted distance value is smaller than the lower limit value of the reference interval, taking the lower limit value of the reference interval as the distance between the unmanned aerial vehicle and the nearest obstacle in the specified direction; and if the distance predicted value falls into the reference interval, taking the distance predicted value as the distance between the unmanned aerial vehicle and the nearest obstacle in the specified direction.
Optionally, the apparatus further comprises: and the obstacle avoidance unit is used for determining the safety distance of the unmanned aerial vehicle in the designated direction according to the distance between the unmanned aerial vehicle and the nearest obstacle in the designated direction so that the unmanned aerial vehicle flies in the safety distance.
According to yet another aspect of the application, there is provided a drone comprising: a processor; and a memory arranged to store computer executable instructions that, when executed, cause the processor to perform a method as any one of the above.
According to a further aspect of the application, there is provided a computer readable storage medium, wherein the computer readable storage medium stores one or more programs which, when executed by a processor, implement a method as in any above.
According to the technical scheme, when the illumination state is determined to be abnormal according to the image data, a reference interval of the distance between the unmanned aerial vehicle and the nearest obstacle in the specified direction is predicted from the multi-frame radar point cloud data detected in the first time period, and the distance between the unmanned aerial vehicle and the nearest obstacle in the specified direction is then determined according to the reference interval. The technical scheme has the advantages that, when the illumination state is abnormal, the method can quickly switch to an obstacle detection mode based on radar point cloud data; because the distance is determined from multi-frame rather than single-frame radar point cloud data, accurate obstacle detection in scenes with complex illumination is effectively achieved, real-time continuous output of the distance to obstacles in the specified direction is guaranteed, the data continuity and robustness are excellent, and technical support is provided for business fields such as logistics and takeaway delivery.
The above description is only an overview of the technical solutions of the present application. In order that the technical means of the present application may be more clearly understood, and in order to make the above and other objects, features and advantages of the present application more apparent, the detailed description of the present application is given below.
Drawings
Various additional advantages and benefits will become apparent to those of ordinary skill in the art upon reading the following detailed description of the preferred embodiments. The drawings are only for purposes of illustrating the preferred embodiments and are not to be construed as limiting the application. Also, like reference numerals are used to refer to like parts throughout the drawings. In the drawings:
fig. 1 shows a schematic flow diagram of an obstacle detection method for a drone according to one embodiment of the present application;
fig. 2 shows a schematic structural diagram of an obstacle detecting device for a drone according to one embodiment of the present application;
fig. 3 shows a schematic structural diagram of a drone according to one embodiment of the present application;
FIG. 4 shows a schematic structural diagram of a computer-readable storage medium according to an embodiment of the present application;
fig. 5 shows a top view taken indoors by the drone.
Detailed Description
Considering the limitations of traditional infrared obstacle avoidance and multi-view vision obstacle avoidance in complex illumination environments, this application provides a scheme that combines image data and radar point cloud data for obstacle detection: the image data is mainly used to judge the illumination state, and the radar point cloud data is used to determine the distance to obstacles. The image data can be obtained by a multi-view camera, and the radar point cloud data can be obtained by a millimeter-wave radar.
The reason millimeter-wave radar alone is not adopted here is that, although millimeter-wave radar is very robust to illumination changes, in a complex environment it returns obstacle distances at different positions along with the reflected-signal intensities, so relying on the millimeter-wave radar alone cannot accurately locate the closest distance to the obstacle above.
Exemplary embodiments of the present application will be described in more detail below with reference to the accompanying drawings. While exemplary embodiments of the present application are shown in the drawings, it should be understood that the present application may be embodied in various forms and should not be limited to the embodiments set forth herein. Rather, these embodiments are provided so that this disclosure will be thorough and complete, and will fully convey the scope of the disclosure to those skilled in the art.
Fig. 1 shows a schematic flow diagram of an obstacle detection method for a drone according to one embodiment of the present application. As shown in fig. 1, the method includes:
step S110, the illumination state is determined according to the image data. The image data may be acquired by means of a binocular or other type of multi-purpose camera, and the various cameras may be deployed on a drone.
Specifically, the image data may be several frames of images taken by a camera in a certain direction, and the illumination state is then the illumination state in that direction; in an indoor scene, this may be the illumination state in the vertically upward direction.
Step S120, under the condition that the illumination state is abnormal, multi-frame radar point cloud data obtained through detection in the first time period are obtained.
For example, if the purpose is to determine the distance between the drone at the current time and the nearest obstacle in the specified direction, radar point cloud data detected over a preceding period of time may be acquired. The radar point cloud data can be obtained by millimeter-wave radar detection: if an obstacle exists in the specified direction, there are corresponding detection points, and the collection of these detection points is the point cloud. The point cloud data may specifically include information such as the three-dimensional coordinates and reflection intensity of each detection point.
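For illustration, a minimal sketch (in Python, assuming NumPy) of how such a multi-frame point cloud buffer might be held; the RadarFrame fields and the collect_recent_frames helper are hypothetical names introduced here, not part of the patent.

```python
from dataclasses import dataclass
from typing import List
import numpy as np

@dataclass
class RadarFrame:
    """One frame of radar point cloud data (hypothetical layout)."""
    timestamp: float       # detection time of this frame
    points: np.ndarray     # (N, 3) three-dimensional coordinates of detection points
    intensity: np.ndarray  # (N,) reflected-signal intensity of each detection point

def collect_recent_frames(buffer: List[RadarFrame], t_now: float,
                          first_period: float) -> List[RadarFrame]:
    """Return the multi-frame point clouds detected within the first time period."""
    return [f for f in buffer if t_now - first_period <= f.timestamp <= t_now]
```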
And S130, predicting a reference interval of the distance between the unmanned aerial vehicle and the nearest obstacle in the specified direction according to the multi-frame radar point cloud data.
The multi-frame radar point cloud data are detected at different times; by analyzing them, a reference interval of the distance between the unmanned aerial vehicle and the nearest obstacle in the specified direction can be determined. One of the innovations of this technical scheme is that a reference interval is determined and the distance is determined according to it; this balances the computing capacity of the unmanned aerial vehicle against the required prediction precision, so the finally determined distance has good continuity and robustness.
And step S140, determining the distance between the unmanned aerial vehicle and the nearest obstacle in the specified direction according to the reference interval.
Therefore, the method shown in fig. 1 has the advantages that, when the illumination state is determined to be abnormal according to the image data, it can quickly switch to an obstacle detection mode based on radar point cloud data; because the distance is determined from multi-frame rather than single-frame radar point cloud data, accurate obstacle detection in scenes with complex illumination is effectively achieved, real-time continuous output of the distance to obstacles in the specified direction is guaranteed, the data continuity and robustness are excellent, and technical support is provided for business fields such as logistics and takeaway delivery.
In an embodiment of the application, the determining the illumination state according to the image data includes: generating a gray level histogram of a gray level image, determining a gray level change interval according to the gray level histogram, and determining an illumination state according to the gray level change interval and a gray level threshold; and/or generating a depth histogram of the depth image, determining the number of effective detection points according to the depth histogram, and determining the illumination state according to the number of effective detection points and an effective threshold value.
The grayscale image can be directly captured by a camera, or converted from an original color image directly captured by the camera. Gray values range from 0 to 255, with 255 representing white and 0 representing black. As can be seen from fig. 5, the gray value is substantially close to 255 near the light source.
Therefore, determination of the illumination state can be efficiently performed from the grayscale image. The embodiment of the application also provides a plurality of better judging modes.
In one mode, a gray level histogram of a gray level image is generated firstly, and the distribution of gray level values is determined; and determining a gray level change interval according to the gray level histogram, and finally determining an illumination state according to the gray level change interval and a gray level threshold value.
Alternatively, the number of valid detection points may also be determined from a depth histogram of the depth image; for example, which detection points are valid may be determined by a depth threshold. If the number of valid detection points is small, it is difficult to obtain an accurate obstacle detection result from them. For example, the effective threshold may be set to the product of the total number of detection points in the depth image and k1, where k1 is an empirical parameter less than 1; if the number of valid detection points is less than the effective threshold, there are too few valid detection points.
When the depth histogram is generated, the depth image may be downsampled or cropped to reduce the amount of calculation.
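As a concrete illustration of this check, a sketch under the stated assumptions; the depth-valid range, the bin count, the stride and the value of k1 are illustrative choices, not values given in the application.

```python
import numpy as np

def enough_valid_points(depth_image: np.ndarray, k1: float = 0.3,
                        min_depth: float = 0.1, max_depth: float = 20.0,
                        stride: int = 4) -> bool:
    """Build a depth histogram from a downsampled depth image and compare the
    number of valid detection points against the effective threshold."""
    sampled = depth_image[::stride, ::stride]        # downsample to reduce calculation
    hist, _ = np.histogram(sampled, bins=64, range=(min_depth, max_depth))
    valid_points = int(hist.sum())                   # points inside the depth-valid range
    effective_threshold = k1 * sampled.size          # k1 is an empirical parameter < 1
    return valid_points >= effective_threshold       # too few => treat vision as unreliable
```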
In an embodiment of the application, the gray threshold includes an over-bright threshold and an over-dark threshold, and determining the illumination state according to the gray level change interval and the gray threshold includes: if the lower limit value of the gray level change interval is greater than the over-bright threshold, the illumination state is an over-bright abnormality; if the upper limit value of the gray level change interval is smaller than the over-dark threshold, the illumination state is an over-dark abnormality. The method further comprises: adjusting the exposure value and/or the gain of the camera that acquires the grayscale image according to the determined illumination state.
For example, suppose an over-bright threshold of 200 and an over-dark threshold of 50 are set. If the lower limit value of the gray level change interval of an image is 220, the whole image is almost white, which indicates that the illumination is too strong: an over-bright abnormality. Similarly, if the upper limit value of the gray level change interval of an image is 49, the entire image is substantially dark: an over-dark abnormality. It is easy to understand that whether the image is over-bright or over-dark, it cannot truly reflect the current environment, so the accuracy of multi-view visual obstacle avoidance based on such an image is not high. It should be noted that the over-bright threshold of 200 and the over-dark threshold of 50 are only exemplary; in other examples, the thresholds may be set according to actual requirements.
After determining whether the abnormality is an over-bright abnormality or an over-dark abnormality, the exposure value and/or the gain of the camera that acquires the grayscale image may be adjusted: for example, if the abnormality is over-dark, the exposure value and the camera gain may be increased; if it is over-bright, they may be decreased.
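The grayscale branch can be sketched as follows; the tail-trimming fraction used to read the change interval off the histogram, and the adjustment factor of 1.5, are assumptions made for illustration (the patent only says to adjust the exposure value and/or gain correspondingly).

```python
import numpy as np

OVER_BRIGHT, OVER_DARK, NORMAL = "over_bright", "over_dark", "normal"

def illumination_state(gray: np.ndarray, bright_thresh: int = 200,
                       dark_thresh: int = 50, tail: float = 0.001) -> str:
    """Read the gray level change interval [lo, hi] from the histogram and
    compare it with the over-bright / over-dark thresholds."""
    hist, _ = np.histogram(gray, bins=256, range=(0, 256))
    cdf = np.cumsum(hist) / hist.sum()
    lo = int(np.searchsorted(cdf, tail))                    # lower limit of change interval
    hi = min(int(np.searchsorted(cdf, 1.0 - tail)), 255)    # upper limit of change interval
    if lo > bright_thresh:
        return OVER_BRIGHT   # nearly the whole image is close to white
    if hi < dark_thresh:
        return OVER_DARK     # nearly the whole image is close to black
    return NORMAL

def adjust_camera(state: str, camera) -> None:
    """Nudge exposure and gain in the direction implied by the abnormality.
    The `camera` object with exposure/gain attributes is hypothetical."""
    if state == OVER_DARK:
        camera.exposure *= 1.5
        camera.gain *= 1.5
    elif state == OVER_BRIGHT:
        camera.exposure /= 1.5
        camera.gain /= 1.5
```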
In an embodiment of the application, the generating the depth histogram of the depth image includes: and generating a depth histogram of the distance between the unmanned aerial vehicle and the obstacle in the specified direction based on the attitude and the depth image of the unmanned aerial vehicle.
Because the unmanned aerial vehicle cannot constantly keep a completely horizontal attitude during flight, it cannot be guaranteed to stay parallel to any coordinate axis of the world coordinate system. Therefore, to obtain an accurate distance in the specified direction, the depth histogram can be generated according to the attitude of the unmanned aerial vehicle and the depth image: specifically, the angles between the shooting view and each coordinate axis of the world coordinate system can be determined from the attitude of the unmanned aerial vehicle, and the depth image can be projectively transformed accordingly.
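One way this projection could look, assuming a pinhole camera with intrinsic matrix K and an attitude rotation R_wc from the camera frame to the world frame (both names introduced here for illustration):

```python
import numpy as np

def distances_along_direction(depth_image: np.ndarray, K: np.ndarray,
                              R_wc: np.ndarray,
                              direction: np.ndarray = np.array([0.0, 0.0, 1.0])) -> np.ndarray:
    """Back-project the depth image, rotate the points by the drone attitude,
    and keep the component along the specified direction (vertically upward
    by default); a histogram of the result is the desired depth histogram."""
    h, w = depth_image.shape
    u, v = np.meshgrid(np.arange(w), np.arange(h))
    z = depth_image.ravel()
    ok = z > 0                                   # drop invalid (zero-depth) pixels
    rays = np.linalg.inv(K) @ np.vstack([u.ravel(), v.ravel(), np.ones(h * w)])
    pts_cam = rays[:, ok] * z[ok]                # 3D points in the camera frame
    pts_world = R_wc @ pts_cam                   # rotate by the drone attitude
    return pts_world.T @ direction               # distances along the specified direction
```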
It should be noted that the camera and the radar may be calibrated in advance; for example, a rotation matrix between the camera coordinate system and the radar coordinate system may be determined. This facilitates subsequent data processing, for example projecting the radar point cloud data into the camera coordinate system to obtain the correspondence between detection points and pixel points.
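A sketch of that projection step, assuming a pre-calibrated rotation R_cr and translation t_cr from the radar frame to the camera frame and a pinhole intrinsic matrix K (all illustrative names):

```python
import numpy as np

def radar_to_pixels(points_radar: np.ndarray, R_cr: np.ndarray,
                    t_cr: np.ndarray, K: np.ndarray) -> np.ndarray:
    """Project radar detection points into the camera image to obtain the
    correspondence between detection points and pixel points."""
    pts_cam = (R_cr @ points_radar.T).T + t_cr   # (N, 3) points in the camera frame
    uvw = (K @ pts_cam.T).T                      # pinhole projection
    return uvw[:, :2] / uvw[:, 2:3]              # pixel coordinates (u, v)
```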
In an embodiment of the application, predicting a reference interval of the distance between the drone and the nearest obstacle in the specified direction according to the multi-frame radar point cloud data includes: respectively determining the minimum value of the distance between the unmanned aerial vehicle and the nearest obstacle in the specified direction from each frame of point cloud, and taking the minimum interval containing all of the minimum values as the reference interval.
For example, for n frames of radar point cloud data, the minimum value Smin of the distance between the unmanned aerial vehicle and the nearest obstacle in the specified direction is determined for each frame in turn; this can be determined from the coordinates and attitude of the unmanned aerial vehicle and the three-dimensional coordinates in the radar point cloud data, giving Smin1, Smin2, ..., Sminn. Sorting Smin1, Smin2, ..., Sminn then determines the interval [Smin_min, Smin_max].
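A sketch of this step, reusing the RadarFrame layout from the earlier sketch and assuming the point clouds are already expressed in world coordinates (otherwise the drone's coordinates and attitude must be applied first, as the text notes):

```python
import numpy as np

def reference_interval(frames, drone_position: np.ndarray,
                       direction: np.ndarray = np.array([0.0, 0.0, 1.0])):
    """Take the per-frame minimum distance Smin to the nearest obstacle in the
    specified direction, then return the smallest interval containing all the
    minima: [Smin_min, Smin_max]."""
    minima = []
    for f in frames:
        d = (f.points - drone_position) @ direction  # distances along the direction
        d = d[d > 0]                                 # keep obstacles on that side
        if d.size:
            minima.append(float(d.min()))            # Smin of this frame
    return (min(minima), max(minima)) if minima else None
```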
In an embodiment of the application, the determining, according to the reference interval, a distance between the drone and a nearest obstacle in the designated direction includes: taking the upper limit value of the reference interval as the distance between the unmanned aerial vehicle and the nearest obstacle in the specified direction; or determining a distance predicted value according to a linear relation formed by the distance between the unmanned aerial vehicle at each moment in the second time period and the nearest barrier in the designated direction; if the distance predicted value is larger than the upper limit value of the reference interval, taking the upper limit value of the reference interval as the distance between the unmanned aerial vehicle and the nearest obstacle in the specified direction; if the distance predicted value is smaller than the lower limit value of the reference interval, taking the lower limit value of the reference interval as the distance between the unmanned aerial vehicle and the nearest obstacle in the specified direction; and if the distance predicted value falls into the reference interval, taking the distance predicted value as the distance between the unmanned aerial vehicle and the nearest obstacle in the specified direction.
Several ways of determining the distance between the drone and the closest obstacle in the specified direction from the reference interval are presented here. Since this distance can be determined at each moment, the distances determined over a preceding period of time can serve as a reference for the current moment.
For that preceding period of time, one of the following situations holds: the illumination-state abnormality has already persisted for a period of time, or it has just begun at the current moment. It is easy to understand that if the illumination state is normal, the distance between the unmanned aerial vehicle and the nearest obstacle in the specified direction can be detected rather accurately using multi-view vision, for example from the depth points in the depth image.
Of course, whichever method is used, some points in the corresponding data will inevitably be misdetected. Therefore, when the illumination state is normal, the product of an empirical parameter k2 and the total number of depth points may be used for filtering, and the distance may be determined using the depth points remaining after filtering.
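The application does not spell out the filtering operation; one plausible reading, sketched here purely as an assumption, is to discard the nearest k2 * N depth points as likely misdetections before taking the closest remaining depth:

```python
import numpy as np

def nearest_depth_filtered(depths: np.ndarray, k2: float = 0.01) -> float:
    """Skip the nearest k2 * N depth points (treated as misdetections) and
    return the first depth beyond them as the obstacle distance."""
    d = np.sort(depths[depths > 0])
    skip = int(k2 * d.size)                      # k2 is an empirical parameter
    return float(d[min(skip, d.size - 1)])
```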
Consider the situation where the illumination-state abnormality has persisted for a period of time. In this case the distance at the previous moment was also determined from a reference interval, so if the data at the previous moment contained an error and the distance at this moment is determined in the same way, the error may accumulate. Therefore, the upper limit value of the reference interval can be used directly as the distance between the unmanned aerial vehicle and the nearest obstacle in the specified direction.
For the case where the illumination state becomes abnormal only at the current moment, the distance at the previous moment was determined by multi-view vision and can be used as a reference. A distance predicted value can then be determined from the linear relation formed by the distances between the unmanned aerial vehicle and the nearest obstacle in the specified direction at each moment in the second time period: for example, the distance determined at time t-1 plus the linear increment of the distance at time t is taken as the distance at time t, where the linear increment is calculated from the linear relation.
To simplify the calculation, the linear increment of the distance at time t can be taken to be approximately the same as the linear increment at time t-1, and the latter is the difference between the distance determined at time t-1 and the distance determined at time t-2; that is, the distance at time t can be determined from the distances determined at times t-2 and t-1. In a specific example, if the distance determined at time t-2 is 15 cm and the distance determined at time t-1 is 14 cm, the distance determined at time t is 14 + (14 - 15) = 13 cm.
Then the relation between the distance predicted value and the reference interval is considered: the predicted value is limited by the reference interval so that an error at the previous moment does not accumulate into a larger error. Specifically, if the distance predicted value falls within the reference interval, it is used directly; if it lies outside the reference interval, the upper limit value of the interval is used when the predicted value is larger, and the lower limit value is used when it is smaller.
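Putting the prediction and the clamping together, a minimal sketch (the function name is illustrative):

```python
def distance_from_reference_interval(d_tm2: float, d_tm1: float,
                                     interval: tuple) -> float:
    """Linearly extrapolate from the two previous distances, then clamp the
    prediction to the reference interval so errors cannot accumulate."""
    lower, upper = interval
    predicted = d_tm1 + (d_tm1 - d_tm2)          # e.g. 14 + (14 - 15) = 13 cm
    return min(max(predicted, lower), upper)     # limit by the reference interval
```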
Of course, since the illumination state of the actual scene is very complex, the above modes can be freely combined and implemented according to the requirements, and the application does not limit the above modes.
In an embodiment of the present application, the method further includes: and determining the safety distance of the unmanned aerial vehicle in the designated direction according to the distance between the unmanned aerial vehicle and the nearest barrier in the designated direction so as to enable the unmanned aerial vehicle to fly in the safety distance.
The difference between the determined distance to the nearest obstacle in the specified direction and the safety distance may be an empirical parameter. In this way, obstacle avoidance of the unmanned aerial vehicle in the specified direction is realized.
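As a sketch, with the margin as the empirical parameter mentioned above (0.5 m is only an illustrative value):

```python
def safety_distance(d_nearest: float, margin: float = 0.5) -> float:
    """Safety distance in the specified direction: the obstacle distance minus
    an empirical margin, floored at zero."""
    return max(d_nearest - margin, 0.0)
```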
Fig. 2 shows a schematic structural diagram of an obstacle detection apparatus for a drone according to one embodiment of the present application. As shown in fig. 2, the obstacle detecting apparatus 200 for the unmanned aerial vehicle includes:
an illumination status unit 210 for determining an illumination status according to the image data. The image data may be acquired by means of binocular or other types of depth cameras, and the cameras may be deployed on a drone.
Specifically, the image data may be several frames of images taken by the camera in a certain direction, and the illumination state is then the illumination state in that direction; in an indoor scene, this may be the illumination state in the vertically upward direction.
The obtaining unit 220 is configured to obtain multi-frame radar point cloud data detected in the first time period under the condition that the illumination state is abnormal.
For example, if the purpose is to determine the distance between the drone at the current time and the nearest obstacle in the specified direction, radar point cloud data detected over a preceding period of time may be acquired. The radar point cloud data can be obtained by millimeter-wave radar detection: if an obstacle exists in the specified direction, there are corresponding detection points, and the collection of these detection points is the point cloud. The point cloud data may specifically include information such as the three-dimensional coordinates and reflection intensity of each detection point.
And the prediction unit 230 is configured to predict a reference interval of a distance between the unmanned aerial vehicle and a nearest obstacle in the designated direction according to the multi-frame radar point cloud data.
The multi-frame radar point cloud data are detected at different times; by analyzing them, the reference interval of the distance between the unmanned aerial vehicle and the nearest obstacle in the specified direction can be determined. One of the innovations of this technical scheme is that a reference interval is determined and the distance is determined according to it; this balances the computing capacity of the unmanned aerial vehicle against the required prediction precision, so the finally determined distance has good continuity and robustness.
And a determining unit 240, configured to determine, according to the reference interval, a distance between the unmanned aerial vehicle and a nearest obstacle in the designated direction.
Therefore, the device shown in fig. 2 has the advantages that, when the illumination state is determined to be abnormal according to the image data, it can quickly switch to an obstacle detection mode based on radar point cloud data; because the distance is determined from multi-frame rather than single-frame radar point cloud data, accurate obstacle detection in scenes with complex illumination is effectively achieved, real-time continuous output of the distance to obstacles in the specified direction is guaranteed, the data continuity and robustness are excellent, and technical support is provided for business fields such as logistics and takeaway delivery.
In an embodiment of the present application, in the above apparatus, the illumination state unit 210 is configured to generate a grayscale histogram of a grayscale image, determine a grayscale variation interval according to the grayscale histogram, and determine an illumination state according to the grayscale variation interval and a grayscale threshold; and/or generating a depth histogram of the depth image, determining the number of effective detection points according to the depth histogram, and determining the illumination state according to the number of effective detection points and an effective threshold value.
The grayscale image can be obtained directly from a camera, or converted from an original color image directly taken by the camera. Gray values range from 0 to 255, with 255 representing white and 0 representing black. As can be seen from fig. 5, the gray value is substantially close to 255 near the light source.
Therefore, the determination of the illumination state can be efficiently performed from the grayscale image. The embodiment of the application also provides a plurality of better judging modes.
In one mode, a gray level histogram of a gray level image is generated firstly, and the distribution of gray level values is determined; and determining a gray level change interval according to the gray level histogram, and finally determining an illumination state according to the gray level change interval and a gray level threshold value.
Alternatively, the number of valid detection points may also be determined from a depth histogram of the depth image; for example, which detection points are valid may be determined by a depth threshold. If the number of valid detection points is small, it is difficult to obtain an accurate obstacle detection result from them. For example, the effective threshold may be set to the product of the total number of detection points in the depth image and k1, where k1 is an empirical parameter less than 1; if the number of valid detection points is less than the effective threshold, there are too few valid detection points.
When the depth histogram is generated, the depth histogram may be downsampled or clipped to reduce the amount of calculation.
In an embodiment of the application, in the above apparatus, the grayscale threshold includes an over-bright threshold and an over-dark threshold, and the illumination state unit is configured to determine that the illumination state is an over-bright abnormality if the lower limit value of the grayscale change interval is greater than the over-bright threshold, and an over-dark abnormality if the upper limit value of the grayscale change interval is smaller than the over-dark threshold. The device also includes: a camera adjusting unit, configured to correspondingly adjust the exposure value and/or the gain of the camera that acquires the grayscale image according to the determined illumination state.
For example, suppose an over-bright threshold of 200 and an over-dark threshold of 50 are set. If the lower limit value of the gray level change interval of an image is 220, the whole image is almost white, which indicates that the illumination is too strong: an over-bright abnormality. Similarly, if the upper limit value of the gray level change interval of an image is 49, the entire image is substantially dark: an over-dark abnormality. It is easy to understand that whether the image is over-bright or over-dark, it cannot truly reflect the current environment, so the accuracy of multi-view visual obstacle avoidance based on such an image is not high. It should be noted that the over-bright threshold of 200 and the over-dark threshold of 50 are only exemplary; in other examples, the thresholds may be set according to actual requirements.
After determining whether the abnormality is an over-bright abnormality or an over-dark abnormality, the exposure value and/or the gain of the camera that acquires the grayscale image may be adjusted: for example, if the abnormality is over-dark, the exposure value and the camera gain may be increased; if it is over-bright, they may be decreased.
In an embodiment of the present application, in the above apparatus, the illumination status unit 210 is configured to generate a depth histogram of a distance between the drone and an obstacle in a specified direction based on the pose and the depth image of the drone.
Because the unmanned aerial vehicle cannot constantly keep a completely horizontal attitude during flight, it cannot be guaranteed to stay parallel to any coordinate axis of the world coordinate system; therefore, to obtain an accurate distance in the specified direction, the depth histogram can be generated according to the attitude of the unmanned aerial vehicle and the depth image.
It should be noted here that the camera and the radar may be calibrated in advance, for example, a rotation matrix between a camera coordinate system and a radar coordinate system may be determined, which facilitates subsequent data processing, for example, radar point cloud data is projected in the camera coordinate system to obtain a corresponding relationship between a detection point and a pixel point.
In an embodiment of the application, in the above apparatus, the prediction unit 230 is configured to determine a minimum value of a distance between the drone and a nearest obstacle in a specified direction according to each frame of point cloud, and use a minimum interval including a plurality of minimum values as a reference interval.
For example, for n frames of radar point cloud data, the minimum value Smin of the distance between the unmanned aerial vehicle and the nearest obstacle in the specified direction is determined for each frame in turn; this can be determined from the coordinates and attitude of the unmanned aerial vehicle and the three-dimensional coordinates in the radar point cloud data, giving Smin1, Smin2, ..., Sminn. Sorting Smin1, Smin2, ..., Sminn then determines the interval [Smin_min, Smin_max].
In an embodiment of the present application, in the above apparatus, the determining unit 240 is configured to use an upper limit value of the reference interval as a distance between the unmanned aerial vehicle and a nearest obstacle in the designated direction; or determining a distance predicted value according to a linear relation formed by the distance between the unmanned aerial vehicle at each moment in the second time period and the nearest barrier in the designated direction; if the predicted distance value is larger than the upper limit value of the reference interval, taking the upper limit value of the reference interval as the distance between the unmanned aerial vehicle and the nearest obstacle in the specified direction; if the distance predicted value is smaller than the lower limit value of the reference interval, taking the lower limit value of the reference interval as the distance between the unmanned aerial vehicle and the nearest obstacle in the specified direction; and if the distance predicted value falls into the reference interval, taking the distance predicted value as the distance between the unmanned aerial vehicle and the nearest obstacle in the specified direction.
Several ways of determining the distance between the drone and the closest obstacle in the specified direction from the reference interval are presented here. Since this distance can be determined at each moment, the distances determined over a preceding period of time can serve as a reference for the current moment.
For that preceding period of time, one of the following situations holds: the illumination-state abnormality has already persisted for a period of time, or it has just begun at the current moment. It is easy to understand that if the illumination state is normal, the distance between the unmanned aerial vehicle and the nearest obstacle in the specified direction can be detected rather accurately using multi-view vision, for example from the depth points in the depth image.
Of course, whichever method is used, some points in the corresponding data will inevitably be misdetected. Therefore, when the illumination state is normal, the product of an empirical parameter k2 and the total number of depth points may be used for filtering, and the distance may be determined using the depth points remaining after filtering.
Consider the situation where the illumination-state abnormality has persisted for a period of time. In this case the distance at the previous moment was also determined from a reference interval, so if the data at the previous moment contained an error and the distance at this moment is determined in the same way, the error may accumulate. Therefore, the upper limit value of the reference interval can be used directly as the distance between the unmanned aerial vehicle and the nearest obstacle in the specified direction.
For the case where the illumination state becomes abnormal only at the current moment, the distance at the previous moment was determined by multi-view vision and can be used as a reference. A distance predicted value can then be determined from the linear relation formed by the distances between the unmanned aerial vehicle and the nearest obstacle in the specified direction at each moment in the second time period: for example, the distance determined at time t-1 plus the linear increment of the distance at time t is taken as the distance at time t, where the linear increment is calculated from the linear relation.
To simplify the calculation, the linear increment of the distance at time t can be taken to be approximately the same as the linear increment at time t-1, and the latter is the difference between the distance determined at time t-1 and the distance determined at time t-2; that is, the distance at time t can be determined from the distances determined at times t-2 and t-1. In a specific example, if the distance determined at time t-2 is 15 cm and the distance determined at time t-1 is 14 cm, the distance determined at time t is 14 + (14 - 15) = 13 cm.
Then the relation between the distance predicted value and the reference interval is considered: the predicted value is limited by the reference interval so that an error at the previous moment does not accumulate into a larger error. Specifically, if the distance predicted value falls within the reference interval, it is used directly; if it lies outside the reference interval, the upper limit value of the interval is used when the predicted value is larger, and the lower limit value is used when it is smaller.
Of course, since the illumination state of the actual scene is very complex, the above modes can be freely combined and implemented according to the requirements, and the application does not limit the above modes.
In an embodiment of the present application, the apparatus further includes: and the obstacle avoidance unit is used for determining the safety distance of the unmanned aerial vehicle in the designated direction according to the distance between the unmanned aerial vehicle and the nearest obstacle in the designated direction so that the unmanned aerial vehicle flies in the safety distance.
The difference between the determined distance to the nearest obstacle in the specified direction and the safety distance may be an empirical parameter. In this way, obstacle avoidance of the unmanned aerial vehicle in the specified direction is realized.
To sum up, according to the technical scheme of the application, when the illumination state is determined to be abnormal according to the image data, a reference interval of the distance between the unmanned aerial vehicle and the nearest obstacle in the specified direction is predicted from the multi-frame radar point cloud data detected in the first time period, and the distance between the unmanned aerial vehicle and the nearest obstacle in the specified direction is then determined according to the reference interval. The technical scheme has the advantages that, when the illumination state is abnormal, the method can quickly switch to an obstacle detection mode based on radar point cloud data; because the distance is determined from multi-frame rather than single-frame radar point cloud data, accurate obstacle detection in scenes with complex illumination is effectively achieved, real-time continuous output of the distance to obstacles in the specified direction is guaranteed, the data continuity and robustness are excellent, and technical support is provided for business fields such as logistics and takeaway delivery.
It should be noted that:
the algorithms and displays presented herein are not inherently related to any particular computer, virtual machine, or other apparatus. Various general purpose devices may also be used with the teachings herein. The required structure for constructing an arrangement of this type will be apparent from the description above. In addition, this application is not directed to any particular programming language. It will be appreciated that a variety of programming languages may be used to implement the teachings of the present application as described herein, and any descriptions of specific languages are provided above to disclose the best modes of the present application.
In the description provided herein, numerous specific details are set forth. However, it is understood that embodiments of the application may be practiced without these specific details. In some instances, well-known methods, structures and techniques have not been shown in detail in order not to obscure an understanding of this description.
Similarly, it should be appreciated that in the foregoing description of exemplary embodiments of the application, various features of the application are sometimes grouped together in a single embodiment, figure, or description thereof for the purpose of streamlining the application and aiding in the understanding of one or more of the various inventive aspects. This method of disclosure, however, is not to be interpreted as reflecting an intention that the claimed application requires more features than are expressly recited in each claim. Rather, as the following claims reflect, inventive aspects lie in less than all features of a single foregoing disclosed embodiment. Thus, the claims following the detailed description are hereby expressly incorporated into this detailed description, with each claim standing on its own as a separate embodiment of this application.
Those skilled in the art will appreciate that the modules in the devices in an embodiment may be adaptively changed and arranged in one or more devices different from the embodiment. The modules or units or components of the embodiments may be combined into one module or unit or component, and furthermore they may be divided into a plurality of sub-modules or sub-units or sub-components. All of the features disclosed in this specification (including any accompanying claims, abstract and drawings), and all of the processes or elements of any method or apparatus so disclosed, may be combined in any combination, except combinations where at least some of such features and/or processes or elements are mutually exclusive. Each feature disclosed in this specification (including any accompanying claims, abstract and drawings) may be replaced by alternative features serving the same, equivalent or similar purpose, unless expressly stated otherwise.
Furthermore, those skilled in the art will appreciate that, while some embodiments described herein include some features included in other embodiments but not other features, combinations of features of different embodiments are meant to be within the scope of the application and form different embodiments. For example, in the following claims, any of the claimed embodiments may be used in any combination.
Various component embodiments of the present application may be implemented in hardware, or in software modules running on one or more processors, or in a combination thereof. Those skilled in the art will appreciate that a microprocessor or Digital Signal Processor (DSP) may be used in practice to implement some or all of the functions of some or all of the components in an obstacle detecting device for a drone according to embodiments of the present application. The present application may also be embodied as apparatus or device programs (e.g., computer programs and computer program products) for performing a portion or all of the methods described herein. Such programs implementing the present application may be stored on a computer readable medium or may be in the form of one or more signals. Such a signal may be downloaded from an internet website or provided on a carrier signal or in any other form.
For example, Fig. 3 shows a schematic structural diagram of a drone according to one embodiment of the present application. The drone 300 includes a processor 310 and a memory 320 arranged to store computer-executable instructions (computer-readable program code). The memory 320 may be an electronic memory such as a flash memory, an EEPROM (electrically erasable programmable read-only memory), an EPROM, a hard disk, or a ROM. The memory 320 has a storage space 330 storing computer-readable program code 331 for performing any of the method steps described above. For example, the storage space 330 may comprise respective computer-readable program codes 331 for implementing the various steps of the above method. The computer-readable program code 331 may be read from or written to one or more computer program products, each comprising a program code carrier such as a hard disk, a compact disc (CD), a memory card, or a floppy disk. Such a computer program product is typically a computer-readable storage medium as described with reference to Fig. 4. Fig. 4 shows a schematic structural diagram of a computer-readable storage medium according to an embodiment of the present application. The computer-readable storage medium 400 stores computer-readable program code 331 for performing the steps of the method according to the present application, readable by the processor 310 of the drone 300. When executed by the drone 300, the program code causes the drone 300 to perform the steps of the method described above; in particular, the computer-readable program code 331 stored by the computer-readable storage medium may perform the method shown in any of the embodiments described above. The computer-readable program code 331 may be compressed in a suitable form.
It should be noted that the above-mentioned embodiments illustrate rather than limit the application, and that those skilled in the art will be able to design alternative embodiments without departing from the scope of the appended claims. In the claims, any reference signs placed between parentheses shall not be construed as limiting the claim. The word "comprising" does not exclude the presence of elements or steps not listed in a claim. The word "a" or "an" preceding an element does not exclude the presence of a plurality of such elements. The application may be implemented by means of hardware comprising several distinct elements, and by means of a suitably programmed computer. In a unit claim enumerating several means, several of these means may be embodied by one and the same item of hardware. The use of the words first, second, third, and so on does not indicate any ordering; these words may be interpreted as names.
Claims (10)
1. An obstacle detection method for an unmanned aerial vehicle, comprising:
determining an illumination state according to image data;
acquiring, when the illumination state is abnormal, multi-frame radar point cloud data detected in a first time period;
predicting, according to the multi-frame radar point cloud data, a reference interval for the distance between the unmanned aerial vehicle and the nearest obstacle in a specified direction; and
determining the distance between the unmanned aerial vehicle and the nearest obstacle in the specified direction according to the reference interval, wherein, when the illumination abnormality first appears at the current moment, determining the distance between the unmanned aerial vehicle and the nearest obstacle in the specified direction according to the reference interval comprises:
determining a predicted distance value according to the linear relation formed by the distances between the unmanned aerial vehicle and the nearest obstacle in the specified direction at each moment in a second time period; and
determining the distance between the unmanned aerial vehicle and the nearest obstacle in the specified direction according to the predicted distance value and the reference interval.
2. The method of claim 1, wherein determining an illumination state according to image data comprises:
generating a gray level histogram of a gray-scale image, determining a gray level change interval according to the gray level histogram, and determining the illumination state according to the gray level change interval and a gray level threshold;
and/or,
generating a depth histogram of a depth image, determining the number of effective detection points according to the depth histogram, and determining the illumination state according to the number of effective detection points and an effectiveness threshold.
3. The method of claim 2, wherein the gray level threshold comprises an over-bright threshold and an over-dark threshold, and wherein determining the illumination state according to the gray level change interval and the gray level threshold comprises:
if the lower limit value of the gray level change interval is greater than the over-bright threshold, determining that the illumination state is an over-bright abnormality; and
if the upper limit value of the gray level change interval is less than the over-dark threshold, determining that the illumination state is an over-dark abnormality;
the method further comprising:
adjusting, according to the determined illumination state, the exposure value and/or the gain of the camera that acquires the gray-scale image.
4. The method of claim 2, wherein generating a depth histogram of the depth image comprises:
generating a depth histogram of the distances between the unmanned aerial vehicle and obstacles in the specified direction based on the attitude of the unmanned aerial vehicle and the depth image.
5. The method of claim 1, wherein predicting, according to the multi-frame radar point cloud data, a reference interval for the distance between the unmanned aerial vehicle and the nearest obstacle in the specified direction comprises:
determining, for each frame of point cloud, the minimum value of the distance between the unmanned aerial vehicle and the nearest obstacle in the specified direction, and taking the smallest interval containing the plurality of minimum values as the reference interval.
6. The method of claim 1, wherein determining the distance between the unmanned aerial vehicle and the nearest obstacle in the specified direction according to the reference interval comprises:
taking the upper limit value of the reference interval as the distance between the unmanned aerial vehicle and the nearest obstacle in the specified direction;
or,
determining a predicted distance value according to the linear relation formed by the distances between the unmanned aerial vehicle and the nearest obstacle in the specified direction at each moment in the second time period; if the predicted distance value is greater than the upper limit value of the reference interval, taking the upper limit value of the reference interval as the distance; if the predicted distance value is less than the lower limit value of the reference interval, taking the lower limit value of the reference interval as the distance; and if the predicted distance value falls within the reference interval, taking the predicted distance value as the distance between the unmanned aerial vehicle and the nearest obstacle in the specified direction.
7. The method of any one of claims 1-6, further comprising:
determining, according to the distance between the unmanned aerial vehicle and the nearest obstacle in the specified direction, a safety distance for the unmanned aerial vehicle in the specified direction, so that the unmanned aerial vehicle flies within the safety distance.
8. An obstacle detection device for an unmanned aerial vehicle, comprising:
an illumination state unit for determining an illumination state according to image data;
an acquisition unit for acquiring, when the illumination state is abnormal, multi-frame radar point cloud data detected in a first time period;
a prediction unit for predicting, according to the multi-frame radar point cloud data, a reference interval for the distance between the unmanned aerial vehicle and the nearest obstacle in a specified direction; and
a determining unit for determining the distance between the unmanned aerial vehicle and the nearest obstacle in the specified direction according to the reference interval, wherein, when the illumination abnormality first appears at the current moment, the determining unit is configured to:
determine a predicted distance value according to the linear relation formed by the distances between the unmanned aerial vehicle and the nearest obstacle in the specified direction at each moment in a second time period; and
determine the distance between the unmanned aerial vehicle and the nearest obstacle in the specified direction according to the predicted distance value and the reference interval.
9. An unmanned aerial vehicle, comprising: a processor; and a memory arranged to store computer-executable instructions that, when executed, cause the processor to perform the method of any one of claims 1-7.
10. A computer-readable storage medium storing one or more programs which, when executed by a processor, implement the method of any one of claims 1-7.
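By way of illustration only, the gray-histogram check of claims 2 and 3 can be sketched in a few lines of Python. The concrete threshold values, the 95%-coverage construction of the gray level change interval, and the function name are assumptions of this sketch, not definitions from the patent:

```python
import numpy as np

# Illustrative values -- the claims leave the thresholds unspecified.
OVER_BRIGHT_THRESHOLD = 200  # gray level above which the scene counts as over-bright
OVER_DARK_THRESHOLD = 60     # gray level below which the scene counts as over-dark

def illumination_state(gray_image: np.ndarray, coverage: float = 0.95) -> str:
    """Classify illumination from a gray level histogram (claims 2 and 3)."""
    hist, _ = np.histogram(gray_image, bins=256, range=(0, 256))
    cdf = np.cumsum(hist) / hist.sum()
    tail = (1.0 - coverage) / 2.0
    lower = int(np.searchsorted(cdf, tail))        # lower limit of the change interval
    upper = int(np.searchsorted(cdf, 1.0 - tail))  # upper limit of the change interval
    if lower > OVER_BRIGHT_THRESHOLD:
        return "over_bright"  # claim 3: lower limit above the over-bright threshold
    if upper < OVER_DARK_THRESHOLD:
        return "over_dark"    # claim 3: upper limit below the over-dark threshold
    return "normal"
```

On an over-bright or over-dark result, claim 3 additionally adjusts the camera's exposure value and/or gain, while claim 1 switches ranging to the radar point cloud path.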
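The depth branch of claim 2 counts effective detection points: under glare or in darkness a stereo depth map yields few valid returns. A minimal sketch, assuming a maximum usable range and bin count the claims do not specify, and ignoring the attitude-based projection that claim 4 adds:

```python
import numpy as np

def depth_effective_points(depth_image: np.ndarray, valid_threshold: int,
                           max_range: float = 20.0) -> str:
    """Flag an abnormal state when too few depth pixels carry a usable range."""
    depths = depth_image[np.isfinite(depth_image)]
    depths = depths[(depths > 0.0) & (depths < max_range)]
    hist, _ = np.histogram(depths, bins=64, range=(0.0, max_range))
    effective = int(hist.sum())  # effective detection points, counted via the histogram
    return "normal" if effective >= valid_threshold else "abnormal"
```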
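Claim 5 builds the reference interval from the per-frame minima. The sketch below assumes each radar frame is an (N, 3) array in the vehicle's body frame and measures distance in the specified direction as a projection onto a unit vector; the claim itself does not fix that geometry:

```python
import numpy as np

def reference_interval(frames: list, direction: np.ndarray) -> tuple:
    """Smallest interval containing the per-frame minimum obstacle distances."""
    unit = direction / np.linalg.norm(direction)
    minima = []
    for cloud in frames:             # cloud: (N, 3) points in the body frame
        proj = cloud @ unit          # signed distance along the specified direction
        ahead = proj[proj > 0.0]     # keep points actually in that direction
        if ahead.size:
            minima.append(float(ahead.min()))
    if not minima:
        raise ValueError("no radar returns in the specified direction")
    return min(minima), max(minima)  # (lower limit, upper limit)
```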
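Claims 1 and 6 then reconcile the vision-based history with the radar interval: fit the linear relation of distance over the second time period, extrapolate to the current moment, and bound the prediction by the reference interval. The three cases of claim 6 collapse to a clamp, as in this sketch:

```python
import numpy as np

def fused_distance(times: np.ndarray, distances: np.ndarray,
                   t_now: float, ref_lo: float, ref_hi: float) -> float:
    """Extrapolate the recent distance trend, then bound it by the radar interval."""
    slope, intercept = np.polyfit(times, distances, 1)  # least-squares linear relation
    predicted = slope * t_now + intercept
    # Claim 6: above the interval -> upper limit; below -> lower limit; inside -> itself.
    return float(np.clip(predicted, ref_lo, ref_hi))
```

Clamping keeps a single noisy extrapolation from overriding the bound established by the multi-frame radar data.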
Priority Applications (1)
Application Number | Priority Date | Filing Date | Title |
---|---|---|---|
CN201911422360.XA | 2019-12-31 | 2019-12-31 | Obstacle detection method and device for unmanned aerial vehicle, unmanned aerial vehicle and storage medium
Publications (2)
Publication Number | Publication Date |
---|---|
CN113126640A (en) | 2021-07-16
CN113126640B (en) | 2022-06-28
Family
ID=76769683
Family Applications (1)
Application Number | Title | Priority Date | Filing Date | Status
---|---|---|---|---
CN201911422360.XA | Obstacle detection method and device for unmanned aerial vehicle, unmanned aerial vehicle and storage medium | 2019-12-31 | 2019-12-31 | Active
Country Status (1)
Country | Link |
---|---|
CN (1) | CN113126640B (en) |
Family Cites Families (6)
Publication number | Priority date | Publication date | Assignee | Title |
---|---|---|---|---|
JP4433045B2 (en) * | 2007-12-26 | 2010-03-17 | 株式会社デンソー | Exposure control device and exposure control program |
AT507035B1 (en) * | 2008-07-15 | 2020-07-15 | Airbus Defence & Space Gmbh | SYSTEM AND METHOD FOR AVOIDING COLLISION |
US10120385B2 (en) * | 2016-03-30 | 2018-11-06 | Intel Corporation | Comfort ride vehicle control system |
CN109145680B (en) * | 2017-06-16 | 2022-05-27 | 阿波罗智能技术(北京)有限公司 | Method, device and equipment for acquiring obstacle information and computer storage medium |
US10459445B2 (en) * | 2017-09-28 | 2019-10-29 | Intel IP Corporation | Unmanned aerial vehicle and method for operating an unmanned aerial vehicle |
CN107995962B (en) * | 2017-11-02 | 2021-06-22 | 深圳市道通智能航空技术股份有限公司 | Obstacle avoidance method and device, movable object and computer readable storage medium |
Patent Citations (14)
Publication number | Priority date | Publication date | Assignee | Title |
---|---|---|---|---|
CN103153743A (en) * | 2010-10-01 | 2013-06-12 | 丰田自动车株式会社 | Obstacle recognition system and method for a vehicle |
CN102540193A (en) * | 2010-12-24 | 2012-07-04 | 无锡物联网产业研究院 | Laser radar monitoring system |
WO2017025521A1 (en) * | 2015-08-07 | 2017-02-16 | Institut De Recherche Technologique Jules Verne | Device and method for detecting obstacles suitable for a mobile robot |
CN106558053A (en) * | 2015-09-25 | 2017-04-05 | 株式会社理光 | Object segmentation methods and Object Segmentation device |
US10173773B1 (en) * | 2016-02-23 | 2019-01-08 | State Farm Mutual Automobile Insurance Company | Systems and methods for operating drones in response to an incident |
CN106597472A (en) * | 2016-12-21 | 2017-04-26 | 深圳市镭神智能系统有限公司 | Intelligent vehicle collision avoidance system and method based on laser radar |
CN107102310A (en) * | 2017-06-26 | 2017-08-29 | 沈阳航空航天大学 | A kind of multi-path laser radar detection method |
WO2019039733A1 (en) * | 2017-08-21 | 2019-02-28 | (주)유진로봇 | Moving object and combined sensor using camera and lidar |
CN108037765A (en) * | 2017-12-04 | 2018-05-15 | 国网山东省电力公司电力科学研究院 | A kind of unmanned plane obstacle avoidance system for polling transmission line |
CN109407071A (en) * | 2018-12-13 | 2019-03-01 | 广州极飞科技有限公司 | Radar range finding method, radar range unit, unmanned plane and storage medium |
CN109634282A (en) * | 2018-12-25 | 2019-04-16 | 奇瑞汽车股份有限公司 | Automatic driving vehicle, method and apparatus |
CN109683170A (en) * | 2018-12-27 | 2019-04-26 | 驭势科技(北京)有限公司 | A kind of image traveling area marking method, apparatus, mobile unit and storage medium |
CN110174105A (en) * | 2019-06-14 | 2019-08-27 | 西南科技大学 | Intelligent body Autonomous Navigation Algorithm and system under a kind of complex environment |
CN110531377A (en) * | 2019-10-08 | 2019-12-03 | 北京邮电大学 | Data processing method, device, electronic equipment and the storage medium of radar system |
Non-Patent Citations (2)
Title |
---|
A Co-Point Mapping-Based Approach to Drivable Area Detection for Self-Driving Cars; Ziyi Liu, et al.; Engineering; 2018-12-31; pp. 479-537 *
Collision Avoidance Radar for UAV; Radar Signal Processing Lab, et al.; IEEE; 2006-12-31; pp. 1-4 *
Also Published As
Publication number | Publication date |
---|---|
CN113126640A (en) | 2021-07-16 |
Similar Documents
Publication | Title
---|---
CN110032949B (en) | Target detection and positioning method based on lightweight convolutional neural network
CN109949372B (en) | Laser radar and vision combined calibration method
CN110163904B (en) | Object labeling method, movement control method, device, equipment and storage medium
KR102269750B1 (en) | Method for Real-time Object Detection Based on Lidar Sensor and Camera Using CNN
CN108020825B (en) | Fusion calibration system and method for laser radar, laser camera and video camera
WO2021072696A1 (en) | Target detection and tracking method and system, and movable platform, camera and medium
US9635237B2 (en) | Method and camera for determining an image adjustment parameter
CN105049784B (en) | Method and apparatus for image-based sighting distance estimation
CN114637023A (en) | System and method for laser depth map sampling
CN111753609A (en) | Target identification method and device and camera
CN112613336B (en) | Method and apparatus for generating object classification of object
CN109946703A (en) | Sensor attitude adjustment method and device
CN111257882B (en) | Data fusion method and device, unmanned equipment and readable storage medium
CN105190288A (en) | Method and device for determining a visual range in daytime fog
JP2021077350A5 (en) |
CN112361990A (en) | Laser pattern extraction method and device, laser measurement equipment and system
JP2008059260A (en) | Movement detection image creating device
CN112689097A (en) | Automatic brightness control method and system for line laser and storage medium
CN115187941A (en) | Target detection positioning method, system, equipment and storage medium
CN117392423A (en) | Laser radar-based true value data prediction method, device and equipment for target object
Pohl et al. | Depth map improvements for stereo-based depth cameras on drones
CN113126640B (en) | Obstacle detection method and device for unmanned aerial vehicle, unmanned aerial vehicle and storage medium
CN116389901B (en) | On-orbit intelligent exposure and focusing method and system for space camera and electronic equipment
US20220383146A1 (en) | Method and Device for Training a Machine Learning Algorithm
CA3148404A1 (en) | Information processing device, data generation method, and non-transitory computer-readable medium storing program
Legal Events
Code | Title
---|---
PB01 | Publication
SE01 | Entry into force of request for substantive examination
GR01 | Patent grant