CN114359714A - Unmanned body obstacle avoidance method and device based on event camera and intelligent unmanned body

Info

Publication number
CN114359714A
Authority
CN
China
Prior art keywords
obstacle
event
moving
obstacle avoidance
camera
Prior art date
Legal status
Pending
Application number
CN202111532883.7A
Other languages
Chinese (zh)
Inventor
陈博文
徐庶
高爽
刘庆杰
倪文辉
马金艳
管达志
吴天皓
宫成业
Current Assignee
Nanhu Research Institute Of Electronic Technology Of China
Original Assignee
Nanhu Research Institute Of Electronic Technology Of China
Priority date
Filing date
Publication date
Application filed by Nanhu Research Institute Of Electronic Technology Of China
Priority to CN202111532883.7A
Publication of CN114359714A

Landscapes

  • Image Analysis (AREA)

Abstract

The invention discloses an event-camera-based unmanned-body obstacle avoidance method and device, and an intelligent unmanned body. The method comprises the following steps: acquiring event images of a moving obstacle in real time with a binocular event camera; calculating and storing, for each moment, the coordinates of the obstacle's center point in the left and right event camera images, together with the obstacle's size information and depth information; reading the center-point coordinates, size information, and depth information of the moving obstacle in consecutive frame event images from either camera, and calculating the motion vector of the target obstacle; and setting an unmanned-body obstacle avoidance triggering condition according to the size information and motion vector of the target obstacle and its position in the event image, and calculating an obstacle avoidance direction and movement distance. The invention can avoid fast-moving obstacles, effectively avoids unnecessary excessive obstacle avoidance maneuvers, and has low computational cost.

Description

Unmanned body obstacle avoidance method and device based on event camera and intelligent unmanned body
Technical Field
The invention belongs to the technical field of computer vision, and particularly relates to an unmanned obstacle avoidance method and device based on an event camera, and an intelligent unmanned body.
Background
Intelligent unmanned bodies such as unmanned aerial vehicles and unmanned vehicles have characteristics such as light weight, flexibility, strong mobility, and good concealment, and are widely applied in civilian and military fields. As unmanned bodies continue to develop, a preset global trajectory cannot fully guarantee the safety of an unmanned body autonomously executing tasks in complex and changeable environments, particularly when sudden obstacles appear, so autonomous obstacle avoidance technology is a key component of an unmanned body system.
At present, many target detection schemes in computer vision tasks are based on traditional vision cameras. However, a traditional vision camera produces motion blur for high-speed moving objects, cannot clearly present target objects in scenes with low illumination, and cannot accurately find targets in scenes where the target resembles the background. Target detection methods based on traditional camera frames therefore have difficulty detecting obstacles accurately and in time in complex environments with sudden obstacles.
Compared with a traditional vision camera, an event camera has the characteristics of low delay, high dynamic range, no motion blur, and ultra-low power consumption, and is commonly used in tasks involving low illumination, high dynamics, or capturing high-speed moving objects. For example, Chinese patent document CN112200856A discloses a visual ranging method based on an event camera, which uses the Tiny-YOLOv3 algorithm to detect a target, obtains the category and position of the target in the image, and calculates the distance between the target and the camera using a similar-triangle algorithm.
However, although this method can provide high ranging accuracy, it still requires prior information about the target, such as its height and width. Moreover, the method only acquires the target distance and does not address how to avoid a moving obstacle.
Disclosure of Invention
The invention aims to disclose an intelligent unmanned-body obstacle avoidance method and device based on an event camera, to solve the problems of measuring the size and distance of a moving obstacle of unknown size and of avoiding a high-speed moving obstacle target.
According to the first aspect of the invention, an obstacle avoidance method for an unmanned body is disclosed, the unmanned body comprising a binocular event camera, the method comprising the following steps:
acquiring an event image of a moving obstacle in real time by using a binocular event camera;
calculating and storing the coordinates of the central point of the moving obstacle in the left event camera image and the right event camera image at each moment, and the size information and the depth information of the moving obstacle at each moment;
reading the coordinates of the central point of the moving obstacle in the continuous frame event image of any camera, the size information and the depth information of the moving obstacle, and calculating the motion vector of the target obstacle; and
setting an unmanned-body obstacle avoidance triggering condition according to the size information and motion vector of the target obstacle and the position of the target obstacle in the event image, and calculating an obstacle avoidance direction and movement distance.
In other examples, event images of the moving obstacle are acquired in real time by using a binocular event camera, and the left event image and the right event image obtained at each moment are respectively input into a neural network, so that the vertex coordinates of the target frame of the moving obstacle at the same moment in the respective event images are obtained.
In some other examples, the coordinates of the vertex of the target frame are used to calculate the coordinates of the center point of the moving obstacle in the respective event images, and the length and width dimensions and the depth information of the moving obstacle are calculated based on the principle of similar triangles.
In other examples, two warning regions with different sizes and centered on the center point of the event image are set, and the obstacle avoidance triggering condition is set according to the relative position between the center point of the moving obstacle and the two warning regions.
In some further examples, unmanned obstacle avoidance is triggered when a center point of the moving obstacle is located in the event image, but outside the large alert area, and the depth is less than a first threshold.
In some other examples, when the center point of the moving obstacle is located within the large alert region, the depth change value is less than 0, and the depth is less than a second threshold, satisfying either of the following conditions triggers unmanned-body obstacle avoidance: (i) the moving obstacle moves toward the unmanned body in either dimension of the imaging plane; (ii) the center point of the moving obstacle is located within the small alert region.
In other examples, after the obstacle avoidance triggering, the orthogonal vector of the motion vector of the moving obstacle is taken as the unmanned obstacle avoidance direction.
In other examples, after obstacle avoidance is triggered, the obstacle avoidance movement distance is the sum of the actual physical size of the unmanned body and the larger of the length and width of the moving obstacle.
According to the second aspect of the invention, an obstacle avoidance device for an unmanned body is also disclosed, comprising:
the binocular event camera is used for acquiring event images of moving obstacles in real time;
the first calculation unit is used for calculating and storing the coordinates of the central point of the moving obstacle in the left event camera image and the right event camera image at each moment, and the size information and the depth information of the moving obstacle at each moment;
the second calculation unit is used for reading the coordinates of the center point of the moving obstacle, the size information and the depth information of the moving obstacle in any camera continuous frame event image and calculating the motion vector of the target obstacle; and
an obstacle avoidance decision unit, used for setting an unmanned-body obstacle avoidance triggering condition according to the size information and motion vector of the target obstacle and the position of the target obstacle in the event image, and calculating an obstacle avoidance direction and movement distance.
According to the third aspect of the present invention, an intelligent unmanned body is also disclosed, comprising a body, a driving device for driving the body to move, a control device, and a binocular event camera for collecting moving-obstacle information, wherein the control device comprises a processor and a memory, the memory stores a computer program, and the processor is configured to execute the computer program to implement the obstacle avoidance method of any one of the above schemes.
Compared with the prior art, the obstacle avoidance method of the invention can avoid fast-moving obstacles by virtue of the event camera's high dynamic range, low delay, and freedom from motion blur. At the same time, it effectively avoids unnecessary excessive obstacle avoidance maneuvers while ensuring reliable obstacle avoidance. The method has low computational cost, can output obstacle avoidance decisions within tens of microseconds, is suitable for lightweight platforms, and can achieve autonomous obstacle avoidance using the unmanned body's onboard computing resources.
Drawings
Other features, objects and advantages of the invention will become more apparent upon reading of the detailed description of non-limiting embodiments made with reference to the following drawings:
FIG. 1 is a schematic method flow of training a neural network;
FIG. 2 is a schematic process flow of a method for calculating obstacle size and depth information using parallax;
FIG. 3 is a schematic flow chart of an unmanned obstacle avoidance method according to an embodiment of the present invention;
FIG. 4 is a schematic diagram of setting an unmanned-body obstacle avoidance triggering condition according to the present invention;
fig. 5 is a schematic composition diagram of an unmanned obstacle avoidance apparatus according to an embodiment of the present invention;
FIG. 6 is a schematic composition diagram of an intelligent unmanned body according to an embodiment of the invention;
fig. 7 is an implementation example of unmanned obstacle avoidance using the present invention.
Detailed Description
The present invention will be described in further detail with reference to the accompanying drawings and examples. It is to be understood that the specific embodiments described herein are merely illustrative of the relevant invention and not restrictive of the invention. It should be noted that, for convenience of description, only the portions related to the related invention are shown in the drawings.
It should be noted that the embodiments and features of the embodiments may be combined with each other without conflict. The present invention will be described in detail below with reference to the embodiments with reference to the attached drawings.
The intelligent unmanned body in the invention may be an unmanned device operating on land, in the air, or in water, such as an unmanned aerial vehicle, an autonomous vehicle, a robot, an unmanned ship, or an unmanned underwater vehicle. These intelligent unmanned bodies have a driving device for driving the unmanned body's movement, sensors for collecting environmental information, and a control device for controlling the unmanned body's movement, which typically includes a processor, memory, and the like.
In the invention, the unmanned body is provided with a binocular event camera, i.e. a camera pair comprising a left camera and a right camera whose center points are separated by a preset distance, the baseline length B. Typically, the binocular event camera is placed horizontally on the unmanned body, pointed straight ahead, and registered with horizontal correction.
The event camera is a camera sensitive only to pixel brightness changes and can provide microsecond-level response signals. An unmanned body carrying a binocular event camera can complete ranging of a sudden target (a moving obstacle) and effective perception of the environment in complex scenes such as low illumination and high dynamics, so an obstacle avoidance task can be completed with limited onboard computing resources, increasing the success rate of the unmanned body autonomously executing complex tasks and improving its traffic safety.
Firstly, the binocular event camera is used for collecting event data, generating event images and constructing a data set to train a target detection neural network. Specifically, as shown in fig. 1, the method comprises the following steps:
s11, collecting an event stream generated by the rapid movement of the moving obstacle by using an event camera;
each collected event is represented by a quadruple (t, x, y, p), where t is the time at which the event occurs, (x, y) are the coordinates of the position where the event occurs, and p is the polarity of the event: the polarity is 1 when the brightness increase exceeds the threshold, the polarity is 0 when the brightness decrease exceeds the threshold, and no event is generated when the brightness change does not exceed the threshold.
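The quadruple representation above can be sketched as a small helper that maps a log-brightness change at one pixel to an event tuple. This is an illustrative sketch only: a real event camera produces these tuples in hardware, and the function name and threshold value here are assumptions.

```python
def brightness_to_event(t, x, y, log_delta, threshold=0.2):
    """Return an event quadruple (t, x, y, p) for a log-brightness change
    at pixel (x, y), or None if the change does not exceed the threshold."""
    if log_delta > threshold:
        return (t, x, y, 1)   # brightness increase -> polarity 1
    if log_delta < -threshold:
        return (t, x, y, 0)   # brightness decrease -> polarity 0
    return None               # change below threshold: no event generated
```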
S12, generating an event image from the event stream according to a fixed time interval;
Based on the event generation mechanism, a fixed-time-interval method is adopted: all events in each interval of length Δt are used to generate one event image, so that the nth event image contains all events in the period [t_0 + (n-1)·Δt, t_0 + n·Δt). The event image is generated as follows: according to the pixel position at which each event occurred, the coordinates where polarity was generated are drawn as white pixels, and the background color of the image is black.
S13, marking the target position and the category in the event image, and constructing a training data set;
the target position can be represented by a quadruple, corresponding respectively to the maximum abscissa, minimum abscissa, maximum ordinate, and minimum ordinate of the four vertices of the target frame. The annotated event images are divided into three parts: a training set, a test set, and a validation set, where the training set accounts for 60% of the whole data set and the test set and validation set each account for 20%.
S14: training a YOLOv5 neural network with the data set to obtain the weight file of the neural network that performs best on the test set;
it should be understood that other neural networks, such as YOLOv3, YOLOv4, Faster R-CNN, and SSD, can also be used in the invention for target detection.
Through the above steps, a neural network capable of detecting the category and position of a moving obstacle in images acquired by the event camera is obtained. Specifically, the unmanned body acquires event images generated by the moving obstacle in real time using the binocular event camera, loads the trained weight file into the neural network, and obtains the category and position detection results of the moving obstacle. The target position result is also output as a quadruple (x_right, x_left, y_down, y_up), corresponding respectively to the maximum abscissa, minimum abscissa, maximum ordinate, and minimum ordinate of the four vertices of the resulting target frame. The coordinates (X, Y) of the center point of the moving obstacle in the event image can then be expressed from the vertex coordinates of the target detection frame as:

X = (x_right + x_left) / 2,  Y = (y_down + y_up) / 2
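The center-point computation above is a one-liner; the helper name below is illustrative:

```python
def bbox_center(x_right, x_left, y_down, y_up):
    """Center point (X, Y) of a detection frame from its extreme vertex
    coordinates: X = (x_right + x_left)/2, Y = (y_down + y_up)/2."""
    return (x_right + x_left) / 2.0, (y_down + y_up) / 2.0
```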
in the invention, because the unmanned body is equipped with a binocular event camera, the size of the moving obstacle and its distance from the unmanned body can be calculated by the parallax method. Specifically, as shown in fig. 2, the method comprises the following steps:
s21: acquiring event images of the moving obstacles in real time by using a binocular event camera, and respectively inputting the left event image and the right event image obtained at each moment into a neural network to obtain the vertex coordinates of the moving obstacles in the target frames in the respective event images at the same moment;
s22: calculating the coordinates of the central point of the moving obstacle in each event image by using the vertex coordinates of the target frame, and calculating the length and width of the moving obstacle and the distance between the moving obstacle and the unmanned body based on the similar triangle principle;
the disparity dis between the left event image and the right event image is:

dis = B - (X_L - X_R)

where B is the baseline length, i.e. the distance between the center points of the left and right cameras, X_L is the abscissa of the center point of the moving obstacle in the event image acquired by the left camera, and X_R is the abscissa of the center point of the moving obstacle in the event image acquired by the right camera (both expressed in physical units on the image plane).

Based on the similar-triangle principle, we can obtain:

dis / B = (Z - f) / Z

and therefore:

Z = f · B / (X_L - X_R)

where f is the focal length of the event camera and Z is the depth information, i.e. the distance between the moving obstacle and the center of the binocular camera, which the invention simplifies to the distance between the moving obstacle and the unmanned body; the skilled person will readily understand that there is a fixed, simple conversion relationship between the calculated distance and the actual distance.

From the position detection result of the moving obstacle in the event image acquired by either the left or right camera, i.e. the vertex coordinates of the target frame, the length h and width w of the moving obstacle can be calculated based on the similar-triangle principle as:

h = Z · u · (y_down - y_up) / f

w = Z · u · (x_right - x_left) / f

where u is the physical size of one pixel in the image.
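The parallax relations above can be sketched as a single function. This is a sketch under assumptions: all image coordinates are taken in pixels and converted to physical image-plane units through the pixel size u, and the parameter names are illustrative, not the patent's.

```python
def depth_and_size(box, x_l, x_r, f, baseline, u):
    """Depth Z and physical size (h, w) of an obstacle via the parallax method.

    box: (x_right, x_left, y_down, y_up) detection-frame extremes in pixels;
    x_l, x_r: pixel abscissas of the obstacle center in the left and right
    event images; f: focal length; baseline: camera separation B; u: physical
    size of one pixel (f, baseline, u in the same length units)."""
    z = f * baseline / (u * (x_l - x_r))   # Z = f*B / (X_L - X_R), pixels scaled by u
    x_right, x_left, y_down, y_up = box
    h = z * u * (y_down - y_up) / f        # physical length (height) of the obstacle
    w = z * u * (x_right - x_left) / f     # physical width of the obstacle
    return z, h, w
```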
By the above method and the binocular camera, the invention can acquire, for a moving obstacle of unknown size, the center-point coordinates, the length and width information, and the distance of the target from the unmanned body (the center of the binocular event camera), i.e. the depth information, at each moment, and stores at least the values for the most recent k moments t_(n-k+1), ..., t_n, where 2 ≤ k ≤ n, preferably k = 2.
On this basis, the invention provides an unmanned obstacle avoidance method for a high-speed moving obstacle, as shown in fig. 3, the method comprises the following steps:
s31: calculating a motion vector of the target obstacle using the continuous frame event images;
the starting time of the acquisition of either the left camera or the right camera is respectively
Figure BDA0003412074160000081
And
Figure BDA0003412074160000082
the two adjacent event images, namely the event images acquired at the current moment and the previous moment.
Reading (reading from a memory or a storage) the coordinates of the center point of the moving obstacle in the event images, the size information of the moving obstacle, and the distance (depth information) between the moving obstacle and the unmanned body in two adjacent event images, and respectively recording the coordinates, the size information, and the distance (depth information) as the center point coordinates, the size information, and the distance between the moving obstacle and the unmanned body
Figure BDA0003412074160000083
Then the movement obstacle is in
Figure BDA0003412074160000084
Distance of movement X in horizontal and vertical directions during a time perioddis,n,Ydis,nAnd variation of depth
Figure BDA0003412074160000085
Can be respectively expressed as:
Figure BDA0003412074160000086
Figure BDA0003412074160000087
Figure BDA0003412074160000088
then the process of the first step is carried out,
Figure BDA0003412074160000089
motion vector of a momentarily moving obstacle
Figure BDA00034120741600000810
Expressed as:
Figure BDA00034120741600000811
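The motion-vector step above can be sketched as follows; the scaling of pixel displacements to physical distances by u·Z_n/f follows the similar-triangle relations, and the names are assumptions.

```python
def motion_vector(prev, curr, f, u):
    """Motion vector V_n between two consecutive event images.

    prev, curr: (X, Y, Z) with X, Y the obstacle center in pixels and Z the
    depth, at times t_(n-1) and t_n; f: focal length; u: pixel size."""
    x_prev, y_prev, z_prev = prev
    x_curr, y_curr, z_curr = curr
    scale = u * z_curr / f                 # pixel -> physical conversion at depth Z_n
    x_dis = (x_curr - x_prev) * scale      # horizontal displacement X_dis,n
    y_dis = (y_curr - y_prev) * scale      # vertical displacement Y_dis,n
    z_dis = z_curr - z_prev                # depth change (negative = approaching)
    return (x_dis, y_dis, z_dis)
```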
s32: setting an unmanned-body obstacle avoidance triggering condition according to the size information and motion vector of the target obstacle and the position of the target obstacle in the event image, and calculating an obstacle avoidance direction and movement distance.
As shown in FIG. 4, two alert regions centered on the image center point (X_0, Y_0) are set. Alert region 1 has height h_warning1 and width w_warning1 pixels, with top-left and bottom-right vertex coordinates (X_0 - w_warning1/2, Y_0 + h_warning1/2) and (X_0 + w_warning1/2, Y_0 - h_warning1/2) respectively. Alert region 2 has height h_warning2 and width w_warning2 pixels, with top-left and bottom-right vertex coordinates (X_0 - w_warning2/2, Y_0 + h_warning2/2) and (X_0 + w_warning2/2, Y_0 - h_warning2/2). Here h_warning1 > h_warning2 and w_warning1 > w_warning2, so alert region 2 lies inside alert region 1.
The obstacle avoidance triggering conditions are as follows:
a. when the detected center point of the moving obstacle is located in the event image but outside alert region 1, and the depth Z < d1, unmanned-body obstacle avoidance is triggered;

b. when the detected center point (X_n, Y_n) of the moving obstacle is located within alert region 1, i.e.

|X_n - X_0| ≤ w_warning1/2 and |Y_n - Y_0| ≤ h_warning1/2,

the depth change Z_dis,n < 0 (indicating that the measured obstacle is moving toward the unmanned body, with gradually decreasing depth), and the depth Z < d2, then satisfying any one of the following conditions triggers unmanned-body obstacle avoidance:

b1. X_n < X_0 (the moving obstacle is on the left side of the unmanned body) and X_dis,n > 0 (the obstacle is moving from left to right);

b2. X_n > X_0 (the moving obstacle is on the right side of the unmanned body) and X_dis,n < 0 (the obstacle is moving from right to left);

b3. Y_n < Y_0 (the moving obstacle is below the unmanned body) and Y_dis,n > 0 (the obstacle is moving from bottom to top);

b4. Y_n > Y_0 (the moving obstacle is above the unmanned body) and Y_dis,n < 0 (the obstacle is moving from top to bottom);

b5. the detected center point of the moving obstacle is located within alert region 2, i.e. |X_n - X_0| ≤ w_warning2/2 and |Y_n - Y_0| ≤ h_warning2/2.
Here d1 and d2 are the distance thresholds for a moving obstacle appearing outside and inside the alert region, respectively. Because the probability of collision between the moving obstacle and the unmanned body increases in that order as the obstacle appears in each successive region, and regions with higher collision probability require the obstacle avoidance decision to be made at a greater distance, d1 < d2.
The invention first judges whether the moving obstacle is outside the alert region, and sets only a very small depth threshold there, because an object appearing at the edge of the image has a very small probability of colliding with the unmanned body; obstacle avoidance measures are taken only when such an obstacle is very close.

For a moving obstacle within the alert region, the invention first judges whether the obstacle is approaching the unmanned body in depth; on the premise that the depth is gradually decreasing, it further judges whether the obstacle is moving toward the unmanned body in the horizontal or vertical direction. When these conditions are met, the probability that the object collides with the unmanned body is high, and obstacle avoidance measures must be taken while the obstacle is still relatively far away.

In scenes other than the above, the moving obstacle either does not approach the unmanned body in depth or does not move toward it in the horizontal or vertical direction. However, because the unmanned body has a physical size, an obstacle appearing in the small region directly in front of it may still collide with it, so in this case obstacle avoidance measures must also be taken at a greater distance.
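The trigger conditions a and b above can be sketched as one decision function. This is an illustrative sketch, not the patent's implementation: the y-axis is assumed to point upward (consistent with conditions b3/b4), the alert regions are given by their width and height, and all names are assumptions.

```python
def should_avoid(center, vec, z, img_size, region1, region2, d1, d2):
    """Return True if unmanned-body obstacle avoidance should be triggered.

    center: obstacle center (X_n, Y_n); vec: (X_dis, Y_dis, Z_dis);
    z: current depth; img_size: (W, H); region1/region2: (w, h) of the
    large and small alert regions centered on the image center."""
    x, y = center
    w_img, h_img = img_size
    x0, y0 = w_img / 2.0, h_img / 2.0

    def inside(region):
        w, h = region
        return abs(x - x0) <= w / 2.0 and abs(y - y0) <= h / 2.0

    in_image = 0 <= x <= w_img and 0 <= y <= h_img
    if in_image and not inside(region1):
        return z < d1                               # condition a
    if inside(region1) and vec[2] < 0 and z < d2:   # approaching and close
        if inside(region2):                         # b5: directly in front
            return True
        if x < x0 and vec[0] > 0:                   # b1: left, moving right
            return True
        if x > x0 and vec[0] < 0:                   # b2: right, moving left
            return True
        if y < y0 and vec[1] > 0:                   # b3: below, moving up
            return True
        if y > y0 and vec[1] < 0:                   # b4: above, moving down
            return True
    return False
```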
If the motion vector at the moment obstacle avoidance is triggered is V_n = (X_dis,n, Y_dis,n, Z_dis,n), an orthogonal vector V_n⊥ of the motion vector of the moving obstacle is taken as the obstacle avoidance direction of the unmanned body, i.e. V_n · V_n⊥ = 0. For example, the obstacle avoidance direction vector may be expressed as:

V_n⊥ = (-Y_dis,n, X_dis,n, 0)
alternatively, the obstacle avoidance direction may be set by a genetic algorithm, an artificial potential field method, an a-star algorithm, or the like.
After unmanned-body obstacle avoidance is triggered, the obstacle avoidance movement distance is the sum of the actual physical size of the unmanned body and max(h, w), where max(h, w) is the larger of the actual physical length and width of the moving obstacle.
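The direction and distance rules above can be sketched together. The orthogonal vector chosen below is one of many valid orthogonals to the obstacle's motion vector (its dot product with the planar part of the motion vector is zero); the function name and argument layout are assumptions.

```python
def avoidance_command(vec, obstacle_h, obstacle_w, body_size):
    """Obstacle-avoidance direction vector and movement distance after a trigger.

    vec: obstacle motion vector (X_dis, Y_dis, Z_dis); obstacle_h/obstacle_w:
    physical length and width of the obstacle; body_size: the unmanned
    body's actual physical size."""
    x_dis, y_dis, z_dis = vec
    direction = (-y_dis, x_dis, 0.0)             # orthogonal to (x_dis, y_dis, 0)
    distance = body_size + max(obstacle_h, obstacle_w)
    return direction, distance
```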
By combining a binocular event camera with the YOLOv5 target detection algorithm, the invention can complete real-time ranging and size measurement of a moving target obstacle of unknown size. When deployed on unmanned bodies such as unmanned aerial vehicles and unmanned vehicles, it can calculate the motion vector of the target obstacle from the measurement results and, through the further design of obstacle avoidance triggering conditions, obstacle avoidance direction, and movement distance, enable the unmanned body to avoid fast-moving obstacles by virtue of the event camera's high dynamic range, low delay, and freedom from motion blur.
According to the invention, through calculation of the motion vector of the moving obstacle object and setting of the two warning areas in the event image, the unnecessary excessive obstacle avoidance maneuver can be effectively avoided while the reliable obstacle avoidance is ensured.
The invention has low calculation cost, can output obstacle avoidance decisions within tens of microseconds, is suitable for a lightweight platform, and can realize autonomous obstacle avoidance by unmanned body airborne computing resources.
According to another embodiment of the present invention, an obstacle avoidance apparatus is further disclosed, which is used for an unmanned body, and as shown in fig. 5, the apparatus includes:
a binocular event camera 501 for acquiring an event image of a moving obstacle in real time;
a first calculating unit 502, configured to calculate and store coordinates of a center point of the moving obstacle in the left event camera image and the right event camera image at each time, and size information and depth information of the moving obstacle at each time;
a second calculating unit 503, configured to read coordinates of a center point of a moving obstacle, size information of the moving obstacle, and depth information in any one of the camera continuous frame event images, and calculate a motion vector of the target obstacle; and
an obstacle avoidance decision unit 504, configured to set an unmanned-body obstacle avoidance triggering condition according to the size information of the target obstacle, the motion vector, and the position of the target obstacle in the event image, and to calculate an obstacle avoidance direction and movement distance.
According to another embodiment of the present invention, an intelligent unmanned body 600 is further disclosed, as shown in fig. 6, including a body 601, a driving device 602 for driving the body to move, and a control device 603, and further including a binocular event camera 604 for collecting moving obstacle information, where the control device includes a processor 6031 and a memory 6032, the memory 6032 stores a computer program, and the processor 6031 is configured to execute the computer program to implement the obstacle avoidance method according to any one of the above schemes.
Fig. 7 shows an embodiment of the present invention, in which a thrown ball is used as a movement obstacle. After the ball enters the warning area of the event picture, when the depth is smaller than the threshold value, the algorithm gives an obstacle avoidance prompt and simultaneously outputs a moving obstacle motion vector and an unmanned body (unmanned aerial vehicle) obstacle avoidance direction vector.
The processor may be a Central Processing Unit (CPU), another general-purpose processor, a Digital Signal Processor (DSP), an Application-Specific Integrated Circuit (ASIC), a Field-Programmable Gate Array (FPGA) or other programmable logic device, a discrete gate or transistor logic device, a discrete hardware component, or a combination of the above chips.
The memory may be a transitory memory or a non-transitory memory.
Although the present invention has been described in detail through the above embodiments, it is not limited to them; modifications and equivalent substitutions may be made to the technical solutions of the embodiments without departing from the spirit and scope of the present invention.

Claims (10)

1. An obstacle avoidance method for an unmanned body including a binocular event camera, the method comprising:
acquiring an event image of a moving obstacle in real time by using a binocular event camera;
calculating and storing the coordinates of the central point of the moving obstacle in the left event camera image and the right event camera image at each moment, and the size information and the depth information of the moving obstacle at each moment;
reading the coordinates of the central point of the moving obstacle in the continuous frame event image of any camera, the size information and the depth information of the moving obstacle, and calculating the motion vector of the target obstacle; and
and setting an unmanned obstacle avoidance triggering condition according to the size information and the motion vector of the target obstacle and the position of the target obstacle in the event image, and calculating an obstacle avoidance direction and a motion distance.
2. An obstacle avoidance method according to claim 1, wherein event images of the moving obstacle are acquired in real time by using a binocular event camera, and the left event image and the right event image obtained at each time are respectively input to the neural network to obtain the vertex coordinates of the target frame of the moving obstacle at the same time in the respective event images.
3. An obstacle avoidance method according to claim 2, wherein the coordinates of the center point of the moving obstacle in each event image are calculated from the vertex coordinates of the target frame, and the length, width and depth information of the moving obstacle is calculated based on the similar-triangle principle.
4. An obstacle avoidance method according to claim 1, wherein two warning regions of different sizes are provided with the center point of the event image as the center, and the obstacle avoidance triggering condition is set according to the relative position between the center point of the moving obstacle and the two warning regions.
5. An obstacle avoidance method according to claim 4, wherein unmanned obstacle avoidance is triggered when the center point of the moving obstacle is located in the event image but outside the large warning area, and the depth is less than a first threshold value.
6. An obstacle avoidance method according to claim 4, wherein, when the center point of the moving obstacle is located in the large warning area, the depth variation value is less than 0 and the depth is less than a second threshold value, unmanned obstacle avoidance is triggered if either of the following conditions is satisfied: (i) the moving obstacle moves towards the unmanned body in either dimension of the imaging plane; (ii) the center point of the moving obstacle is located in the small warning area.
7. An obstacle avoidance method according to claim 5 or 6, wherein after the obstacle avoidance is triggered, the orthogonal vector of the motion vector of the moving obstacle is taken as the unmanned obstacle avoidance direction.
8. An obstacle avoidance method according to claim 7, wherein, after the obstacle avoidance is triggered, the obstacle avoidance movement distance is the sum of the actual physical size of the unmanned body and the larger of the length and width of the moving obstacle.
9. An obstacle avoidance device for an unmanned body, characterized in that the device comprises:
the binocular event camera is used for acquiring event images of moving obstacles in real time;
the first calculation unit is used for calculating and storing the coordinates of the central point of the moving obstacle in the left event camera image and the right event camera image at each moment, and the size information and the depth information of the moving obstacle at each moment;
the second calculation unit is used for reading the coordinates of the center point of the moving obstacle, the size information and the depth information of the moving obstacle in any camera continuous frame event image and calculating the motion vector of the target obstacle; and
and the obstacle avoidance decision unit is used for setting an unmanned obstacle avoidance triggering condition according to the size information and the motion vector of the target obstacle and the position of the target obstacle in the event image, and calculating an obstacle avoidance direction and a motion distance.
10. An intelligent unmanned body, comprising a body, a driving device for driving the body to move, and a control device, and further comprising a binocular event camera for collecting information of moving obstacles, wherein the control device comprises a processor and a memory, the memory stores a computer program, and the processor is used for executing the computer program to realize the obstacle avoidance method according to any one of claims 1 to 8.
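The geometry behind claims 3, 7 and 8 can be sketched numerically. This assumes a standard pinhole stereo model; the focal length, baseline and all sizes below are made-up illustration values, not parameters from the patent:

```python
import math

FOCAL_PX = 400.0   # assumed focal length, in pixels
BASELINE_M = 0.10  # assumed stereo baseline, in metres

def depth_from_disparity(x_left, x_right):
    """Similar-triangle stereo depth: Z = f * B / disparity (claim 3)."""
    disparity = x_left - x_right
    return FOCAL_PX * BASELINE_M / disparity

def motion_vector(c_prev, c_curr):
    """Obstacle motion vector between consecutive event-frame centre points."""
    return (c_curr[0] - c_prev[0], c_curr[1] - c_prev[1])

def avoidance_direction(v):
    """Unit vector orthogonal to the obstacle motion vector (claim 7)."""
    vx, vy = v
    norm = math.hypot(vx, vy)
    return (-vy / norm, vx / norm)

def avoidance_distance(body_size_m, obstacle_len_m, obstacle_wid_m):
    """Body size plus the larger of obstacle length and width (claim 8)."""
    return body_size_m + max(obstacle_len_m, obstacle_wid_m)

z = depth_from_disparity(210.0, 190.0)          # 400 * 0.1 / 20 = 2.0 m
d = avoidance_direction(motion_vector((100, 100), (100, 110)))
print(z, d, avoidance_distance(0.5, 0.3, 0.2))  # 2.0 (-1.0, 0.0) 0.8
```

Of the two orthogonal directions, a real controller would still have to pick the one that increases separation from the obstacle; the claims leave that choice open.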
CN202111532883.7A 2021-12-15 2021-12-15 Unmanned body obstacle avoidance method and device based on event camera and intelligent unmanned body Pending CN114359714A (en)

Priority Applications (1)

Application Number Priority Date Filing Date Title
CN202111532883.7A CN114359714A (en) 2021-12-15 2021-12-15 Unmanned body obstacle avoidance method and device based on event camera and intelligent unmanned body

Publications (1)

Publication Number Publication Date
CN114359714A true CN114359714A (en) 2022-04-15

Family

ID=81098787

Family Applications (1)

Application Number Title Priority Date Filing Date
CN202111532883.7A Pending CN114359714A (en) 2021-12-15 2021-12-15 Unmanned body obstacle avoidance method and device based on event camera and intelligent unmanned body

Country Status (1)

Country Link
CN (1) CN114359714A (en)

Cited By (6)

* Cited by examiner, † Cited by third party
Publication number Priority date Publication date Assignee Title
CN114911268A (en) * 2022-06-16 2022-08-16 山东大学 Unmanned aerial vehicle autonomous obstacle avoidance method and system based on visual simulation
CN115631407A (en) * 2022-11-10 2023-01-20 中国石油大学(华东) Underwater transparent biological detection based on event camera and color frame image fusion
CN115631407B (en) * 2022-11-10 2023-10-20 中国石油大学(华东) Underwater transparent biological detection based on fusion of event camera and color frame image
RU2785822C1 (en) * 2022-11-14 2022-12-14 Ольга Дмитриевна Миронова Way to warn about the presence of an obstacle on the way
CN115576329A (en) * 2022-11-17 2023-01-06 西北工业大学 Obstacle avoidance method of unmanned AGV (automatic guided vehicle) based on computer vision
CN115996320A (en) * 2023-03-22 2023-04-21 深圳市九天睿芯科技有限公司 Event camera adaptive threshold adjustment method, device, equipment and storage medium

Similar Documents

Publication Publication Date Title
JP7052663B2 (en) Object detection device, object detection method and computer program for object detection
CN110244322B (en) Multi-source sensor-based environmental perception system and method for pavement construction robot
CN107272021B (en) Object detection using radar and visually defined image detection areas
JP7147420B2 (en) OBJECT DETECTION DEVICE, OBJECT DETECTION METHOD AND COMPUTER PROGRAM FOR OBJECT DETECTION
CN114359714A (en) Unmanned body obstacle avoidance method and device based on event camera and intelligent unmanned body
US7103213B2 (en) Method and apparatus for classifying an object
KR102530691B1 (en) Device and method for monitoring a berthing
US11010622B2 (en) Infrastructure-free NLoS obstacle detection for autonomous cars
US20210365699A1 (en) Geometry-aware instance segmentation in stereo image capture processes
JP6574611B2 (en) Sensor system for obtaining distance information based on stereoscopic images
JP7135665B2 (en) VEHICLE CONTROL SYSTEM, VEHICLE CONTROL METHOD AND COMPUTER PROGRAM
KR102265980B1 (en) Device and method for monitoring ship and port
US10984264B2 (en) Detection and validation of objects from sequential images of a camera
CN114495064A (en) Monocular depth estimation-based vehicle surrounding obstacle early warning method
JP2021165914A (en) Object state discrimination device, object state discrimination method, and computer program and control device for object state discrimination
JP7276282B2 (en) OBJECT DETECTION DEVICE, OBJECT DETECTION METHOD AND COMPUTER PROGRAM FOR OBJECT DETECTION
CN110824495B (en) Laser radar-based drosophila visual inspired three-dimensional moving target detection method
Yoneda et al. Simultaneous state recognition for multiple traffic signals on urban road
TWI680898B (en) Light reaching detection device and method for close obstacles
US11120292B2 (en) Distance estimation device, distance estimation method, and distance estimation computer program
Baris et al. Classification and tracking of traffic scene objects with hybrid camera systems
JP4788399B2 (en) Pedestrian detection method, apparatus, and program
EP3855393B1 (en) A method for detecting moving objects
Huang et al. Rear obstacle warning for reverse driving using stereo vision techniques
Petković et al. Target detection for visual collision avoidance system

Legal Events

Date Code Title Description
PB01 Publication
SE01 Entry into force of request for substantive examination