CN117191047B - Unmanned aerial vehicle self-adaptive active visual navigation method and device in low-light environment - Google Patents


Info

Publication number: CN117191047B (application CN202311452969.8A; published as CN117191047A)
Authority: CN (China)
Other languages: Chinese (zh)
Inventors: 刘云平, 潘慧婷, 敖洋钒, 牛天宇, 王立喜, 张柄棋, 龚毅光, 臧强, 朱一辉
Assignee: Nanjing University of Information Science and Technology
Filing and priority date: 2023-11-03
Publication date of CN117191047A: 2023-12-08
Grant date of CN117191047B: 2024-02-23
Legal status: Active


Abstract

The invention discloses a self-adaptive active visual navigation method and device for an unmanned aerial vehicle in a low-light environment, comprising the following steps: screening event information output by an event camera to obtain activated pixel points; on an activation event surface, checking the time interval between the timestamp of the latest activated pixel point and the timestamps of its neighboring activated pixel points to determine whether the latest activated pixel point is an event feature point; generating a virtual frame from the acquired event feature points; calculating the potential field force exerted by each point on the virtual frame toward the virtual frame center, and combining these forces into a resultant potential field force; and calculating the characteristic deflection velocity of the unmanned aerial vehicle from the resultant potential field force, combining it with the target deflection velocity of the unmanned aerial vehicle to obtain the low-light environment velocity, which the unmanned aerial vehicle then executes. With this technical scheme, the characteristic deflection velocity is derived from the potential field forces within the virtual frame, so a path rich in event features is selected, navigation accuracy is improved, and, owing to the small amount of computation, navigation runs in real time.

Description

Unmanned aerial vehicle self-adaptive active visual navigation method and device in low-light environment
Technical Field
The invention relates to the technical field of unmanned aerial vehicle visual navigation, and in particular to a self-adaptive active visual navigation method and device for an unmanned aerial vehicle in a low-light environment.
Background
The scheme usually adopted for unmanned aerial vehicle navigation is image recognition based on an onboard camera device. However, when ambient light is weak, the images obtained by the camera device are underexposed, which reduces the number and quality of the feature points extracted from the images and impairs feature matching, thereby reducing the robustness and positioning accuracy of visual navigation.
To address these problems: (1) patent CN2020103811827 discloses a visual navigation method for micro unmanned aerial vehicles in highly dynamic scenes, in which exposure control and compensation are used to photometrically preprocess the input image, overcoming random illumination changes in highly dynamic scenes, and navigation precision is improved through global pose graph optimization; however, that invention processes the output of a conventional camera, and because the image is segmented and reprocessed by a neural network, the method has poor real-time performance. (2) Patent CN2019109920469 discloses a laser-line-assisted visual navigation and positioning method for unmanned aerial vehicles, in which a laser emitter projects regularly shaped laser lines onto a building surface; the laser lines are then observed by a binocular vision sensor carried by the unmanned aerial vehicle and used for visual navigation, thereby providing navigation information to the unmanned aerial vehicle. However, this scheme requires introducing an additional laser emitter as a visual reference, and the overall robustness of the system is relatively poor.
Therefore, the prior art offers no unmanned aerial vehicle navigation scheme for low-light environments that combines real-time performance, robustness, and accuracy.
Disclosure of Invention
The invention aims to: provide a self-adaptive active visual navigation method and device for an unmanned aerial vehicle in a low-light environment, in which a virtual frame is generated from event feature points obtained by screening and the characteristic deflection velocity is calculated from the potential field forces within the virtual frame, so that a path rich in event features can be actively selected, navigation accuracy is improved, and the small amount of computation gives high real-time performance; furthermore, the characteristic velocity is adjusted by a time weight, so the unmanned aerial vehicle adaptively matches its velocity to the distribution of event features, improving robustness.
The technical scheme is as follows: the invention discloses a self-adaptive active visual navigation method for an unmanned aerial vehicle in a low-light environment, comprising the following steps: acquiring event information output by an event camera, screening the event information, and taking the pixel points obtained by screening as activated pixel points; on an activation event surface, checking the time interval between the timestamp of the latest activated pixel point and the timestamps of its neighboring activated pixel points to determine whether the latest activated pixel point is an event feature point, the activation event surface being formed by accumulating a plurality of activated pixel points; generating a virtual frame from the acquired event feature points; calculating the potential field force exerted by each point on the virtual frame toward the virtual frame center, and combining these forces into a resultant potential field force, the potential field force being the attractive force of a point on the virtual frame toward the virtual frame center; and calculating the characteristic deflection velocity of the unmanned aerial vehicle from the resultant potential field force, combining it with the target deflection velocity of the unmanned aerial vehicle to obtain the low-light environment velocity, and having the unmanned aerial vehicle execute the low-light environment velocity.
Specifically, the brightness of a pixel point in the event information is compared with the brightness previously recorded at the same pixel position; if the brightness change reaches the contrast threshold, the pixel point in the event information is screened out as an activated pixel point.
Specifically, the activation event surface is three-dimensional. Each time an activated pixel point is obtained by screening, the latest activated pixel point is added to the activation event surface, and the x-axis, y-axis, and z-axis coordinates of the activation event surface correspond respectively to the x-axis coordinate, the y-axis coordinate, and the timestamp of the activated pixel points on the surface.
Specifically, taking the latest activated pixel point as the center, a first circumference and a second circumference on the activation event surface are obtained with the first length and the second length as radii, respectively. The latest activated pixel point is considered an event feature point only when both of the following conditions are satisfied: (1) on the first circumference there are a first standard number of consecutive activated pixel points whose timestamps are greater than the first standard timestamp; (2) on the second circumference there are a second standard number of consecutive activated pixel points whose timestamps are greater than the second standard timestamp.
Specifically, when the acquired event feature points reach the number required for virtual frame generation, one virtual frame is generated; event feature points acquired thereafter are used to generate the next virtual frame.
Specifically, the virtual frame is divided into a potential-field-free region, a weak potential field region, and a strong potential field region. A point in the potential-field-free region exerts no potential field force toward the virtual frame center; the potential field force of a point in the weak potential field region toward the virtual frame center is calculated from the included angle between the point and the virtual frame center and the distance between the point and the virtual frame center; and the potential field force of a point in the strong potential field region toward the virtual frame center is calculated from the charge quantity carried by the point and the included angle between the point and the virtual frame center.
Specifically, the potential field force of a point on the virtual frame toward the virtual frame center is calculated piecewise, wherein f_i denotes the potential field force of the point toward the virtual frame center and d denotes the distance between the point and the virtual frame center; taking the virtual frame center as the circle center, the region within radius r is the potential-field-free region, the annular region between radius r and radius r+s is the weak potential field region, and the region outside the potential-field-free region and the weak potential field region is the strong potential field region; q_i denotes the charge quantity carried by the point, and θ denotes the included angle between the point and the virtual frame center. The charge quantity q_i is calculated from θ and θ_max, the maximum included angle.
Specifically, for the historical virtual frames and the current virtual frame, the accumulation time needed for the event feature points to reach the number required for virtual frame generation is acquired respectively, and according to the comparison of these accumulation times, the time weight of the characteristic deflection velocity of the unmanned aerial vehicle is adjusted when combining the characteristic deflection velocity with the target deflection velocity.
Specifically, the low-light environment velocity V_pub is obtained by weighting and summing the two component velocities: V_g denotes the target deflection velocity of the unmanned aerial vehicle, with weight α, and V_f denotes the characteristic deflection velocity of the unmanned aerial vehicle, whose weight is obtained by substituting the time weight γ_t into the time weight limiting function ε(x).

The time weight γ_t is calculated from t_i, the accumulation time of the i-th virtual frame, and t_k, the accumulation time of the current virtual frame, where k denotes the current frame.

In the time weight limiting function ε(x), x denotes the substituted time weight value.

The characteristic deflection velocity of the unmanned aerial vehicle is calculated from the resultant potential field force, where N denotes the total number of event feature points in the virtual frame.
The invention also provides a self-adaptive active visual navigation device for an unmanned aerial vehicle in a low-light environment, comprising a screening unit, an accumulation unit, a virtual frame generation unit, a potential field force calculation unit, and a velocity synthesis unit, wherein: the screening unit is used for acquiring event information output by the event camera, screening the event information, and taking the pixel points obtained by screening as activated pixel points; the accumulation unit is used for checking, on the activation event surface, the time interval between the timestamp of the latest activated pixel point and the timestamps of its neighboring activated pixel points, and determining whether the latest activated pixel point is an event feature point, the activation event surface being formed by accumulating a plurality of activated pixel points; the virtual frame generation unit is used for generating a virtual frame from the acquired event feature points; the potential field force calculation unit is used for calculating the potential field force of each point on the virtual frame toward the virtual frame center and combining these forces to obtain the resultant potential field force, the potential field force being the attractive force of a point on the virtual frame toward the virtual frame center; and the velocity synthesis unit is used for calculating the characteristic deflection velocity of the unmanned aerial vehicle from the resultant potential field force, combining it with the target deflection velocity of the unmanned aerial vehicle to obtain the low-light environment velocity, and having the unmanned aerial vehicle execute the low-light environment velocity.
The beneficial effects are that: compared with the prior art, the invention has the following remarkable advantages: a virtual frame is generated from the event feature points obtained by screening, and the characteristic deflection velocity is calculated from the potential field forces within the virtual frame, so that a path rich in event features can be actively selected, navigation accuracy is improved, and the small amount of computation gives high real-time performance; furthermore, the characteristic velocity is adjusted by a time weight, so the unmanned aerial vehicle adaptively matches its velocity to the distribution of event features, improving robustness.
Drawings
FIG. 1 is a schematic flow chart of virtual frame synthesis provided by the invention;
FIG. 2 is a schematic flow chart of active visual navigation of the unmanned aerial vehicle provided by the invention;
FIG. 3 is a schematic diagram of an activation event surface provided by the invention;
FIG. 4 is a schematic diagram of constructing a virtual frame according to the invention;
FIG. 5 is a schematic diagram of the potential field regions provided by the invention.
Detailed Description
The technical scheme of the invention is further described below with reference to the accompanying drawings.
Referring to FIG. 1, a schematic flow chart of virtual frame synthesis provided by the invention is shown.
First, event information output by the event camera is acquired and screened, and the pixel points obtained by screening are taken as activated pixel points.
In a specific implementation, the event camera is a biologically inspired sensor whose working principle differs considerably from that of a conventional camera. Rather than acquiring a fixed number of frames per second, it acquires images through asynchronous events, outputting an asynchronous event signal whenever the brightness of a pixel changes; the signal includes information such as position, light intensity, and timestamp. Event cameras offer many advantages, such as fast response, high dynamic range, low power consumption, low latency, and high precision, and are therefore widely used in many application fields. An event (event information) output by the event camera comprises a timestamp, pixel coordinates, and polarity (the direction of the brightness change), expressing when the brightness of which pixel increased or decreased; an event is output when the brightness change of a pixel reaches a certain threshold.
In the embodiment of the invention, the brightness of a pixel point in the event information is compared with the brightness previously recorded at the same pixel position; if the brightness change reaches the contrast threshold (which can be set according to the actual application), the pixel point in the event information is screened out as an activated pixel point.
In a specific implementation, an event camera encodes brightness in the form of temporal contrast. When the brightness change relative to the last event information at the same pixel position reaches the contrast threshold, the event information at the current moment is screened out, and the pixel point is taken as an activated pixel point.
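As an illustrative sketch (not taken from the patent text; the event fields, data structure, and threshold value are assumptions for the example), the screening step can be implemented as follows:

```python
# Sketch of the screening step: an event is kept as an "activated pixel"
# when the brightness change at its pixel, relative to the last event
# recorded at the same position, reaches the contrast threshold.

CONTRAST_THRESHOLD = 0.2          # set according to the application (assumed)
last_brightness = {}              # (x, y) -> brightness of the last event there

def screen_event(event):
    """event = (x, y, timestamp, brightness); True if the pixel activates."""
    x, y, t, b = event
    prev = last_brightness.get((x, y))
    last_brightness[(x, y)] = b
    if prev is None:
        return False              # no history at this pixel yet
    return abs(b - prev) >= CONTRAST_THRESHOLD
```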
Referring to FIG. 3, a schematic diagram of the activation event surface provided by the invention is shown.
As activated pixel points are continuously obtained, they are accumulated to form the Surface of Active Events (SAE); that is, the activation event surface is dynamically updated.
In the embodiment of the invention, the activation event surface is three-dimensional. Each time an activated pixel point is obtained by screening, the latest activated pixel point is added to the activation event surface according to its x-axis coordinate, y-axis coordinate, and timestamp t, which correspond respectively to the x-axis, y-axis, and z-axis coordinates of the surface; the timestamp t represents the time corresponding to the activated pixel point.
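As a minimal sketch (the sensor resolution and array representation are assumptions), the activation event surface can be stored as a two-dimensional array that holds, for each pixel, the timestamp of its most recent activation, the timestamp playing the role of the z-axis:

```python
import numpy as np

WIDTH, HEIGHT = 346, 260                    # example sensor resolution (assumed)
sae = np.full((HEIGHT, WIDTH), -np.inf)     # -inf marks never-activated pixels

def update_sae(x, y, t):
    """Add the latest activated pixel (x, y) with timestamp t to the SAE."""
    sae[y, x] = t
```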
On the activation event surface, the time interval between the timestamp of the latest activated pixel point and the timestamps of its neighboring activated pixel points is checked to determine whether the latest activated pixel point is an event feature point; the activation event surface is formed by accumulating a plurality of activated pixel points.
In a specific implementation, comparing the timestamp of the latest (current) activated pixel point with the timestamps of its neighboring activated pixel points determines whether the region around the latest activated pixel point is undergoing frequent feature changes. If the time intervals are short, the region is changing frequently, the feature change represented by the latest activated pixel point is more reliable, the features there are richer, and the region is easier to identify, so the latest activated pixel point can be extracted as an event feature point.
In the embodiment of the invention, taking the latest activated pixel point as the center, a first circumference and a second circumference on the activation event surface are obtained with the first length and the second length as radii, respectively. The latest activated pixel point is considered an event feature point only when both of the following conditions are satisfied: (1) on the first circumference there are a first standard number of consecutive activated pixel points whose timestamps are greater than the first standard timestamp; (2) on the second circumference there are a second standard number of consecutive activated pixel points whose timestamps are greater than the second standard timestamp.
In a specific implementation, the first length, the second length, the first standard number, the second standard number, the first standard timestamp, and the second standard timestamp can be set according to the actual situation to adjust the detection sensitivity. Typically, the first length and the second length are 3 and 4 pixel lengths respectively, and the first standard number and the second standard number are 3 and 4 respectively.
In a specific implementation, the first circumference and the second circumference consist of activated pixel points on the activation event surface.
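The following sketch illustrates the two-circle test on the SAE array of the previous sketch; the circle coordinates, the recency window standing in for the standard timestamps, and the counts of 3 and 4 are assumptions based on the typical values given above:

```python
# Discrete circles of radius 3 (16 pixels) and radius 4 (24 pixels)
# around the candidate pixel; offsets are listed in circular order.
CIRCLE3 = [(0,3),(1,3),(2,2),(3,1),(3,0),(3,-1),(2,-2),(1,-3),
           (0,-3),(-1,-3),(-2,-2),(-3,-1),(-3,0),(-3,1),(-2,2),(-1,3)]
CIRCLE4 = [(0,4),(1,4),(2,4),(3,3),(4,2),(4,1),(4,0),(4,-1),(4,-2),
           (3,-3),(2,-4),(1,-4),(0,-4),(-1,-4),(-2,-4),(-3,-3),
           (-4,-2),(-4,-1),(-4,0),(-4,1),(-4,2),(-3,3),(-2,4),(-1,4)]

def has_consecutive_recent(sae, x, y, t, circle, needed, window):
    """True if `needed` consecutive circle pixels carry timestamps greater
    than t - window (the assumed stand-in for the standard timestamp).
    The caller keeps (x, y) at least 4 pixels away from the border."""
    recent = [sae[y + dy, x + dx] > t - window for dx, dy in circle]
    recent += recent[:needed - 1]            # wrap around the circle
    run = 0
    for flag in recent:
        run = run + 1 if flag else 0
        if run >= needed:
            return True
    return False

def is_event_feature(sae, x, y, t, window=0.05):
    """Both conditions must hold: 3 consecutive recent pixels on the
    radius-3 circle and 4 on the radius-4 circle."""
    return (has_consecutive_recent(sae, x, y, t, CIRCLE3, 3, window)
            and has_consecutive_recent(sae, x, y, t, CIRCLE4, 4, window))
```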
Once a number of event feature points have been acquired, a virtual frame can be generated.
In the embodiment of the invention, when the acquired event feature points reach the number required for virtual frame generation, one virtual frame is generated, and the event feature points acquired thereafter are used to generate the next virtual frame.
In a specific implementation, the virtual frame is formed by projecting a plurality of event feature points onto the same timestamp surface, that is, by filling the pixels of the plurality of event feature points onto one common surface.
In a specific implementation, the number of event feature points required to generate one virtual frame is generally fixed; each time this fixed number of event feature points has accumulated, one virtual frame is generated, after which accumulation continues and further virtual frames are generated.
As shown in FIG. 4, a point in the three-dimensional coordinate space represents an event feature point, and a point on the right-hand plane represents the projection of an event feature point onto the same timestamp t; that is, the right-hand plane represents a virtual frame.
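A minimal sketch of this accumulation, assuming a fixed per-frame count of 200 feature points (the actual number is a design parameter not specified here):

```python
FRAME_SIZE = 200          # event feature points per virtual frame (assumed)
pending = []              # feature points accumulated so far

def add_feature_point(x, y, t):
    """Collect feature points; once FRAME_SIZE is reached, project them
    onto a common timestamp plane and emit one virtual frame together
    with its accumulation time."""
    pending.append((x, y, t))
    if len(pending) < FRAME_SIZE:
        return None
    frame = [(fx, fy) for fx, fy, _ in pending]      # drop t: common plane
    accumulation_time = pending[-1][2] - pending[0][2]
    pending.clear()
    return frame, accumulation_time
```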
Referring to FIG. 2, a schematic flow chart of active visual navigation of the unmanned aerial vehicle provided by the invention is shown.
Next, the potential field force of each point on the virtual frame toward the virtual frame center is calculated, and these forces are combined to obtain the resultant potential field force.
In the embodiment of the invention, the potential field force refers to the attractive force of a point on the virtual frame toward the virtual frame center.
Referring to FIG. 5, a schematic diagram of the potential field regions provided by the invention is shown, in which the coordinate axes are the x-axis and y-axis coordinates of the event feature points.
In the embodiment of the invention, the virtual frame is divided into a potential-field-free region, a weak potential field region, and a strong potential field region. A point in the potential-field-free region exerts no potential field force toward the virtual frame center; the potential field force of a point in the weak potential field region toward the virtual frame center is calculated from the included angle between the point and the virtual frame center and the distance between the point and the virtual frame center; and the potential field force of a point in the strong potential field region toward the virtual frame center is calculated from the charge quantity carried by the point and the included angle between the point and the virtual frame center.
In the embodiment of the invention, the potential field force of a point on the virtual frame toward the virtual frame center is calculated piecewise, wherein f_i denotes the potential field force of the point toward the virtual frame center and d denotes the distance between the point and the virtual frame center; taking the virtual frame center as the circle center, the region within radius r is the potential-field-free region, the annular region between radius r and radius r+s is the weak potential field region, and the region outside the potential-field-free region and the weak potential field region is the strong potential field region; q_i denotes the charge quantity carried by the point, and θ denotes the included angle between the point and the virtual frame center. The charge quantity q_i is calculated from θ and θ_max, the maximum included angle (which can be set according to the actual application).
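The formula images of the original publication are not reproduced in this text; from the region definitions above, the piecewise structure of the potential field force can be sketched as follows, with g and h standing in for the unreproduced weak-field and strong-field expressions:

$$
f_i \;=\;
\begin{cases}
0, & d \le r \\[2pt]
g(\theta,\, d), & r < d \le r + s \\[2pt]
h(q_i,\, \theta), & d > r + s
\end{cases}
$$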
In a specific implementation, the points on the virtual frame refer to the event feature points on the virtual frame.
In a specific implementation, to ensure that the change in the movement velocity of the unmanned aerial vehicle is as gentle as possible while it still moves effectively along feature-rich paths and finally reaches the target point, the potential field force of the deflection feature formed by a feature point and the virtual frame center point is used: the included angle θ between the feature point and the velocity guiding toward the target point is calculated, and the charge quantity carried by the feature point is determined according to this angle.
In a specific implementation, combining the potential field forces to obtain the resultant potential field force means adding up the potential field forces of all event feature points in the virtual frame pointing toward the virtual frame center.
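The sketch below illustrates such a resultant force computation; the region radii, the concrete weak-field and strong-field expressions, and the charge model q_i = 1 − θ/θ_max are assumptions chosen only to match the dependencies stated above:

```python
import numpy as np

R, S = 20.0, 40.0          # region radii in pixels (assumed)
THETA_MAX = np.pi          # maximum included angle (assumed)

def potential_field_force(point, center, target_dir):
    """Attractive force of one feature point on the virtual frame center;
    `target_dir` is the unit vector toward the target point."""
    offset = np.asarray(point, float) - np.asarray(center, float)
    d = np.linalg.norm(offset)
    if d <= R:
        return np.zeros(2)                 # potential-field-free region
    direction = offset / d
    # Included angle between the feature point and the target direction.
    theta = np.arccos(np.clip(np.dot(direction, target_dir), -1.0, 1.0))
    if d <= R + S:                         # weak field: depends on theta, d
        return (1.0 - theta / THETA_MAX) * (d - R) / S * direction
    q = 1.0 - theta / THETA_MAX            # strong field: depends on q, theta
    return q * direction

def resultant_force(points, center, target_dir):
    """Sum of the forces of all event feature points in the virtual frame."""
    return sum(potential_field_force(p, center, target_dir) for p in points)
```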
Then, the characteristic deflection velocity of the unmanned aerial vehicle is calculated from the resultant potential field force and combined with the target deflection velocity of the unmanned aerial vehicle to obtain the low-light environment velocity, which the unmanned aerial vehicle executes.
In the embodiment of the invention, the low-light environment velocity V_pub is obtained by weighting and summing the two component velocities: V_g denotes the target deflection velocity of the unmanned aerial vehicle, with weight α, and V_f denotes the characteristic deflection velocity of the unmanned aerial vehicle, whose weight is obtained by substituting the time weight γ_t into the time weight limiting function ε(x).

The time weight γ_t is calculated from t_i, the accumulation time of the i-th virtual frame, and t_k, the accumulation time of the current virtual frame, where k denotes the current frame.

In the time weight limiting function ε(x), x denotes the substituted time weight value.

The characteristic deflection velocity of the unmanned aerial vehicle is calculated from the resultant potential field force, where N denotes the total number of event feature points in the virtual frame.
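The exact formula images are again not reproduced here; the variable definitions above imply the following weighted-sum structure, with the concrete forms of γ_t and ε(x) left unspecified:

$$
V_{pub} \;=\; \alpha\, V_g \;+\; \varepsilon(\gamma_t)\, V_f
$$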
In a specific implementation, the resultant potential field force is normalized and then used as the characteristic deflection velocity of the unmanned aerial vehicle, so that a path rich in event features can be actively selected and navigation accuracy improved.
In a specific implementation, the time from when a virtual frame starts acquiring event feature points until the number acquired reaches the number required for virtual frame generation is called the accumulation time of that virtual frame.
In a specific implementation, since the accumulation time differs from frame to frame, virtual frames are not output at a fixed frequency, so a method that releases the characteristic deflection velocity based only on the spatial distribution of image feature points is unsuitable for an event camera. To ensure that the unmanned aerial vehicle gives more weight to the characteristic deflection velocity in feature-rich, rapidly changing scenes, the time weight factor γ_t is added to adjust the weight dynamically, so that the influence of the features is considered more strongly when the unmanned aerial vehicle flies through regions of rapid feature change, which improves the robustness of the event-camera-based visual navigation system.
In a specific implementation, the time weight factor γ_t compares the accumulation time of the current virtual frame with the average accumulation time of the preceding virtual frames and adaptively adjusts the characteristic velocity weight accordingly.
In a specific embodiment, to keep the velocity stable during navigation of the unmanned aerial vehicle, the limiting function ε(x) is introduced to bound the change of the time weight factor.
In a specific implementation, the target deflection velocity of the unmanned aerial vehicle refers to the component of its velocity currently pointing toward the target (the actual flight target).
In a specific implementation, the unmanned aerial vehicle executes the calculated low-light environment velocity; when the next virtual frame has finished accumulating, a new low-light environment velocity is calculated and executed by the unmanned aerial vehicle, so the process is dynamic and real-time.
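A minimal sketch of the whole velocity synthesis under stated assumptions: γ_t is taken as the ratio of the mean accumulation time of past frames to that of the current frame, and ε(x) as a simple clamp, since the patent text reproduced here fixes only the quantities involved, not their exact forms:

```python
import numpy as np

ALPHA = 0.6               # weight of the target deflection velocity (assumed)

def time_weight(prev_times, t_k):
    """gamma_t: mean accumulation time of past frames over the current one,
    so rapidly accumulating (feature-rich) frames get a larger weight."""
    return float(np.mean(prev_times)) / t_k if prev_times else 1.0

def eps(x, lo=0.1, hi=1.0):
    """Limiting function bounding the time weight change (assumed clamp)."""
    return min(max(x, lo), hi)

def low_light_velocity(v_g, v_f, prev_times, t_k):
    """V_pub = alpha * V_g + eps(gamma_t) * V_f, the structure implied by
    the variable definitions in the description."""
    gamma_t = time_weight(prev_times, t_k)
    return ALPHA * np.asarray(v_g, float) + eps(gamma_t) * np.asarray(v_f, float)
```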
The invention also provides a self-adaptive active visual navigation device for an unmanned aerial vehicle in a low-light environment, comprising a screening unit, an accumulation unit, a virtual frame generation unit, a potential field force calculation unit, and a velocity synthesis unit, wherein: the screening unit is used for acquiring event information output by the event camera, screening the event information, and taking the pixel points obtained by screening as activated pixel points; the accumulation unit is used for checking, on the activation event surface, the time interval between the timestamp of the latest activated pixel point and the timestamps of its neighboring activated pixel points, and determining whether the latest activated pixel point is an event feature point, the activation event surface being formed by accumulating a plurality of activated pixel points; the virtual frame generation unit is used for generating a virtual frame from the acquired event feature points; the potential field force calculation unit is used for calculating the potential field force of each point on the virtual frame toward the virtual frame center and combining these forces to obtain the resultant potential field force, the potential field force being the attractive force of a point on the virtual frame toward the virtual frame center; and the velocity synthesis unit is used for calculating the characteristic deflection velocity of the unmanned aerial vehicle from the resultant potential field force, combining it with the target deflection velocity of the unmanned aerial vehicle to obtain the low-light environment velocity, and having the unmanned aerial vehicle execute the low-light environment velocity.
In a specific implementation, for the functions and effects of each unit in the unmanned aerial vehicle self-adaptive active visual navigation device in a low-light environment provided by the embodiment of the invention, reference may be made to the steps, processes, and calculation methods in the unmanned aerial vehicle self-adaptive active visual navigation method in a low-light environment provided by the embodiment of the invention.

Claims (7)

1. An unmanned aerial vehicle self-adaptive active visual navigation method in a low-light environment, characterized by comprising the following steps:
acquiring event information output by an event camera, screening the event information, and taking the pixel points obtained by screening as activated pixel points;
on an activation event surface, checking the time interval between the timestamp of the latest activated pixel point and the timestamps of its neighboring activated pixel points, and determining whether the latest activated pixel point is an event feature point, the activation event surface being formed by accumulating a plurality of activated pixel points;
generating a virtual frame from the acquired event feature points;
calculating the potential field force of each point on the virtual frame toward the virtual frame center, and combining these forces to obtain a resultant potential field force, the potential field force being the attractive force of a point on the virtual frame toward the virtual frame center, wherein the virtual frame is divided into a potential-field-free region, a weak potential field region, and a strong potential field region; a point in the potential-field-free region exerts no potential field force toward the virtual frame center; the potential field force of a point in the weak potential field region toward the virtual frame center is calculated from the included angle between the point and the virtual frame center and the distance between the point and the virtual frame center; the potential field force of a point in the strong potential field region toward the virtual frame center is calculated from the charge quantity carried by the point and the included angle between the point and the virtual frame center; the potential field force of a point on the virtual frame toward the virtual frame center is calculated piecewise, wherein f_i denotes the potential field force of the point toward the virtual frame center, d denotes the distance between the point and the virtual frame center, the region within radius r of the virtual frame center is the potential-field-free region, the annular region between radius r and radius r+s around the virtual frame center is the weak potential field region, the region outside the potential-field-free region and the weak potential field region is the strong potential field region, q_i denotes the charge quantity carried by the point, and θ denotes the included angle between the point and the virtual frame center; and the charge quantity q_i is calculated from θ and θ_max, the maximum included angle; and
calculating the characteristic deflection velocity of the unmanned aerial vehicle from the resultant potential field force, combining it with the target deflection velocity of the unmanned aerial vehicle to obtain a low-light environment velocity, and having the unmanned aerial vehicle execute the low-light environment velocity, wherein the low-light environment velocity V_pub is obtained by weighting and summing the two component velocities: V_g denotes the target deflection velocity of the unmanned aerial vehicle, with weight α, and V_f denotes the characteristic deflection velocity of the unmanned aerial vehicle, whose weight is obtained by substituting the time weight γ_t into the time weight limiting function ε(x); the time weight γ_t is calculated from t_i, the accumulation time of the i-th virtual frame, and t_k, the accumulation time of the current virtual frame, k denoting the current frame; x denotes the time weight value substituted into the limiting function ε(x); and the characteristic deflection velocity of the unmanned aerial vehicle is calculated from the resultant potential field force, N denoting the total number of event feature points in the virtual frame.
2. The unmanned aerial vehicle self-adaptive active visual navigation method in a low-light environment according to claim 1, wherein screening the event information and taking the pixel points obtained by screening as activated pixel points comprises:
comparing the brightness of a pixel point in the event information with the brightness previously recorded at the same pixel position, and screening out the pixel point in the event information as an activated pixel point if the brightness change reaches a contrast threshold.
3. The unmanned aerial vehicle self-adaptive active visual navigation method in a low-light environment according to claim 1, wherein the activation event surface is three-dimensional; each time an activated pixel point is obtained by screening, the latest activated pixel point is added to the activation event surface, and the x-axis, y-axis, and z-axis coordinates of the activation event surface correspond respectively to the x-axis coordinate, the y-axis coordinate, and the timestamp of the activated pixel points on the surface.
4. The unmanned aerial vehicle self-adaptive active visual navigation method in a low-light environment according to claim 3, wherein checking, on the activation event surface, the time interval between the timestamp of the latest activated pixel point and the timestamps of its neighboring activated pixel points to determine whether the latest activated pixel point is an event feature point comprises:
taking the latest activated pixel point as the center, obtaining a first circumference and a second circumference on the activation event surface with the first length and the second length as radii, respectively;
considering the latest activated pixel point an event feature point only when both of the following conditions are satisfied: (1) on the first circumference there are a first standard number of consecutive activated pixel points whose timestamps are greater than the first standard timestamp; (2) on the second circumference there are a second standard number of consecutive activated pixel points whose timestamps are greater than the second standard timestamp.
5. The unmanned aerial vehicle self-adaptive active visual navigation method in a low-light environment according to claim 1, wherein generating a virtual frame from the acquired event feature points comprises:
when the acquired event feature points reach the number required for virtual frame generation, generating one virtual frame, and then using subsequently acquired event feature points to generate the next virtual frame.
6. The unmanned aerial vehicle self-adaptive active visual navigation method in a low-light environment according to claim 1, wherein calculating the characteristic deflection velocity of the unmanned aerial vehicle from the resultant potential field force comprises:
for the historical virtual frames and the current virtual frame, respectively acquiring the accumulation time needed for the event feature points to reach the number required for virtual frame generation, and, according to the comparison of these accumulation times, adjusting the time weight of the characteristic deflection velocity of the unmanned aerial vehicle when combining the characteristic deflection velocity with the target deflection velocity of the unmanned aerial vehicle.
7. An unmanned aerial vehicle self-adaptive active visual navigation device in a low-light environment, characterized by comprising a screening unit, an accumulation unit, a virtual frame generation unit, a potential field force calculation unit, and a velocity synthesis unit, wherein:
the screening unit is used for acquiring event information output by an event camera, screening the event information, and taking the pixel points obtained by screening as activated pixel points;
the accumulation unit is used for checking, on the activation event surface, the time interval between the timestamp of the latest activated pixel point and the timestamps of its neighboring activated pixel points, and determining whether the latest activated pixel point is an event feature point, the activation event surface being formed by accumulating a plurality of activated pixel points;
the virtual frame generation unit is used for generating a virtual frame from the acquired event characteristic points;
the potential field force calculation unit is used for calculating the potential field force of each point on the virtual frame toward the virtual frame center and combining these forces to obtain a resultant potential field force, the potential field force being the attractive force of a point on the virtual frame toward the virtual frame center, wherein the virtual frame is divided into a potential-field-free region, a weak potential field region, and a strong potential field region; a point in the potential-field-free region exerts no potential field force toward the virtual frame center; the potential field force of a point in the weak potential field region toward the virtual frame center is calculated from the included angle between the point and the virtual frame center and the distance between the point and the virtual frame center; the potential field force of a point in the strong potential field region toward the virtual frame center is calculated from the charge quantity carried by the point and the included angle between the point and the virtual frame center; the potential field force of a point on the virtual frame toward the virtual frame center is calculated piecewise, wherein f_i denotes the potential field force of the point toward the virtual frame center, d denotes the distance between the point and the virtual frame center, the region within radius r of the virtual frame center is the potential-field-free region, the annular region between radius r and radius r+s around the virtual frame center is the weak potential field region, the region outside the potential-field-free region and the weak potential field region is the strong potential field region, q_i denotes the charge quantity carried by the point, and θ denotes the included angle between the point and the virtual frame center; and the charge quantity q_i is calculated from θ and θ_max, the maximum included angle; and
the velocity synthesis unit is used for calculating the characteristic deflection velocity of the unmanned aerial vehicle from the resultant potential field force, combining it with the target deflection velocity of the unmanned aerial vehicle to obtain a low-light environment velocity, and having the unmanned aerial vehicle execute the low-light environment velocity, wherein the low-light environment velocity V_pub is obtained by weighting and summing the two component velocities: V_g denotes the target deflection velocity of the unmanned aerial vehicle, with weight α, and V_f denotes the characteristic deflection velocity of the unmanned aerial vehicle, whose weight is obtained by substituting the time weight γ_t into the time weight limiting function ε(x); the time weight γ_t is calculated from t_i, the accumulation time of the i-th virtual frame, and t_k, the accumulation time of the current virtual frame, k denoting the current frame; x denotes the time weight value substituted into the limiting function ε(x); and the characteristic deflection velocity of the unmanned aerial vehicle is calculated from the resultant potential field force, N denoting the total number of event feature points in the virtual frame.


