CN112489032A - Unmanned aerial vehicle-mounted small target detection and positioning method and system under complex background - Google Patents
- Publication number
- CN112489032A (application number CN202011465483.4A)
- Authority
- CN
- China
- Prior art keywords
- unmanned aerial
- aerial vehicle
- target
- small
- video image
- Prior art date
- Legal status (The legal status is an assumption and is not a legal conclusion. Google has not performed a legal analysis and makes no representation as to the accuracy of the status listed.)
- Pending
Classifications
All classifications fall under G—PHYSICS; G06—COMPUTING; CALCULATING OR COUNTING; G06T—IMAGE DATA PROCESSING OR GENERATION, IN GENERAL:
- G06T7/0002—Inspection of images, e.g. flaw detection (under G06T7/00—Image analysis)
- G06T5/73—Deblurring; Sharpening (under G06T5/00—Image enhancement or restoration)
- G06T7/10—Segmentation; Edge detection
- G06T7/70—Determining position or orientation of objects or cameras
- G06T2207/10016—Video; Image sequence (under G06T2207/10—Image acquisition modality)
- G06T2207/20081—Training; Learning (under G06T2207/20—Special algorithmic details)
- G06T2207/20092—Interactive image processing based on input by user
- G06T2207/20104—Interactive definition of region of interest [ROI]
- G06T2207/20221—Image fusion; Image merging (under G06T2207/20212—Image combination)
Abstract
The invention discloses a method and a system for detecting and positioning a small unmanned aerial vehicle-mounted target under a complex background. The method comprises the following steps: acquiring a video image of a small target together with attitude data of the unmanned aerial vehicle and its gimbal; preprocessing the acquired video image by a multichannel optimal adaptive guided filtering defogging algorithm; detecting the preprocessed video image by a finite-pixel-target space-semantic information fusion detection algorithm to obtain the pixel position of the small target; mapping the pixel position of the small target in the video image to the geographic coordinates of the unmanned aerial vehicle by a spatial multi-parameter pixel mapping geographic coordinate positioning algorithm to obtain the longitude, latitude and height of the small target in a geographic coordinate system; and displaying, in real time, the small target in the airborne video image together with its longitude, latitude and height in the geographic coordinate system. The invention enables accurate, real-time, online detection and positioning of small unmanned aerial vehicle-mounted targets under complex backgrounds.
Description
Technical Field
The invention relates to the technical field of unmanned aerial vehicles, in particular to a method and a system for detecting and positioning small targets carried by an unmanned aerial vehicle under a complex background.
Background
Small unmanned aerial vehicles are inexpensive to manufacture and convenient to use, making them ideal aerial platforms for tasks such as ground target search and monitoring. However, existing unmanned aerial vehicle positioning technology still has many shortcomings: under a complex background it is difficult to separate the target from the background, which lowers positioning accuracy; and a distant target images poorly and occupies only a few pixels in the image, so it is difficult to identify accurately and cannot be reliably detected and positioned.
On the algorithm side, existing positioning algorithms are complex, which seriously limits the detection efficiency of the positioning system. On the hardware side, the payload of an unmanned aerial vehicle is limited, while the detection equipment and hardware it must carry are heavy and bulky. Designing a method and a system for detecting and positioning small unmanned aerial vehicle-mounted targets under complex backgrounds, so as to improve both the accuracy and the real-time performance of detection and positioning, is therefore an urgent technical problem.
Disclosure of Invention
The invention aims to provide a method and a system for detecting and positioning a small unmanned aerial vehicle-mounted target under a complex background, addressing two problems of the prior art: under a complex background it is difficult to separate the target from the background, which lowers positioning accuracy; and a distant target images poorly and occupies only a few pixels in the image, so it is difficult to identify accurately and cannot be precisely detected and positioned.
To solve the above technical problem, an embodiment of the present invention provides the following solutions:
In one aspect, a method for detecting and positioning a small unmanned aerial vehicle-mounted target under a complex background comprises the following steps:
acquiring a video image of a small target and attitude data of the unmanned aerial vehicle and its gimbal;
preprocessing the acquired video image by a multichannel optimal adaptive guided filtering defogging algorithm;
detecting the preprocessed video image by using a finite pixel target space-semantic information fusion detection algorithm to obtain the pixel position of a small target;
performing associated mapping on the pixel positions of the small targets in the video images and the geographic coordinates of the unmanned aerial vehicle by using a spatial multi-parameter pixel mapping geographic coordinate positioning algorithm to obtain the longitude, latitude and height of the small targets in a geographic coordinate system;
and displaying the small target in the unmanned aerial vehicle-mounted video image and the longitude and latitude and the height of the small target under the geographic coordinate system in real time.
Preferably, preprocessing the acquired video image by the multichannel optimal adaptive guided filtering defogging algorithm specifically includes:
carrying out channel segmentation on the acquired video image into a dark channel image, a bright channel image and an RGB three-channel image;
estimating a local atmospheric value according to the dark channel image and the bright channel image;
carrying out weighted guided filtering on the RGB three-channel image, and then estimating local transmittance;
and obtaining a defogged image according to the estimated local atmospheric value and local transmissivity by combining a fog interference model.
Preferably, the detecting the preprocessed image by using the finite pixel target space-semantic information fusion detection algorithm specifically includes:
constructing a space-semantic feature extraction network, extracting multilayer space feature information by pooling and normalizing the region of interest, and extracting semantic feature information by cyclic convolution;
and performing regression classification on the bounding box of the target through two fully-connected layers to obtain a detection result of the video image.
Preferably, the mapping of the pixel position of the small target in the video image and the geographic coordinate of the unmanned aerial vehicle by using the spatial multi-parameter pixel mapping geographic coordinate positioning algorithm specifically includes:
obtaining the pixel position of the small target in the video image according to the detection result of the video image;
using the height of the unmanned aerial vehicle at the ground position as the geodetic height, mapping the pixel position of the small target in the video image to the geographic coordinates of the unmanned aerial vehicle, and solving the longitude and latitude of the small target in the geographic coordinate system;
and determining the geodetic height of the small target according to the calculated longitude and latitude, comparing the obtained geodetic height of the small target with the height of the unmanned aerial vehicle at the ground position, correcting, and determining the longitude and latitude and the height of the small target.
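The height-correction loop described above can be sketched as follows. Here `locate_fn` (which solves the target's latitude and longitude for an assumed ground height) and `dem_fn` (a digital-elevation lookup) are hypothetical stand-ins for the patent's positioning and elevation components, and the convergence threshold is an arbitrary choice:

```python
def refine_height(locate_fn, dem_fn, h0, eps=0.5, max_iter=20):
    """Iteratively refine the target's geodetic height: locate the target
    assuming ground height h, look up the terrain height at the solved
    latitude/longitude, and repeat until the height stabilizes."""
    h = h0
    lat = lon = None
    for _ in range(max_iter):
        lat, lon = locate_fn(h)      # position assuming ground height h
        h_new = dem_fn(lat, lon)     # terrain height at that position
        if abs(h_new - h) < eps:
            return lat, lon, h_new
        h = h_new
    return lat, lon, h  # did not converge within max_iter; best estimate
```

In the detailed embodiment later in the description, `locate_fn` would be the pixel-to-geographic mapping and `dem_fn` the extracted digital elevation information of the target area.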
In one aspect, a system for detecting and positioning a small unmanned aerial vehicle-mounted target under a complex background is provided, which includes:
the target vision enhancement subsystem is used for acquiring a video image of a small target and attitude data of the unmanned aerial vehicle and its gimbal, and for preprocessing the acquired video image by a multichannel optimal adaptive guided filtering defogging algorithm;
the deep learning airborne detection positioning subsystem is used for detecting the preprocessed video image by utilizing a limited pixel target space-semantic information fusion detection algorithm to obtain the pixel position of a small target; the pixel position of the small target in the video image is associated with the geographic coordinate of the unmanned aerial vehicle by utilizing a spatial multi-parameter pixel mapping geographic coordinate positioning algorithm, so that the longitude, the latitude and the height of the small target in a geographic coordinate system are obtained;
and the data return and ground station subsystem is used for transmitting and displaying the small target in the unmanned aerial vehicle video image and the longitude and latitude and the height of the small target under the geographic coordinate system in real time.
Preferably, the target vision enhancement subsystem is specifically used for:
carrying out channel segmentation on the acquired video image into a dark channel image, a bright channel image and an RGB three-channel image;
estimating a local atmospheric value according to the dark channel image and the bright channel image;
carrying out weighted guided filtering on the RGB three-channel image, and then estimating local transmittance;
and obtaining a defogged image according to the estimated local atmospheric value and local transmissivity by combining a fog interference model.
Preferably, the deep learning airborne detection positioning subsystem is specifically configured to:
constructing a space-semantic feature extraction network, extracting multilayer space feature information by pooling and normalizing the region of interest, and extracting semantic feature information by cyclic convolution;
and performing regression classification on the bounding box of the target through two fully-connected layers to obtain a detection result of the video image.
Preferably, the deep learning airborne detection positioning subsystem is further specifically configured to:
obtaining the pixel position of the small target in the video image according to the detection result of the video image;
using the height of the unmanned aerial vehicle at the ground position as the geodetic height, mapping the pixel position of the small target in the video image to the geographic coordinates of the unmanned aerial vehicle, and solving the longitude and latitude of the small target in the geographic coordinate system;
and determining the geodetic height of the small target according to the calculated longitude and latitude, comparing the obtained geodetic height of the small target with the height of the unmanned aerial vehicle at the ground position, correcting, and determining the longitude and latitude and the height of the small target.
Preferably, the detection positioning system comprises a camera, a gimbal, an inertial measurement unit, a complex background image processing and spatial attitude calculation coprocessing unit, and a high-performance computing unit;
the complex background image processing and spatial attitude calculation coprocessing unit comprises an I/O module, a clock control circuit, a JTAG controller and a basic programmable logic unit;
the camera, the cradle head and the inertia measurement unit are connected with the I/O module, the I/O module is connected with the clock control circuit, the JTAG controller and the basic programmable logic unit, the clock control circuit and the JTAG controller are connected with the basic programmable logic unit, the basic programmable logic unit is connected with the high-performance computing unit, and the high-performance computing unit is connected with the data return and ground station subsystem.
Preferably, the inertial measurement unit includes a three-axis gyroscope, a three-axis accelerometer, a three-axis geomagnetic sensor, a barometer, and a GPS module.
The technical scheme provided by the embodiment of the invention has the beneficial effects that at least:
aiming at the characteristic that the detection effect of the limited pixel small target is reduced due to the local detail loss of the traditional defogging algorithm under the complex background, the invention provides a multichannel optimal selection adaptive guided filtering defogging algorithm, removes the stray noise and retains the integral characteristic information of the limited pixel small target; aiming at the characteristics that the small targets of the unmanned airborne video occupy few pixels and the targets are difficult to separate from the background, a limited pixel target space-semantic fusion detection algorithm is provided, and effective feature extraction and detection of the small targets under the complex background are realized; aiming at the characteristics that the central position of a small target is difficult to measure and the target positioning precision and real-time performance are poor, a spatial multi-parameter pixel mapping geographic coordinate positioning algorithm is provided, the pixel position of the small target in a video image is associated and mapped with the geographic coordinate of the unmanned aerial vehicle, and the longitude and latitude and the height of the small target under a geographic coordinate system are obtained.
Drawings
In order to more clearly illustrate the technical solutions in the embodiments of the present invention, the drawings needed to be used in the description of the embodiments will be briefly introduced below, and it is obvious that the drawings in the following description are only some embodiments of the present invention, and it is obvious for those skilled in the art to obtain other drawings based on these drawings without creative efforts.
Fig. 1 is a flowchart of a method for detecting and positioning an unmanned aerial vehicle-mounted small target under a complex background according to an embodiment of the present invention;
FIG. 2 is a flow chart of a multi-pass adaptive guided filtering defogging algorithm provided by an embodiment of the present invention;
FIG. 3 is a network structure diagram of a finite pixel target space-semantic fusion information detection algorithm provided by an embodiment of the present invention;
FIG. 4 is a flowchart of a spatial multi-parameter pixel mapping geographic coordinate location algorithm provided by an embodiment of the present invention;
fig. 5 is a schematic structural diagram of a small target detection and positioning system on board an unmanned aerial vehicle in a complex background according to an embodiment of the present invention;
fig. 6 is a specific structural diagram of an unmanned aerial vehicle-mounted small target detection positioning system according to an embodiment of the present invention.
Detailed Description
In order to make the objects, technical solutions and advantages of the present invention more apparent, embodiments of the present invention will be described in detail with reference to the accompanying drawings.
The embodiment of the invention provides a method for detecting and positioning an unmanned aerial vehicle-mounted small target under a complex background, as shown in fig. 1, the method comprises the following steps:
acquiring a video image of a small target and attitude data of the unmanned aerial vehicle and its gimbal;
preprocessing the acquired video image by a multichannel optimal adaptive guided filtering defogging algorithm;
detecting the preprocessed video image by using a finite pixel target space-semantic information fusion detection algorithm to obtain the pixel position of a small target;
performing associated mapping of the pixel position of the small target in the video image with the geographic coordinates of the unmanned aerial vehicle by using a spatial multi-parameter pixel mapping geographic coordinate positioning algorithm to obtain the longitude, latitude and height of the small target in the geographic coordinate system (WGS-84);
and displaying the small target in the unmanned aerial vehicle-mounted video image and the longitude and latitude and the height of the small target under the geographic coordinate system in real time.
To address the loss of local detail in traditional defogging algorithms, which degrades the detection of limited-pixel small targets under complex backgrounds, the invention provides a multichannel optimal adaptive guided filtering defogging algorithm that removes stray noise while retaining the overall feature information of the limited-pixel small target. To address the facts that small targets in unmanned airborne video occupy few pixels and are difficult to separate from the background, a finite-pixel-target space-semantic fusion detection algorithm is provided, realizing effective feature extraction and detection of small targets under complex backgrounds. To address the difficulty of measuring the center position of a small target and the resulting poor positioning accuracy and real-time performance, a spatial multi-parameter pixel mapping geographic coordinate positioning algorithm is provided, which maps the pixel position of the small target in the video image to the geographic coordinates of the unmanned aerial vehicle and obtains the longitude, latitude and height of the small target in a geographic coordinate system.
Further, as shown in fig. 2, preprocessing the acquired video image by the multichannel optimal adaptive guided filtering defogging algorithm specifically includes:
carrying out channel segmentation on the acquired video image into a dark channel image, a bright channel image and an RGB three-channel image;
estimating a local atmospheric value according to the dark channel image and the bright channel image;
carrying out weighted guided filtering on the RGB three-channel image, and then estimating local transmittance;
and obtaining a defogged image according to the estimated local atmospheric value and local transmissivity by combining a fog interference model.
In different scenarios, the detection algorithm is subject to different kinds of interference: in complex urban backgrounds, man-made structures such as buildings interfere with detection; in mountain environments, vegetation does. To cope with such complex backgrounds, the invention constructs small-target data sets covering highways, cities, mountains, villages and other backgrounds, and trains the detection network on them, strengthening the generalization capability of the unmanned aerial vehicle-mounted small target detection and positioning model under complex backgrounds.
The unmanned aerial vehicle is easily interfered by aerial fog during flying, and the detection effect of the limited pixel small target is reduced due to local detail loss of the traditional defogging algorithm under a complex background.
Specifically, the invention utilizes the classical fog model and combines prior knowledge to solve for the atmospheric light component and the transmittance, estimating the transmittance from the haze imaging model and the dark channel prior theory of computer vision. The haze imaging model is:
I(x)=J(x)t(x)+A(1-t(x))
where I(x) is the observed hazy image, J(x) is the fog-free scene radiance, A is the atmospheric light, and t(x) is the medium transmittance. Dividing each color channel c by the corresponding atmospheric light component rewrites the model as:
I^c(x)/A^c = t(x)·J^c(x)/A^c + 1 - t(x)
According to the dark channel prior theory, the dark channel of the fog-free image J tends to zero over local patches, so taking the minimum over a local patch Ω(x) and over the color channels yields the transmittance estimate:
t(x) = 1 - ω·min_{y∈Ω(x)} min_c ( I^c(y)/A^c )
where the constant ω (typically about 0.95) retains a trace of haze so that the scene keeps its sense of depth.
The derivation above assumes that the atmospheric light A is known; in practice it must be estimated. The traditional algorithm takes the brightest pixels of the dark channel and uses the corresponding pixel values of the hazy input image as the atmospheric light value. However, if the background area of the image is too bright or contains a highlight object, the atmospheric light estimate approaches 255, which causes color cast or mottling in the defogged image. The invention therefore provides a multichannel optimal guided filtering algorithm that estimates the atmospheric light from the bright channel and the dark channel separately and combines them into the final atmospheric light used to solve for the fog-free image, thereby enhancing the structure and detail information of the target.
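A minimal Python/NumPy sketch of this defogging scheme is given below. It follows the dark channel prior, with the atmospheric light averaged from a dark-channel estimate and a bright-channel estimate as in the text, but it substitutes a plain local-minimum filter for the patent's weighted guided filtering of the transmittance; function names, window sizes and constants are illustrative assumptions, not the patented algorithm:

```python
import numpy as np

def box_min(img, k=7):
    """Local minimum of a 2-D float array over a k x k window (edge-padded)."""
    pad = k // 2
    p = np.pad(img, pad, mode="edge")
    h, w = img.shape
    out = np.full_like(img, np.inf)
    for dy in range(k):
        for dx in range(k):
            out = np.minimum(out, p[dy:dy + h, dx:dx + w])
    return out

def defog(hazy, omega=0.95, t_min=0.1, k=7):
    """Dark-channel-prior defogging sketch for a float RGB image in [0, 1].
    Atmospheric light A is averaged from bright- and dark-channel estimates
    (a stand-in for the patent's multichannel optimal selection); the
    transmittance uses a plain local-minimum filter instead of the patent's
    weighted guided filtering."""
    dark = box_min(hazy.min(axis=2), k)        # dark channel
    bright = -box_min(-hazy.max(axis=2), k)    # bright channel (local max)
    n = max(1, dark.size // 1000)              # brightest 0.1% of pixels
    flat = hazy.reshape(-1, 3)
    a_dark = flat[np.argsort(dark.ravel())[-n:]].mean(axis=0)
    a_bright = flat[np.argsort(bright.ravel())[-n:]].mean(axis=0)
    A = (a_dark + a_bright) / 2.0              # combined atmospheric light
    # t = 1 - omega * dark channel of the A-normalized hazy image
    t = 1.0 - omega * box_min((hazy / A).min(axis=2), k)
    t = np.clip(t, t_min, 1.0)
    J = (hazy - A) / t[..., None] + A          # invert I = J t + A (1 - t)
    return np.clip(J, 0.0, 1.0)
```

On a synthetic hazy image generated from the fog model with known J, A and t, the recovered image is substantially closer to the true scene than the hazy input.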
Further, as shown in fig. 3, the detecting the preprocessed image by using the finite pixel target space-semantic information fusion detection algorithm specifically includes:
constructing a space-semantic feature extraction network, extracting multilayer space feature information by pooling and normalizing the region of interest, and extracting semantic feature information by cyclic convolution;
and performing regression classification on the bounding box of the target through two fully-connected layers to obtain a detection result of the video image.
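The spatial branch described above pools and normalizes each region of interest to a fixed-size grid, so that a target occupying any number of pixels yields a fixed-length spatial feature. A single-channel NumPy sketch of such ROI pooling follows (the 4 × 4 output size and the function name are illustrative assumptions; the cyclic-convolution semantic branch and the fully-connected regression head are not reproduced):

```python
import numpy as np

def roi_pool(feat, roi, out_size=4):
    """Adaptive max pooling of a region of interest to a fixed grid.
    feat: 2-D feature map; roi: (x0, y0, x1, y1) in feature-map coords.
    Regions smaller than the grid are effectively replicated, so even a
    limited-pixel target produces a full out_size x out_size feature."""
    x0, y0, x1, y1 = roi
    region = feat[y0:y1, x0:x1]
    h, w = region.shape
    ys = np.linspace(0, h, out_size + 1).astype(int)
    xs = np.linspace(0, w, out_size + 1).astype(int)
    out = np.empty((out_size, out_size))
    for i in range(out_size):
        for j in range(out_size):
            cell = region[ys[i]:max(ys[i + 1], ys[i] + 1),
                          xs[j]:max(xs[j + 1], xs[j] + 1)]
            out[i, j] = cell.max()
    return out
```

In a full network this pooled grid would be computed per channel and concatenated with the semantic features before the two fully-connected layers.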
Typically, a target whose size is below 4 × 4 pixels can be considered a small target. Its imaging area is small, and because of long-range imaging by the unmanned aerial vehicle and the small geometric size of the target, the target occupies a tiny fraction of the field of view, with a pixel proportion below 0.1 percent of the whole image. For images of such small targets, the invention provides a finite-pixel-target space-semantic information fusion detection algorithm that fuses the spatial information and the semantic information of the target, enhances its feature expression, and enables feature extraction of small targets under complex backgrounds, thereby achieving accurate detection of small targets under complex backgrounds.
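For illustration, the two numeric criteria just quoted, a bounding box under 4 × 4 pixels or a pixel proportion below 0.1 percent of the frame, can be wrapped in a small helper; the function name and the use of a logical OR to combine the criteria are our assumptions:

```python
def is_small_target(box_w, box_h, img_w, img_h,
                    max_side=4, max_ratio=0.001):
    """Small-target test using the thresholds quoted in the text:
    bounding box under max_side x max_side pixels, or pixel area below
    max_ratio (0.1%) of the whole frame."""
    tiny = box_w < max_side and box_h < max_side
    sparse = (box_w * box_h) / float(img_w * img_h) < max_ratio
    return tiny or sparse
```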
Further, the mapping of the pixel positions of the small targets in the video image and the geographic coordinates of the unmanned aerial vehicle by using the spatial multi-parameter pixel mapping geographic coordinate positioning algorithm specifically comprises:
obtaining the pixel position of the small target in the video image according to the detection result of the video image;
using the height of the unmanned aerial vehicle at the ground position as the geodetic height, mapping the pixel position of the small target in the video image to the geographic coordinates of the unmanned aerial vehicle, and solving the longitude and latitude of the small target in the geographic coordinate system;
and determining the geodetic height of the small target according to the calculated longitude and latitude, comparing the obtained geodetic height of the small target with the height of the unmanned aerial vehicle at the ground position, correcting, and determining the longitude and latitude and the height of the small target.
Specifically, as shown in fig. 4, the pixel position of the small target in the video image is obtained from the detection result; this is converted, using the camera resolution and pixel size, into the position of the small target in the physical coordinate system of the video image. The position of the small target in the camera coordinate system is then obtained from the focal length of the camera, and its position in the geographic coordinate system of the unmanned aerial vehicle is obtained by combining the pitch, yaw and roll angles of the gimbal. In flight, the unmanned aerial vehicle obtains the pitch, yaw and roll angles of its body in real time, from which its pose with respect to the camera coordinate system can be determined. The longitude, latitude and height of the unmanned aerial vehicle at the initial position determine the position of the airborne small target in the geodetic coordinate system, and the longitude and latitude of the small target in the geographic coordinate system are then determined from the earth parameters.
To address the low positioning accuracy and poor real-time performance of airborne small-target positioning in the prior art, the invention provides a geographic coordinate positioning algorithm using spatial multi-parameter pixel mapping. It is a real-time positioning algorithm for airborne small targets based on projected coordinate transformation: attitude information of the aircraft is combined with the video image, the center-point pixel of the small target in the video image is taken as its pixel position, and this pixel position is combined with the actual ground height. The airborne small target is detected by deep learning in the field of unmanned aerial vehicle vision and positioned by spatial attitude calculation, mapping it to its position in the real scene and thereby obtaining the longitude, latitude and height of the small target in the geographic coordinate system.
As a specific embodiment of the invention, the detailed flow of the geographic coordinate positioning algorithm using spatial multi-parameter pixel mapping is as follows. The position of the small target is obtained from the video image as coordinates in the image pixel coordinate system (origin at the upper-left corner of the image). These coordinates are transformed into the image physical coordinate system (origin at the image center), then into the camera coordinate system (origin at the optical center of the camera), and then, by rotation and translation, into the gimbal coordinate system. From the coordinates of the small target in the gimbal coordinate system its geographic position is estimated; digital elevation information of the target area is extracted, the unmanned aerial vehicle flies around the small target, and the height of the unmanned aerial vehicle is taken as the initial ground height. Successive conversions through the airborne coordinate system, the geodetic coordinate system and the geographic coordinate system then yield the longitude, latitude and height of the target at its actual geographic position.
Positioning the airborne small target from the ground height yields the longitude, latitude and height of the target; comparing this geodetic height with the target height obtained from the digital elevation information of the area then determines the height of the target within a bounded error.
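A simplified flat-ground sketch of the pixel-to-geographic mapping chain is shown below. It assumes a pinhole camera that, at zero yaw, pitch and roll, looks straight down with the image top facing north; composes the attitude as yaw-pitch-roll rotations in a north-east-down frame; intersects the viewing ray with a ground plane at a known height; and converts the metric offset to latitude and longitude on a spherical earth. All of these conventions, and the function names, are illustrative assumptions rather than the patent's exact formulation, which also distinguishes gimbal, airborne, geodetic and geographic coordinate systems:

```python
import math

def rot(axis, a):
    """3x3 rotation matrix about the x, y or z axis by angle a (radians)."""
    c, s = math.cos(a), math.sin(a)
    if axis == "x":
        return [[1, 0, 0], [0, c, -s], [0, s, c]]
    if axis == "y":
        return [[c, 0, s], [0, 1, 0], [-s, 0, c]]
    return [[c, -s, 0], [s, c, 0], [0, 0, 1]]

def matvec(M, v):
    return [sum(M[i][k] * v[k] for k in range(3)) for i in range(3)]

def locate(px, py, img_w, img_h, f_px,
           uav_lat, uav_lon, uav_alt, ground_alt,
           yaw=0.0, pitch=0.0, roll=0.0):
    """Map a target pixel to latitude/longitude on a flat ground plane.
    Assumed conventions: camera x right, y down, z along the optical axis;
    at zero attitude the camera points straight down with the image top
    facing north; attitude applied as Rz(yaw) Ry(pitch) Rx(roll) in a
    north-east-down (NED) frame."""
    R_EARTH = 6378137.0  # WGS-84 semi-major axis, metres
    # viewing ray in the camera frame (pinhole model)
    ray = [(px - img_w / 2) / f_px, (py - img_h / 2) / f_px, 1.0]
    # nadir mounting: image right -> east, image down -> south, axis -> down
    v = [-ray[1], ray[0], ray[2]]                  # NED components
    for axis, ang in (("x", roll), ("y", pitch), ("z", yaw)):
        v = matvec(rot(axis, ang), v)
    if v[2] <= 0:
        raise ValueError("viewing ray does not hit the ground")
    s = (uav_alt - ground_alt) / v[2]              # scale to the ground plane
    north, east = s * v[0], s * v[1]
    lat = uav_lat + math.degrees(north / R_EARTH)
    lon = uav_lon + math.degrees(east / (R_EARTH * math.cos(math.radians(uav_lat))))
    return lat, lon, ground_alt
```

The returned height is simply the assumed ground height; in the patent's flow it would then be compared against the digital elevation data and corrected iteratively.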
Correspondingly, an embodiment of the present invention further provides a system for detecting and positioning a small target on board an unmanned aerial vehicle in a complex background, as shown in fig. 5, the system includes:
the target vision enhancement subsystem is used for acquiring a video image of a small target and attitude data of the unmanned aerial vehicle and the pan-tilt, and preprocessing the acquired video image through a multichannel optimal adaptive guided filtering defogging algorithm;
the deep learning airborne detection positioning subsystem is used for detecting the preprocessed video image by utilizing a limited pixel target space-semantic information fusion detection algorithm to obtain the pixel position of a small target; the pixel position of the small target in the video image is associated with the geographic coordinate of the unmanned aerial vehicle by utilizing a spatial multi-parameter pixel mapping geographic coordinate positioning algorithm, so that the longitude, the latitude and the height of the small target in a geographic coordinate system are obtained;
and the data return and ground station subsystem is used for transmitting and displaying the small target in the unmanned aerial vehicle video image and the longitude and latitude and the height of the small target under the geographic coordinate system in real time.
The unmanned aerial vehicle-mounted small target detection and positioning system under the complex background can be used for carrying out real-time online accurate detection and positioning on the unmanned aerial vehicle-mounted small target under the complex background.
Further, the target vision enhancement subsystem is specifically used for:
carrying out channel segmentation on the obtained video image to obtain a dark channel image, a bright channel image and an RGB three-channel image;
estimating a local atmospheric value according to the dark channel image and the bright channel image;
carrying out weighted guided filtering on the RGB three-channel image, and then estimating local transmittance;
and obtaining a defogged image according to the estimated local atmospheric value and local transmissivity by combining a fog interference model.
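A minimal sketch of such a defogging pipeline is given below, assuming a grey (luminance) guide image and a simple blend of dark- and bright-channel statistics for the atmospheric value; the patent does not specify these details, so window radius, quantile and blend are all illustrative:

```python
import numpy as np
from numpy.lib.stride_tricks import sliding_window_view

def box_filter(img, r):
    """Mean over a (2r+1)x(2r+1) window via integral images, edge-padded."""
    k = 2 * r + 1
    p = np.pad(img, r, mode="edge")
    c = np.cumsum(np.cumsum(p, axis=0), axis=1)
    c = np.pad(c, ((1, 0), (1, 0)))
    return (c[k:, k:] - c[:-k, k:] - c[k:, :-k] + c[:-k, :-k]) / (k * k)

def min_filter(img, r):
    """Minimum over a (2r+1)x(2r+1) window (used for the dark channel)."""
    p = np.pad(img, r, mode="edge")
    return sliding_window_view(p, (2 * r + 1, 2 * r + 1)).min(axis=(2, 3))

def defog(img, r=7, omega=0.95, eps=1e-3):
    """Defog an RGB image in [0, 1]: estimate the atmospheric value from the
    dark and bright channels, smooth the local transmittance with a guided
    filter, and invert the fog interference model I = J*t + A*(1 - t)."""
    dark = min_filter(img.min(axis=2), r)           # dark channel
    bright = -min_filter(-img.max(axis=2), r)       # bright channel (max)
    # Atmospheric value: blend of the haziest region's colour and the bright
    # channel peak (an assumption; the patent's exact estimator is not given)
    idx = dark >= np.quantile(dark, 0.999)
    A = 0.5 * (img[idx].mean(axis=0) + bright.max())
    # Raw local transmittance from the dark channel of the normalised image
    t = 1.0 - omega * min_filter((img / A).min(axis=2), r)
    # Guided filter with the luminance as guide, smoothing t edge-preservingly
    g = img.mean(axis=2)
    mg, mt = box_filter(g, r), box_filter(t, r)
    a = (box_filter(g * t, r) - mg * mt) / (box_filter(g * g, r) - mg ** 2 + eps)
    b = mt - a * mg
    t = np.clip(box_filter(a, r) * g + box_filter(b, r), 0.1, 1.0)
    return np.clip((img - A) / t[..., None] + A, 0.0, 1.0)
```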
The target vision enhancement subsystem and the channel-optimized adaptive guided filtering defogging algorithm are deeply integrated, so that the algorithm is implemented in hardware, improving the deep processing capability of the hardware and the adaptability of the algorithm.
Further, the deep learning airborne detection positioning subsystem is specifically configured to:
constructing a space-semantic feature extraction network, extracting multilayer space feature information by pooling and normalizing the region of interest, and extracting semantic feature information by cyclic convolution;
and performing regression classification on the bounding box of the target through two fully-connected layers to obtain a detection result of the video image.
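The region-of-interest pooling that normalises spatial features to a fixed size, and the two fully connected layers that regress and classify the bounding box, can be illustrated as follows; the backbone network and the cyclic-convolution semantic branch are omitted, and all shapes and weights are hypothetical:

```python
import numpy as np

def roi_pool(feature_map, roi, out_size=7):
    """Max-pool a region of interest to a fixed out_size x out_size grid,
    normalising arbitrarily sized ROIs for the fully connected head."""
    x0, y0, x1, y1 = roi
    region = feature_map[y0:y1, x0:x1]
    h, w = region.shape
    ys = np.linspace(0, h, out_size + 1).astype(int)
    xs = np.linspace(0, w, out_size + 1).astype(int)
    pooled = np.empty((out_size, out_size))
    for i in range(out_size):
        for j in range(out_size):
            # Guard against empty cells when the ROI is smaller than the grid
            cell = region[ys[i]:max(ys[i + 1], ys[i] + 1),
                          xs[j]:max(xs[j + 1], xs[j] + 1)]
            pooled[i, j] = cell.max()
    return pooled

def detection_head(pooled, W1, b1, W2, b2):
    """Two fully connected layers producing class logits and box deltas."""
    hidden = np.maximum(0.0, pooled.ravel() @ W1 + b1)   # FC + ReLU
    out = hidden @ W2 + b2
    return out[:-4], out[-4:]        # (class logits, box regression deltas)
```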
Further, the deep learning airborne detection positioning subsystem is further specifically configured to:
obtaining the pixel position of the small target in the video image according to the detection result of the video image;
the ground height at the position of the unmanned aerial vehicle is taken as the initial geodetic height, the pixel position of the small target in the video image is associatively mapped with the geographic coordinates of the unmanned aerial vehicle, and the longitude and latitude of the small target in the geographic coordinate system are solved;
and the geodetic height of the small target is determined according to the calculated longitude and latitude, the obtained geodetic height is compared with the initial ground height at the position of the unmanned aerial vehicle and corrected, and the longitude, latitude and height of the small target are thereby determined.
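The height-correction loop described above can be sketched as follows, with `solve_lonlat` standing in for the pixel-mapping solver and `dem_height` for a digital-elevation-model lookup; both callables are hypothetical interfaces, not part of the patent text:

```python
def locate_with_dem(solve_lonlat, dem_height, h0, tol=1.0, max_iter=20):
    """Iteratively refine the target's geodetic height against elevation data.

    solve_lonlat(ground_h) -> (lon, lat): the pixel-mapping solver evaluated
    under an assumed ground height; dem_height(lon, lat) -> terrain height.
    Iteration stops once the heights agree within the error tolerance tol."""
    h = h0                                 # start from the UAV-derived height
    lon = lat = None
    for _ in range(max_iter):
        lon, lat = solve_lonlat(h)         # position under current ground height
        h_new = dem_height(lon, lat)       # elevation at that position
        if abs(h_new - h) < tol:           # converged within the error range
            return lon, lat, h_new
        h = h_new
    return lon, lat, h
```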
A specific structure of the unmanned aerial vehicle-mounted small target detection and positioning system under the complex background is shown in fig. 6. The system comprises a camera, a pan-tilt, an inertial measurement unit, a complex background image processing and spatial attitude calculation coprocessing unit, and a high-performance computing unit;
the complex background image processing and spatial attitude calculation coprocessing unit comprises an I/O module, a clock control circuit, a JTAG controller and a basic programmable logic unit;
the camera, the pan-tilt and the inertial measurement unit are connected with the I/O module; the I/O module is connected with the clock control circuit, the JTAG controller and the basic programmable logic unit; the clock control circuit and the JTAG controller are connected with the basic programmable logic unit; the basic programmable logic unit is connected with the high-performance computing unit; and the high-performance computing unit is connected with the data return and ground station subsystem.
Specifically, the IMU (inertial measurement unit) is an integrated chip comprising a three-axis gyroscope, a three-axis accelerometer, a three-axis geomagnetic sensor, a barometer and a GPS module; its role in the unmanned aerial vehicle is to perceive changes in attitude. The three-axis gyroscope measures the inclination angle of the unmanned aerial vehicle; the three-axis accelerometer measures the acceleration along the three XYZ axes; the geomagnetic sensor senses the geomagnetic field, acting as an electronic compass; the barometer obtains the current height by measuring the air pressure at different positions and calculating the pressure difference; and the GPS module acquires the longitude, latitude and height of the unmanned aerial vehicle.
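The barometric height calculation can be illustrated with the standard-atmosphere formula, a common choice; the patent does not specify the exact formula used, and the sea-level reference pressure here is an assumed default:

```python
def pressure_altitude(p_hpa, p0_hpa=1013.25):
    """Height (m) from static pressure via the ISA barometric formula,
    with p0_hpa the reference sea-level pressure in hectopascals."""
    return 44330.0 * (1.0 - (p_hpa / p0_hpa) ** (1.0 / 5.255))
```

The height difference between two positions then follows from evaluating this at the two measured pressures.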
The clock control circuit provides clock signals to the I/O module and the basic programmable logic unit, including a global clock signal, a clock reset signal and an output enable signal.
The JTAG controller controls the I/O module to read data through the bus: a serial port collects the unmanned aerial vehicle attitude data from the inertial measurement unit and the video images transmitted by the camera, and a CAN bus collects the attitude data of the pan-tilt; after decoding, the data are stored in the data storage module of the complex background image co-processing unit.
The basic programmable logic unit comprises an image preprocessing module, which completes target vision enhancement (such as defogging) of the unmanned aerial vehicle-mounted small target under a complex background, converts serial data into parallel data through serial decoding, and outputs the parallel data to the high-performance computing unit.
The high-performance computing unit detects the unmanned aerial vehicle-mounted small target in the video images according to the output data of the complex background image processing and spatial attitude calculation coprocessing unit, performs the associated mapping with the geographic coordinates of the unmanned aerial vehicle coordinate system to achieve geographic positioning of the small target, and transmits the final result through the data return link to the ground station subsystem for real-time display. The data return and ground station subsystem realizes visualization of the airborne small target and of its longitude, latitude and height in the geographic coordinate system.
The detection positioning system provided by the embodiment of the invention does not need an unmanned aerial vehicle to carry laser ranging equipment, and is suitable for small-sized and light-weight unmanned aerial vehicles.
The above description is only for the purpose of illustrating the preferred embodiments of the present invention and is not to be construed as limiting the invention, and any modifications, equivalents, improvements and the like that fall within the spirit and principle of the present invention are intended to be included therein.
Claims (10)
1. A method for detecting and positioning small unmanned aerial vehicle-mounted targets under a complex background is characterized by comprising the following steps:
acquiring a video image of a small target and attitude data of an unmanned aerial vehicle and a pan-tilt;
preprocessing the acquired video image by a multichannel optimal adaptive guided filtering defogging algorithm;
detecting the preprocessed video image by using a finite pixel target space-semantic information fusion detection algorithm to obtain the pixel position of a small target;
performing associated mapping on the pixel positions of the small targets in the video images and the geographic coordinates of the unmanned aerial vehicle by using a spatial multi-parameter pixel mapping geographic coordinate positioning algorithm to obtain the longitude, latitude and height of the small targets in a geographic coordinate system;
and displaying the small target in the unmanned aerial vehicle-mounted video image and the longitude and latitude and the height of the small target under the geographic coordinate system in real time.
2. The method for detecting and positioning the small unmanned aerial vehicle-mounted target under the complex background as claimed in claim 1, wherein the preprocessing of the acquired video image by the multichannel optimal adaptive guided filtering defogging algorithm specifically comprises:
carrying out channel segmentation on the obtained video image to obtain a dark channel image, a bright channel image and an RGB three-channel image;
estimating a local atmospheric value according to the dark channel image and the bright channel image;
carrying out weighted guided filtering on the RGB three-channel image, and then estimating local transmittance;
and obtaining a defogged image according to the estimated local atmospheric value and local transmissivity by combining a fog interference model.
3. The method for detecting and positioning the small unmanned aerial vehicle-mounted target under the complex background according to claim 1, wherein the detection of the preprocessed image by using the finite pixel target space-semantic information fusion detection algorithm specifically comprises:
constructing a space-semantic feature extraction network, extracting multilayer space feature information by pooling and normalizing the region of interest, and extracting semantic feature information by cyclic convolution;
and performing regression classification on the bounding box of the target through two fully-connected layers to obtain a detection result of the video image.
4. The method for detecting and positioning the small unmanned aerial vehicle-mounted target under the complex background according to claim 1, wherein the associating and mapping the pixel position of the small target in the video image and the geographic coordinate of the unmanned aerial vehicle by using the spatial multi-parameter pixel mapping geographic coordinate positioning algorithm specifically comprises:
obtaining the pixel position of the small target in the video image according to the detection result of the video image;
taking the ground height at the position of the unmanned aerial vehicle as the initial geodetic height, carrying out associated mapping between the pixel position of the small target in the video image and the geographic coordinates of the unmanned aerial vehicle, and solving the longitude and latitude of the small target in a geographic coordinate system;
and determining the geodetic height of the small target according to the calculated longitude and latitude, comparing the obtained geodetic height of the small target with the height of the unmanned aerial vehicle at the ground position, correcting, and determining the longitude and latitude and the height of the small target.
5. A system for detecting and positioning a small unmanned aerial vehicle-mounted target under a complex background, characterized by comprising:
the target vision enhancement subsystem is used for acquiring a video image of a small target and attitude data of the unmanned aerial vehicle and the pan-tilt, and preprocessing the acquired video image through a multichannel optimal adaptive guided filtering defogging algorithm;
the deep learning airborne detection positioning subsystem is used for detecting the preprocessed video image by utilizing a limited pixel target space-semantic information fusion detection algorithm to obtain the pixel position of a small target; the pixel position of the small target in the video image is associated with the geographic coordinate of the unmanned aerial vehicle by utilizing a spatial multi-parameter pixel mapping geographic coordinate positioning algorithm, so that the longitude, the latitude and the height of the small target in a geographic coordinate system are obtained;
and the data return and ground station subsystem is used for transmitting and displaying the small target in the unmanned aerial vehicle video image and the longitude and latitude and the height of the small target under the geographic coordinate system in real time.
6. The system for detecting and positioning the small unmanned aerial vehicle-mounted target under the complex background as claimed in claim 5, wherein the target vision enhancement subsystem is specifically configured to:
carrying out channel segmentation on the obtained video image to obtain a dark channel image, a bright channel image and an RGB three-channel image;
estimating a local atmospheric value according to the dark channel image and the bright channel image;
carrying out weighted guided filtering on the RGB three-channel image, and then estimating local transmittance;
and obtaining a defogged image according to the estimated local atmospheric value and local transmissivity by combining a fog interference model.
7. The system for detecting and positioning the small unmanned aerial vehicle-mounted target under the complex background as claimed in claim 5, wherein the deep learning machine-mounted detection positioning subsystem is specifically configured to:
constructing a space-semantic feature extraction network, extracting multilayer space feature information by pooling and normalizing the region of interest, and extracting semantic feature information by cyclic convolution;
and performing regression classification on the bounding box of the target through two fully-connected layers to obtain a detection result of the video image.
8. The system for detecting and locating the small unmanned aerial vehicle-mounted target under the complex background as claimed in claim 5, wherein the deep learning machine-mounted detection and location subsystem is further specifically configured to:
obtaining the pixel position of the small target in the video image according to the detection result of the video image;
taking the ground height at the position of the unmanned aerial vehicle as the initial geodetic height, carrying out associated mapping between the pixel position of the small target in the video image and the geographic coordinates of the unmanned aerial vehicle, and solving the longitude and latitude of the small target in a geographic coordinate system;
and determining the geodetic height of the small target according to the calculated longitude and latitude, comparing the obtained geodetic height of the small target with the height of the unmanned aerial vehicle at the ground position, correcting, and determining the longitude and latitude and the height of the small target.
9. The system for detecting and positioning the small unmanned aerial vehicle-mounted target under the complex background according to claim 5, wherein the detection and positioning system comprises a camera, a pan-tilt, an inertial measurement unit, a complex background image processing and spatial attitude calculation coprocessing unit, and a high-performance computing unit;
the complex background image processing and spatial attitude calculation coprocessing unit comprises an I/O module, a clock control circuit, a JTAG controller and a basic programmable logic unit;
the camera, the pan-tilt and the inertial measurement unit are connected with the I/O module; the I/O module is connected with the clock control circuit, the JTAG controller and the basic programmable logic unit; the clock control circuit and the JTAG controller are connected with the basic programmable logic unit; the basic programmable logic unit is connected with the high-performance computing unit; and the high-performance computing unit is connected with the data return and ground station subsystem.
10. The system for detecting and locating the small target on-board the unmanned aerial vehicle in the complex context of claim 9, wherein the inertial measurement unit comprises a three-axis gyroscope, a three-axis accelerometer, a three-axis geomagnetic sensor, a barometer, and a GPS module.
Priority Applications (1)
Application Number | Priority Date | Filing Date | Title |
---|---|---|---|
CN202011465483.4A CN112489032A (en) | 2020-12-14 | 2020-12-14 | Unmanned aerial vehicle-mounted small target detection and positioning method and system under complex background |
Publications (1)
Publication Number | Publication Date |
---|---|
CN112489032A true CN112489032A (en) | 2021-03-12 |
Family
ID=74917717
Family Applications (1)
Application Number | Title | Priority Date | Filing Date |
---|---|---|---|
CN202011465483.4A Pending CN112489032A (en) | 2020-12-14 | 2020-12-14 | Unmanned aerial vehicle-mounted small target detection and positioning method and system under complex background |
Country Status (1)
Country | Link |
---|---|
CN (1) | CN112489032A (en) |
Citations (8)
Publication number | Priority date | Publication date | Assignee | Title |
---|---|---|---|---|
CN103855644A (en) * | 2014-03-14 | 2014-06-11 | 刘凯 | Multi-rotary-wing intelligent inspection robot for overhead line |
CN104835115A (en) * | 2015-05-07 | 2015-08-12 | 中国科学院长春光学精密机械与物理研究所 | Imaging method for aerial camera, and system thereof |
CN107247458A (en) * | 2017-05-24 | 2017-10-13 | 中国电子科技集团公司第二十八研究所 | UAV Video image object alignment system, localization method and cloud platform control method |
CN107727079A (en) * | 2017-11-30 | 2018-02-23 | 湖北航天飞行器研究所 | The object localization method of camera is regarded under a kind of full strapdown of Small and micro-satellite |
CN109974688A (en) * | 2019-03-06 | 2019-07-05 | 深圳飞马机器人科技有限公司 | The method and terminal positioned by unmanned plane |
CN110827221A (en) * | 2019-10-31 | 2020-02-21 | 天津大学 | Single image defogging method based on double-channel prior and side window guide filtering |
CN110940638A (en) * | 2019-11-20 | 2020-03-31 | 北京科技大学 | Hyperspectral image sub-pixel level water body boundary detection method and detection system |
CN111598183A (en) * | 2020-05-22 | 2020-08-28 | 上海海事大学 | Multi-feature fusion image description method |
Non-Patent Citations (3)
Title |
---|
LU Huibin; ZHAO Yanfang; ZHAO Yongjie; WEN Shuhuan; MA Jinrong; LAM HAK KEUNG; WANG Hongbin: "Image defogging based on combining the bright channel and the dark channel", Acta Optica Sinica (光学学报), no. 11, pages 1-10 *
LI Xi; XU Xiang; LI Jun: "Small target detection in remote sensing images for aviation flight safety", Aero Weaponry (航空兵器), no. 03 *
YANG Guang; LI Bing; FENG Pengfei: "Research on multi-target positioning algorithms for unmanned aerial vehicles", Ship Electronic Engineering (舰船电子工程), no. 01 *
Cited By (11)
Publication number | Priority date | Publication date | Assignee | Title |
---|---|---|---|---|
CN113865617A (en) * | 2021-08-30 | 2021-12-31 | 中国人民解放军火箭军工程大学 | Method for correcting matching accurate pose of rear view image of maneuvering launching active section of aircraft |
CN113949826A (en) * | 2021-09-28 | 2022-01-18 | 航天时代飞鸿技术有限公司 | Unmanned aerial vehicle cluster cooperative reconnaissance method and system under limited communication bandwidth condition |
CN113949826B (en) * | 2021-09-28 | 2024-11-05 | 航天时代飞鸿技术有限公司 | Unmanned aerial vehicle cluster collaborative reconnaissance method and system under condition of limited communication bandwidth |
CN114217626A (en) * | 2021-12-14 | 2022-03-22 | 集展通航(北京)科技有限公司 | Railway engineering detection method and system based on unmanned aerial vehicle inspection video |
CN114217626B (en) * | 2021-12-14 | 2022-06-28 | 集展通航(北京)科技有限公司 | Railway engineering detection method and system based on unmanned aerial vehicle routing inspection video |
CN114743116A (en) * | 2022-04-18 | 2022-07-12 | 蜂巢航宇科技(北京)有限公司 | Barracks patrol scene-based unattended special load system and method |
CN114913717A (en) * | 2022-07-20 | 2022-08-16 | 成都天巡微小卫星科技有限责任公司 | Portable low-altitude flight anti-collision system and method based on intelligent terminal |
CN114913717B (en) * | 2022-07-20 | 2022-09-27 | 成都天巡微小卫星科技有限责任公司 | Portable low-altitude flight anti-collision system and method based on intelligent terminal |
CN116778360A (en) * | 2023-06-09 | 2023-09-19 | 北京科技大学 | Ground target positioning method and device for flapping-wing flying robot |
CN116778360B (en) * | 2023-06-09 | 2024-03-19 | 北京科技大学 | Ground target positioning method and device for flapping-wing flying robot |
CN118692010A (en) * | 2024-08-22 | 2024-09-24 | 广东工业大学 | Intelligent detection positioning method and system for small target of unmanned aerial vehicle in complex scene |
Legal Events
Date | Code | Title | Description |
---|---|---|---|
PB01 | Publication | ||
SE01 | Entry into force of request for substantive examination | ||