CN116295446B - Unmanned aerial vehicle vision matching navigation method and device adopting polarization multi-scale decomposition - Google Patents

Publication number: CN116295446B (granted); application number CN202310578842.4A; other version CN116295446A
Authority: CN (China)
Original language: Chinese (zh)
Prior art keywords: channel, fusion, detail layer, base layer
Legal status: Active (the legal status is an assumption and is not a legal conclusion; Google has not performed a legal analysis and makes no representation as to the accuracy of the status listed)
Inventors: 范晨 (Fan Chen), 马铜伟 (Ma Tongwei), 张礼廉 (Zhang Lilian), 何晓峰 (He Xiaofeng), 胡小平 (Hu Xiaoping), 苗桐侨 (Miao Tongqiao)
Original and current assignee: National University of Defense Technology
Application filed by National University of Defense Technology


Classifications

    • G01C21/20 — Instruments for performing navigational calculations
    • G01C21/005 — Navigation with correlation of navigation data from several sources, e.g. map or contour matching
    • G01C21/3856 — Electronic maps for navigation; creation or updating of map data from user input
    • G06T5/50 — Image enhancement or restoration using two or more images, e.g. averaging or subtraction
    • G06T5/70 — Denoising; smoothing
    • G06T2207/20221 — Image fusion; image merging
    • Y02T10/40 — Engine management systems


Abstract

The application relates to an unmanned aerial vehicle vision matching navigation method and device using polarization multi-scale decomposition. The method comprises the following steps: taking a priori navigation topological map and its polarization degree image as a far/near-scene dual-channel input; performing multi-scale decomposition to obtain dual-channel base layers and detail layers; in base-layer fusion, constructing a weight function based on gradient saliency maps that emphasizes the channel input with stronger activity while setting the information from the second channel in the fusion base layer to be not less than a preset value; in detail-layer fusion, constructing a detail-layer fusion optimization function based on the weighted least squares method; and obtaining a reconstructed and fused prior navigation topological map from the fusion base layer and the fusion detail layer for navigation positioning. The invention is novel, adapts to different weather conditions, and has broad application prospects for improving the robustness and all-weather adaptability of bionic polarized-light navigation in complex weather.

Description

Unmanned aerial vehicle vision matching navigation method and device adopting polarization multi-scale decomposition
Technical Field
The application relates to the field of visual matching navigation, and in particular to an unmanned aerial vehicle visual matching navigation method and device using polarization multi-scale decomposition.
Background
At present, research on polarized-light navigation technology has produced a great number of results. Under clear, lightly cloudy and similar weather dominated by Rayleigh scattering ("Rayleigh weather"), the distribution of the atmospheric polarization pattern is relatively stable, and autonomous navigation of some mobile robots, unmanned platforms and the like can be realized without satellites. However, in complex weather with atmospheric turbulence and changing cloud layers, such as overcast or hazy conditions, particles of different scales (haze, water droplets, solid particles and so on) cause multiple scattering (Rayleigh scattering, Mie scattering and so on), making the distribution of the atmospheric polarization pattern unstable, so that the orientation accuracy of polarized light degrades or even fails. Therefore, improving the polarization information of the original polarized haze image is of great significance for obtaining accurate navigation information.
An important approach to defogging polarized images is multi-scale fusion. Codruta et al. first demonstrated the practicality and effectiveness of fusion-based image defogging: important image features are obtained by computing three weight maps, and the input Laplacian and weighted Gaussian images are combined through multi-scale fusion. Xue et al. designed a multi-scale feature extraction module that can detect rain streaks of different lengths for clear imaging against a rain-fog background. Liu et al. proposed GridDehazeNet, whose backbone module implements a new attention-based multi-scale estimation on a grid network to improve the feature extraction performance of conventional multi-scale decomposition. Li proposed a globally guided image filtering (G-GIF) algorithm for image defogging, which can preserve details in fine-structure areas. Li et al. decomposed the fog image into different levels using Laplacian and Gaussian pyramids, and recovered scene brightness at different levels with different defogging and denoising methods to recover a haze-free image. DehazeNet, proposed by Cai et al., achieves single-image defogging by learning the mapping between foggy images and transmission maps; the algorithm consists of a coarse-scale network, which predicts an overall transmission map from the full image, and a fine-scale network, which locally refines the defogging result. However, the prior art easily loses the main characteristics of different source information, resulting in low accuracy and robustness.
Disclosure of Invention
Based on the above, it is necessary to provide an unmanned aerial vehicle vision matching navigation method, device, computer equipment and storage medium capable of improving polarized-light orientation accuracy and robustness in complex weather.
An unmanned aerial vehicle vision matching navigation method using polarization multi-scale decomposition, the method comprising:
acquiring a priori navigation topological map as a first channel, solving the polarization degree image of the priori navigation topological map as a second channel, and performing multi-scale decomposition on the first channel and the second channel to obtain dual-channel base layer images and detail layer images; the detail layer images comprise a first detail layer obtained from the first channel and a second detail layer obtained from the second channel;
obtaining dual-channel gradient saliency maps from the dual-channel base layer images, constructing a weight function from the dual-channel gradient saliency maps, and performing a weighted average of the dual-channel base layer images according to the weight function to obtain a fusion base layer; the weight function emphasizes the channel input with stronger activity while the information from the second channel in the fusion base layer is set to be not less than a preset value;
with the goal of retaining to the greatest extent the scene features represented by the first detail layer and the second detail layer respectively, constructing a detail-layer fusion optimization function based on the weighted least squares method, and solving the detail-layer fusion optimization function to obtain a fusion detail layer;
and obtaining a reconstructed and fused prior navigation topological map from the fusion base layer and the fusion detail layer, and obtaining navigation positioning information of the carrier through visual matching calculation according to the reconstructed and fused prior navigation topological map.
In one embodiment, the method further comprises: performing multi-scale decomposition on the first channel and the second channel through an algorithm based on rolling guidance filtering to obtain the dual-channel base layer images and detail layer images.
In one embodiment, the method further comprises: and detecting the edge gradient of each pixel of the dual-channel base layer image in the adjacent macro block through a Sobel operator to obtain a dual-channel gradient saliency map.
In one embodiment, the method further comprises: constructing a weight function according to the dual-channel gradient saliency maps, wherein the weight function is:

W = S_2 / (2 · max(S_1, S_2))

wherein W is the weight, S_1 and S_2 are the dual-channel gradient saliency maps, max(S_1, S_2) takes the element-wise maximum of S_1 and S_2, and I is the unit (all-ones) matrix used in the base-layer fusion.
In one embodiment, the method further comprises: performing a weighted average of the dual-channel base layer images according to the weight function to obtain the fusion base layer:

B_F = W ∘ B_1 + (I − W) ∘ B_2

wherein B_F is the resulting fusion base layer, B_1 and B_2 are the base layer images of the first channel and the second channel respectively, and ∘ denotes element-wise multiplication.
In one embodiment, the method further comprises: with the goal of retaining to the greatest extent the scene features represented by the first detail layer and the second detail layer respectively, constructing a detail-layer fusion optimization function based on the weighted least squares method, the fusion optimization function being:

min over D_F^j of  Σ_p [ (D_F^j(p) − D_1^j(p))² + λ · a_j(p) · (D_F^j(p) − D_2^j(p))² ]

wherein p denotes the spatial position of a pixel, D_F^j denotes the j-th level fusion detail layer, D_1^j denotes the first detail layer, D_2^j denotes the second detail layer, λ is a trade-off parameter, and a_j is a coefficient with spatially varying weight.
In one embodiment, the method further comprises: and performing inverse multi-scale decomposition according to the fusion base layer and the fusion detail layer to obtain a reconstructed and fused prior navigation topological map.
An unmanned aerial vehicle vision matching navigation device using polarization multi-scale decomposition, the device comprising:
a multi-scale decomposition module, configured to acquire a priori navigation topological map as a first channel, solve the polarization degree image of the priori navigation topological map as a second channel, and perform multi-scale decomposition on the first channel and the second channel to obtain dual-channel base layer images and detail layer images; the detail layer images comprise a first detail layer obtained from the first channel and a second detail layer obtained from the second channel;
a base layer fusion module, configured to obtain dual-channel gradient saliency maps from the dual-channel base layer images, construct a weight function from the dual-channel gradient saliency maps, and perform a weighted average of the dual-channel base layer images according to the weight function to obtain a fusion base layer; the weight function emphasizes the channel input with stronger activity while the information from the second channel in the fusion base layer is set to be not less than a preset value;
a detail layer fusion module, configured to construct a detail-layer fusion optimization function based on the weighted least squares method, with the goal of retaining to the greatest extent the scene features represented by the first detail layer and the second detail layer respectively, and to solve the detail-layer fusion optimization function to obtain a fusion detail layer;
and a navigation positioning module, configured to obtain a reconstructed and fused prior navigation topological map from the fusion base layer and the fusion detail layer, and to obtain navigation positioning information of the carrier through visual matching calculation according to the reconstructed and fused prior navigation topological map.
A computer device comprising a memory storing a computer program and a processor which, when executing the computer program, performs the following steps:
acquiring a priori navigation topological map as a first channel, solving the polarization degree image of the priori navigation topological map as a second channel, and performing multi-scale decomposition on the first channel and the second channel to obtain dual-channel base layer images and detail layer images; the detail layer images comprise a first detail layer obtained from the first channel and a second detail layer obtained from the second channel;
obtaining dual-channel gradient saliency maps from the dual-channel base layer images, constructing a weight function from the dual-channel gradient saliency maps, and performing a weighted average of the dual-channel base layer images according to the weight function to obtain a fusion base layer; the weight function emphasizes the channel input with stronger activity while the information from the second channel in the fusion base layer is set to be not less than a preset value;
with the goal of retaining to the greatest extent the scene features represented by the first detail layer and the second detail layer respectively, constructing a detail-layer fusion optimization function based on the weighted least squares method, and solving the detail-layer fusion optimization function to obtain a fusion detail layer;
and obtaining a reconstructed and fused prior navigation topological map from the fusion base layer and the fusion detail layer, and obtaining navigation positioning information of the carrier through visual matching calculation according to the reconstructed and fused prior navigation topological map.
A computer readable storage medium having stored thereon a computer program which, when executed by a processor, performs the following steps:
acquiring a priori navigation topological map as a first channel, solving the polarization degree image of the priori navigation topological map as a second channel, and performing multi-scale decomposition on the first channel and the second channel to obtain dual-channel base layer images and detail layer images; the detail layer images comprise a first detail layer obtained from the first channel and a second detail layer obtained from the second channel;
obtaining dual-channel gradient saliency maps from the dual-channel base layer images, constructing a weight function from the dual-channel gradient saliency maps, and performing a weighted average of the dual-channel base layer images according to the weight function to obtain a fusion base layer; the weight function emphasizes the channel input with stronger activity while the information from the second channel in the fusion base layer is set to be not less than a preset value;
with the goal of retaining to the greatest extent the scene features represented by the first detail layer and the second detail layer respectively, constructing a detail-layer fusion optimization function based on the weighted least squares method, and solving the detail-layer fusion optimization function to obtain a fusion detail layer;
and obtaining a reconstructed and fused prior navigation topological map from the fusion base layer and the fusion detail layer, and obtaining navigation positioning information of the carrier through visual matching calculation according to the reconstructed and fused prior navigation topological map.
According to the above unmanned aerial vehicle vision matching navigation method, device, computer equipment and storage medium using polarization multi-scale decomposition, the prior navigation topological map and its polarization degree image are used as the far/near-scene dual-channel input, and multi-scale decomposition yields the dual-channel base layers and detail layers. In base-layer fusion, a weight function based on gradient saliency maps is constructed that emphasizes the channel input with stronger activity while setting the information from the second channel in the fusion base layer to be not less than a preset value, providing good contrast and overall appearance for the final prior navigation topological map. In detail-layer fusion, with the goal of retaining to the greatest extent the scene features represented by the first detail layer and the second detail layer, a detail-layer fusion optimization function is constructed based on the weighted least squares method, fusing detail information with better visual effect from the different-source prior navigation topological maps while preserving their features to the greatest extent. Finally, the reconstructed and fused prior navigation topological map is obtained from the fusion base layer and the fusion detail layer, and the navigation positioning information of the carrier is obtained through visual matching calculation. The invention is novel, adapts to different weather conditions, and has broad application prospects for improving the robustness and all-weather adaptability of bionic polarized-light navigation in complex weather.
Drawings
FIG. 1 is a flow diagram of a method for unmanned aerial vehicle vision matching navigation with polarization multi-scale decomposition in one embodiment;
FIG. 2 is a flow chart of a method of unmanned aerial vehicle vision matching navigation with polarization multi-scale decomposition in another embodiment;
FIG. 3 is a block diagram of an unmanned aerial vehicle vision matching navigation device using polarization multi-scale decomposition in one embodiment;
FIG. 4 is an internal structure diagram of a computer device in one embodiment.
Detailed Description
In order to make the objects, technical solutions and advantages of the present application more apparent, the present application will be further described in detail with reference to the accompanying drawings and examples. It should be understood that the specific embodiments described herein are for purposes of illustration only and are not intended to limit the present application.
In one embodiment, as shown in fig. 1, an unmanned aerial vehicle vision matching navigation method using polarization multi-scale decomposition is provided, comprising the following steps:
step 102, acquiring a priori navigation topological map as a first channel, solving a polarization degree image of the priori navigation topological map as a second channel, and performing multi-scale decomposition on the first channel and the second channel to respectively obtain a base layer image and a detail layer image of the two channels.
The first channel corresponds to a near scene of the prior navigation topological map, and the second channel corresponds to a far scene of the prior navigation topological map. The first channel and the second channel form different source a priori navigation topology maps.
The first detail layer obtained from the first channel and the second detail layer obtained from the second channel represent detail features of the near scene and the far scene, respectively.
The polarization degree image of the prior navigation topological map is solved from the map according to Stokes theory; this step is prior art.
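The Stokes-based computation is treated as prior art and not spelled out in the patent; a minimal sketch, assuming a polarization camera that provides intensity images at analyser angles 0°, 45°, 90° and 135° (function name and the small division guard are illustrative):

```python
import numpy as np

def degree_of_polarization(i0, i45, i90, i135):
    """Degree of linear polarization from four analyser-angle images,
    via the linear Stokes parameters S0, S1, S2."""
    s0 = 0.5 * (i0 + i45 + i90 + i135)   # total intensity
    s1 = i0 - i90                        # 0/90 degree preference
    s2 = i45 - i135                      # 45/135 degree preference
    return np.sqrt(s1 ** 2 + s2 ** 2) / np.maximum(s0, 1e-12)
```

For fully linearly polarized light the result is 1 and for unpolarized light 0; the `1e-12` floor merely avoids division by zero in dark pixels.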
The obtained dual-channel base layer images are denoted B_1 and B_2 respectively, and the obtained detail layer images are the first detail layer D_1^j and the second detail layer D_2^j, with j indexing the decomposition level.
Step 104, obtaining a two-channel gradient saliency map according to the two-channel base layer image, constructing a weight function according to the two-channel gradient saliency map, and carrying out weighted average on the two-channel base layer image according to the weight function to obtain a fusion base layer.
Detecting the edge gradient of each pixel of the dual-channel base layer images within its neighboring macro block through the Sobel operator yields the dual-channel gradient saliency maps, denoted S_1 and S_2 respectively.
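A numpy-only sketch of this saliency step; the 3×3 Sobel kernels are standard, while the zero-padded "same" filtering is an implementation choice not specified by the patent:

```python
import numpy as np

def conv2_same(img, kernel):
    """2-D 'same' cross-correlation with zero padding (no external deps)."""
    kh, kw = kernel.shape
    ph, pw = kh // 2, kw // 2
    padded = np.pad(img, ((ph, ph), (pw, pw)))
    out = np.zeros_like(img, dtype=float)
    for i in range(kh):
        for j in range(kw):
            out += kernel[i, j] * padded[i:i + img.shape[0], j:j + img.shape[1]]
    return out

def gradient_saliency(base):
    """Gradient saliency map: Sobel edge-gradient magnitude at each pixel."""
    kx = np.array([[-1, 0, 1], [-2, 0, 2], [-1, 0, 1]], dtype=float)
    gx = conv2_same(base, kx)      # horizontal gradient
    gy = conv2_same(base, kx.T)    # vertical gradient
    return np.sqrt(gx ** 2 + gy ** 2)
```

S_1 and S_2 are obtained by applying `gradient_saliency` to the two base-layer images B_1 and B_2.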
A fusion base layer B_F is obtained by weighted averaging:

B_F = W ∘ B_1 + (I − W) ∘ B_2    (1)

wherein ∘ denotes element-wise multiplication, I is the unit (all-ones) matrix, and the weight W is defined as:

W = S_2 / (2 · max(S_1, S_2))    (2)

with max(S_1, S_2) taken element-wise.
By considering the gradient saliency maps, equation (2) provides an improved "average" fusion rule for the base layers. The weight W emphasizes the channel input with stronger activity, that is, the input with more salient features. For the proposed weight function: if S_1 equals S_2, the weight falls to the common average value 0.5; if S_1 is greater than S_2, W drops below 0.5 and more information from base layer B_2 is fused; if S_1 is less than S_2, W returns to the normal average weight of 0.5. The aim is to defog near-distance and far-distance scenes simultaneously. Distant scenes in the prior navigation topological map are well known to be harder to defog than near-field scenes, which is why the base-layer fusion here leans toward the information of base layer B_2.
The weight function of the invention emphasizes the channel input with stronger activity while guaranteeing that the information from the second channel in the fusion base layer is not less than the preset value of 0.5. This design provides good contrast and overall appearance for the final prior navigation topological map.
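A numpy sketch of the base-layer fusion of equations (1)-(2); the weight form W = S_2/(2·max(S_1, S_2)) is a reconstruction consistent with the behaviour described above, and the function name and `eps` guard are illustrative:

```python
import numpy as np

def fuse_base_layers(b1, b2, s1, s2, eps=1e-12):
    """Weighted-average base-layer fusion. The weight on B1 falls below
    0.5 whenever S1 exceeds S2, so the second (polarization) channel
    always contributes at least half of the fused base layer."""
    w = s2 / (2.0 * np.maximum(np.maximum(s1, s2), eps))  # equation (2)
    return w * b1 + (1.0 - w) * b2                         # equation (1)
```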
Step 106: with the goal of retaining to the greatest extent the scene features represented by the first detail layer and the second detail layer respectively, a detail-layer fusion optimization function is constructed based on the weighted least squares method and solved to obtain the fusion detail layer.
In order to retain to the greatest extent the scene features represented by the first detail layer and the second detail layer respectively, the invention constructs a detail-layer fusion optimization function that minimizes the weighted sum of distances between the fusion detail layer and the two detail layers:

min over D_F^j of  Σ_p [ (D_F^j(p) − D_1^j(p))² + λ · a_j(p) · (D_F^j(p) − D_2^j(p))² ]

wherein p denotes the spatial position of a pixel; a_j is a coefficient with spatially varying weight, a_j(p) = 1 / ( |mean of D_2^j over ω_p| + ε ), where ε is a small constant (usually 0.0001) preventing division by zero and ω_p is a square window centered on pixel p. When choosing the window size of this coefficient, a larger window increases the computational cost, while a smaller window cannot eliminate the influence of noise. D_F^j denotes the j-th level fusion detail layer, D_1^j the first detail layer and D_2^j the second detail layer. The first term minimizes the Euclidean distance between the fusion detail layer D_F^j and the first detail layer D_1^j; the second term aims to make the fusion detail layer D_F^j approach the second detail layer D_2^j; λ is a parameter that globally controls the trade-off between these two terms.
The conventional technology generally follows a "maximum-absolute" fusion rule, which means that in multi-scale decomposition, a final detail layer is selected according to image evaluation indexes such as "feature points or texture information" from a plurality of decomposed detail layers, and a certain detail layer corresponding to the maximum index is selected.
The inventive concept of the detail-layer fusion method is to use an optimization idea to retain to the greatest extent the near-scene and far-scene features represented by the first detail layer and the second detail layer respectively. This overcomes the shortcomings of the traditional "maximum-absolute" fusion rule, fuses detail information with better visual effect from the different-source prior navigation topological maps, and preserves the features of the different-source prior navigation topological maps to the greatest extent.
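Because the optimization function is separable per pixel, each pixel has the closed-form minimiser x = (d1 + λ·a·d2) / (1 + λ·a). The sketch below assumes a_j is computed from the local mean of the second detail layer over a square window, as in the reconstruction above; the parameter values `lam`, `win` and `eps` are illustrative, not from the patent:

```python
import numpy as np

def fuse_detail_layers(d1, d2, lam=0.01, win=7, eps=1e-4):
    """Per-pixel minimiser of (x - d1)^2 + lam*a*(x - d2)^2, with
    a(p) = 1 / (|local mean of d2 at p| + eps)."""
    pad = win // 2
    padded = np.pad(d2, pad, mode='edge')
    local_mean = np.zeros_like(d2, dtype=float)
    for i in range(win):                 # box-filter local mean of d2
        for j in range(win):
            local_mean += padded[i:i + d2.shape[0], j:j + d2.shape[1]]
    local_mean /= win * win
    a = 1.0 / (np.abs(local_mean) + eps)
    # closed-form solution of the separable quadratic objective
    return (d1 + lam * a * d2) / (1.0 + lam * a)
```

With `lam = 0` the fused layer reduces to the first detail layer; as `lam` grows it is pulled toward the second detail layer.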
Step 108: a reconstructed and fused prior navigation topological map is obtained from the fusion base layer and the fusion detail layer, and navigation positioning information of the carrier is obtained through visual matching calculation according to the reconstructed and fused prior navigation topological map.
Performing inverse multi-scale decomposition on the fused base layer and detail layers yields a result in which the visibility of both far and near scenes of the prior navigation topological map is significantly improved. This improves target recognition accuracy based on the prior navigation topological map and yields accurate navigation orientation; specifically, the heading angle of the carrier is obtained through visual matching calculation.
In the above unmanned aerial vehicle vision matching navigation method based on polarization multi-scale decomposition, the prior navigation topological map and its polarization degree image are used as the far/near-scene dual-channel input, and multi-scale decomposition yields the dual-channel base layers and detail layers. In base-layer fusion, a weight function based on gradient saliency maps is constructed that emphasizes the input with stronger activity while setting the information from the second channel in the fusion base layer to be not less than a preset value, providing good contrast and overall appearance for the final prior navigation topological map. In detail-layer fusion, with the goal of retaining to the greatest extent the scene features represented by the first detail layer and the second detail layer, a detail-layer fusion optimization function is constructed based on the weighted least squares method, fusing detail information with better visual effect from the different-source prior navigation topological maps while preserving their features to the greatest extent. Finally, the reconstructed and fused prior navigation topological map is obtained from the fusion base layer and the fusion detail layer, and the navigation positioning information of the carrier is obtained through visual matching calculation. The invention is novel, adapts to different weather conditions, and has broad application prospects for improving the robustness and all-weather adaptability of bionic polarized-light navigation in complex weather.
As shown in fig. 2, a polarization multi-scale decomposition unmanned aerial vehicle vision matching navigation method is provided, which comprises the following steps:
1) Acquiring the far/near-scene dual-channel input;
2) Dual-input multi-scale decomposition based on rolling guidance filtering;
3) Constructing a gradient-map weight measure to fuse the base layers;
4) Designing a least-squares-based optimization method to fuse the detail layers;
5) Performing inverse multi-scale decomposition on the fused base layer and detail layers to obtain a result in which the visibility of near and far scenes of the prior navigation topological map is significantly improved;
6) Obtaining the positioning information of the carrier through visual matching calculation.
It should be understood that, although the steps in the flowcharts of figs. 1-2 are shown in the order indicated by the arrows, they are not necessarily performed in that order. Unless explicitly stated herein, the order of execution is not strictly limited, and the steps may be executed in other orders. Moreover, at least some of the steps in figs. 1-2 may include multiple sub-steps or stages that are not necessarily performed at the same moment but may be performed at different moments; their order of execution is not necessarily sequential, and they may be performed in turn or alternately with at least a portion of the sub-steps or stages of other steps.
In one embodiment, as shown in fig. 3, an unmanned aerial vehicle vision matching navigation device using polarization multi-scale decomposition is provided, comprising: a multi-scale decomposition module 302, a base layer fusion module 304, a detail layer fusion module 306, and a navigation positioning module 308, wherein:
the multi-scale decomposition module 302 is configured to obtain a priori navigation topological map as a first channel, solve a polarization degree image of the priori navigation topological map as a second channel, and perform multi-scale decomposition on the first channel and the second channel to obtain a base layer image and a detail layer image of the two channels respectively; the detail layer image comprises a first detail layer obtained by a first channel and a second detail layer obtained by a second channel;
the base layer fusion module 304 is configured to obtain a two-channel gradient saliency map from the two-channel base layer images, construct a weight function from the two-channel gradient saliency maps, and perform a weighted average of the two-channel base layer images according to the weight function to obtain a fusion base layer; while the weight function emphasizes the channel input with stronger activity features, the information contributed by the second channel to the fusion base layer is set to be not less than a preset value;
the detail layer fusion module 306 is configured to construct a detail layer fusion optimization function based on a weighted least squares method, with the objective of preserving to the greatest extent the scene features respectively represented by the first detail layer and the second detail layer, and to solve the detail layer fusion optimization function to obtain a fusion detail layer;
the navigation positioning module 308 is configured to obtain a reconstructed and fused prior navigation topological map according to the fused base layer and the fused detail layer, and obtain navigation positioning information of the carrier through visual matching calculation according to the reconstructed and fused prior navigation topological map.
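The patent does not specify which visual matching algorithm the navigation positioning module uses; a normalized cross-correlation template match against the fused prior map is one common choice and is sketched below purely as an illustration (the function name and the brute-force search are assumptions):

```python
import numpy as np

def ncc_match(reference, template):
    """Locate `template` in `reference` by normalized cross-correlation.

    Returns the (row, col) of the best match. This is only one common
    choice for the 'visual matching calculation'; the patent does not
    specify the matcher.
    """
    rh, rw = reference.shape
    th, tw = template.shape
    t = template - template.mean()
    t_norm = np.sqrt((t ** 2).sum())
    best, best_pos = -np.inf, (0, 0)
    for r in range(rh - th + 1):
        for c in range(rw - tw + 1):
            w = reference[r:r + th, c:c + tw]
            wz = w - w.mean()
            denom = np.sqrt((wz ** 2).sum()) * t_norm
            score = (wz * t).sum() / denom if denom > 0 else 0.0
            if score > best:
                best, best_pos = score, (r, c)
    return best_pos

rng = np.random.default_rng(1)
ref_map = rng.random((40, 40))          # fused prior navigation map
patch = ref_map[12:20, 17:25]           # simulated onboard view
print(ncc_match(ref_map, patch))        # recovers the patch offset
```

In a real system the recovered offset would be combined with the map's georeference to yield the carrier's positioning information.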
The multi-scale decomposition module 302 is further configured to perform multi-scale decomposition on the first channel and the second channel by using an algorithm based on rolling guidance filtering, so as to obtain the base layer images and detail layer images of the two channels respectively.
The base layer fusion module 304 is further configured to detect an edge gradient of each pixel of the two-channel base layer image in the adjacent macro block by using a Sobel operator, so as to obtain a two-channel gradient saliency map.
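A sketch of the gradient saliency computation: Sobel edge magnitude averaged over a neighborhood, where the 7x7 "macro block" size is an assumption since the patent does not state the block dimensions:

```python
import numpy as np
from scipy.ndimage import sobel, uniform_filter

def gradient_saliency(base_layer, block=7):
    """Gradient saliency map: Sobel edge magnitude averaged over a
    neighborhood (the 'adjacent macro block'; the 7x7 size is an
    assumption, the patent does not give the block dimensions)."""
    img = base_layer.astype(np.float64)
    gx = sobel(img, axis=1)              # horizontal gradient
    gy = sobel(img, axis=0)              # vertical gradient
    magnitude = np.hypot(gx, gy)         # edge gradient per pixel
    return uniform_filter(magnitude, size=block)

flat = np.zeros((32, 32))
edge = np.zeros((32, 32)); edge[:, 16:] = 1.0   # strong vertical edge
s_flat = gradient_saliency(flat)
s_edge = gradient_saliency(edge)
print(s_edge.max() > s_flat.max())       # edge region is more salient
```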
The base layer fusion module 304 is further configured to construct a weight function from the two-channel gradient saliency maps, wherein W is the weight map, I is the unit matrix, S1 and S2 are the two-channel gradient saliency maps, and max(S1, S2) denotes the element-wise maximum of S1 and S2.
The base layer fusion module 304 is further configured to perform a weighted average of the two-channel base layer images according to the weight function to obtain the fusion base layer, wherein BF is the resulting fusion base layer, and B1 and B2 are the base layers of the first channel and the second channel, respectively.
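The published weight formula is rendered as an image, so the exact form cannot be quoted here; the sketch below uses an assumed reconstruction consistent with the listed symbols (weight W, unit matrix I, saliency maps S1 and S2, and their element-wise maximum), together with the floor on the second channel's contribution:

```python
import numpy as np

def fuse_base(b1, b2, s1, s2, w2_min=0.2):
    """Weighted-average base-layer fusion.

    The weight form below (unit matrix plus normalized saliency
    difference, halved) is an assumed reconstruction consistent with the
    symbols named in the patent (W, unit matrix I, S1, S2, max(S1, S2));
    the published formula is an image and may differ. w2_min enforces the
    'not less than a preset value' floor on the second channel.
    """
    denom = np.maximum(np.maximum(s1, s2), 1e-12)   # element-wise max
    w1 = 0.5 * (1.0 + (s1 - s2) / denom)            # assumed weight form
    w1 = np.clip(w1, 0.0, 1.0 - w2_min)             # channel-2 floor
    return w1 * b1 + (1.0 - w1) * b2

b1 = np.full((4, 4), 10.0); b2 = np.full((4, 4), 2.0)
s1 = np.full((4, 4), 3.0);  s2 = np.full((4, 4), 1.0)
fused = fuse_base(b1, b2, s1, s2)
print(fused[0, 0])
```

With these inputs the raw weight 0.5 * (1 + 2/3) ≈ 0.83 is clipped to 0.8, so the second channel still contributes its guaranteed 20% share.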
The detail layer fusion module 306 is further configured to construct a detail layer fusion optimization function based on a weighted least squares method, with a view to preserving to the greatest extent the scene features respectively represented by the first detail layer and the second detail layer, wherein p denotes the spatial position of a pixel, d_j denotes the j-th level fusion detail layer, d1 denotes the first detail layer, d2 denotes the second detail layer, λ is a trade-off parameter, and a is a coefficient with spatially varying weight.
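The published optimization function is likewise rendered as an image. Assuming the standard weighted-least-squares fusion form E = sum_p [(d(p) - d1(p))^2 + λ·a(p)·(d(p) - d2(p))^2], which matches the listed symbols but is a reconstruction rather than a quotation, setting dE/dd(p) = 0 yields a closed-form per-pixel solution:

```python
import numpy as np

def fuse_detail(d1, d2, a, lam=0.01):
    """Per-pixel minimizer of an assumed WLS objective
        E = sum_p (d - d1)^2 + lam * a * (d - d2)^2,
    i.e. d = (d1 + lam*a*d2) / (1 + lam*a). A smoothness term, if present
    in the actual patent formula, would turn this into a sparse linear
    system instead of a pointwise formula.
    """
    return (d1 + lam * a * d2) / (1.0 + lam * a)

d1 = np.array([[1.0, 0.0], [0.0, 1.0]])
d2 = np.array([[0.0, 1.0], [1.0, 0.0]])
a = np.ones((2, 2))
fused = fuse_detail(d1, d2, a, lam=1.0)
print(fused)   # with lam*a = 1 this is the plain average
```

The spatially varying coefficient a lets the solver lean toward whichever source detail layer is locally more informative, which is how the fusion keeps the better visual detail from each input.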
The navigation positioning module 308 is further configured to perform inverse multi-scale decomposition according to the fusion base layer and the fusion detail layer to obtain a reconstructed and fused prior navigation topological map.
For specific limitations on the unmanned aerial vehicle vision matching navigation device based on polarization multi-scale decomposition, reference may be made to the above limitations on the corresponding navigation method, which are not repeated here. The modules in the device may be implemented wholly or partly by software, hardware, or a combination thereof. The modules may be embedded in, or independent of, a processor in the computer device in hardware form, or stored in a memory in the computer device in software form, so that the processor can invoke and execute the operations corresponding to the modules.
In one embodiment, a computer device is provided, which may be a terminal, and the internal structure of which may be as shown in fig. 4. The computer device includes a processor, a memory, a network interface, a display screen, and an input device connected by a system bus. Wherein the processor of the computer device is configured to provide computing and control capabilities. The memory of the computer device includes a non-volatile storage medium and an internal memory. The non-volatile storage medium stores an operating system and a computer program. The internal memory provides an environment for the operation of the operating system and computer programs in the non-volatile storage media. The network interface of the computer device is used for communicating with an external terminal through a network connection. The computer program, when executed by the processor, implements a polarized multi-scale resolved unmanned aerial vehicle vision matching navigation method. The display screen of the computer equipment can be a liquid crystal display screen or an electronic ink display screen, and the input device of the computer equipment can be a touch layer covered on the display screen, can also be keys, a track ball or a touch pad arranged on the shell of the computer equipment, and can also be an external keyboard, a touch pad or a mouse and the like.
Those skilled in the art will appreciate that the structure shown in fig. 4 is merely a block diagram and does not constitute a limitation on the computer device to which the present solution applies; a particular computer device may include more or fewer components than shown, combine certain components, or have a different arrangement of components.
In an embodiment a computer device is provided comprising a memory storing a computer program and a processor implementing the steps of the method embodiments described above when the computer program is executed.
In one embodiment, a computer readable storage medium is provided, on which a computer program is stored which, when executed by a processor, implements the steps of the method embodiments described above.
Those skilled in the art will appreciate that implementing all or part of the above described methods may be accomplished by a computer program stored on a non-transitory computer readable storage medium, which, when executed, may comprise the steps of the embodiments of the methods described above. Any reference to memory, storage, database, or other medium used in the various embodiments provided herein may include non-volatile and/or volatile memory. The non-volatile memory can include Read Only Memory (ROM), Programmable ROM (PROM), Electrically Programmable ROM (EPROM), Electrically Erasable Programmable ROM (EEPROM), or flash memory. Volatile memory can include Random Access Memory (RAM) or external cache memory. By way of illustration and not limitation, RAM is available in a variety of forms such as Static RAM (SRAM), Dynamic RAM (DRAM), Synchronous DRAM (SDRAM), Double Data Rate SDRAM (DDR SDRAM), Enhanced SDRAM (ESDRAM), Synchlink DRAM (SLDRAM), Rambus Direct RAM (RDRAM), Direct Rambus Dynamic RAM (DRDRAM), and Rambus Dynamic RAM (RDRAM), among others.
The technical features of the above embodiments may be arbitrarily combined, and all possible combinations of the technical features in the above embodiments are not described for brevity of description, however, as long as there is no contradiction between the combinations of the technical features, they should be considered as the scope of the description.
The above embodiments merely represent several implementations of the present application; their description is relatively specific and detailed, but is not to be construed as limiting the scope of the invention. It should be noted that those skilled in the art may make various modifications and improvements without departing from the concept of the present application, all of which fall within the protection scope of the present application. Accordingly, the scope of protection of the present application shall be determined by the appended claims.

Claims (6)

1. An unmanned aerial vehicle vision matching navigation method based on polarization multi-scale decomposition, the method comprising:
acquiring a priori navigation topological map as a first channel, solving a polarization degree image of the priori navigation topological map as a second channel, and performing multi-scale decomposition on the first channel and the second channel to respectively obtain a base layer image and a detail layer image of a double channel; the detail layer image comprises a first detail layer obtained by the first channel and a second detail layer obtained by the second channel; the first channel corresponds to a priori navigation topological map near scene, and the second channel corresponds to a priori navigation topological map far scene;
obtaining a two-channel gradient saliency map according to the two-channel base layer image, and constructing a weight function from the two-channel gradient saliency maps, wherein W is the weight map, I is the unit matrix, S1 and S2 are the two-channel gradient saliency maps, and max(S1, S2) denotes the element-wise maximum of S1 and S2;
carrying out weighted average on the two-channel base layer images according to the weight function to obtain a fusion base layer; while the weight function emphasizes the channel input with stronger activity features, setting the information from the second channel in the fusion base layer to be not less than a preset value;
aiming at preserving, to the greatest extent, the scene features respectively represented by the first detail layer and the second detail layer, constructing a detail layer fusion optimization function based on a weighted least squares method, wherein p denotes the spatial position of a pixel, d_j denotes the j-th level fusion detail layer, d1 denotes the first detail layer, d2 denotes the second detail layer, λ is a trade-off parameter, and a is a coefficient with spatially varying weight;
solving the detail layer fusion optimization function to obtain a fusion detail layer;
and obtaining a reconstructed and fused prior navigation topological map according to the fusion base layer and the fusion detail layer, realizing the simultaneous defogging of the near-distance scene and the far-distance scene, and obtaining navigation positioning information of the carrier through visual matching calculation according to the reconstructed and fused prior navigation topological map.
2. The method of claim 1, wherein performing multi-scale decomposition on the first channel and the second channel to obtain a two-channel base layer image and a detail layer image, respectively, comprises:
and carrying out multi-scale decomposition on the first channel and the second channel through an algorithm based on rolling guidance filtering, so as to respectively obtain the base layer images and detail layer images of the two channels.
3. The method of claim 1, wherein obtaining a two-channel gradient saliency map from the two-channel base layer image comprises:
and detecting the edge gradient of each pixel of the dual-channel base layer image in the adjacent macro block through a Sobel operator to obtain a dual-channel gradient saliency map.
4. The method of claim 1, wherein performing a weighted average on the two-channel base layer image according to the weight function to obtain a fused base layer comprises:
and carrying out weighted average on the base layer images of the two channels according to the weight function to obtain the fusion base layer, wherein BF is the obtained fusion base layer, and B1 and B2 are the base layer images of the first channel and the second channel, respectively.
5. The method of claim 1, wherein obtaining a reconstructed fused prior navigation topology map from the fused base layer and the fused detail layer comprises:
and performing inverse multi-scale decomposition according to the fusion base layer and the fusion detail layer to obtain a reconstructed and fused prior navigation topological map.
6. An unmanned aerial vehicle vision matching navigation device based on polarization multi-scale decomposition, the device comprising:
the multi-scale decomposition module is used for obtaining a priori navigation topological map as a first channel, solving a polarization degree image of the priori navigation topological map as a second channel, and performing multi-scale decomposition on the first channel and the second channel to obtain a base layer image and a detail layer image of the two channels respectively; the detail layer image comprises a first detail layer obtained by the first channel and a second detail layer obtained by the second channel; the first channel corresponds to a priori navigation topological map near scene, and the second channel corresponds to a priori navigation topological map far scene;
the base layer fusion module is used for obtaining a two-channel gradient saliency map according to the two-channel base layer image, and constructing a weight function from the two-channel gradient saliency maps, wherein W is the weight map, I is the unit matrix, S1 and S2 are the two-channel gradient saliency maps, and max(S1, S2) denotes the element-wise maximum of S1 and S2;
carrying out weighted average on the two-channel base layer images according to the weight function to obtain a fusion base layer; while the weight function emphasizes the channel input with stronger activity features, setting the information from the second channel in the fusion base layer to be not less than a preset value;
the detail layer fusion module is used for constructing a detail layer fusion optimization function based on a weighted least squares method, aiming at preserving to the greatest extent the scene features respectively represented by the first detail layer and the second detail layer, wherein p denotes the spatial position of a pixel, d_j denotes the j-th level fusion detail layer, d1 denotes the first detail layer, d2 denotes the second detail layer, λ is a trade-off parameter, and a is a coefficient with spatially varying weight;
solving the detail layer fusion optimization function to obtain a fusion detail layer;
and the navigation positioning module is used for obtaining a reconstructed and fused prior navigation topological map according to the fusion base layer and the fusion detail layer, realizing the simultaneous defogging of the near-distance scene and the far-distance scene, and obtaining navigation positioning information of the carrier through visual matching calculation according to the reconstructed and fused prior navigation topological map.
CN202310578842.4A 2023-05-22 2023-05-22 Unmanned aerial vehicle vision matching navigation method and device adopting polarization multi-scale decomposition Active CN116295446B (en)

Priority Applications (1)

Application Number Priority Date Filing Date Title
CN202310578842.4A CN116295446B (en) 2023-05-22 2023-05-22 Unmanned aerial vehicle vision matching navigation method and device adopting polarization multi-scale decomposition

Applications Claiming Priority (1)

Application Number Priority Date Filing Date Title
CN202310578842.4A CN116295446B (en) 2023-05-22 2023-05-22 Unmanned aerial vehicle vision matching navigation method and device adopting polarization multi-scale decomposition

Publications (2)

Publication Number Publication Date
CN116295446A CN116295446A (en) 2023-06-23
CN116295446B true CN116295446B (en) 2023-08-04

Family

ID=86799997

Family Applications (1)

Application Number Title Priority Date Filing Date
CN202310578842.4A Active CN116295446B (en) 2023-05-22 2023-05-22 Unmanned aerial vehicle vision matching navigation method and device adopting polarization multi-scale decomposition

Country Status (1)

Country Link
CN (1) CN116295446B (en)

Citations (2)

* Cited by examiner, † Cited by third party
Publication number Priority date Publication date Assignee Title
CN115393216A (en) * 2022-08-25 2022-11-25 中国人民解放军国防科技大学 Image defogging method and device based on polarization characteristics and atmospheric transmission model
CN116091361A (en) * 2023-03-23 2023-05-09 长春理工大学 Multi-polarization parameter image fusion method, system and terrain exploration monitor

Family Cites Families (8)

* Cited by examiner, † Cited by third party
Publication number Priority date Publication date Assignee Title
CN106846289B (en) * 2017-01-17 2019-08-23 中北大学 A kind of infrared light intensity and polarization image fusion method
KR101918007B1 (en) * 2017-07-17 2018-11-13 서울시립대학교 산학협력단 Method and apparatus for data fusion of polarimetric synthetic aperature radar image and panchromatic image
CN109754384B (en) * 2018-12-18 2022-11-22 电子科技大学 Infrared polarization image fusion method of uncooled infrared focal plane array
CN110766676B (en) * 2019-10-24 2022-04-26 中国科学院长春光学精密机械与物理研究所 Target detection method based on multi-source sensor fusion
CN111080724B (en) * 2019-12-17 2023-04-28 大连理工大学 Fusion method of infrared light and visible light
US20240161479A1 (en) * 2021-03-25 2024-05-16 Sri International Polarized Image Enhancement using Deep Neural Networks
US11546508B1 (en) * 2021-07-21 2023-01-03 Black Sesame Technologies Inc. Polarization imaging system with super resolution fusion
CN114092369A (en) * 2021-11-19 2022-02-25 中国直升机设计研究所 Image fusion method based on visual saliency mapping and least square optimization

Patent Citations (2)

* Cited by examiner, † Cited by third party
Publication number Priority date Publication date Assignee Title
CN115393216A (en) * 2022-08-25 2022-11-25 中国人民解放军国防科技大学 Image defogging method and device based on polarization characteristics and atmospheric transmission model
CN116091361A (en) * 2023-03-23 2023-05-09 长春理工大学 Multi-polarization parameter image fusion method, system and terrain exploration monitor

Non-Patent Citations (1)

* Cited by examiner, † Cited by third party
Title
Polarization image fusion dehazing algorithm and experiments based on multi-scale singular value decomposition; Zhou Wenzhou et al.; Chinese Optics; Vol. 14, No. 2; 298-306 *

Also Published As

Publication number Publication date
CN116295446A (en) 2023-06-23

Similar Documents

Publication Publication Date Title
Ju et al. IDGCP: Image dehazing based on gamma correction prior
CN111788602B (en) Point cloud denoising system and method
Berman et al. Single image dehazing using haze-lines
CN109300190B (en) Three-dimensional data processing method, device, equipment and storage medium
US8199977B2 (en) System and method for extraction of features from a 3-D point cloud
US9483703B2 (en) Online coupled camera pose estimation and dense reconstruction from video
US10477178B2 (en) High-speed and tunable scene reconstruction systems and methods using stereo imagery
Xiao et al. Planar segment based three‐dimensional point cloud registration in outdoor environments
AU2013213659B2 (en) Method and system for using fingerprints to track moving objects in video
Hua et al. Extended guided filtering for depth map upsampling
CN112947419B (en) Obstacle avoidance method, device and equipment
CN110998671B (en) Three-dimensional reconstruction method, device, system and storage medium
Liao et al. Pyramid multi‐view stereo with local consistency
Zhang et al. A new high resolution depth map estimation system using stereo vision and kinect depth sensing
Lo et al. Depth map super-resolution via Markov random fields without texture-copying artifacts
CN114519772A (en) Three-dimensional reconstruction method and system based on sparse point cloud and cost aggregation
Hou et al. Planarity constrained multi-view depth map reconstruction for urban scenes
CN112396701A (en) Satellite image processing method and device, electronic equipment and computer storage medium
Xie et al. A flexible free-space detection system based on stereo vision
Hu et al. IMGTR: Image-triangle based multi-view 3D reconstruction for urban scenes
Yoo et al. Accurate object distance estimation based on frequency‐domain analysis with a stereo camera
Wang et al. Pedestrian detection based on YOLOv3 multimodal data fusion
Yoo et al. True orthoimage generation by mutual recovery of occlusion areas
CN116295446B (en) Unmanned aerial vehicle vision matching navigation method and device adopting polarization multi-scale decomposition
Le Besnerais et al. Dense height map estimation from oblique aerial image sequences

Legal Events

Date Code Title Description
PB01 Publication
PB01 Publication
SE01 Entry into force of request for substantive examination
SE01 Entry into force of request for substantive examination
GR01 Patent grant
GR01 Patent grant