CN112101099B - Eagle eye self-adaptive mechanism-simulated unmanned aerial vehicle sea surface small target identification method - Google Patents
- Publication number
- Publication: CN112101099B (application CN202010771915.8A)
- Authority
- CN
- China
- Prior art keywords
- image
- eagle eye
- color
- eagle
- sea surface
- Prior art date
- Legal status (The legal status is an assumption and is not a legal conclusion. Google has not performed a legal analysis and makes no representation as to the accuracy of the status listed.)
- Active
Classifications
- G—PHYSICS
  - G06—COMPUTING; CALCULATING OR COUNTING
    - G06F—ELECTRIC DIGITAL DATA PROCESSING
      - G06F18/00—Pattern recognition
        - G06F18/20—Analysing
          - G06F18/24—Classification techniques
    - G06V—IMAGE OR VIDEO RECOGNITION OR UNDERSTANDING
      - G06V10/00—Arrangements for image or video recognition or understanding
        - G06V10/20—Image preprocessing
          - G06V10/26—Segmentation of patterns in the image field; Cutting or merging of image elements to establish the pattern region, e.g. clustering-based techniques; Detection of occlusion
            - G06V10/267—Segmentation by performing operations on regions, e.g. growing, shrinking or watersheds
        - G06V10/40—Extraction of image or video features
          - G06V10/44—Local feature extraction by analysis of parts of the pattern, e.g. by detecting edges, contours, loops, corners, strokes or intersections; Connectivity analysis, e.g. of connected components
            - G06V10/443—Local feature extraction by matching or filtering
              - G06V10/449—Biologically inspired filters, e.g. difference of Gaussians [DoG] or Gabor filters
          - G06V10/46—Descriptors for shape, contour or point-related descriptors, e.g. scale invariant feature transform [SIFT] or bags of words [BoW]; Salient regional features
            - G06V10/462—Salient features, e.g. scale invariant feature transforms [SIFT]
      - G06V20/00—Scenes; Scene-specific elements
        - G06V20/10—Terrestrial scenes
          - G06V20/13—Satellite images
Abstract
The invention discloses a method, modeled on the adaptive mechanisms of the eagle eye, for an unmanned aerial vehicle to identify small sea-surface targets. It comprises the following steps. Step one: model the eagle-eye light and shade adaptation mechanism; step two: model the eagle-eye color adaptation mechanism; step three: model the eagle-eye foreground-background adaptation mechanism; step four: perform amplitude-phase spectrum reconstruction on the image adjusted by the eagle-eye adaptation mechanisms; step five: calculate the multi-scale gray difference; step six: calculate and normalize the saliency information; step seven: output the saliency map of the small sea-surface target recognized by the eagle-eye-adaptive unmanned aerial vehicle. The advantages of the invention are: 1) introducing the eagle-eye mechanism into small-target detection maps the eagle's ability to capture prey in varied environments from altitudes of thousands of meters onto the UAV's task of long-distance identification and tracking; 2) the eagle-eye adaptive mechanism lets the UAV adapt to a wide variety of sea-surface environments, such as fish-scale light (sun glitter), sunny days, cloudy days, and dusk.
Description
Technical Field
The invention relates to a biological-vision-based method for unmanned aerial vehicle recognition of long-distance small targets on the sea surface, and in particular to a method, modeled on the adaptive mechanisms of the eagle eye, for UAV recognition of small sea-surface targets; it belongs to the field of computer vision.
Background
Offshore islands are an important support for developing the ocean economy and coastal areas, and are also front-line positions for safeguarding maritime rights and national security. The relatively complex terrain and landforms of these islands make basic geographic information difficult to obtain. Traditional manual surveying can hardly complete the collection of complex basic topographic data, and suffers from heavy workload, high attrition, and personnel-safety problems. An intelligent platform is therefore needed to fill in the blank areas of geographic data in coastal regions.
An unmanned aerial vehicle (UAV) is an aircraft controlled by a radio control device or an onboard programmed control device. UAVs offer small size, low cost, high speed, a wide field of view, low requirements on the operating environment, and strong battlefield survivability. Facing complex offshore tasks, however, a UAV's limited range means it must complete its mission within a fixed time or return to shore to replenish energy, which greatly reduces its efficiency. An unmanned surface vessel (USV) can provide an energy-supply platform for the UAV, so that the UAV can carry out offshore tasks for long periods without returning to shore, fully embodying the advantage of UAV-USV cooperation. Before the UAV replenishes energy on the USV, it must search for and track the USV. When the UAV is far out at sea, the USV appears as a small target in its field of view. Detection of small targets at sea is affected by random interference such as fish-scale light and waves, and faces low resolution, blurred images, and little information, all of which hinder long-distance detection of the USV as a small sea-surface target.
Since the 1980s, a large number of small-target detection methods have been proposed. The maximally stable extremal region (MSER) method detects regions that remain stable over a range of thresholds, but its false-alarm rate is high. The region stability and saliency (RSS) algorithm considers color information and filtering, but its detection performance is poor when the target resembles the background. Top-hat transformation and machine-learning methods have also been applied to small-target detection, yet their operating conditions are restrictive and their accuracy limited. With the development of machine learning, deep methods such as the region-based convolutional neural network (R-CNN) and the Faster R-CNN framework have greatly improved small-target detection accuracy, and the unified real-time detector YOLO has also contributed to improving precision. Although deep-learning algorithms perform well on small targets, they require large numbers of manually labeled samples and time-consuming training. As the advantages of biological vision have become prominent, visual-attention methods such as the Itti model, graph-based visual saliency, and the spectral residual (SR) method have been applied to target detection, but most are sensitive to noise signals and yield high false-alarm rates on small targets.
The hawk is known as "the king of the sky". From altitudes of several thousand meters, hawks can accurately identify and catch prey on the ground or the sea surface. This extraordinary ability stems from the unique structure of the eagle eye. The retina of the eagle has two foveae, a deep fovea and a shallow fovea, each serving a different region of the visual field. The eagle's field of view approximates a sphere: the overlapping fields of the two shallow foveae form a binocular region in front of the head, while the deep foveae form monocular regions on either side, allowing the eagle to find and capture prey across a wide scene. The eagle observes lateral targets with the deep foveae to search over a large area, and observes the frontal scene with the shallow foveae to track and capture the target. The eagle eye also contains comb-like protrusions (the pecten), a special folded structure protruding from the optic-nerve entry point into the posterior chamber of the eye; it weakens scattered light within the eye, making the images formed by the eagle eye clearer. Moreover, the hawk accurately catches prey in scenes as different as grassland, snowfield, and ocean, showing strong adaptability to background. Since detection of small sea-surface targets is often affected by sunlight reflection, sea-surface fluctuation, and the sea-sky boundary, the background adaptability of the hawk is applied here to small-target detection at sea.
In conclusion, the invention provides an eagle-eye-adaptive-mechanism method for UAV recognition of small sea-surface targets. It aims to solve the difficulty of long-distance small-target recognition and localization on a complex sea surface, effectively improve recognition of the unmanned surface vessel as a small target, and make it possible to guide the UAV to land on the unmanned vessel for energy replenishment.
Disclosure of Invention
The invention provides a method, modeled on the adaptive mechanisms of the eagle eye, for UAV identification of small sea-surface targets. It aims to offer a real-time, online method for long-distance small-target detection against a complex sea-surface background, effectively improving the efficiency and accuracy of small-target detection in complex sea environments and making it possible to land the UAV on an unmanned ship from a long distance.
Aiming at the problem of UAV recognition of long-distance small sea-surface targets, the invention develops an eagle-eye-adaptive-mechanism-based recognition method; its structural framework is shown in figure 1, and the specific steps are as follows:
Step one: eagle eye-imitated light and shade adaptation mechanism modeling
Establishing a mathematical model simulating the eagle-eye light and shade adaptation mechanism according to the light and dark adaptation of the eagle eye, the model I_g(i,j) for adaptation to bright environments can be represented as formula (1),
where I is the input image, (i,j) is a position in the image, M is the length of the image, N is the width of the image, k is the linear-variation slope, G(i,j,σ_gauss) is a two-dimensional Gaussian function with variance σ_gauss, I_V(i,j) is the brightness of the image at position (i,j), and ⊗ is the convolution symbol.
For convenience of presentation, mid_V is defined as the average brightness information of the input image I, which can be expressed as formula (2).
Therefore, the mathematical model I_d(i,j) imitating the adaptation of the eagle eye to dark environments can be represented as formula (3),
where Mean(i,j,σ_mean) is a mean filter function and σ_mean is the filter kernel size of the mean filter function. The mathematical model simulating the eagle-eye light/shade adaptation mechanism can therefore be summarized as formula (4),
where I_l is the image after the input image is adjusted by the eagle-eye light/shade adaptation mechanism, and T_l1 and T_l2 are the thresholds for distinguishing bright images from dark images.
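The adjustment formulas of this step survive only as image references in the source text. As a hedged sketch of the branching logic the text does describe (compute the mean brightness mid_V and select the bright-adaptation model, the dark-adaptation model, or no change), the following Python fragment uses an assumed Gaussian-filter adjustment for bright images and an assumed mean-filter adjustment for dark images; the thresholds T_l1 = 0.745 and T_l2 = 0.518 are the values given later in the detailed description.

```python
import numpy as np
from scipy.ndimage import gaussian_filter, uniform_filter

def light_dark_adapt(I_v, T_l1=0.745, T_l2=0.518, sigma_gauss=2.0, size_mean=5):
    """Branching logic of the eagle-eye light/shade adaptation (step one).

    I_v: brightness image with values in [0, 1]. The bright and dark
    adjustments below are illustrative stand-ins for formulas (1) and (3),
    which are not reproduced in the source text; only the mid_V thresholding
    against T_l1 / T_l2 is taken from the text.
    """
    mid_v = I_v.mean()                     # formula (2): average brightness mid_V
    if mid_v > T_l1:                       # bright image -> bright-adaptation model I_g
        # assumed form: suppress smooth glare, re-center at mean brightness
        return I_v - gaussian_filter(I_v, sigma_gauss) + mid_v
    elif mid_v < T_l2:                     # dark image -> dark-adaptation model I_d
        # assumed form: boost pixels relative to their local mean
        return I_v + (mid_v - uniform_filter(I_v, size_mean))
    return I_v                             # intermediate brightness: unchanged
```

The two filter-based branches are placeholders under stated assumptions, not the patent's actual equations; only the threshold structure follows the description.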
Step two: modeling of eagle eye-imitated color adaptation mechanism
The color absorption of the eagle eye can be expressed as formula (5),
where γ is the color guess rate, a constant; λ is the color failure rate, which lies in the range [0, 0.25]; and α and β describe the change in color.
When the absolute value of the difference between the pixel means of any two channels exceeds 20, the image is considered to exhibit a particular color. The absolute differences RG, RB, GB between the channel pixel means are given by formula (6), where R, G, B denote the red, green, and blue color channels of the image:
here I_s(i,j), s ∈ {R, G, B}, are the pixel values of the input image I on the respective color channels. When the input image I satisfies (RG > T_c) & (RB > T_c) & (GB > T_c), it is judged a special color image; otherwise it is a normal color image. A normal color image is adjusted as in formula (7), and a special color image as in formula (8),
where T_c is the color-judgment threshold, I_c is the image after color adjustment, Med(i,j,σ_med) is a median filter with kernel size σ_med, and D(i,j,σ_dou) is a bilateral filter with standard deviation σ_dou.
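The channel-mean test of formula (6) can be sketched as below. Only the classification decision is shown, since the subsequent median/bilateral filtering of formulas (7) and (8) depends on kernel parameters the text does not give; the direction of the combined condition (all pairwise differences above T_c implies a special color image) is reconstructed from the surrounding sentences, and the value T_c = 25 is taken from the detailed description, so both are assumptions.

```python
import numpy as np

def classify_color(img_rgb, T_c=25.0):
    """Color-adaptation classification of step two (formula (6) sketch).

    img_rgb: H x W x 3 array. Pairwise absolute differences of the R, G, B
    channel means decide whether the image shows a normal color distribution
    or a 'special' dominant color. Condition direction and T_c are assumptions
    reconstructed from the surrounding text.
    """
    mid_r, mid_g, mid_b = (img_rgb[..., c].mean() for c in range(3))
    rg = abs(mid_r - mid_g)          # RG in the text
    rb = abs(mid_r - mid_b)          # RB
    gb = abs(mid_g - mid_b)          # GB
    special = (rg > T_c) and (rb > T_c) and (gb > T_c)
    return "special" if special else "normal"
```

A gray image classifies as "normal"; an image whose three channel means are all far apart classifies as "special" and would then take the bilateral-filter branch.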
Step three: modeling imitating eagle eye foreground background adaptive mechanism
Relevant research shows that the single cone cells of the eagle eye are used for color vision and the double cone cells for achromatic vision, where the color vision of the eagle eye views a larger field and the achromatic vision detects details of the image. The foreground-background segmentation threshold T_g of the input image I is acquired in order to distinguish foreground information from background information.
Here g_t is the absolute value of the difference between the gray-level mean and the foreground-background threshold, mid_g is the gray-level mean of the input image I, g_rgb is the absolute value of the difference between the image gray-level mean and the channel means of the input image I, and mid_RGB is the mean over the channels of the input image I. When T_g < mid_g, the input image I is judged a foreground-prominent image, expressed as in formula (10); otherwise it is a background-prominent image, expressed as in formula (11).
Here T_rgb is the adjustment threshold, T_b1 and T_b2 are the thresholds for judging the background, and I_b is the final adjusted image of the eagle-eye adaptive mechanism.
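The foreground/background decision of this step can be sketched as follows. Formula (9), which produces the threshold T_g, is not reproduced in the source text, so the midpoint-of-halves estimate below is an assumed placeholder; only the comparison of T_g against the gray mean mid_g follows the description.

```python
import numpy as np

def foreground_background_split(gray, T_g=None):
    """Step-three decision rule: compare segmentation threshold T_g with the
    gray-level mean mid_g. T_g < mid_g -> foreground-prominent image
    (formula (10)); otherwise background-prominent (formula (11)).

    When T_g is not supplied, the midpoint between the means of the darker
    and brighter pixel halves is used as an assumed stand-in for the
    unavailable formula (9).
    """
    gray = np.asarray(gray, dtype=float)
    mid_g = gray.mean()
    if T_g is None:  # assumed placeholder for formula (9)
        lo = gray[gray <= mid_g].mean()
        hi = gray[gray > mid_g].mean() if (gray > mid_g).any() else mid_g
        T_g = 0.5 * (lo + hi)
    return "foreground" if T_g < mid_g else "background"
```

A mostly dark image with a few bright pixels is classified as background-prominent, while a mostly bright image with a few dark pixels is foreground-prominent, matching the T_g < mid_g rule.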
Step four: carrying out amplitude-phase spectrum reconstruction on the image after the adjustment of the eagle eye adaptation mechanism
The amplitude spectrum and the phase spectrum of the adjusted image obtained in steps one to three are calculated respectively, and the amplitude-phase spectrum is reconstructed; the reconstructed image S_a(i,j) can be represented as formula (12),
where A(f) is the amplitude spectrum of the adjusted image I_b, P(f) is the phase spectrum of the adjusted image I_b, and F^{-1}(·) denotes the inverse Fourier transform.
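Step four can be written directly with the FFT: S_a = F^{-1}(A(f)·e^{jP(f)}). With unmodified spectra this round-trip returns the input exactly; in the method the spectra are those of the adjusted image I_b (any additional spectral manipulation the patent applies is not spelled out in this text).

```python
import numpy as np

def amp_phase_reconstruct(img):
    """Step four: amplitude-phase spectrum reconstruction (formula (12)).

    Computes the amplitude spectrum A(f) and phase spectrum P(f) of the
    input and reconstructs via the inverse Fourier transform.
    """
    F = np.fft.fft2(img)
    A = np.abs(F)                                  # amplitude spectrum A(f)
    P = np.angle(F)                                # phase spectrum P(f)
    S_a = np.fft.ifft2(A * np.exp(1j * P)).real    # F^{-1}(A * e^{jP})
    return S_a
```

Because A·e^{jP} recombines exactly the original complex spectrum, reconstruction from an image's own amplitude and phase reproduces that image to numerical precision.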
Step five: calculating multi-scale gray scale differences
The multi-scale gray difference D_a(i,j) of the reconstructed image S_a(i,j) can be represented as formula (13),
where K is the number of neighborhoods at point (i,j) and d_k(i,j) is the gray difference of the k-th neighborhood at image point (i,j), which can be expressed as formula (14).
Here L_max is a positive integer; Ω_k, (p,q) ∈ Ω_k, is the k-th neighborhood at point (i,j), as shown in fig. 3, and its size |Ω_k| can be expressed as (2k+1)²; Ω_K, (m,n) ∈ Ω_K, is the largest neighborhood at point (i,j); and n_k and n_K are respectively the numbers of pixels in regions Ω_k and Ω_K.
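The multi-scale comparison of neighborhood means described above can be sketched with box filters: for each scale k the (2k+1)×(2k+1) neighborhood mean around every pixel is compared with the mean of the largest neighborhood Ω_K, and the per-scale differences are accumulated. The exact combination rule of formula (14) is not reproduced in the source, so the squared difference used here is an assumption.

```python
import numpy as np
from scipy.ndimage import uniform_filter

def multiscale_gray_difference(S_a, K=4):
    """Step-five sketch: accumulate differences between the mean of each
    neighborhood Omega_k (size (2k+1)^2) and the mean of the largest
    neighborhood Omega_K, per pixel. The squared per-scale difference is an
    assumed stand-in for formula (14)."""
    S_a = np.asarray(S_a, dtype=float)
    big = uniform_filter(S_a, size=2 * K + 1)        # largest neighborhood Omega_K
    D = np.zeros_like(S_a)
    for k in range(1, K + 1):                        # neighborhoods Omega_1 .. Omega_K
        small = uniform_filter(S_a, size=2 * k + 1)  # |Omega_k| = (2k+1)^2 pixels
        D += (small - big) ** 2                      # assumed per-scale difference
    return D
```

On a constant image every neighborhood mean coincides, so the difference map is identically zero; isolated small bright regions produce strong responses, which is what makes the measure useful for small targets.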
Step six: computing saliency information and normalizing
The saliency map S(i,j) of the input image I may be represented as formula (15),
where G(i,j,σ) is a Gaussian filter with standard deviation σ and norm(·) denotes the normalization operation.
Step seven: outputting the saliency map of small sea-surface targets recognized by the UAV imitating the eagle-eye adaptive mechanism
Through the calculations of steps one to six, the detection result of the unmanned aerial vehicle identifying small targets in complex sea-surface environments (sunny days, cloudy days, dusk, and severe fish-scale light) is finally obtained. The position can then be estimated by triangulation, guiding the UAV to fly toward the unmanned-ship target and finally land on the unmanned ship for energy replenishment.
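The triangulation mentioned above is only named, not specified, in the text; the fragment below is a hedged sketch of the standard two-ray bearing intersection in the plane (two UAV positions p1, p2 and the bearing angles toward the detected target), which is one common way such a position estimate is computed.

```python
import numpy as np

def triangulate_2d(p1, b1, p2, b2):
    """Estimate a planar target position from two observation points and two
    bearing angles (radians, measured from the x-axis). Standard two-ray
    intersection; the patent's exact triangulation variant is not given, so
    this is an illustrative assumption."""
    d1 = np.array([np.cos(b1), np.sin(b1)])   # ray direction from p1
    d2 = np.array([np.cos(b2), np.sin(b2)])   # ray direction from p2
    # Solve p1 + t*d1 = p2 + s*d2  =>  t*d1 - s*d2 = p2 - p1
    A = np.column_stack([d1, -d2])
    t, _ = np.linalg.solve(A, np.asarray(p2, float) - np.asarray(p1, float))
    return np.asarray(p1, float) + t * d1
```

Two bearings taken from (0, 0) and (20, 0) toward a ship at (10, 5) intersect back at (10, 5); degenerate (parallel-bearing) geometry would make the linear system singular and is not handled here.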
The invention provides an eagle-eye-adaptive-mechanism method for UAV identification of small sea-surface targets. It recognizes small targets on complex sea surfaces by simulating the light/shade adaptation, color adaptation, and foreground-background adaptation mechanisms of the eagle eye. Its main advantages lie in two aspects: 1) introducing the eagle-eye mechanism into small-target detection maps the eagle's ability to capture prey in varied environments from altitudes of thousands of meters onto the UAV's task of long-distance identification and tracking; 2) the eagle-eye adaptive mechanism lets the UAV adapt to a wide variety of sea-surface environments, such as fish-scale light, sunny days, cloudy days, and dusk.
Drawings
FIG. 1 is a block diagram of the process of the present invention.
Fig. 2 is a schematic view of an eagle eye eyeball structure and a comb-shaped protrusion.
FIG. 3 is a schematic diagram of a neighborhood of pixels in an image.
Figs. 4(a)-(d) are diagrams of the small-target detection results in a sunny sea-surface environment.
Figs. 5(a)-(d) are diagrams of the small-target detection results in a cloudy sea-surface environment.
Figs. 6(a)-(d) are diagrams of the small-target detection results in a dusk sea-surface environment.
Figs. 7(a)-(d) are diagrams of the small-target detection results in a sea-surface environment with severe fish-scale light and sunlight reflection.
The reference numbers and symbols in the figures are as follows:
I - input image
R, G, B - red, green, and blue color channels
T_l1 - luminance segmentation threshold 1
T_l2 - luminance segmentation threshold 2
mid_V - average brightness of the image
mid_R - pixel mean of the red channel of the image
mid_G - pixel mean of the green channel of the image
mid_B - pixel mean of the blue channel of the image
mid_RGB - pixel mean over the color channels of the image
mid_g - pixel mean of the gray scale of the image
RG - absolute value of the difference between the red and green channel pixel means
RB - absolute value of the difference between the red and blue channel pixel means
GB - absolute value of the difference between the green and blue channel pixel means
T_g - image foreground-background segmentation threshold
g_t - absolute value of the difference between the image mean and the foreground-background segmentation threshold
g_rgb - absolute value of the difference between the image gray-level mean and the per-channel pixel means
S - output saliency map
Y - yes (condition satisfied)
N - no (condition not satisfied)
Detailed Description
The effectiveness of the proposed method is verified on a concrete example of a drone identifying small targets in a complex sea environment. The experimental computer is configured with an Intel Core i7-4790 processor, a 3.60 GHz clock speed, and 4 GB of memory, running MATLAB 2014a. The eagle-eye-adaptive-mechanism method for UAV recognition of small sea-surface targets proceeds through the following specific steps:
Step one: modeling of the eagle eye-imitated light and shade adaptation mechanism
The comb-like protrusion (pecten) is a special folded structure of the bird eye, projecting from the optic-nerve entry point into the vitreous body; it is shown schematically in fig. 2, where the enlarged portion at the lower right is the comb-like protrusion of the eagle. As in other birds, the comb-like protrusions of the eagle eye do not impair vision and absorb a significant portion of stray light. The retina of the barn owl, by comparison, is rich in rod cells, which are sensitive to light; research shows that barn owls can adapt to both dazzling and dim environments, and a highly reflective layer at the base of the owl's retina enhances the weak light entering the pupil.
A mathematical model imitating the eagle-eye light and shade adaptation mechanism is established according to the light and dark adaptation of the eagle eye; the model I_g(i,j) for adaptation to bright environments is given by formula (1), where I is the input image, (i,j) is a position in the image, M is the length and N the width of the image, k is the linear-variation slope, G(i,j,σ_gauss) is a two-dimensional Gaussian function with variance σ_gauss, I_V(i,j) is the brightness of the image at position (i,j), and ⊗ is the convolution symbol.
For convenience of presentation, mid_V is defined as the average brightness information of the input image I, given by formula (2). The model I_d(i,j) for adaptation to dark environments is then given by formula (3), where Mean(i,j,σ_mean) is a mean filter function and σ_mean is its filter kernel size; the mathematical model of the eagle-eye light/shade adaptation mechanism is summarized in formula (4).
Here I_l is the image after the input image is adjusted by the eagle-eye light/shade adaptation mechanism. According to the characteristics of sea-surface imaging under various weather conditions and repeated experimental tests, the thresholds for distinguishing bright and dark images are set to T_l1 = 0.745 and T_l2 = 0.518.
Step two: modeling of eagle eye-imitated color adaptation mechanism
There are four different types of cone photoreceptors in the eagle retina, and large numbers of oil droplets are distributed at the ends of the cone cells. The oil droplets are mainly of five kinds (green, yellow, orange, red, and transparent), and different droplets serve different functions: they reduce short-wavelength input and filter the overlap of various wavelengths. The spectral sensitivity of the eagle-eye cone cells depends on the light transmittance of the oil droplets and the absorption of the pigments; by changing the spectral transmittance, both color resolution and color constancy can be improved.
The eagle can adapt to different environments according to the different proportions of oil droplets in its eyes. When the photoreceptors receive a different background, the color-resolution threshold changes: it is higher when the difference between stimulus and background is significant, and lower otherwise. The color absorption rate of the eagle eye is shown in formula (5), where γ is the color guess rate, a constant; λ is the color failure rate, in the range [0, 0.25]; and α and β describe the change in color.
Color is an important factor in distinguishing different backgrounds, and the proportion of colors in the image is reflected by the pixel mean of each color channel. When the absolute value of the difference between the pixel means of any two channels exceeds 20, the image is considered to exhibit a particular color. The absolute differences RG, RB, GB between the channel pixel means are given by formula (6), where R, G, B denote the red, green, and blue channels of the image and I_s(i,j), s ∈ {R, G, B}, are the pixel values of the input image I on the respective channels. When the input image I satisfies (RG > T_c) & (RB > T_c) & (GB > T_c), it is judged a special color image; otherwise it is a normal color image. A normal color image is adjusted as in formula (7), and a special color image as in formula (8).
Here T_c = 25 is the color-judgment threshold, the image after color adjustment is I_c, Med(i,j,σ_med) is a median filter with kernel size σ_med, and D(i,j,σ_dou) is a bilateral filter with standard deviation σ_dou.
Step three: modeling imitating eagle eye foreground background adaptive mechanism
Relevant studies have shown that single cone cells of the eagle eye are used for chromatic vision and double cone cells for achromatic vision, where the chromatic vision of the eagle eye views a larger field and the achromatic vision detects details of the image. The foreground-background segmentation threshold T_g of the input image I is acquired in order to distinguish foreground information from background information.
As shown in formula (9), g_t is the absolute value of the difference between the gray-level mean and the foreground-background threshold, mid_g is the gray-level mean of the image I, g_rgb is the absolute value of the difference between the image gray-level mean and the channel means of the input image I, and mid_RGB is the mean over the channels of the input image I. When T_g < mid_g, the input image I is judged a foreground-prominent image, expressed as in formula (10); otherwise it is a background-prominent image, expressed as in formula (11). According to the characteristics of sea-surface imaging under various weather conditions and repeated experimental tests, the adjustment threshold is set to T_rgb = 8 and the background-determination thresholds to T_b1 = 170 and T_b2 = 130; I_b is the final adjusted image of the eagle-eye adaptive mechanism.
Step four: carrying out amplitude-phase spectrum reconstruction on the image after the adjustment of the eagle eye adaptation mechanism
The amplitude spectrum and the phase spectrum of the adjusted image obtained in steps one to three are calculated respectively, and the amplitude-phase spectrum is reconstructed; the reconstructed image S_a(i,j) is given by formula (12), where A(f) is the amplitude spectrum of the adjusted image I_b, P(f) is the phase spectrum of the adjusted image I_b, and F^{-1}(·) denotes the inverse Fourier transform.
Step five: computing multi-scale gray scale differences
The multi-scale gray difference D_a(i,j) of the reconstructed image S_a(i,j) is given by formula (13), where K is the number of neighborhoods at point (i,j) and d_k(i,j) is the gray difference of the k-th neighborhood at image point (i,j), given by formula (14). Here L_max is a positive integer; Ω_k, (p,q) ∈ Ω_k, is the k-th neighborhood at point (i,j), whose size |Ω_k| can be expressed as (2k+1)²; Ω_K, (m,n) ∈ Ω_K, is the largest neighborhood at point (i,j); and n_k and n_K are respectively the numbers of pixels in regions Ω_k and Ω_K.
Step six: computing saliency information and normalizing
The saliency map S(i,j) of the input image I is given by formula (15), where G(i,j,σ) is a Gaussian filter with standard deviation σ and norm(·) denotes the normalization operation.
Step seven: outputting the saliency map of small sea-surface targets recognized by the UAV imitating the eagle-eye adaptive mechanism
Through the calculations of steps one to six, the final small-target detection result on the complex sea surface is obtained; the position is then estimated by triangulation, guiding the UAV to fly toward the unmanned-ship target and finally land on the unmanned ship for energy replenishment. The detection result in a sunny sea-surface environment is shown in fig. 4: fig. 4(a) is the original image (containing one small target); fig. 4(b) is the saliency map obtained by the present invention; fig. 4(c) is the information distribution of the normalized saliency map; fig. 4(d) marks the target position (the region of interest) on the original image according to the saliency detection result. As fig. 4 shows, the small sea-surface target is effectively detected in clear weather. The detection result in a cloudy sea-surface environment is shown in fig. 5: fig. 5(a) is the original image (containing two small targets); fig. 5(b) is the saliency detection result, in which both small targets are accurately detected; fig. 5(c) is the information distribution of the normalized saliency map in the cloudy environment; fig. 5(d) marks the target positions on the original image. The detection result in the dusk sea environment is shown in fig. 6: fig. 6(a) is the original image of a sea scene containing a small target at dusk; fig. 6(b) is the saliency detection result; fig. 6(c) is the information distribution of the normalized saliency map; fig. 6(d) shows the localization result of the small sea-surface target at dusk. The detection results under severe fish-scale light and sunlight reflection are shown in fig. 7: fig. 7(a) is the original image; fig. 7(b) is the saliency detection result; fig. 7(c) is the information distribution of the normalized saliency map; fig. 7(d) shows the detected small-target region on the sea surface in the presence of interference such as fish-scale light.
Claims (1)
1. A method for unmanned aerial vehicle recognition of small sea-surface targets simulating the eagle-eye adaptive mechanism, characterized by comprising the following specific steps:
step one: modeling of the eagle-eye light-dark adaptation mechanism
A mathematical model simulating the eagle-eye light-dark adaptation mechanism is established according to the light-dark adaptation mechanism of the eagle eye; the model adapted to bright environments, I_g(i,j), is represented by formula (1),
where I is the input image, (i,j) is a position in the image, M is the length of the image, N is the width of the image, k is the slope of the linear variation, G(i,j,σ_gauss) is a two-dimensional Gaussian function with variance σ_gauss, I_V(i,j) is the luminance of the image at position (i,j), and ⊗ is the convolution symbol;
for convenience of presentation, mid_V is defined as the average luminance information of the input image I, i.e. mid_V = (1/(M·N))·Σ_{i=1..M} Σ_{j=1..N} I_V(i,j), formula (2);
accordingly, the mathematical model adapted to dark environments, I_d(i,j), is represented by formula (3),
where Mean(i,j,σ_mean) is a mean filter function and σ_mean is the kernel size of the mean filter; the eagle-eye light-dark adaptation mechanism is therefore summarized as formula (4),
where I_l is the image obtained after the input image is adjusted by the eagle-eye light-dark adaptation mechanism, and T_l1 and T_l2 are thresholds for distinguishing bright images from dark images;
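Outside the claim language, the switching logic of formula (4) can be sketched in code. The thresholds T_L1 and T_L2 and the gamma-style brightening/darkening are illustrative stand-ins: the patent publishes no numeric values, and formulas (1) and (3) appear only as images in the source, so this is a minimal sketch of the qualitative behavior, not the patented model.

```python
import numpy as np

T_L1, T_L2 = 170.0, 85.0  # hypothetical bright/dark thresholds (not given in the patent)

def light_dark_adapt(img, gamma=1.2):
    """Sketch of the light-dark switch of formula (4): compute the average
    luminance mid_V (formula (2)) and apply a bright-, dark-, or
    no-adjustment branch accordingly."""
    v = img.astype(np.float64)
    mid_v = v.mean()                          # formula (2): average luminance
    if mid_v > T_L1:                          # bright scene: compress highlights
        out = 255.0 * (v / 255.0) ** gamma
    elif mid_v < T_L2:                        # dark scene: lift shadows
        out = 255.0 * (v / 255.0) ** (1.0 / gamma)
    else:                                     # normal scene: keep as-is
        out = v
    return np.clip(out, 0.0, 255.0)
```

A bright frame is pulled toward the mid-tones and a dark frame is lifted, which is the qualitative behavior the light-dark adaptation mechanism describes.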
step two: modeling of the eagle-eye color adaptation mechanism
Color absorption by the eagle eye is expressed as formula (5),
where γ is the color guess rate, a constant; λ is the color failure rate, in the range [0, 0.25]; and α and β are used to describe the change in color;
the absolute values RG, RB and GB of the differences between the per-channel pixel means are given by formula (6), where R, G and B denote the red, green and blue color channels of the image, respectively:
where I_s(i,j), s ∈ {R, G, B}, is the pixel value of the input image I on each of the R, G and B color channels; when the input image I satisfies (RG > T_c) & (RB > T_c) & (GB > T_c), the input image I is classified as a normal-color image, otherwise as a special-color image; a normal-color image is adjusted according to formula (7), and a special-color image according to formula (8),
where T_c is the color-judgment threshold, I_c is the color-adjusted image, Med(i,j,σ_med) is a median filter with kernel size σ_med, and D(i,j,σ_dou) is a bilateral filter with standard deviation σ_dou;
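The channel-difference test of formula (6) is fully specified by the text, so it can be sketched directly; only the numeric value of the threshold T_C below is an assumed placeholder.

```python
import numpy as np

T_C = 10.0  # hypothetical color-judgment threshold T_c (value not given in the patent)

def classify_color(img_rgb):
    """Formula (6): take the absolute differences RG, RB, GB between the
    per-channel pixel means, then apply the (RG>T_c)&(RB>T_c)&(GB>T_c)
    test to decide normal-color vs special-color."""
    means = img_rgb.reshape(-1, 3).mean(axis=0)   # mean of the R, G, B channels
    rg = abs(means[0] - means[1])
    rb = abs(means[0] - means[2])
    gb = abs(means[1] - means[2])
    is_normal = (rg > T_C) and (rb > T_C) and (gb > T_C)
    return ("normal" if is_normal else "special"), (rg, rb, gb)
```

Note that, following the claim text, it is the image whose channel means all differ strongly that is labelled "normal"; a near-gray image falls into the "special" branch and is filtered by formula (8).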
step three: modeling of the eagle-eye foreground-background adaptation mechanism
The foreground-background segmentation threshold T_g of the input image I is obtained by formula (9), whereby foreground information is distinguished from background information,
where g_t is the absolute value of the difference between the gray-level mean and the foreground-background threshold, mid_g is the gray-level mean of the grayscale version of the input image I, g_rgb is the absolute value of the difference between the image gray-level mean and the mean of the channels of the input image I, and mid_RGB is the mean of each channel of the input image I; when T_g < mid_g, the input image I is judged to be a foreground-salient image and is expressed by formula (10); otherwise the input image I is a background-salient image and is expressed by formula (11),
where T_rgb is the adjustment threshold, T_b1 and T_b2 are background-judgment thresholds, and I_b is the final image adjusted by the eagle-eye-inspired adaptive mechanism;
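A minimal sketch of the step-three decision rule follows. Formulas (9)-(11) are available only as images in the source, so the mask construction here is one plausible reading (simple thresholding of the grayscale image once the foreground/background case is decided), not the patented formulas.

```python
import numpy as np

def foreground_background_split(gray, t_g):
    """Sketch of step three: if the segmentation threshold T_g lies below
    the gray-level mean mid_g, the image is treated as foreground-salient
    (formula (10) branch); otherwise as background-salient (formula (11)
    branch). The mask itself is an illustrative simplification."""
    mid_g = gray.mean()
    mode = "foreground" if t_g < mid_g else "background"
    mask = gray > t_g if mode == "foreground" else gray <= t_g
    return mode, mask
```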
step four: amplitude-phase spectrum reconstruction of the image adjusted by the eagle-eye adaptation mechanism
The amplitude spectrum and the phase spectrum of the adjusted image obtained in steps one to three are computed and the amplitude-phase spectrum is reconstructed; the reconstructed image S_a(i,j) is represented by formula (12), i.e. S_a(i,j) = F^{-1}(A(f)·e^{j·P(f)}),
where A(f) is the amplitude spectrum of the adjusted image I_b, P(f) is the phase spectrum of the adjusted image I_b, and F^{-1}(·) denotes the inverse Fourier transform;
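Amplitude-phase recombination is standard, so step four can be shown concretely. With the spectra left unmodified, the inverse transform reproduces the input; saliency detectors in this family typically alter the amplitude spectrum before inversion, which the patent leaves to formula (12).

```python
import numpy as np

def amplitude_phase_reconstruct(img):
    """Formula (12) as commonly implemented:
    S_a = F^{-1}(A(f) * exp(j * P(f))), i.e. split the 2-D spectrum into
    amplitude and phase, recombine them, and invert the FFT."""
    f = np.fft.fft2(img)
    amplitude = np.abs(f)          # A(f): amplitude spectrum
    phase = np.angle(f)            # P(f): phase spectrum
    recombined = amplitude * np.exp(1j * phase)
    return np.real(np.fft.ifft2(recombined))
```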
step five: computation of the multi-scale gray-level difference
The multi-scale gray-level difference D_a(i,j) of the reconstructed image S_a(i,j) is represented by formula (13),
where K is the number of neighborhoods at point (i,j) and D_a^k(i,j) is the gray-level difference of the k-th neighborhood at image position (i,j), expressed by formula (14),
where L_max is a positive integer; Ω_k, (p,q) ∈ Ω_k, is the k-th neighborhood at point (i,j), and the size of Ω_k is (2k+1)^2; Ω_K, (m,n) ∈ Ω_K, is the largest neighborhood at point (i,j); and the two remaining quantities in formula (14) are the numbers of pixels of the regions Ω_k and Ω_K, respectively;
step six: computing and normalizing the saliency information
The saliency map S(i,j) of the input image I is represented by formula (15),
where G(i,j,σ) is a Gaussian filter with standard deviation σ and norm(·) denotes the normalization operation;
step seven: outputting the saliency map of the small sea-surface target recognized by the unmanned aerial vehicle simulating the eagle-eye adaptive mechanism.
Priority Applications (1)
Application Number | Priority Date | Filing Date | Title |
---|---|---|---|
CN202010771915.8A CN112101099B (en) | 2020-08-04 | 2020-08-04 | Eagle eye self-adaptive mechanism-simulated unmanned aerial vehicle sea surface small target identification method |
Publications (2)
Publication Number | Publication Date |
---|---|
CN112101099A CN112101099A (en) | 2020-12-18 |
CN112101099B true CN112101099B (en) | 2022-09-06 |
Family
ID=73750053
Family Applications (1)
Application Number | Title | Priority Date | Filing Date |
---|---|---|---|
CN202010771915.8A Active CN112101099B (en) | 2020-08-04 | 2020-08-04 | Eagle eye self-adaptive mechanism-simulated unmanned aerial vehicle sea surface small target identification method |
Country Status (1)
Country | Link |
---|---|
CN (1) | CN112101099B (en) |
Citations (7)
Publication number | Priority date | Publication date | Assignee | Title |
---|---|---|---|---|
CN102629988A (en) * | 2012-03-31 | 2012-08-08 | 博康智能网络科技股份有限公司 | Automatic control method and device of camera head |
CN103051887A (en) * | 2013-01-23 | 2013-04-17 | 河海大学常州校区 | Eagle eye-imitated intelligent visual sensing node and work method thereof |
CN204929083U (en) * | 2015-09-01 | 2015-12-30 | 河南工业大学 | Pay close attention to target search device based on imitative hawk eye vision |
CN107392963A (en) * | 2017-06-28 | 2017-11-24 | 北京航空航天大学 | A kind of imitative hawkeye moving target localization method for soft autonomous air refuelling |
CN107424156A (en) * | 2017-06-28 | 2017-12-01 | 北京航空航天大学 | Unmanned plane autonomous formation based on Fang Cang Owl eye vision attentions accurately measures method |
CN109949363A (en) * | 2019-03-26 | 2019-06-28 | 北京遥感设备研究所 | A kind of object recognition and detection system suitable for Terahertz bionic compound eyes imaging system |
CN111105429A (en) * | 2019-12-03 | 2020-05-05 | 华中科技大学 | Integrated unmanned aerial vehicle detection method |
Family Cites Families (1)
Publication number | Priority date | Publication date | Assignee | Title |
---|---|---|---|---|
WO2017053874A1 (en) * | 2015-09-23 | 2017-03-30 | Datalogic ADC, Inc. | Imaging systems and methods for tracking objects |
2020-08-04: application CN202010771915.8A filed (patent CN112101099B, status: Active)
Non-Patent Citations (3)
Title |
---|
Research progress of eagle-eye-inspired vision technology; Zhao Guozhi et al.; Scientia Sinica Technologica; 2017-05-20 (Issue 05); full text *
Small-target tracking algorithm with large field of view based on bionic detection; Kou Weiwei et al.; Journal of Detection & Control; 2017-12-26 (Issue 06); full text *
Research on an infrared small-target search algorithm based on the phase spectrum; Xu Qiang et al.; Infrared; 2012-06-10 (Issue 06); full text *
Also Published As
Publication number | Publication date |
---|---|
CN112101099A (en) | 2020-12-18 |
Similar Documents
Publication | Publication Date | Title |
---|---|---|
CN108573276A (en) | A kind of change detecting method based on high-resolution remote sensing image | |
CN103914813B (en) | The restored method of colored haze image defogging and illumination compensation | |
CN106570485B (en) | A kind of raft culture remote sensing images scene mask method based on deep learning | |
Qu et al. | A pedestrian detection method based on yolov3 model and image enhanced by retinex | |
CN109558806A (en) | The detection method and system of high score Remote Sensing Imagery Change | |
Miao et al. | Classification of farmland images based on color features | |
CN104318051B (en) | The rule-based remote sensing of Water-Body Information on a large scale automatic extracting system and method | |
CN107247927B (en) | Method and system for extracting coastline information of remote sensing image based on tassel cap transformation | |
CN105512622B (en) | A kind of visible remote sensing image sea land dividing method based on figure segmentation and supervised learning | |
CN110853070A (en) | Underwater sea cucumber image segmentation method based on significance and Grabcut | |
CN115578660B (en) | Land block segmentation method based on remote sensing image | |
Duan et al. | Unmanned aerial vehicle recognition of maritime small-target based on biological eagle-eye vision adaptation mechanism | |
CN109165658A (en) | A kind of strong negative sample underwater target detection method based on Faster-RCNN | |
CN114764801A (en) | Weak and small ship target fusion detection method and device based on multi-vision significant features | |
Wang et al. | Improved minimum spanning tree based image segmentation with guided matting | |
CN112330562B (en) | Heterogeneous remote sensing image transformation method and system | |
CN112101099B (en) | Eagle eye self-adaptive mechanism-simulated unmanned aerial vehicle sea surface small target identification method | |
CN115170523B (en) | Low-complexity infrared dim target detection method based on local contrast | |
Politz et al. | Exploring ALS and DIM data for semantic segmentation using CNNs | |
CN107862280A (en) | A kind of storm surge disaster appraisal procedure based on unmanned aerial vehicle remote sensing images | |
Yuming et al. | Traffic signal light detection and recognition based on canny operator | |
Shen et al. | Polarization calculation and underwater target detection inspired by biological visual imaging | |
Sangari et al. | Deep learning-based Object Detection in Underwater Communications System | |
Huang et al. | Geological segmentation on UAV aerial image using shape-based LSM with dominant color | |
Tong et al. | Study on the Extraction of Target Contours of Underwater Images |
Legal Events
Date | Code | Title | Description |
---|---|---|---|
PB01 | Publication ||
SE01 | Entry into force of request for substantive examination ||
GR01 | Patent grant ||