CN115439497A - Infrared image ship target rapid identification method based on improved HOU model - Google Patents


Info

Publication number
CN115439497A
CN115439497A
Authority
CN
China
Prior art keywords
target
image
hou
model
ship
Prior art date
Legal status (The legal status is an assumption and is not a legal conclusion. Google has not performed a legal analysis and makes no representation as to the accuracy of the status listed.)
Pending
Application number
CN202211055328.4A
Other languages
Chinese (zh)
Inventor
马一帆
苗得雨
曹爽
王勇
李晓露
樊宇亮
杨广
Current Assignee (The listed assignees may be inaccurate. Google has not performed a legal analysis and makes no representation or warranty as to the accuracy of the list.)
Beijing Vastriver Technology Co ltd
Original Assignee
Beijing Vastriver Technology Co ltd
Priority date (The priority date is an assumption and is not a legal conclusion. Google has not performed a legal analysis and makes no representation as to the accuracy of the date listed.)
Filing date
Publication date
Application filed by Beijing Vastriver Technology Co ltd filed Critical Beijing Vastriver Technology Co ltd
Priority to CN202211055328.4A
Publication of CN115439497A

Classifications

    (All under G — PHYSICS; G06 — COMPUTING; CALCULATING OR COUNTING; G06T — IMAGE DATA PROCESSING OR GENERATION, IN GENERAL; G06V — IMAGE OR VIDEO RECOGNITION OR UNDERSTANDING.)
    • G06T 7/136 — Image analysis; segmentation; edge detection involving thresholding
    • G06T 5/50 — Image enhancement or restoration using two or more images, e.g. averaging or subtraction
    • G06T 7/155 — Segmentation; edge detection involving morphological operators
    • G06T 7/187 — Segmentation; edge detection involving region growing, region merging, or connected component labelling
    • G06T 7/62 — Analysis of geometric attributes of area, perimeter, diameter or volume
    • G06V 10/44 — Local feature extraction by analysis of parts of the pattern, e.g. edges, contours, corners; connectivity analysis of connected components
    • G06T 2207/10048 — Image acquisition modality: infrared image
    • G06T 2207/20221 — Image fusion; image merging
    • G06V 2201/07 — Target detection

Landscapes

  • Engineering & Computer Science (AREA)
  • Physics & Mathematics (AREA)
  • General Physics & Mathematics (AREA)
  • Theoretical Computer Science (AREA)
  • Computer Vision & Pattern Recognition (AREA)
  • Multimedia (AREA)
  • Geometry (AREA)
  • Image Processing (AREA)

Abstract

An infrared image ship target rapid identification method based on an improved HOU model, belonging to the field of infrared image target detection. The invention addresses the low detection and identification accuracy of existing infrared ship target identification methods against complex sea-surface backgrounds. The method comprises: preprocessing the collected original infrared ship image to obtain a preprocessed grayscale image; computing a luminance feature map from the brightness information of the preprocessed grayscale image, computing an HOU model saliency map, and fusing the two by weighted fusion to obtain a fused saliency map; thresholding the fused saliency map with an improved OTSU algorithm and determining the background region and target region in the fused saliency map through a morphological closing operation; and extracting geometric features of the target region and identifying the ship target from those features. The method enables rapid ship target identification.

Description

Infrared image ship target rapid identification method based on improved HOU model
Technical Field
The invention relates to an infrared image ship target rapid identification method based on an improved HOU model, and belongs to the field of infrared image target detection.
Background
Infrared imaging has become a core technology of unmanned aerial vehicle reconnaissance owing to its good concealment, high resolution and high sensitivity.
Marine infrared ship target identification first performs infrared detection and imaging of ships at sea, then processes the images with digital image processing techniques, detecting candidate ship regions against a complex external background and identifying whether each is a real ship target. Owing to the performance limits and internal noise of the infrared detector, and to the harsh marine environment it operates in, the resulting images suffer from low signal-to-noise ratio and low contrast. Moreover, because the imaging equipment is usually moving and the ship target is rarely stationary, infrared ship images exhibit blurred edges and indistinct detail. In addition, in practical scenarios the detector is far from the ship, so the target occupies only a small imaging area; separating such a small target from the background is difficult, making real-time, effective detection hard to achieve.
Existing infrared ship target identification methods are built on gray-level statistics, deep learning, the ITTI visual saliency algorithm, and the like. Because an infrared image carries only gray-level information, has a low signal-to-noise ratio, and is strongly affected by temperature and blur, practical test results show that the main challenges of ship identification against a sea-surface background remain insufficient intelligent recognition capability, poor real-time performance, and weak resistance to snow and fog interference. An infrared ship identification algorithm suited to complex backgrounds, with strong noise suppression, a high recognition rate, and high recognition speed is therefore urgently needed.
Disclosure of Invention
The invention provides an infrared image ship target rapid identification method based on an improved HOU model, addressing the low detection and identification accuracy of existing infrared ship target identification methods against complex sea-surface backgrounds.
The infrared image ship target rapid identification method based on the improved HOU model according to the invention comprises the following steps:
Step one: preprocessing the collected original infrared ship image to obtain a preprocessed grayscale image;
Step two: computing the brightness information of the preprocessed grayscale image to obtain a luminance feature map; meanwhile, applying frequency-domain and spatial-domain transforms to the preprocessed grayscale image to obtain an HOU model saliency map; and fusing the luminance feature map and the HOU model saliency map by weighted fusion to obtain a fused saliency map;
adding a threshold adjustment coefficient to the existing OTSU algorithm to obtain an improved OTSU algorithm; thresholding the fused saliency map with the improved OTSU algorithm, and determining the background region and target region in the fused saliency map through a morphological closing operation;
Step three: extracting geometric features of the target region, and identifying the ship target from the geometric features.
According to the infrared image ship target rapid identification method based on the improved HOU model, the preprocessing in step one comprises: reversing the pixel colors of the original infrared ship image and then applying morphological filtering to obtain the preprocessed grayscale image.
According to the infrared image ship target rapid identification method based on the improved HOU model, the morphological filtering formula is as follows:
d(x, y) = s(x, y) + d_top-hat − d_bottom-hat

where d(x, y) is a pixel of the preprocessed grayscale image, s(x, y) is a pixel of the color-reversed image, d_top-hat is the top-hat operation, d_bottom-hat is the bottom-hat operation, x is the abscissa of the pixel and y is the ordinate of the pixel;
d_top-hat = s(x, y) − (s ∘ ele)(x, y)

d_bottom-hat = (s • ele)(x, y) − s(x, y)

where ∘ denotes grayscale opening and • grayscale closing with the structuring element ele, both built from neighborhood minima and maxima over s(x + x′, y + y′); s(x + x′, y + y′) is the neighboring pixel of s(x, y), x′ is the abscissa of the neighboring pixel, y′ is the ordinate of the neighboring pixel, and ele(x′, y′) is the structuring element.
According to the infrared image ship target rapid identification method based on the improved HOU model, the method for obtaining the brightness characteristic diagram in the second step comprises the following steps:
based on the brightness information of the preprocessed grayscale image, the image is down-sampled to construct a grayscale Gaussian pyramid; a center-surround difference operation is performed on the pyramid, computing point-wise differences between different layers to obtain several initial luminance feature maps, which are then normalized into the luminance feature map. If the two layers being differenced have different sizes, the smaller-scale layer is first brought to the size of the larger-scale layer by linear interpolation, and the point-wise difference is then computed.
According to the infrared image ship target rapid identification method based on the improved HOU model, the method for obtaining the HOU model saliency map in the second step comprises the following steps:
the preprocessed grayscale image is transformed into the frequency domain; the log amplitude spectrum is computed and mean-filtered, and the log spectral residual is obtained by subtracting the filtered spectrum from the log amplitude spectrum; the residual is then recombined with the phase spectrum and transformed back from the frequency domain to the spatial domain by inverse Fourier transform, yielding the HOU model saliency map.
According to the infrared image ship target rapid identification method based on the improved HOU model, the formula of the fused saliency map obtained in the second step is as follows:
S(x) = 0.5 · L(x) + 0.5 · H(x)

where S(x) is the fused saliency map, L(x) is the luminance feature map, and H(x) is the HOU model saliency map.
According to the infrared image ship target rapid identification method based on the improved HOU model, in step two the threshold T of the improved OTSU algorithm is set to 1.1 times the threshold of the existing OTSU algorithm;
the threshold segmentation of the fused saliency map by using the improved OTSU algorithm comprises the following steps:
the pixels of the fused saliency map are divided into a target region and a background region by the threshold T, and T is iteratively optimized, with maximum between-class variance as the objective, to obtain the optimized target region and optimized background region.
According to the infrared image ship target rapid identification method based on the improved HOU model, in step two a morphological closing operation is applied to the optimized target region and optimized background region, converting multi-connected target regions in the optimized target region into single-connected ones, and finally determining the background region and target region in the fused saliency map.
According to the infrared image ship target rapid identification method based on the improved HOU model, in step three, for a target region containing multiple targets, an original polygon contour is set for each suspected target; the original polygon is rotated through preset angle steps, the simple circumscribed rectangle at each angle is computed, and the smallest-area rectangle among all simple circumscribed rectangles is taken as the target rectangle; the target rectangle is inversely transformed to the actual minimum circumscribed rectangle; the length and width of the actual minimum circumscribed rectangle serve as the geometric features of the suspected target, and whether the suspected target is a ship target is determined from these geometric features;
the simple circumscribed rectangle is a circumscribed rectangle parallel to the X axis and the Y axis.
According to the infrared image ship target rapid identification method based on the improved HOU model, in step three, for a target region containing a single target, rectangular suspected contours of the single target are searched, the largest-area suspected contour is taken as the target contour, its length and width serve as the geometric features, and whether the single target is a ship target is determined from these geometric features.
The invention has the beneficial effects that: the method disclosed by the invention is used for carrying out target identification based on the improved HOU model, can well balance the detection effect and speed, effectively improve the algorithm performance, and is suitable for detecting and identifying the ship target under the complex sea surface background.
The method of the invention builds on the advantages of the improved OTSU algorithm and the HOU model algorithm and markedly improves the target recognition rate: verification shows the identification accuracy reaches above 92%, with the miss rate reduced below 8%. Adding the morphological closing operation clearly lowers the false-detection rate, so the infrared ship is identified more completely and the ship contour localization is closer to reality. For single-target detection samples, the segmentation algorithm in the method greatly improves the recognition rate, with accuracy reaching 94.0% and a reduced miss rate.
Drawings
FIG. 1 is a flow chart of the infrared image ship target rapid identification method based on the improved HOU model of the present invention;
FIG. 2 is a block flow diagram of the infrared image ship target rapid identification method based on the improved HOU model according to the present invention;
FIG. 3 is a flow chart for obtaining a post-fusion saliency map;
FIG. 4 is a diagram of a system processing framework for implementing the method of the present invention in an exemplary embodiment;
FIG. 5 is a preprocessed grayscale image of an original infrared ship image;
FIG. 6 is a graph of the transformed luminance characteristics of FIG. 5;
FIG. 7 is a saliency map of the HOU model of FIG. 5 after transformation;
FIG. 8 is a post-fusion saliency map obtained based on FIGS. 6 and 7;
FIG. 9 is a thresholding map for a post-fusion saliency map;
FIG. 10 is a graph of geometric features obtained from a thresholding map;
FIG. 11 is a preprocessed grayscale image of a multi-target infrared ship image;
FIG. 12 is a graph of the transformed luminance characteristics of FIG. 11;
fig. 13 is a luminance feature recognition diagram of fig. 12;
FIG. 14 is a saliency map of the HOU model of FIG. 11 after transformation;
FIG. 15 is a HOU model identification diagram of FIG. 14;
FIG. 16 is a post-fusion saliency map obtained from FIGS. 12 and 14;
FIG. 17 is a geometric feature map for multi-target recognition;
FIG. 18 is a pre-processed grayscale image of a single-target infrared ship image;
FIG. 19 is a graph of the transformed luminance characteristics of FIG. 18;
FIG. 20 is the luminance feature recognition diagram of FIG. 19;
FIG. 21 is a saliency map of the HOU model of FIG. 18 after transformation;
FIG. 22 is a HOU model identification diagram of FIG. 21;
FIG. 23 is a post-fusion saliency map obtained from FIGS. 19 and 21;
FIG. 24 is a geometric feature map for single target recognition;
fig. 25 is a schematic two-dimensional fourier transform.
Detailed Description
The technical solutions in the embodiments of the present invention will be clearly and completely described below with reference to the drawings in the embodiments of the present invention, and it is obvious that the described embodiments are only a part of the embodiments of the present invention, and not all of the embodiments. All other embodiments, which can be derived by a person skilled in the art from the embodiments given herein without making any creative effort, shall fall within the protection scope of the present invention.
It should be noted that the embodiments and features of the embodiments may be combined with each other without conflict.
The invention is further described with reference to the following drawings and specific examples, which are not intended to be limiting.
In a first embodiment, as shown in figs. 1 to 3, the present invention provides an infrared image ship target rapid identification method based on an improved HOU model, comprising:
Step one: preprocessing the collected original infrared ship image to obtain a preprocessed grayscale image;
Step two: computing the brightness information of the preprocessed grayscale image to obtain a luminance feature map; meanwhile, applying frequency-domain and spatial-domain transforms to the preprocessed grayscale image to obtain an HOU model saliency map; and fusing the luminance feature map and the HOU model saliency map by weighted fusion to obtain a fused saliency map;
adding a threshold adjustment coefficient to the existing OTSU algorithm to obtain an improved OTSU algorithm; thresholding the fused saliency map with the improved OTSU algorithm, and determining the background region and target region in the fused saliency map through a morphological closing operation; extracting the regions of interest that may be ship targets, completing target detection;
Step three: extracting geometric features of the target region, and identifying the ship target from the geometric features.
Further, the preprocessing in step one comprises: reversing the pixel colors of the original infrared ship image, converting to gray levels, and then applying morphological filtering to obtain the preprocessed grayscale image.
Because the background of the input infrared ship image is dark and the ship is hard to observe and distinguish by eye, the original image is color-reversed to enhance the contrast between the ship target and the sea-surface background. Morphological filtering is then used to improve the signal-to-noise ratio of the image.
Morphological filtering is an important part of digital image filtering; it is an image analysis method grounded in lattice theory and topology, and plays a key role in image preprocessing.
The morphological filtering is formulated as follows:
d(x, y) = s(x, y) + d_top-hat − d_bottom-hat

where d(x, y) is a pixel of the preprocessed grayscale image, s(x, y) is a pixel of the color-reversed image, d_top-hat is the top-hat operation, d_bottom-hat is the bottom-hat operation, x is the abscissa of the pixel and y is the ordinate of the pixel;
d_top-hat = s(x, y) − (s ∘ ele)(x, y)

d_bottom-hat = (s • ele)(x, y) − s(x, y)

where ∘ denotes grayscale opening and • grayscale closing with the structuring element ele, both built from neighborhood minima and maxima over s(x + x′, y + y′); s(x + x′, y + y′) is the neighboring pixel of s(x, y), x′ is the abscissa of the neighboring pixel, y′ is the ordinate of the neighboring pixel, and ele(x′, y′) is the structuring element.
The bottom-hat operation is a typical advanced morphological filtering algorithm, generally used to correct uneven illumination. It suits darker objects on a brighter background, makes dark edge regions of the original image more prominent, and can separate closely adjacent dark spots; its practical effect depends on the operator kernel size. The bottom-hat operation is the difference between the closing of the image and the original image.
The top-hat operation is one of the most important advanced morphological filtering algorithms and filters very effectively. It is robust to interference, is an effective means of reducing infrared image noise, clearly improves infrared image contrast, and makes it easier to locate the true target position within a bright region. The top-hat operation is the difference between the original image and its opening, and highlights regions brighter than their surroundings.
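As a concrete illustration, the preprocessing chain above (color reversal followed by d = s + top-hat − bottom-hat) can be sketched with SciPy's grayscale morphology; the 3×3 flat structuring element is an assumption, since the patent does not fix the kernel size:

```python
import numpy as np
from scipy.ndimage import white_tophat, black_tophat

def preprocess(infrared: np.ndarray, size: int = 3) -> np.ndarray:
    """Invert the image, then apply d = s + tophat(s) - bottomhat(s).

    `size` is the structuring-element footprint (an assumption; the
    patent does not specify the kernel dimensions).
    """
    s = 255.0 - infrared.astype(np.float64)          # pixel color reversal
    top = white_tophat(s, size=size)                 # regions brighter than surroundings
    bottom = black_tophat(s, size=size)              # regions darker than surroundings
    d = s + top - bottom                             # enhance bright, suppress dark
    return np.clip(d, 0.0, 255.0)
```

Adding the top-hat back raises local bright details (candidate targets) while subtracting the bottom-hat suppresses dark clutter, which is exactly the contrast-enhancing intent of the formula.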
Still further, the method for obtaining the brightness characteristic map in the second step comprises the following steps:
based on the brightness information of the preprocessed grayscale image, the image is down-sampled to construct a grayscale Gaussian pyramid; a center-surround difference operation is performed on the pyramid, computing point-wise differences between different layers to obtain several initial luminance feature maps, which are then normalized into the luminance feature map. If the two layers being differenced have different sizes, the smaller-scale layer is first brought to the size of the larger-scale layer by linear interpolation, and the point-wise difference is then computed.
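A minimal sketch of this luminance feature map, assuming a four-level pyramid and two center-surround level pairs (the patent fixes neither choice):

```python
import numpy as np
from scipy.ndimage import gaussian_filter, zoom

def luminance_feature_map(gray: np.ndarray, levels: int = 4) -> np.ndarray:
    """Center-surround luminance map from a Gaussian pyramid (sketch).

    Pyramid depth and the center/surround level pairing are assumptions;
    the patent only states down-sampling plus point-wise differencing.
    """
    pyr = [gray.astype(np.float64)]
    for _ in range(levels - 1):
        smoothed = gaussian_filter(pyr[-1], sigma=1.0)
        pyr.append(smoothed[::2, ::2])               # 2x down-sampling per level

    h, w = gray.shape
    acc = np.zeros((h, w))
    for center, surround in [(0, 2), (1, 3)]:        # assumed level pairs
        c, s = pyr[center], pyr[surround]
        # linearly interpolate the smaller (surround) layer up to the
        # center layer's size before the point-wise difference
        s_up = zoom(s, (c.shape[0] / s.shape[0], c.shape[1] / s.shape[1]), order=1)
        diff = np.abs(c - s_up[:c.shape[0], :c.shape[1]])
        diff_full = zoom(diff, (h / diff.shape[0], w / diff.shape[1]), order=1)
        acc += diff_full[:h, :w]                     # accumulate initial maps
    rng = acc.max() - acc.min()
    return (acc - acc.min()) / rng if rng > 0 else acc   # normalisation
```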
The method for obtaining the HOU model saliency map in the step two comprises the following steps:
the preprocessed grayscale image is transformed into the frequency domain; the log amplitude spectrum is computed and mean-filtered, and the log spectral residual is obtained by subtracting the filtered spectrum from the log amplitude spectrum; the residual is then recombined with the phase spectrum and transformed back from the frequency domain to the spatial domain by inverse Fourier transform, yielding the HOU model saliency map.
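The spectral-residual computation described above can be sketched in NumPy; the 3×3 mean-filter width and the final Gaussian smoothing follow Hou's original spectral-residual paper and are assumptions here:

```python
import numpy as np
from scipy.ndimage import uniform_filter, gaussian_filter

def hou_saliency(gray: np.ndarray) -> np.ndarray:
    """Spectral-residual (HOU model) saliency map, sketched per the text."""
    f = np.fft.fft2(gray.astype(np.float64))
    log_amp = np.log(np.abs(f) + 1e-8)               # log amplitude spectrum
    phase = np.angle(f)                              # phase spectrum
    residual = log_amp - uniform_filter(log_amp, 3)  # log spectral residual
    sal = np.abs(np.fft.ifft2(np.exp(residual + 1j * phase))) ** 2
    sal = gaussian_filter(sal, sigma=2.0)            # smoothing, as in Hou's paper
    rng = sal.max() - sal.min()
    return (sal - sal.min()) / rng if rng > 0 else sal
```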
The formula of the fused saliency map obtained in the second step is as follows:
S(x) = 0.5 · L(x) + 0.5 · H(x)

where S(x) is the fused saliency map, L(x) is the luminance feature map, and H(x) is the HOU model saliency map.
On the basis of the HOU model, the method fuses in the luminance feature map to compensate for the missing target body in the HOU saliency map. The fusion assigns half weight to each: the luminance feature map and the HOU model saliency map are weighted and summed to obtain the improved saliency map. The weights could be biased toward either map as desired; in this embodiment they are treated equally and take the same value.
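The equal-weight fusion of this embodiment reduces to a few lines; the min-max normalization before summing is an added assumption, so the two maps share a common range:

```python
import numpy as np

def fuse(luminance_map: np.ndarray, hou_map: np.ndarray) -> np.ndarray:
    """Equal-weight fusion S = 0.5*L + 0.5*H after range normalisation."""
    def norm(m: np.ndarray) -> np.ndarray:
        rng = m.max() - m.min()
        return (m - m.min()) / rng if rng > 0 else m
    return 0.5 * norm(luminance_map) + 0.5 * norm(hou_map)
```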
In step two, the threshold T of the improved OTSU algorithm is set to 1.1 times the threshold of the existing OTSU algorithm;
the threshold segmentation of the fused saliency map by using the improved OTSU algorithm comprises the following steps:
the pixels of the fused saliency map are divided into a target region and a background region by the threshold T, and T is iteratively optimized, with maximum between-class variance as the objective, to obtain the optimized target region and optimized background region.
Threshold segmentation: in the threshold calculation of the OTSU algorithm, the target region in target identification is bright, the brightness of the background and interference regions differs greatly from the target's, and the target occupies only about ten percent of the total image area, so the threshold computed by the existing OTSU algorithm comes out low. In this embodiment the threshold computed by the conventional OTSU algorithm is raised by multiplying it by a parameter t = 1.1, reducing the interference of image noise.
The core idea of the OTSU algorithm is maximizing the between-class variance: the image pixels are first divided into a target part and a background part by a threshold T, and T is then iteratively optimized so that the between-class variance is maximal, giving the most reasonable split of background and target and the best separability of the two pixel classes. The between-class variance g is defined as:

g = w_0 · w_1 · (μ_0 − μ_1)^2

where w_0 is the proportion of target pixels in the image, w_1 is the proportion of background pixels, μ_0 is the mean gray level of the target part, and μ_1 is the mean gray level of the background part.
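A sketch of the improved OTSU segmentation, scanning all 256 gray levels for the T that maximizes g and then scaling by t = 1.1 as described:

```python
import numpy as np

def improved_otsu_threshold(gray: np.ndarray, t: float = 1.1) -> float:
    """Otsu threshold maximising g = w0*w1*(mu0 - mu1)^2, scaled by t."""
    hist, _ = np.histogram(gray, bins=256, range=(0, 256))
    p = hist / hist.sum()
    best_g, best_T = -1.0, 0
    for T in range(1, 256):
        w0, w1 = p[:T].sum(), p[T:].sum()            # class proportions
        if w0 == 0 or w1 == 0:
            continue
        mu0 = (np.arange(T) * p[:T]).sum() / w0      # class mean gray levels
        mu1 = (np.arange(T, 256) * p[T:]).sum() / w1
        g = w0 * w1 * (mu0 - mu1) ** 2               # between-class variance
        if g > best_g:
            best_g, best_T = g, T
    return t * best_T                                # raised threshold cuts noise

def segment(gray: np.ndarray, t: float = 1.1) -> np.ndarray:
    """Binary target/background split at the raised threshold."""
    return gray > improved_otsu_threshold(gray, t)
```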
In step two, a morphological closing operation is applied to the optimized target region and optimized background region, converting multi-connected target regions in the optimized target region into single-connected ones and finally determining the background region and target region in the fused saliency map.
For the problem of one target splitting into several connected regions, the morphological closing operation after threshold segmentation turns the same target back into a single connected region. Adding the closing operation after threshold segmentation eliminates some dark pixels, removes small black holes and black noise points, and strengthens the connection between two weakly connected targets, improving the recognizability of target identification and reducing the false-alarm rate.
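A toy demonstration of how the closing operation merges a multi-connected target into a single-connected region (the 3×3 structuring element is an assumption):

```python
import numpy as np
from scipy.ndimage import binary_closing, label

# Two fragments of the same ship separated by a one-pixel gap (toy mask):
mask = np.zeros((7, 9), dtype=bool)
mask[2:5, 1:4] = True     # bow fragment
mask[2:5, 5:8] = True     # stern fragment

_, n_before = label(mask)                                    # two regions
closed = binary_closing(mask, structure=np.ones((3, 3), dtype=bool))
_, n_after = label(closed)                                   # one region
```

Closing (dilation followed by erosion) bridges the narrow gap, so the connected-component count drops from two to one: the fragmented target becomes a single region before geometric-feature extraction.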
Furthermore, in step three, for a target region containing multiple targets, an original polygon contour is set for each suspected target; the original polygon is rotated through preset angle steps to obtain its simple circumscribed rectangle at each angle, and the smallest-area rectangle among all simple circumscribed rectangles is taken as the target rectangle; the target rectangle is inversely transformed to the actual minimum circumscribed rectangle; the length and width of the actual minimum circumscribed rectangle serve as the geometric features of the suspected target, and whether the suspected target is a ship target is determined from these features, judged by the aspect ratio of the geometric features.
The simple external rectangle is a external rectangle parallel to the X axis and the Y axis.
In this embodiment, a minimum-circumscribed-rectangle operation is used for multiple targets. The minimum circumscribed rectangle is the smallest-area rectangle enclosing a given polygon. The core idea: rotate the original polygon in equal small angle steps; at each step compute the simple circumscribed rectangle of the current polygon and record its area and position; find the smallest-area rectangle over all rotations; and inversely transform that simple circumscribed rectangle back to the actual minimum circumscribed rectangle. The geometric features of the ship target are then approximated by the length and width of this minimum circumscribed rectangle.
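The rotating-rectangle search can be sketched as follows; the 1° step size is an assumption, as the patent only says "the same small angle":

```python
import numpy as np

def min_area_rect(points: np.ndarray, step_deg: float = 1.0):
    """Rotate the contour in small steps, take the axis-aligned bounding
    box at each angle, and keep the one with the smallest area (sketch;
    the 1-degree step is an assumption)."""
    best = (np.inf, 0.0, None)                       # (area, angle, bbox)
    for deg in np.arange(0.0, 180.0, step_deg):
        rad = np.deg2rad(deg)
        rot = np.array([[np.cos(rad), -np.sin(rad)],
                        [np.sin(rad),  np.cos(rad)]])
        p = points @ rot.T                           # rotate the polygon
        lo, hi = p.min(axis=0), p.max(axis=0)        # simple circumscribed rect
        area = float(np.prod(hi - lo))
        if area < best[0]:
            best = (area, deg, (lo, hi))
    area, deg, (lo, hi) = best
    length, width = sorted(hi - lo, reverse=True)    # geometric features H, W
    return area, length, width, deg
```

At the winning angle, the axis-aligned box coincides with the true minimum circumscribed rectangle, so its side lengths give the length and width used as geometric features.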
In step three, for a target region containing a single target, rectangular suspected contours of the single target are searched, the largest-area suspected contour is taken as the target contour, its length and width serve as the geometric features, and whether the single target is a ship target is determined from these features, judged by the aspect ratio of the geometric features.
This embodiment uses the maximum-contour-region operation for single-target detection, since the image contains only one target, which is larger than any interfering object.
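For the single-target case, keeping the largest region might look like this (using connected components as a stand-in for the contour search):

```python
import numpy as np
from scipy.ndimage import label

def largest_region(mask: np.ndarray) -> np.ndarray:
    """Keep only the largest connected region of a binary segmentation
    (single-target case: the ship outranks any interfering object)."""
    labels, n = label(mask)
    if n == 0:
        return mask
    sizes = np.bincount(labels.ravel())[1:]          # skip background label 0
    return labels == (np.argmax(sizes) + 1)
```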
The aspect ratio is the ratio of the length to the width of the geometric feature. For a ship target in an infrared ship image, the ship length H should exceed the width W, so the aspect ratio should exceed 1; for real ship images it falls within a certain range. The aspect ratio P1 is computed as:
P1 = H / W
The ship's area is another distinctive feature: the areas of most warships fall within a certain range, and because of the hull shape the ship's target area occupies most of its minimum circumscribed rectangle in the multi-target case. The area range in an actual image, however, must be calibrated to the shooting distance and angle. The area S is computed as:
S = H × W.
the method of the present invention is described below with reference to fig. 5 to 25:
FIGS. 5 to 10 show the processing results of each step of infrared ship target identification with the method of the invention; figs. 11 to 17 are simulation results for a multi-target infrared ship image. In fig. 12, weak targets vanish prematurely in the pyramid sub-images because of excessive down-sampling: once a small-scale pyramid layer is smaller than the target, it reflects only low-frequency signal, so the target is insufficiently salient. In the lower-right circle, the neighbor interpolation of the center-surround difference operation also reduces the resolution of adjacent multiple targets, since it blurs the gaps between them and widens the apparent hull; the extracted features then fail the ship aspect-ratio test and identification fails, giving a recognition rate of only 25.9% and a miss rate of 74.1%. Fig. 14 shows that the HOU model can roughly recover the hull contour, with strong saliency at the hull ends; because the HOU model attends to the background and treats large continuous areas as local background, only the main hull outline is identified and the hull appears narrow, so after image thresholding the hull is broken apart: the recognition rate drops to about 44.4%, with a false-detection rate of 14.8% and a miss rate of 55.6%, and the identified target region is incomplete. The fusion algorithm adds the luminance saliency map on top of the HOU model algorithm: fig. 16 shows that it brings the salient target closer to the real hull, preserving the hull contour while filling in the hull body. The improved saliency map thus has clear advantages and reflects the saliency of the ship target, achieving a recognition rate of 92.6%, a false-detection rate of 0.04%, and a miss rate of 7.4%.
Fig. 18 to fig. 24 are simulation results for single-target infrared ship images; figs. 23 and 24 confirm the conclusions above and show that the improved algorithm of the present invention has a very obvious advantage. For infrared ship images with highly complex background interference, figs. 19 and 20 show that the luminance-feature recognition area is too large, while figs. 21 and 22 show that the HOU model recognition area is too small: against a complex background, the recognition effect of the HOU model is limited and cannot highlight the saliency of the target. The method of the present invention matches the position of the target and locates the contour of the ship target more accurately, greatly improving the recognition rate and reducing the omission rate.
The specific embodiment is as follows:
the system of this embodiment is realized on an FPGA; the adopted chip is the xc7z045ffg1761-2 of the Xilinx ZYNQ7000 series, and target detection of infrared ship images is performed on the matching ZYNQ7045 development board.
The FPGA system mainly completes target detection and extraction of the saliency map; the other parts are implemented on the PC (personal computer) side. The hardware system consists of three parts: image storage, image processing (target detection), and image display. Fig. 4 is a diagram of the hardware implementation architecture.
Step 1: image storage:
step 1.1: preprocessing an infrared image;
step 1.2: the .coe file obtained after preprocessing is stored in a ROM; the image data read from it must be converted into the AXI-Stream data format before being passed to the image processing module.
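The .coe file mentioned in step 1.2 is the Xilinx coefficient-file format used to initialize block ROM. A minimal sketch of rendering 8-bit grayscale pixels into that format (the function name and the fixed hexadecimal radix are assumptions, not from the patent):

```python
def to_coe(pixels, radix=16):
    """Render 8-bit grayscale pixel values as the text of a Xilinx .coe
    file, one hex word per pixel, for loading into a block ROM.
    Only radix 16 is handled in this sketch."""
    words = ",\n".join(format(p, "02x") for p in pixels)
    return (f"memory_initialization_radix={radix};\n"
            "memory_initialization_vector=\n"
            f"{words};\n")
```

The resulting text is written to a `.coe` file and referenced by the block memory generator when building the ROM.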
Step 2: target detection:
the ZYNQ is used to control the different AXI interfaces, and the PS side provides the clocks for the image processing part; the target detection part runs on a 100 MHz clock. The target detection algorithm is an IP core generated by HLS, with an AXI-Stream interface.
Step 2.1: improving HOU model algorithm target detection:
S(x) = ω1·L(x) + ω2·H(x)
wherein S (x) is a fused saliency map, L (x) is a luminance feature saliency map, H (x) is an HOU model saliency map, and ω1 and ω2 are the fusion weights.
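A minimal sketch of the weighted fusion, assuming min-max normalization of each map and equal weights (the patent's exact weights appear only in the original formula image):

```python
import numpy as np

def fuse_saliency(L, H, w_l=0.5, w_h=0.5):
    """Weighted fusion of the luminance saliency map L(x) and the HOU
    model saliency map H(x): S(x) = w_l*L(x) + w_h*H(x).
    Equal weights and min-max normalization are assumptions."""
    norm = lambda m: (m - m.min()) / (m.max() - m.min() + 1e-12)
    return w_l * norm(L) + w_h * norm(H)
```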
To calculate the saliency map of the HOU model, the grayscale image is transformed to the frequency domain. The two-dimensional Fourier transform is completed by two passes of the one-dimensional Fourier transform; the concrete HLS (High-Level Synthesis) operation is: a one-dimensional FFT is performed over the rows and then over the columns.
For an M × N two-dimensional image f (x, y), the two-dimensional Fourier transform is calculated as:
F(u,v) = Σ_{x=0}^{M-1} Σ_{y=0}^{N-1} f(x,y) · e^{-j2π(ux/M + vy/N)},  u = 0, 1, …, M-1;  v = 0, 1, …, N-1
separating the order of summation, a one-dimensional discrete Fourier transform of length N is first performed on each row vector, and then a one-dimensional discrete Fourier transform of length M is performed on each column vector of the result, giving the two-dimensional Fourier transform of the image. The separated formulas are as follows:
F(x,v) = Σ_{y=0}^{N-1} f(x,y) · e^{-j2πvy/N}
F(u,v) = Σ_{x=0}^{M-1} F(x,v) · e^{-j2πux/M}
in the actual HLS implementation, the one-dimensional fast Fourier transform (FFT) is used to realize the two-dimensional transform of the image; the transform is shown schematically in fig. 25. The two-dimensional inverse transform is done in the same way.
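The row-then-column decomposition can be checked directly against a full 2-D FFT; here numpy stands in for the HLS FFT core:

```python
import numpy as np

def fft2_via_1d(img):
    """Two-dimensional DFT computed as described above: a length-N 1-D
    FFT over every row, then a length-M 1-D FFT over every column of
    the intermediate result F(x, v)."""
    rows = np.fft.fft(img, axis=1)   # F(x, v): transform each row
    return np.fft.fft(rows, axis=0)  # F(u, v): transform each column
```

Because the 2-D DFT kernel is separable, this agrees with a direct two-dimensional transform.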
Step 2.2: and extracting geometric features for target recognition.
And step 3: and (3) image display:
an HDMI display is adopted. The FPGA is the ZYNQ7045, and the driving clock originates from a 200 MHz differential clock on the PL side of the board. The HDMI interface uses an ADV7511 encoder chip, which must be configured over the IIC protocol. Display results are obtained for the multi-target and single-target infrared ship images respectively.
The ZYNQ7045 resources used in this embodiment are sufficient: the lookup-table, multiplier, block-memory and flip-flop resources required occupy 74%, 42%, 63% and 35% respectively. After HLS synthesis, the minimum latency of the system is 2333957 cycles, the maximum latency is 3027330 cycles, and the average processing latency is 2680644 clock cycles; with a programmed clock period of 5 ns, the average processing latency is 13.40 ms, which satisfies high-speed image processing and meets the "rapid" condition. For images with a resolution of 1024 × 512, the processing rate reaches 313.01 Mbps, which is quite fast. The maximum processing latency is 15.14 ms, giving a minimum image processing rate of 277.03 Mbps, which meets real-time processing requirements under certain conditions.
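The latency and throughput figures above follow directly from the cycle counts; this sketch reproduces the arithmetic, assuming 8 bits per pixel for the 1024 × 512 frame (the bit depth is an assumption, not stated in the text):

```python
# Reproduce the timing figures reported above from the HLS cycle counts.
CLOCK_NS = 5                          # programmed clock period, as stated
AVG_CYCLES, MAX_CYCLES = 2680644, 3027330
W, H, BPP = 1024, 512, 8              # 1024x512 frame, assumed 8-bit pixels

avg_ms = AVG_CYCLES * CLOCK_NS / 1e6  # average processing latency, ms
max_ms = MAX_CYCLES * CLOCK_NS / 1e6  # maximum processing latency, ms
bits = W * H * BPP
avg_mbps = bits / (avg_ms / 1e3) / 1e6  # average throughput, Mbps
min_mbps = bits / (max_ms / 1e3) / 1e6  # worst-case throughput, Mbps
```

The computed values round to the 13.40 ms / 15.14 ms latencies and roughly 313 / 277 Mbps rates quoted in the text.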
Although the invention herein has been described with reference to particular embodiments, it is to be understood that these embodiments are merely illustrative of the principles and applications of the present invention. It is therefore to be understood that numerous modifications may be made to the illustrative embodiments and that other arrangements may be devised without departing from the spirit and scope of the present invention as defined by the appended claims. It should be understood that features described in different dependent claims and herein may be combined in ways different from those described in the original claims. It is also to be understood that features described in connection with individual embodiments may be used in other described embodiments.

Claims (10)

1. An infrared image ship target rapid identification method based on an improved HOU model is characterized by comprising the following steps,
the method comprises the following steps: preprocessing the collected original infrared ship image to obtain a preprocessed gray image;
step two: calculating brightness information of the preprocessed gray level image to obtain a brightness characteristic diagram; meanwhile, the preprocessed gray level image is transformed in a frequency domain and a space domain to obtain an HOU model saliency map; carrying out weighted fusion on the brightness characteristic diagram and the HOU model saliency map to obtain a fused saliency map;
increasing a threshold adjustment coefficient on the basis of the existing OTSU algorithm to obtain an improved OTSU algorithm; performing threshold segmentation on the fused saliency map by using an improved OTSU algorithm, and determining a background area and a target area in the fused saliency map through morphological closed operation;
step three: and extracting geometric features of the target area, and determining a ship target according to the geometric features.
2. The improved HOU model-based infrared image ship target rapid identification method according to claim 1,
the preprocessing in the first step comprises: performing a pixel color reversal on the original infrared ship image, then applying morphological filtering to obtain the preprocessed grayscale image.
3. The improved HOU model-based infrared image ship target rapid identification method according to claim 2, wherein the morphological filtering is formulated as follows:
d(x,y) = s(x,y) + d_top-hat(x,y) - d_bottom-hat(x,y)
wherein d (x, y) is a pixel of the preprocessed grayscale image, s (x, y) is a pixel of the color-reversed image, d_top-hat is the top-hat operation, d_bottom-hat is the bottom-hat operation, x is the abscissa of the pixel and y is the ordinate of the pixel;
d_top-hat(x,y) = s(x,y) - (s∘ele)(x,y), where (s∘ele) = (s⊖ele)⊕ele is the grayscale opening;
d_bottom-hat(x,y) = (s•ele)(x,y) - s(x,y), where (s•ele) = (s⊕ele)⊖ele is the grayscale closing;
(s⊕ele)(x,y) = max_{(x',y')} [s(x+x', y+y') + ele(x',y')],  (s⊖ele)(x,y) = min_{(x',y')} [s(x+x', y+y') - ele(x',y')];
s (x + x', y + y') is the neighbouring-point pixel of s (x, y), x' is the abscissa of the neighbouring point, y' is the ordinate of the neighbouring point, and ele (x', y') is the structuring element.
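A sketch of the claim-3 filter d = s + top-hat − bottom-hat using grayscale morphology from scipy; the flat 3 × 3 structuring element is an assumption:

```python
import numpy as np
from scipy.ndimage import grey_opening, grey_closing

def morph_filter(s, size=(3, 3)):
    """Preprocessing filter of claim 3: enhance small bright detail and
    suppress small dark detail via d = s + top-hat - bottom-hat.
    A flat 3x3 structuring element is assumed."""
    top_hat = s - grey_opening(s, size=size)      # bright detail smaller than ele
    bottom_hat = grey_closing(s, size=size) - s   # dark detail smaller than ele
    return s + top_hat - bottom_hat
```

On a constant image both hats vanish and the image is unchanged; an isolated bright pixel is doubled, which is the intended contrast enhancement.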
4. The infrared image ship target rapid identification method based on the improved HOU model as claimed in claim 3, wherein the method for obtaining the brightness characteristic map in the second step comprises:
according to the brightness information of the preprocessed grayscale image, the image is down-sampled to construct a grayscale Gaussian pyramid; a centre-surround difference operation is performed on the pyramid, computing point-to-point differences between different layers to obtain a plurality of initial brightness feature maps, which are normalized to obtain the brightness feature map; if the two layers being differenced have different sizes, the smaller-scale layer is brought to the size of the larger-scale layer by linear interpolation before the point-wise difference is computed.
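A sketch of claim 4's centre-surround computation; 2 × 2 mean pooling stands in for Gaussian filtering, nearest-neighbour repetition stands in for the claim's linear interpolation, and power-of-two image sizes are assumed:

```python
import numpy as np

def luminance_feature_map(gray, levels=4):
    """Build a pyramid by 2x down-sampling, bring each coarse layer back
    to the base size, and sum the normalized point-to-point differences
    between layers (centre-surround difference)."""
    pyr = [np.asarray(gray, dtype=float)]
    for _ in range(levels - 1):
        p = pyr[-1]
        h, w = p.shape[0] // 2, p.shape[1] // 2
        pyr.append(p[:h * 2, :w * 2].reshape(h, 2, w, 2).mean(axis=(1, 3)))
    base = pyr[0]
    feat = np.zeros_like(base)
    for p in pyr[1:]:
        up = p.repeat(base.shape[0] // p.shape[0], axis=0)
        up = up.repeat(base.shape[1] // p.shape[1], axis=1)
        feat += np.abs(base - up)   # point-to-point difference between layers
    lo, hi = feat.min(), feat.max()
    return (feat - lo) / (hi - lo) if hi > lo else feat
```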
5. The method for rapidly identifying infrared image ship targets based on the improved HOU model as claimed in claim 4, wherein the method for obtaining the HOU model saliency map in the second step comprises:
the preprocessed grayscale image is transformed into the frequency domain; the log-amplitude spectrum and the phase spectrum are calculated; the log-amplitude spectrum is mean-filtered and subtracted from itself to give the log residual spectrum; and the residual spectrum, together with the phase spectrum, is transformed back from the frequency domain to the spatial domain by the inverse Fourier transform to obtain the HOU model saliency map.
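Claim 5 describes the spectral-residual construction of the HOU saliency map; a sketch with numpy and scipy, where the 3 × 3 mean-filter window is an assumption:

```python
import numpy as np
from scipy.ndimage import uniform_filter

def hou_saliency(gray, n=3):
    """Spectral-residual (HOU) saliency: log-amplitude spectrum,
    mean filtering, log residual spectrum, inverse FFT.
    The mean-filter window size n is an assumption."""
    F = np.fft.fft2(np.asarray(gray, dtype=float))
    log_amp = np.log(np.abs(F) + 1e-12)
    phase = np.angle(F)
    residual = log_amp - uniform_filter(log_amp, size=n)  # log residual spectrum
    sal = np.abs(np.fft.ifft2(np.exp(residual + 1j * phase))) ** 2
    return sal / (sal.max() + 1e-12)
```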
6. The infrared image ship target rapid identification method based on the improved HOU model according to claim 5, characterized in that the formula for obtaining the fused saliency map in the second step is as follows:
S(x) = ω1·L(x) + ω2·H(x)
wherein S (x) is the saliency map after fusion, L (x) is the luminance feature map, H (x) is the HOU model saliency map, and ω1 and ω2 are the fusion weights.
7. The infrared image ship target rapid identification method based on the improved HOU model according to claim 6, characterized in that the threshold T of the improved OTSU algorithm in the second step is adjusted to be 1.1 times of the threshold of the existing OTSU algorithm;
the threshold segmentation of the fused saliency map by using the improved OTSU algorithm comprises the following steps:
the pixels of the fused saliency map are divided into a target area and a background area according to the threshold T, and T is iteratively optimized with maximization of the between-class variance as the objective, yielding the optimized target area and optimized background area.
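A sketch of claim 7: a plain OTSU search for the threshold maximizing the between-class variance, followed by the 1.1× adjustment:

```python
import numpy as np

def otsu_threshold(img):
    """Standard OTSU: exhaustively search the grey level that maximizes
    the between-class variance of the two pixel classes."""
    hist = np.bincount(img.ravel(), minlength=256).astype(float)
    total = hist.sum()
    total_mean = (hist * np.arange(256)).sum()
    best_t, best_var = 0, -1.0
    w0 = sum0 = 0.0
    for t in range(256):
        w0 += hist[t]
        sum0 += t * hist[t]
        w1 = total - w0
        if w0 == 0 or w1 == 0:
            continue
        m0, m1 = sum0 / w0, (total_mean - sum0) / w1
        var = w0 * w1 * (m0 - m1) ** 2   # between-class variance
        if var > best_var:
            best_var, best_t = var, t
    return best_t

def improved_otsu_threshold(img, k=1.1):
    """Claim 7's adjustment coefficient: scale the OTSU threshold by k."""
    return min(255, int(k * otsu_threshold(img)))
```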
8. The infrared image ship target rapid identification method based on the improved HOU model as claimed in claim 7, wherein in step two, morphological close operation is adopted for the optimized target area and the optimized background area to convert the target multi-connected area in the optimized target area into the target single-connected area, and finally the background area and the target area in the fused saliency map are determined.
9. The infrared image ship target rapid identification method based on the improved HOU model according to claim 8, characterized in that in step three, for a target area containing multiple targets, an original polygon contour is set for each suspected target; the original polygon is rotated through preset angles in turn, the simple circumscribed rectangle of the polygon at each angle is obtained, and the simple circumscribed rectangle of minimum area among all of them is taken as the target rectangle; the target rectangle is inversely transformed into the actual minimum circumscribed rectangle; the length and width of the actual minimum circumscribed rectangle are taken as the geometric features of the suspected target, and whether the suspected target is a ship target is determined from these geometric features;
the simple circumscribed rectangle is a circumscribed rectangle parallel to the X axis and the Y axis.
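A sketch of claim 9's rotating search for the minimum-area circumscribed rectangle; the 1° angular step is an assumption:

```python
import numpy as np

def min_area_rect(points, step_deg=1.0):
    """Rotate the contour points through preset angles, take the simple
    (axis-aligned) bounding box at each angle, and keep the one with
    minimum area. Returns (angle_deg, height, width); rotating the box
    back by -angle_deg recovers the actual circumscribed rectangle."""
    pts = np.asarray(points, dtype=float)
    best = (0.0, np.inf, np.inf)
    for deg in np.arange(0.0, 90.0, step_deg):
        th = np.radians(deg)
        rot = np.array([[np.cos(th), -np.sin(th)],
                        [np.sin(th),  np.cos(th)]])
        p = pts @ rot.T
        h = p[:, 1].max() - p[:, 1].min()
        w = p[:, 0].max() - p[:, 0].min()
        if h * w < best[1] * best[2]:
            best = (deg, h, w)
    return best
```

The returned height and width are then used as the geometric features (aspect ratio, area) for the ship/non-ship decision.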
10. The infrared image ship target rapid identification method based on the improved HOU model as claimed in claim 8, wherein in the third step, aiming at the target area of the single target, a rectangular suspected contour of the single target is searched, the rectangular suspected contour of the largest area is taken as the target contour, the length and width of the target contour are taken as the geometric characteristics, and whether the single target is a ship target or not is determined according to the geometric characteristics.
CN202211055328.4A 2022-08-31 2022-08-31 Infrared image ship target rapid identification method based on improved HOU model Pending CN115439497A (en)

Priority Applications (1)

Application Number Priority Date Filing Date Title
CN202211055328.4A CN115439497A (en) 2022-08-31 2022-08-31 Infrared image ship target rapid identification method based on improved HOU model


Publications (1)

Publication Number Publication Date
CN115439497A true CN115439497A (en) 2022-12-06

Family

ID=84244130

Family Applications (1)

Application Number Title Priority Date Filing Date
CN202211055328.4A Pending CN115439497A (en) 2022-08-31 2022-08-31 Infrared image ship target rapid identification method based on improved HOU model

Country Status (1)

Country Link
CN (1) CN115439497A (en)


Cited By (3)

* Cited by examiner, † Cited by third party
Publication number Priority date Publication date Assignee Title
CN114782464A (en) * 2022-04-07 2022-07-22 中国人民解放军国防科技大学 Reflection chromatography laser radar image segmentation method based on local enhancement of target region
CN114782464B (en) * 2022-04-07 2023-04-07 中国人民解放军国防科技大学 Reflection chromatography laser radar image segmentation method based on local enhancement of target region
CN116934697A (en) * 2023-07-13 2023-10-24 衡阳市大井医疗器械科技有限公司 Blood vessel image acquisition method and device based on endoscope

Similar Documents

Publication Publication Date Title
CN107563303B (en) Robust ship target detection method based on deep learning
Nie et al. Inshore ship detection based on mask R-CNN
CN108765458B (en) Sea surface target scale self-adaptive tracking method of high-sea-condition unmanned ship based on correlation filtering
CN111582089B (en) Maritime target information fusion method based on satellite infrared and visible light images
WO2017148265A1 (en) Word segmentation method and apparatus
CN109427055B (en) Remote sensing image sea surface ship detection method based on visual attention mechanism and information entropy
CN115439497A (en) Infrared image ship target rapid identification method based on improved HOU model
CN111079596A (en) System and method for identifying typical marine artificial target of high-resolution remote sensing image
CN111027497B (en) Weak and small target rapid detection method based on high-resolution optical remote sensing image
Zhao et al. SAR ship detection based on end-to-end morphological feature pyramid network
CN114821358A (en) Optical remote sensing image marine ship target extraction and identification method
Cruz et al. Aerial detection in maritime scenarios using convolutional neural networks
CN114764801A (en) Weak and small ship target fusion detection method and device based on multi-vision significant features
CN113205494B (en) Infrared small target detection method and system based on adaptive scale image block weighting difference measurement
Zhou et al. A fusion algorithm of object detection and tracking for unmanned surface vehicles
CN110334703B (en) Ship detection and identification method in day and night image
CN109285148B (en) Infrared weak and small target detection method based on heavily weighted low rank and enhanced sparsity
CN112686222B (en) Method and system for detecting ship target by satellite-borne visible light detector
CN114429593A (en) Infrared small target detection method based on rapid guided filtering and application thereof
CN110567886B (en) Multispectral cloud detection method based on semi-supervised spatial spectrum characteristics
Yao et al. Real-time multiple moving targets detection from airborne IR imagery by dynamic Gabor filter and dynamic Gaussian detector
CN111950549A (en) Sea surface obstacle detection method based on fusion of sea antennas and visual saliency
Matos et al. Robust tracking of vessels in oceanographic airborne images
Yang et al. Recognition of military and civilian ships in sar images based on ellipse fitting similarity
Zhang et al. Scale adaptive infrared small target detection with patch contrast measure

Legal Events

Date Code Title Description
PB01 Publication
SE01 Entry into force of request for substantive examination