CN103226820B - Improved two-dimensional maximum-entropy segmentation night-vision image fusion target detection algorithm - Google Patents
- Publication number
- CN103226820B (granted publication of application CN201310134194.XA / CN103226820A)
- Authority
- CN
- China
- Legal status (The legal status is an assumption and is not a legal conclusion. Google has not performed a legal analysis and makes no representation as to the accuracy of the status listed.)
- Expired - Fee Related
Abstract
The invention discloses an improved two-dimensional maximum-entropy segmentation night-vision image fusion target detection algorithm. First, the two-dimensional histogram is improved: it is built from the gray level and the weighted maximum gray level of the local region, and with a suitably chosen weight the maximum-entropy criterion computed on this histogram is used to segment the infrared and low-light images. Compared with the traditional maximum-entropy segmentation algorithm, this markedly improves target detection, suppressing the background while extracting the target. The effectiveness of the element-wise AND of multi-dimensional features is then verified, and the target is detected by feature-level fusion of the segmented infrared and low-light images. The detection algorithm of the invention shows good performance and applicability both for target detection in complex backgrounds and for multi-target detection.
Description
Technical field
The invention belongs to the field of infrared and low-light image processing, and in particular relates to an improved two-dimensional maximum-entropy segmentation night-vision image fusion target detection algorithm.
Background art
Night-vision image target detection technology is developing rapidly in both military and civilian fields. In night-vision imaging, a single-band image such as an infrared image mainly reflects the difference in thermal radiation between the target and the scene; its scene elements are not rich, the imaging is not sharp, and it is hard for the human eye to interpret. Low-light images have low contrast, a limited grayscale dynamic range, a low signal-to-noise ratio, and severe noise. At present, target detection is generally achieved through image segmentation followed by target extraction; the main segmentation approaches are edge detection and region growing. Threshold segmentation is a widely used image segmentation method, with common threshold-selection schemes including iterative selection, the Otsu method, and the minimum-error method. Because of the characteristics of infrared and low-light images, when common segmentation algorithms are used for detection the target in the resulting binary image is often submerged in the background, the algorithms do not generalize, and the large amount of residual background prevents accurate extraction of the target.
A typical target such as a person occupies a relatively bright region of an infrared night-vision image. With a complex background, such as one of similar thermal radiation, maximum-entropy segmentation on the traditional two-dimensional histogram often leaves the target masked by background of similar radiation. In a low-light image the target is generally a darker or brighter region; because the scene elements are rich and much of the background and noise lies close to the target gray level, the segmentation result contains many false targets and the true target is hard to extract.
Summary of the invention
The purpose of the present invention is to provide an improved two-dimensional maximum-entropy segmentation night-vision image fusion target detection algorithm with good segmentation quality and good target detection performance on night-vision images.
The technical solution that realizes the object of the present invention is as follows:
An improved two-dimensional maximum-entropy segmentation night-vision image fusion target detection algorithm, comprising the following steps:
Step 1: Improve the construction of the two-dimensional histogram by choosing a weight λ; the improved algorithm is effective for λ in (0.01, 3). The weight λ acts as a gray-level scaling factor on the vertical axis, that is, the two-dimensional histogram is constructed from the original image and a region gray-level enhanced image. According to the gray-level characteristics of targets in infrared and low-light images, λ takes a value less than 1 when segmenting an infrared image and a value greater than 1 when segmenting a low-light image;
Step 2: Partition the two-dimensional histogram of the image with two thresholds, pixel gray level equal to t and neighborhood gray level equal to s. Starting from the region closest to the origin and proceeding clockwise, the partitioned regions represent the background, noise, target information, and edge information respectively. The gray levels f(m,n) and g(m,n) of the image form a gray-level pair, where g(m,n) is the gray value associated with pixel (m,n); the pair is denoted by the tuple (i,j) defined in the histogram partition, with i the pixel gray value and j the neighborhood gray value. Let k(i,j) be the total number of pixels with gray-level pair (i,j) in the two-dimensional histogram, M×N the image size (M rows, N columns), and L the number of gray levels of the image. The joint probability density of the image is then:

p(i,j) = k(i,j) / (M×N),  0 ≤ i, j ≤ L−1   (1)
Define the two-dimensional entropies of the background and target regions as:

H1(t,s) = −Σ_{i=0..t} Σ_{j=0..s} [p(i,j)/p1(t,s)]·ln[p(i,j)/p1(t,s)]   (2)

H2(t,s) = −Σ_{i=t+1..L−1} Σ_{j=s+1..L−1} [p(i,j)/p2(t,s)]·ln[p(i,j)/p2(t,s)]   (3)
where p1(t,s) = Σ_{i=0..t} Σ_{j=0..s} p(i,j) is the probability mass of the background region and p2(t,s) = Σ_{i=t+1..L−1} Σ_{j=s+1..L−1} p(i,j) is that of the target region. Ignoring the noise and edge regions, p1(t,s) + p2(t,s) = 1, so the entropies of the target and background regions become:

H1(t,s) = ln p1(t,s) + Hb(t,s)/p1(t,s)   (4)

H2(t,s) = ln[1 − p1(t,s)] + [HL − Hb(t,s)]/[1 − p1(t,s)]   (5)

where Hb(t,s) = −Σ_{i=0..t} Σ_{j=0..s} p(i,j)·ln p(i,j) and HL = −Σ_{i=0..L−1} Σ_{j=0..L−1} p(i,j)·ln p(i,j).
The total information entropy is the sum over the target and background regions, namely

H(t,s) = H1(t,s) + H2(t,s)   (6)
The pair (t*, s*) at which H(t,s) attains its maximum is the optimal threshold obtained by the two-dimensional maximum-entropy method, t* and s* being the optimal pixel gray level and neighborhood gray level respectively. The optimal threshold vector is:

(t*, s*) = arg max_{0 ≤ t,s ≤ L−1} H(t,s)   (7)
Step 3: Extract targets from the segmented images; the element-wise AND operation between images brings out the clearer targets in the processing results;
Step 4: Finally, verify the feasibility of applying the element-wise AND operation to the segmented binary images. Let Ia(x1,y1) be the low-light image after the improved maximum-entropy segmentation and Ib(x2,y2) the infrared image segmented by the same method. Feature-level information fusion is performed on them according to the following formula, and the target is extracted to obtain the final detection result T(x,y), where x1 and y1 are the row and column variables of the low-light image, x2 and y2 those of the infrared image, and x and y those of the final detection result:

T(x,y) = Ia(x1,y1) ∧ Ib(x2,y2)   (8)
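For co-registered binary images of equal size, the fusion of eq. (8) reduces to an element-wise AND. A minimal NumPy sketch follows; the function name and the 0/255 binary convention are illustrative assumptions, not fixed by the patent text:

```python
import numpy as np

def fuse(Ia, Ib):
    """Feature-level fusion per eq. (8): T(x, y) = Ia(x, y) AND Ib(x, y).
    Assumes both images are the same size, co-registered, and binary
    with values 0 (background) and 255 (target)."""
    return np.bitwise_and(Ia.astype(np.uint8), Ib.astype(np.uint8))

# A pixel survives only if it is marked as target in BOTH images.
Ia = np.array([[255, 255], [0, 255]], dtype=np.uint8)
Ib = np.array([[255, 0], [0, 255]], dtype=np.uint8)
T = fuse(Ia, Ib)
```

Because 255 AND 255 = 255 while 255 AND 0 = 0, only pixels flagged in both segmentations remain in T.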
Compared with the prior art, the present invention has significant advantages:
The invention proposes an improved two-dimensional maximum-entropy segmentation night-vision image fusion target detection algorithm. With common segmentation algorithms, the characteristics of infrared and low-light images often leave the target in the resulting binary image submerged in the background; the algorithms do not generalize, and the large amount of residual background prevents accurate target extraction. To address this difficulty, the invention builds the two-dimensional histogram from the gray level and the weighted region-maximum gray level and performs improved two-dimensional maximum-entropy segmentation. By setting different weights for infrared and low-light images, the target can be extracted well from each image while the background and noise are suppressed to a large extent, so the algorithm has practical value for target detection and segments targets better than traditional methods. The segmented images are then fused, combining the sensitivity of infrared imagery to thermal radiation with the rich detail of low-light imagery, improving detection accuracy and further suppressing noise to achieve the goal of target detection.
The present invention will be described in further detail below in conjunction with the accompanying drawings.
Description of drawings
Fig. 1 is the algorithm flow chart of the present invention.
Fig. 2 shows the steps of existing information-fusion target detection.
Fig. 3 is a schematic diagram of the partition of the improved two-dimensional histogram of the present invention.
Fig. 4 illustrates the improved two-dimensional maximum-entropy segmentation of the present invention: (a), (b) and (c) are the original infrared image, the result of traditional two-dimensional maximum-entropy processing, and the result of the proposed algorithm; (d), (e) and (f) are the corresponding results for the low-light image.
Fig. 5 is a schematic diagram of the final detection result of the present invention.
Fig. 6 illustrates multi-target detection by the present invention: (a) original low-light image, (b) original infrared image, (c) low-light segmentation, (d) infrared segmentation, (e) image after the element-wise AND operation.
Detailed description
As shown in Fig. 1, the improved two-dimensional maximum-entropy segmentation night-vision image fusion target detection algorithm of the present invention comprises Steps 1 to 4 as set out above. The specific steps are as follows:
1. Improved two-dimensional histogram
The invention improves the two-dimensional maximum-entropy segmentation algorithm by building the two-dimensional histogram from the gray level and the weighted region-maximum gray level and performing maximum-entropy segmentation on it. By selecting an appropriate weight, the method extracts the desired bright (or dark) targets from infrared and low-light images while suppressing noise and background, which the traditional two-dimensional maximum-entropy segmentation algorithm cannot do.
2. Improved two-dimensional maximum-entropy segmentation
First, in an image of size M×N, take the maximum value in the 3×3 region G(p,q) centered on pixel (m,n), which amounts to extracting the detail of the image; here M and N are the numbers of rows and columns, m and n are the pixel coordinates, and G(p,q) denotes the 3×3 region centered at (m,n). This maximum is then weighted (the weight is denoted λ), i.e. the gray level of the weighted maximum over the 3×3 region G(p,q) replaces the neighborhood-average gray level as the vertical coordinate. Denoting the resulting value at pixel (m,n) by g(m,n),

g(m,n) = λ · max(G(p,q))   (10)

Together with f(m,n), the gray value of pixel (m,n), g(m,n) forms a new two-dimensional histogram. From the formula above, the weight λ acts as a gray-level scaling factor on the vertical axis; that is, the two-dimensional histogram is constructed from the original image and a "region gray-level enhanced image". Choosing different weights serves different segmentation purposes and widens the applicability of the algorithm.
Repeated segmentation experiments on infrared and low-light images show that the algorithm is effective for λ in (0.01, 3). According to the gray-level characteristics of targets in infrared and low-light images, λ in formula (10) generally takes a value less than 1 for infrared image segmentation and a value greater than 1 for low-light image segmentation.
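Eq. (10) can be sketched directly in NumPy. The function name is illustrative, and the edge-padding strategy for border pixels is an implementation choice the text does not specify:

```python
import numpy as np

def weighted_region_max(image, lam):
    """Sketch of eq. (10): g(m, n) = lam * max(G(p, q)), where G(p, q) is
    the 3x3 neighbourhood centred on pixel (m, n). Border pixels are
    handled by edge padding (an assumed choice)."""
    padded = np.pad(image.astype(float), 1, mode='edge')
    M, N = image.shape
    g = np.empty((M, N))
    for m in range(M):
        for n in range(N):
            # Maximum over the 3x3 window around (m, n), scaled by lam.
            g[m, n] = lam * padded[m:m + 3, n:n + 3].max()
    return g

ir = np.array([[1, 2, 3],
               [4, 5, 6],
               [7, 8, 9]])
g_ir = weighted_region_max(ir, 0.5)   # lambda < 1 for an infrared image, per the text
```

With λ < 1 the vertical axis is compressed (infrared case); with λ > 1 it is stretched (low-light case).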
A two-dimensional histogram is built for the infrared image using the new construction method of the invention, and the histogram is partitioned by cross lines parallel to the horizontal and vertical axes as in Fig. 3. The gray levels f(m,n) and g(m,n) of the image form the gray-level pair, i.e. i and j in formula (1); the maximum entropy is then computed through the steps of formulas (1) to (7) to obtain the corresponding thresholds (t*, s*). Pixels with g(m,n) less than s* and f(m,n) less than t* are set to 0 and the rest to 255, producing the binary segmentation result; see Fig. 4 for the effect and Fig. 5 for the final detection result.
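The segmentation just described can be sketched end to end. This uses the simplified entropy forms of eqs. (4)-(5) with an exhaustive (t, s) search; the function names, the natural-log convention, and the strict "less than" binarisation rule follow the text as read here, while the integer quantisation of g is an assumed detail:

```python
import numpy as np

def max_entropy_threshold(f, g, levels=256):
    """Exhaustive search for the optimal threshold pair (t*, s*) of eq. (7).
    f, g: integer gray-level arrays with values in [0, levels)."""
    hist = np.zeros((levels, levels))
    np.add.at(hist, (f.ravel(), g.ravel()), 1)
    p = hist / f.size                            # joint density, eq. (1)

    P = p.cumsum(axis=0).cumsum(axis=1)          # p1(t, s): background mass
    plogp = np.where(p > 0, p * np.log(p), 0.0)
    Q = plogp.cumsum(axis=0).cumsum(axis=1)      # sum of p*log(p) over the background quadrant
    total = plogp.sum()

    best, best_ts = -np.inf, (0, 0)
    for t in range(levels - 1):
        for s in range(levels - 1):
            p1 = P[t, s]
            if not 0.0 < p1 < 1.0:
                continue
            H1 = np.log(p1) - Q[t, s] / p1                          # eq. (4)
            H2 = np.log(1.0 - p1) - (total - Q[t, s]) / (1.0 - p1)  # eq. (5)
            if H1 + H2 > best:                                      # maximise eq. (6)
                best, best_ts = H1 + H2, (t, s)
    return best_ts

def segment(f, g, levels=256):
    """Binarise as in the text: 0 where g < s* and f < t*, else 255."""
    gi = np.clip(np.asarray(g), 0, levels - 1).astype(int)  # g may be fractional after weighting
    t_star, s_star = max_entropy_threshold(np.asarray(f), gi, levels)
    return np.where((gi < s_star) & (f < t_star), 0, 255).astype(np.uint8)
```

The cumulative sums let each candidate (t, s) be evaluated in constant time, so the double loop costs O(L²) after an O(L²) precomputation rather than O(L⁴).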
3. Element-wise AND of multi-dimensional features
The element-wise AND of two images is widely used in image information fusion. An important notion in rough set theory is attribute reduction: from a mathematical standpoint, n-dimensional data x = (x1, x2, …, xn) is reduced in dimension by some means to obtain new data, simplifying computation, eliminating redundant information, and improving accuracy. The element-wise AND of multi-dimensional feature matrices ANDs them entry by entry (here, two-dimensional feature matrices). Define two m×n matrices Am×n = (aij) and Bm×n = (bij). Then the element-wise AND of the two matrices is:

C = Am×n ∧ Bm×n,  with cij = aij ∧ bij,  1 ≤ i ≤ m, 1 ≤ j ≤ n
A segmented binary image can be regarded as an M×N matrix whose elements are 0 or 255. As an example, suppose the two 3×3 binary image matrices A3×3 and B3×3 are:

A3×3 = | 0    0   255 |      B3×3 = | 255   0    0  |
       | 0   255  255 |             |  0   255  255 |
       | 0   255  255 |             |  0   255  255 |

Computing C = A3×3 ∧ B3×3 gives:

C = | 0    0    0  |
    | 0   255  255 |
    | 0   255  255 |
The binary image A3×3 represents one dimension of feature information: a13 is a noise point and the four gray values a22, a23, a32, a33 form the target region. B3×3 represents another dimension, with b11 a noise point and b22, b23, b32, b33 forming the target region. The result C of the element-wise AND shows that the noise is filtered out while the target region is retained. Reducing two-dimensional features to one realizes the function of attribute reduction, namely removing redundancy and improving accuracy, and demonstrates that the element-wise AND of multi-dimensional feature matrices can perform feature-level fusion detection.
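The worked example can be checked directly. The concrete 0/255 entries below are reconstructed from the description (a13 and b11 are the noise points, the lower-right 2×2 block is the common target):

```python
import numpy as np

# A carries noise at a13, B at b11; both share the target block
# a22/a23/a32/a33 (resp. b22/b23/b32/b33).
A = np.array([[0,   0, 255],
              [0, 255, 255],
              [0, 255, 255]], dtype=np.uint8)
B = np.array([[255, 0,   0],
              [0, 255, 255],
              [0, 255, 255]], dtype=np.uint8)

C = np.bitwise_and(A, B)   # element-wise AND of the two feature matrices
```

The AND keeps a pixel only where both matrices mark it, so each matrix's isolated noise point is removed while the shared target block survives.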
4. Multi-dimensional target detection
Having verified theoretically the feasibility of applying the element-wise AND to the segmented binary images, the invention, considering the characteristics of the targets to be detected and of the background and drawing on the concept of information fusion, ANDs the infrared and low-light binary images produced by the proposed segmentation, filtering out the residual noise, background, and false targets in each binary image and improving the accuracy of target detection.
Following the steps in Fig. 2, suppose Ia(x,y) is the low-light image after the improved maximum-entropy segmentation and Ib(x,y) is the infrared image segmented by the same method. Feature-level information fusion is performed on them according to the following formula, and the target is extracted to obtain the final detection result T(x,y):

T(x,y) = Ia(x,y) ∧ Ib(x,y)   (8)
The final detection result combines the advantages of fusion: some of the background and noise left in the separate segmentations of the infrared and low-light images is filtered out and the target is extracted, as shown in Fig. 6, improving detection accuracy in complex-background and multi-target detection.
Claims (2)
Priority Applications (1)
Application Number | Priority Date | Filing Date | Title |
---|---|---|---|
CN201310134194.XA CN103226820B (en) | 2013-04-17 | 2013-04-17 | Improved two-dimensional maximum-entropy segmentation night-vision image fusion target detection algorithm |
Publications (2)
Publication Number | Publication Date |
---|---|
CN103226820A CN103226820A (en) | 2013-07-31 |
CN103226820B true CN103226820B (en) | 2016-01-20 |
Family
ID=48837255