CN115294606B - Millimeter wave image dark target enhancement method - Google Patents
Millimeter wave image dark target enhancement method
- Publication number
- CN115294606B CN115294606B CN202210938996.5A CN202210938996A CN115294606B CN 115294606 B CN115294606 B CN 115294606B CN 202210938996 A CN202210938996 A CN 202210938996A CN 115294606 B CN115294606 B CN 115294606B
- Authority
- CN
- China
- Prior art keywords
- human body
- image
- axis
- gray value
- dark
- Prior art date
- Legal status (The legal status is an assumption and is not a legal conclusion. Google has not performed a legal analysis and makes no representation as to the accuracy of the status listed.)
- Active
Links
- 238000000034 method Methods 0.000 title claims abstract description 33
- 230000000295 complement effect Effects 0.000 claims abstract description 70
- 238000004364 calculation method Methods 0.000 claims description 17
- 230000002708 enhancing effect Effects 0.000 claims description 11
- 230000004927 fusion Effects 0.000 claims description 11
- 230000000877 morphologic effect Effects 0.000 claims description 9
- 210000000746 body region Anatomy 0.000 claims description 6
- 238000007499 fusion processing Methods 0.000 claims description 3
- 230000009191 jumping Effects 0.000 claims 1
- 230000000717 retained effect Effects 0.000 abstract 1
- 238000001514 detection method Methods 0.000 description 8
- 238000010586 diagram Methods 0.000 description 7
- 241001416181 Axis axis Species 0.000 description 4
- 244000309466 calf Species 0.000 description 4
- 230000000694 effects Effects 0.000 description 4
- 238000004422 calculation algorithm Methods 0.000 description 3
- 238000010521 absorption reaction Methods 0.000 description 2
- 238000002592 echocardiography Methods 0.000 description 2
- 238000005516 engineering process Methods 0.000 description 2
- 238000012360 testing method Methods 0.000 description 2
- 230000009286 beneficial effect Effects 0.000 description 1
- 238000010191 image analysis Methods 0.000 description 1
- 238000003384 imaging method Methods 0.000 description 1
- 238000010801 machine learning Methods 0.000 description 1
- 230000011218 segmentation Effects 0.000 description 1
- 238000010187 selection method Methods 0.000 description 1
Images
Classifications
-
- G—PHYSICS
- G06—COMPUTING; CALCULATING OR COUNTING
- G06V—IMAGE OR VIDEO RECOGNITION OR UNDERSTANDING
- G06V40/00—Recognition of biometric, human-related or animal-related patterns in image or video data
- G06V40/10—Human or animal bodies, e.g. vehicle occupants or pedestrians; Body parts, e.g. hands
-
- G—PHYSICS
- G06—COMPUTING; CALCULATING OR COUNTING
- G06T—IMAGE DATA PROCESSING OR GENERATION, IN GENERAL
- G06T5/00—Image enhancement or restoration
- G06T5/90—Dynamic range modification of images or parts thereof
- G06T5/94—Dynamic range modification of images or parts thereof based on local image properties, e.g. for local contrast enhancement
-
- G—PHYSICS
- G06—COMPUTING; CALCULATING OR COUNTING
- G06V—IMAGE OR VIDEO RECOGNITION OR UNDERSTANDING
- G06V10/00—Arrangements for image or video recognition or understanding
- G06V10/20—Image preprocessing
-
- G—PHYSICS
- G06—COMPUTING; CALCULATING OR COUNTING
- G06V—IMAGE OR VIDEO RECOGNITION OR UNDERSTANDING
- G06V10/00—Arrangements for image or video recognition or understanding
- G06V10/20—Image preprocessing
- G06V10/26—Segmentation of patterns in the image field; Cutting or merging of image elements to establish the pattern region, e.g. clustering-based techniques; Detection of occlusion
- G06V10/267—Segmentation of patterns in the image field; Cutting or merging of image elements to establish the pattern region, e.g. clustering-based techniques; Detection of occlusion by performing operations on regions, e.g. growing, shrinking or watersheds
-
- G—PHYSICS
- G06—COMPUTING; CALCULATING OR COUNTING
- G06V—IMAGE OR VIDEO RECOGNITION OR UNDERSTANDING
- G06V10/00—Arrangements for image or video recognition or understanding
- G06V10/40—Extraction of image or video features
- G06V10/54—Extraction of image or video features relating to texture
-
- G—PHYSICS
- G06—COMPUTING; CALCULATING OR COUNTING
- G06T—IMAGE DATA PROCESSING OR GENERATION, IN GENERAL
- G06T2207/00—Indexing scheme for image analysis or image enhancement
- G06T2207/30—Subject of image; Context of image processing
- G06T2207/30196—Human being; Person
-
- G—PHYSICS
- G06—COMPUTING; CALCULATING OR COUNTING
- G06V—IMAGE OR VIDEO RECOGNITION OR UNDERSTANDING
- G06V2201/00—Indexing scheme relating to image or video recognition or understanding
- G06V2201/07—Target detection
Landscapes
- Engineering & Computer Science (AREA)
- Physics & Mathematics (AREA)
- General Physics & Mathematics (AREA)
- Theoretical Computer Science (AREA)
- Multimedia (AREA)
- Human Computer Interaction (AREA)
- Image Processing (AREA)
Abstract
The invention discloses a dark target enhancement method for millimeter-wave images. First, the central-axis position, head-top position and shoulder position of the human body are calculated in the human-body millimeter-wave image to limit the processing region, and the average gray value of the human body is calculated. A complement image is then created, which stores the complement of every pixel of the original image whose gray value is lower than the average gray value. The regions of the complement image that correspond to dark targets are extracted, erroneous enhancement caused by body structure is excluded, and only the enhanced dark-target regions are retained. Finally, the complement image is fused with the original image by weighted fusion, preserving the texture features of the dark targets while enhancing their gray-level features, to obtain the dark-target-enhanced millimeter-wave image. The method combines the gray-level features of dark targets with the positional relationship between dark targets and the human-body region, effectively enhancing the gray-level features of dark targets while effectively retaining their texture features.
Description
Technical Field
The invention belongs to the field of target detection in human-body millimeter-wave images, and in particular relates to a method for enhancing dark targets in millimeter-wave images.
Background Art
Millimeter-wave image target detection is the key to detecting contraband carried on the surface of the human body. It can be widely applied in security screening at airports, railway stations and the like, and is an effective replacement for existing body-search procedures. Imaging the human body with millimeter waves is the prerequisite for millimeter-wave image target detection: active millimeter-wave imaging illuminates the human body with millimeter waves, receives the echoes with a millimeter-wave radar, and forms an image from the differences in echo strength. According to their effect on the incident millimeter wave, contraband targets carried on the body fall into two categories: one category reflects the incident millimeter wave more strongly than it absorbs it and appears in the millimeter-wave image with a gray value higher than that of the human-body region (rectangular box in Fig. 1(a), hereinafter referred to as a bright target); the other category absorbs the incident millimeter wave more strongly than it reflects it and appears with a gray value far lower than that of the human-body region (rectangular box in Fig. 1(b), hereinafter referred to as a dark target).
At present, most millimeter-wave image target detection methods use machine learning. Their training and testing principle can be briefly summarized as: after learning the features of the desired targets from a large number of samples, positions similar to the learned features are searched for in the test image. Here, "target features" refers to the texture, gray level and other characteristics that make the target region distinctive relative to the human-body region. As can be seen from Fig. 1, bright targets have much richer texture than dark targets, whereas the features of dark targets resemble the black background outside the human-body region. This difference in target features between bright and dark targets leads to a poor detection rate for dark targets.
To address this problem, the present invention combines the gray-level features of dark targets with the positional relationship between dark targets and the human-body region, effectively enhancing the gray-level features of dark targets while effectively retaining their texture features.
Summary of the Invention
The present invention addresses the problem that dark targets in human-body millimeter-wave images have a poor detection rate because their target features are less pronounced than those of bright targets. It combines the gray-level features of dark targets with the positional relationship between dark targets and the human-body region, effectively enhancing the gray-level features of dark targets while effectively retaining their texture features.
A method for enhancing dark targets in millimeter-wave images comprises the following steps:
Step 1. Calculate the central-axis position, head-top position and shoulder position of the human body in the human-body millimeter-wave image, and limit the processing region.
Step 2. Calculate the average gray value of the human body.
Step 3. Create a complement image, which stores the complement of every pixel of the original image whose gray value is lower than the average gray value.
Step 4. Extract the regions of the complement image that correspond to dark targets, exclude erroneous enhancement caused by body structure, and retain only the enhanced dark-target regions.
Step 5. Fuse the complement image with the original image by weighted fusion, preserving the texture features of the dark targets while enhancing their gray-level features, to obtain the dark-target-enhanced millimeter-wave image.
Further, the specific method of Step 1 is as follows.
Since the background gray value of a millimeter-wave human-body image is almost 0, the maximum between-class variance method can be used to segment the human-body region and obtain a binary image of the human-body region. A plane Cartesian coordinate system is established with the upper-left corner of the image as the origin O; the positive direction of the x-axis points to the right and the positive direction of the y-axis points downward.
Calculate the central-axis position:
Take a point at the outermost edge of the left calf and of the right calf of the human body at the same ordinate, and denote the two abscissas as legLeft and legRight, respectively. Denoting the abscissa of the central axis of the human body as axis, axis is the midpoint of the two abscissas, axis = (legLeft + legRight) / 2.
Calculate the head-top position:
The head position is calculated with the aid of axis. Denote the ordinate of the top of the head as headTop, the human-body binary image as I_OTSU, and the total number of image rows as imgRows. headTop is computed as follows:
1-1: Start at the point (axis, 0) in I_OTSU and traverse along the positive y-axis direction.
1-2: On reaching the first pixel whose gray value is not 0, record the ordinate of that pixel as headTop and stop the traversal; the calculation ends.
Calculate the shoulder position:
First, in the binary image I_OTSU, traverse from the head-top position (coordinates (axis, headTop)) along the negative x-axis direction to the first point that is not 0, and denote its abscissa as leftArm. For the axis - leftArm pixels from (leftArm, headTop) to (axis, headTop), traverse each in turn along the positive y-axis direction to the first point that is not 0, and record the ordinate of that point as leftShoulder_i, i ∈ [0, axis - leftArm).
Similarly, in the binary image I_OTSU, traverse from the head-top position (coordinates (axis, headTop)) along the positive x-axis direction to the first point that is not 0, and denote its abscissa as rightArm. For the rightArm - axis pixels from (axis, headTop) to (rightArm, headTop), traverse each in turn along the positive y-axis direction to the first point that is not 0, and record the ordinate of that point as rightShoulder_j, j ∈ [0, rightArm - axis).
Finally, the maximum ordinate in the ordinate set leftShoulder_i ∩ rightShoulder_j is taken as the ordinate of the shoulder position and recorded as Shoulder.
Further, the specific method of Step 2 is as follows.
Since the background gray value of the human-body millimeter-wave image is 0, the average gray value of the non-zero pixels is the average gray value of the human-body region. Denote the average gray value of the human body as avgGrey, the human-body millimeter-wave image as I_src, and the total numbers of rows and columns of I_src as imgRows and imgCols, respectively. avgGrey is computed as follows:
2-1: Starting from the origin, traverse all pixels of I_src, accumulating the sum of the gray values of all pixels and counting the number of pixels whose gray value is not 0.
2-2: After the traversal, compute the ratio of the sum of the gray values of all pixels of I_src to the number of pixels whose gray value is not 0; this ratio is the avgGrey of I_src, and the calculation ends.
Further, the specific method of Step 3 is as follows.
Denote the complement image as I_enhance and the maximum gray value that a single pixel of the complement image file can store as maxGrey. I_enhance is generated as follows:
3-1: Create a complement image I_enhance of the same size as I_src; the initial gray value of all its pixels is 0.
3-2: Traverse all pixels of the part of the human body below the shoulders in I_src, using avgGrey as the complement threshold. For every pixel of I_src whose gray value is lower than avgGrey, assign the complement of that pixel's gray value to the pixel at the corresponding position in I_enhance.
3-3: After the traversal, the calculation ends.
Explanation of the complement value in step 3-2, taking an 8-bit unsigned single-channel grayscale image as an example: the maximum gray value a single pixel can store is 255, so for a pixel with gray value 20 the complement value is 255 - 20 = 235.
Further, the specific method of Step 4 is as follows.
In the human-body binary image I_OTSU, traverse upward from the bottom of the image along the central axis (abscissa axis) and stop at the first point that is not 0; record the ordinate of that point as buttDown, which is the ordinate of the crotch of the human body. The ordinate interval of the legs is therefore [buttDown, imgRows), with the left leg on the left side of the central axis and the right leg on the right side. This information is used to locate the inner sides of the two legs and to remove the complement regions along the inner leg edges, yielding the complement image I'_enhance in which the complement regions of the body's outer edges have been removed.
In the horizontal direction, the image pixels are traversed from the outside of the human body toward its inside, and a termination condition is set for the traversal so that the complement of a dark target is not removed along with the edges. Taking the removal of the outer-edge complement region on the left side of the human body as an example, the removal proceeds as follows:
4-1: Within the region x ∈ [0, axis], starting from x = 0, traverse each row of pixels of I_enhance along the positive x-axis direction, setting the gray value of every non-zero pixel encountered to zero. When the current pixel is non-zero and the next pixel in the traversal has gray value 0, set the non-zero pixel at the current position to zero, end the traversal of this row, and jump to the starting position of the next row of pixels, where the traversal continues according to the same rule.
4-2: After all rows of I_enhance have been traversed, the calculation ends.
A morphological opening operation is used to eliminate speckle that may remain in non-dark-target complement regions: denote the most significant digit of the shorter side length of the image as m, and apply a morphological opening to I'_enhance with an m × m rectangular structuring element whose origin is at its center. The complement image after the opening is denoted I''_enhance.
Further, the specific method of Step 5 is as follows.
The average gray value of I''_enhance is calculated with the same method as in Step 2 and denoted avgGrey_enhance. To preserve the original texture features of the dark target while enhancing its gray-level features, weighting coefficients are computed from avgGrey_enhance and the average gray value avgGrey of I_src, and I_src and I''_enhance are fused with these weights. The fused image is denoted I_fusion and is obtained as a weighted sum of I_src and I''_enhance with coefficients derived from avgGrey and avgGrey_enhance.
I_fusion is the final enhancement result for dark targets in the human-body millimeter-wave image; it effectively enhances the gray-level features of dark targets while effectively retaining their texture features.
The beneficial effects of the present invention are as follows:
For the problem that dark targets in human-body millimeter-wave images have a poor detection rate because their target features are less pronounced than those of bright targets, the method combines the gray-level features of dark targets with the positional relationship between dark targets and the human-body region, effectively enhancing the gray-level features of dark targets while effectively retaining their texture features.
Description of Drawings
Fig. 1 is a schematic diagram of bright and dark targets in a human-body millimeter-wave image;
Fig. 2 is a schematic diagram of the OTSU segmentation result of a human-body millimeter-wave image;
Fig. 3 is a schematic diagram of the spatial coordinate system of the present invention;
Fig. 4 is a schematic diagram of the shoulder-position calculation;
Fig. 5 is a schematic diagram of the enhancement procedure of the present invention.
Detailed Description
The method of the present invention is further described below with reference to the accompanying drawings and an embodiment.
A method for enhancing dark targets in millimeter-wave images specifically comprises the following steps:
Step 1. Calculate the central-axis position, head-top position and shoulder position of the human body.
Since the background gray value of a millimeter-wave human-body image is almost 0, the maximum between-class variance method (OTSU; Otsu N. A Threshold Selection Method from Gray-Level Histograms [J]. IEEE Transactions on Systems, Man & Cybernetics, 1979, 9(1): 62-66.) can be used to segment the human-body region, yielding the binary image of the human-body region shown in Fig. 2. As shown in Fig. 3, a plane Cartesian coordinate system is established with the upper-left corner of the image as the origin O; the positive direction of the x-axis points to the right and the positive direction of the y-axis points downward. The coordinate system and the key positions referred to below are marked in Fig. 3.
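As a concrete illustration, the OTSU segmentation can be done with OpenCV (a sketch; the file name is hypothetical and the image is assumed to be stored as 8-bit grayscale):

```python
import cv2

# Load the human-body millimeter-wave image as an 8-bit grayscale array (file name is illustrative).
I_src = cv2.imread("mmw_body.png", cv2.IMREAD_GRAYSCALE)

# Maximum between-class variance (OTSU) thresholding: the background is almost 0,
# so the non-zero part of the binary mask I_OTSU is the human-body region.
_, I_OTSU = cv2.threshold(I_src, 0, 255, cv2.THRESH_BINARY + cv2.THRESH_OTSU)
```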
Calculate the central-axis position:
Take a point at the outermost edge of the left calf and of the right calf of the human body at the same ordinate, and denote the two abscissas as legLeft and legRight, respectively. Denoting the abscissa of the central axis of the human body as axis, axis is the midpoint of the two abscissas, axis = (legLeft + legRight) / 2.
Calculate the head-top position:
The head position is calculated with the aid of axis. Denote the ordinate of the top of the head as headTop, the human-body binary image as I_OTSU, and the total number of image rows as imgRows. headTop is computed as follows:
1-1: Start at the point (axis, 0) in I_OTSU and traverse along the positive y-axis direction.
1-2: On reaching the first pixel whose gray value is not 0, record the ordinate of that pixel as headTop and stop the traversal; the calculation ends.
The pseudocode of the headTop calculation is as follows:
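A minimal Python sketch of this traversal (an illustration, assuming I_OTSU is a 2-D NumPy array indexed as I_OTSU[y, x] with y increasing downward):

```python
def compute_head_top(I_OTSU, axis):
    """Walk down column `axis` from y = 0 and return the ordinate of the first
    non-zero pixel (the top of the head); None if the column contains no body pixel."""
    img_rows = I_OTSU.shape[0]
    for y in range(img_rows):
        if I_OTSU[y, axis] != 0:
            return y
    return None
```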
Calculate the shoulder position:
First, in the binary image I_OTSU, traverse from the head-top position (coordinates (axis, headTop)) along the negative x-axis direction to the first point that is not 0, and denote its abscissa as leftArm. For the axis - leftArm pixels from (leftArm, headTop) to (axis, headTop), traverse each in turn along the positive y-axis direction to the first point that is not 0, and record the ordinate of that point as leftShoulder_i, i ∈ [0, axis - leftArm).
Similarly, in the binary image I_OTSU, traverse from the head-top position (coordinates (axis, headTop)) along the positive x-axis direction to the first point that is not 0, and denote its abscissa as rightArm. For the rightArm - axis pixels from (axis, headTop) to (rightArm, headTop), traverse each in turn along the positive y-axis direction to the first point that is not 0, and record the ordinate of that point as rightShoulder_j, j ∈ [0, rightArm - axis).
Finally, the maximum ordinate in the ordinate set leftShoulder_i ∩ rightShoulder_j is taken as the ordinate of the shoulder position and recorded as Shoulder.
A schematic diagram of the shoulder-position calculation is shown in Fig. 4, in which the arrows indicate the pixel traversal directions.
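A sketch of the left-hand half of this search, under the assumption (not stated explicitly above) that the contiguous head pixels around (axis, headTop) are skipped before looking for the left arm; the right-hand half mirrors it with the x direction reversed:

```python
def compute_left_shoulder(I_OTSU, axis, head_top):
    """Return (leftArm, leftShoulder list): leftArm is the abscissa of the first
    non-zero point found walking left from the head top (after skipping the head
    pixels themselves, an assumption), and leftShoulder[i] is the first non-zero
    ordinate below row head_top in each column between leftArm and axis."""
    x = axis
    while x >= 0 and I_OTSU[head_top, x] != 0:   # skip the head pixels (assumption)
        x -= 1
    while x >= 0 and I_OTSU[head_top, x] == 0:   # cross the background gap
        x -= 1
    left_arm = x
    left_shoulder = []
    for col in range(left_arm, axis):
        y = head_top
        while y < I_OTSU.shape[0] and I_OTSU[y, col] == 0:
            y += 1
        left_shoulder.append(y)
    return left_arm, left_shoulder
```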
Step 2. Calculate the average gray value of the human body.
Since the background gray value of the human-body millimeter-wave image is 0, the average gray value of the non-zero pixels is the average gray value of the human-body region. Denote the average gray value of the human body as avgGrey, the human-body millimeter-wave image as I_src, and the total numbers of rows and columns of I_src as imgRows and imgCols, respectively. avgGrey is computed as follows:
2-1: Starting from the origin, traverse all pixels of I_src, accumulating the sum of the gray values of all pixels and counting the number of pixels whose gray value is not 0.
2-2: After the traversal, compute the ratio of the sum of the gray values of all pixels of I_src to the number of pixels whose gray value is not 0; this ratio is the avgGrey of I_src, and the calculation ends.
The pseudocode of the avgGrey calculation is as follows:
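A NumPy sketch of this average (assuming I_src is the grayscale image array loaded earlier):

```python
import numpy as np

def average_body_gray(I_src):
    """avgGrey: the sum of all gray values divided by the number of non-zero
    pixels (the background of the millimeter-wave image is 0)."""
    nonzero = np.count_nonzero(I_src)
    return I_src.sum(dtype=np.float64) / nonzero if nonzero else 0.0
```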
Step 3. Create a complement image, which stores the complement of every pixel of I_src whose gray value is lower than the average gray value.
Denote the complement image as I_enhance and the maximum gray value that a single pixel of the complement image file can store as maxGrey. I_enhance is generated as follows:
3-1: Create a complement image I_enhance of the same size as I_src; the initial gray value of all its pixels is 0.
3-2: Traverse all pixels of the part of the human body below the shoulders in I_src, using avgGrey as the complement threshold. For every pixel of I_src whose gray value is lower than avgGrey, assign the complement of that pixel's gray value to the pixel at the corresponding position in I_enhance.
3-3: After the traversal, the calculation ends.
Explanation of the complement value in step 3-2, taking an 8-bit unsigned single-channel grayscale image as an example: the maximum gray value a single pixel can store is 255, so for a pixel with gray value 20 the complement value is 255 - 20 = 235.
The pseudocode of the complement-image generation is as follows:
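A sketch for an 8-bit image (maxGrey = 255), restricted to the rows below the shoulder ordinate from Step 1; restricting the complement to non-zero (body) pixels is an assumption made here so that the zero-valued background is not complemented:

```python
import numpy as np

def build_complement_image(I_src, shoulder, avg_grey, max_grey=255):
    """Create I_enhance: below the shoulders, body pixels whose gray value is lower
    than avgGrey receive the complement maxGrey - gray; all other pixels stay 0."""
    I_enhance = np.zeros_like(I_src)
    body = I_src[shoulder:, :]
    mask = (body > 0) & (body < avg_grey)   # body pixels only (assumption)
    I_enhance[shoulder:, :][mask] = max_grey - body[mask]
    return I_enhance
```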
An example of the complement image is shown in Fig. 5(a).
Step 4. Extract the regions of the complement image that correspond to dark targets.
As shown in Fig. 5(a), the outer edge of the human body has gray-level characteristics similar to those of a dark target, so the complement image contains the complements of both the body's outer edge and the dark targets; the complement regions that do not belong to dark targets therefore need to be removed.
In the human-body binary image I_OTSU, traverse upward from the bottom of the image along the central axis (abscissa axis) and stop at the first point that is not 0 (see Fig. 3; record the ordinate of that point as buttDown), which is the ordinate of the crotch of the human body. The ordinate interval of the legs is therefore [buttDown, imgRows), with the left leg on the left side of the central axis and the right leg on the right side. This information is used to locate the inner sides of the two legs and to remove the complement regions along the inner leg edges, yielding the complement image I'_enhance in which the complement regions of the body's outer edges have been removed.
In the horizontal direction, the image pixels are traversed from the outside of the human body toward its inside, and a termination condition is set for the traversal so that the complement of a dark target is not removed along with the edges. Taking the removal of the outer-edge complement region on the left side of the human body as an example, the removal proceeds as follows:
4-1: Within the region x ∈ [0, axis], starting from x = 0, traverse each row of pixels of I_enhance along the positive x-axis direction, setting the gray value of every non-zero pixel encountered to zero. When the current pixel is non-zero and the next pixel in the traversal has gray value 0, set the non-zero pixel at the current position to zero, end the traversal of this row, and jump to the starting position (x = 0) of the next row of pixels, where the traversal continues according to the same rule.
4-2: After all rows of I_enhance have been traversed, the calculation ends.
The pseudocode for removing the complement regions of the body's outer edge is as follows:
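A sketch of the left-side removal in 4-1/4-2 (the right-hand side and the inner leg edges follow the same pattern with the traversal direction and the x range changed accordingly):

```python
def remove_left_outer_edge(I_enhance, axis):
    """For every row, walk from x = 0 toward the central axis, zeroing non-zero pixels;
    when a non-zero pixel is followed by a zero pixel, zero it and move to the next row."""
    rows, cols = I_enhance.shape[:2]
    for y in range(rows):
        for x in range(min(axis + 1, cols)):     # x in [0, axis]
            if I_enhance[y, x] != 0:
                I_enhance[y, x] = 0
                if x + 1 < cols and I_enhance[y, x + 1] == 0:
                    break                        # edge run ended; go to the next row
    return I_enhance
```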
The opening operation of mathematical morphology (Serra J. Image Analysis and Mathematical Morphology, Volume I. Academic Press, 1982.) removes small connected components while preserving larger ones (a connected component is a set of mutually connected valued pixels in an image). A morphological opening is used to eliminate speckle that may remain in non-dark-target complement regions: denote the most significant digit of the shorter side length of the image as m, and apply a morphological opening to I'_enhance with an m × m rectangular structuring element whose origin is at its center. The complement image after the opening is denoted I''_enhance. For example, the I'_enhance in Fig. 5(b) has size 400 × 768, so m = 4. Fig. 5(c) shows the I''_enhance obtained by applying the opening operation to Fig. 5(b).
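With OpenCV, the opening step might look like the following sketch, with the leading-digit rule made explicit:

```python
import cv2

def open_complement(I_enhance_prime):
    """Morphological opening with an m x m rectangular structuring element,
    where m is the most significant digit of the shorter side length of the image."""
    short_side = min(I_enhance_prime.shape[:2])
    m = int(str(short_side)[0])                              # e.g. 400 -> m = 4
    kernel = cv2.getStructuringElement(cv2.MORPH_RECT, (m, m))
    return cv2.morphologyEx(I_enhance_prime, cv2.MORPH_OPEN, kernel)
```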
Step 5. Fuse the complement image with the original image by weighted fusion.
The average gray value of I''_enhance is calculated with the same method as in Step 2 and denoted avgGrey_enhance. To preserve the original texture features of the dark target while enhancing its gray-level features, weighting coefficients are computed from avgGrey_enhance and the average gray value avgGrey of I_src, and I_src and I''_enhance are fused with these weights. The fused image is denoted I_fusion and is obtained as a weighted sum of I_src and I''_enhance with coefficients derived from avgGrey and avgGrey_enhance.
I_fusion is the final enhancement result for dark targets in the human-body millimeter-wave image (i.e., the dark-target-enhanced millimeter-wave image, shown in Fig. 5(d)); it effectively enhances the gray-level features of dark targets while effectively retaining their texture features.
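A sketch of the weighted fusion; the particular weighting below (scaling the opened complement image by avgGrey / avgGrey_enhance before adding it to the original and clipping to the gray-value range) is an assumed form, not the coefficient formula of the patent itself:

```python
import numpy as np

def fuse_images(I_src, I_enhance2, avg_grey, avg_grey_enhance, max_grey=255):
    """Weighted fusion of the original image and the opened complement image I''_enhance.
    The weight avgGrey / avgGrey_enhance is an assumption standing in for the patent's
    coefficient formula; it ties the enhancement strength to the body's average brightness."""
    w = avg_grey / max(avg_grey_enhance, 1e-6)
    fused = I_src.astype(np.float64) + w * I_enhance2.astype(np.float64)
    return np.clip(fused, 0, max_grey).astype(I_src.dtype)
```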
Claims (4)
Priority Applications (1)
Application Number | Priority Date | Filing Date | Title |
---|---|---|---|
CN202210938996.5A CN115294606B (en) | 2022-08-05 | 2022-08-05 | Millimeter wave image dark target enhancement method |
Applications Claiming Priority (1)
Application Number | Priority Date | Filing Date | Title |
---|---|---|---|
CN202210938996.5A CN115294606B (en) | 2022-08-05 | 2022-08-05 | Millimeter wave image dark target enhancement method |
Publications (2)
Publication Number | Publication Date |
---|---|
CN115294606A CN115294606A (en) | 2022-11-04 |
CN115294606B true CN115294606B (en) | 2023-03-21 |
Family
ID=83827593
Family Applications (1)
Application Number | Title | Priority Date | Filing Date |
---|---|---|---|
CN202210938996.5A Active CN115294606B (en) | 2022-08-05 | 2022-08-05 | Millimeter wave image dark target enhancement method |
Country Status (1)
Country | Link |
---|---|
CN (1) | CN115294606B (en) |
Family Cites Families (4)
Publication number | Priority date | Publication date | Assignee | Title |
---|---|---|---|---|
US9424499B2 (en) * | 2014-07-11 | 2016-08-23 | Oce-Technologies B.V. | Method for printing a grayscale raster image by grayscale value dispersion |
CN110264523B (en) * | 2019-06-25 | 2021-06-18 | 亮风台(上海)信息科技有限公司 | Method and equipment for determining position information of target image in test image |
CN113158899B (en) * | 2021-04-22 | 2022-07-29 | 中国科学院地理科学与资源研究所 | A method for measuring the development status of villages and towns based on remote sensing night light and dark target enhancement technology |
CN113487642A (en) * | 2021-07-08 | 2021-10-08 | 杭州电子科技大学 | Method for detecting in-vitro target by using millimeter waves for significance vision |
-
2022
- 2022-08-05 CN CN202210938996.5A patent/CN115294606B/en active Active
Also Published As
Publication number | Publication date |
---|---|
CN115294606A (en) | 2022-11-04 |
Similar Documents
Publication | Publication Date | Title |
---|---|---|
CN114418957B (en) | Image Crack Segmentation Method Based on Global and Local Binary Patterns Based on Robot Vision | |
CN107680054B (en) | Multi-source image fusion method in haze environment | |
CN111325764B (en) | A method of fruit image contour recognition | |
CN108898610B (en) | An object contour extraction method based on mask-RCNN | |
CN111445488B (en) | A Weakly Supervised Learning Approach to Automatically Identify and Segment Salt Bodies | |
CN111310760B (en) | Oracle Bone Inscription Text Detection Method Combining Local Prior Features and Deep Convolution Features | |
CN106446894B (en) | A method of based on outline identification ball-type target object location | |
CN104268872B (en) | Consistency-based edge detection method | |
CN109859181A (en) | A kind of PCB welding point defect detection method | |
CN110569857B (en) | An image contour corner detection method based on centroid distance calculation | |
CN108898147A (en) | A kind of two dimensional image edge straightened method, apparatus based on Corner Detection | |
CN105160686B (en) | A kind of low latitude various visual angles Remote Sensing Images Matching Method based on improvement SIFT operators | |
Manana et al. | Preprocessed faster RCNN for vehicle detection | |
CN114863492B (en) | Method and device for repairing low-quality fingerprint image | |
CN103955950B (en) | Image tracking method utilizing key point feature matching | |
CN106709927A (en) | Method for extracting target from acoustic image under complex background | |
CN114926407A (en) | Steel surface defect detection system based on deep learning | |
Czyzewski et al. | Chessboard and chess piece recognition with the support of neural networks | |
CN112446871A (en) | Tunnel crack identification method based on deep learning and OpenCV | |
CN107909085A (en) | A kind of characteristics of image Angular Point Extracting Method based on Harris operators | |
CN116468640B (en) | Video image enhancement method for Internet teaching | |
CN106485252A (en) | Dot matrix target image Feature point recognition method is tested in image registration | |
CN109101985A (en) | It is a kind of based on adaptive neighborhood test image mismatch point to elimination method | |
CN112686265A (en) | Hierarchic contour extraction-based pictograph segmentation method | |
CN110163894B (en) | Subpixel-level target tracking method based on feature matching |
Legal Events
Date | Code | Title | Description |
---|---|---|---|
PB01 | Publication | ||
PB01 | Publication | ||
SE01 | Entry into force of request for substantive examination | ||
SE01 | Entry into force of request for substantive examination | ||
GR01 | Patent grant | ||
GR01 | Patent grant |