CN112700459A - Level set infrared image segmentation method based on multi-feature information fusion - Google Patents


Info

Publication number
CN112700459A
CN112700459A (application CN202011638494.8A)
Authority
CN
China
Prior art keywords: image, gray, value, fitting, entropy
Prior art date
Legal status (The legal status is an assumption and is not a legal conclusion. Google has not performed a legal analysis and makes no representation as to the accuracy of the status listed.)
Granted
Application number
CN202011638494.8A
Other languages
Chinese (zh)
Other versions
CN112700459B (en)
Inventor
黄琴燕
顾国华
万敏杰
陈钱
钱惟贤
任侃
路东明
马超
王佳节
陈欣
许运凯
Current Assignee
Nanjing Ligong Chengao Optoelectronics Technology Co ltd
Nanjing University of Science and Technology
Original Assignee
Nanjing Ligong Chengao Optoelectronics Technology Co ltd
Nanjing University of Science and Technology
Priority date
Filing date
Publication date
Application filed by Nanjing Ligong Chengao Optoelectronics Technology Co ltd, Nanjing University of Science and Technology filed Critical Nanjing Ligong Chengao Optoelectronics Technology Co ltd
Priority to CN202011638494.8A priority Critical patent/CN112700459B/en
Publication of CN112700459A publication Critical patent/CN112700459A/en
Application granted granted Critical
Publication of CN112700459B publication Critical patent/CN112700459B/en
Active legal-status Critical Current
Anticipated expiration legal-status Critical

Classifications

    • G: PHYSICS
    • G06: COMPUTING; CALCULATING OR COUNTING
    • G06T: IMAGE DATA PROCESSING OR GENERATION, IN GENERAL
    • G06T7/00: Image analysis
    • G06T7/10: Segmentation; Edge detection
    • G06T7/11: Region-based segmentation
    • G: PHYSICS
    • G06: COMPUTING; CALCULATING OR COUNTING
    • G06F: ELECTRIC DIGITAL DATA PROCESSING
    • G06F18/00: Pattern recognition
    • G06F18/20: Analysing
    • G06F18/23: Clustering techniques
    • G: PHYSICS
    • G06: COMPUTING; CALCULATING OR COUNTING
    • G06T: IMAGE DATA PROCESSING OR GENERATION, IN GENERAL
    • G06T7/00: Image analysis
    • G06T7/10: Segmentation; Edge detection
    • G06T7/187: Segmentation; Edge detection involving region growing; involving region merging; involving connected component labelling
    • G: PHYSICS
    • G06: COMPUTING; CALCULATING OR COUNTING
    • G06V: IMAGE OR VIDEO RECOGNITION OR UNDERSTANDING
    • G06V10/00: Arrangements for image or video recognition or understanding
    • G06V10/20: Image preprocessing
    • G06V10/26: Segmentation of patterns in the image field; Cutting or merging of image elements to establish the pattern region, e.g. clustering-based techniques; Detection of occlusion
    • G06V10/267: Segmentation of patterns in the image field; Cutting or merging of image elements to establish the pattern region, e.g. clustering-based techniques; Detection of occlusion by performing operations on regions, e.g. growing, shrinking or watersheds
    • G: PHYSICS
    • G06: COMPUTING; CALCULATING OR COUNTING
    • G06V: IMAGE OR VIDEO RECOGNITION OR UNDERSTANDING
    • G06V10/00: Arrangements for image or video recognition or understanding
    • G06V10/20: Image preprocessing
    • G06V10/28: Quantising the image, e.g. histogram thresholding for discrimination between background and foreground patterns
    • G: PHYSICS
    • G06: COMPUTING; CALCULATING OR COUNTING
    • G06T: IMAGE DATA PROCESSING OR GENERATION, IN GENERAL
    • G06T2207/00: Indexing scheme for image analysis or image enhancement
    • G06T2207/10: Image acquisition modality
    • G06T2207/10048: Infrared image
    • G: PHYSICS
    • G06: COMPUTING; CALCULATING OR COUNTING
    • G06T: IMAGE DATA PROCESSING OR GENERATION, IN GENERAL
    • G06T2207/00: Indexing scheme for image analysis or image enhancement
    • G06T2207/20: Special algorithmic details
    • G06T2207/20024: Filtering details
    • G: PHYSICS
    • G06: COMPUTING; CALCULATING OR COUNTING
    • G06T: IMAGE DATA PROCESSING OR GENERATION, IN GENERAL
    • G06T2207/00: Indexing scheme for image analysis or image enhancement
    • G06T2207/20: Special algorithmic details
    • G06T2207/20081: Training; Learning
    • G: PHYSICS
    • G06: COMPUTING; CALCULATING OR COUNTING
    • G06T: IMAGE DATA PROCESSING OR GENERATION, IN GENERAL
    • G06T2207/00: Indexing scheme for image analysis or image enhancement
    • G06T2207/30: Subject of image; Context of image processing
    • G06T2207/30196: Human being; Person

Landscapes

  • Engineering & Computer Science (AREA)
  • Theoretical Computer Science (AREA)
  • General Physics & Mathematics (AREA)
  • Physics & Mathematics (AREA)
  • Computer Vision & Pattern Recognition (AREA)
  • Data Mining & Analysis (AREA)
  • Multimedia (AREA)
  • Bioinformatics & Computational Biology (AREA)
  • Bioinformatics & Cheminformatics (AREA)
  • Evolutionary Biology (AREA)
  • Evolutionary Computation (AREA)
  • Artificial Intelligence (AREA)
  • General Engineering & Computer Science (AREA)
  • Life Sciences & Earth Sciences (AREA)
  • Image Analysis (AREA)

Abstract

The invention discloses a level set infrared image segmentation method based on multi-feature information fusion. The method comprises the following steps: first, an infrared image to be segmented is input and the contour curve is initialized as a binary function; next, a gray feature image, a local entropy feature image and a local standard deviation feature image are constructed, along with a gray fitting image, an entropy fitting image and a standard deviation fitting image; then, the similarity differences between the feature images and the feature fitting images are compared to obtain a symbol pressure function driven by gray information, one driven by entropy information and one driven by standard deviation information, and the three functions are added and normalized to obtain the final symbol pressure function; finally, the final symbol pressure function is substituted into the level set evolution equation for evolution, each evolution result is regularized with a Gaussian filter until the equation converges, and the segmentation result is output. The method improves both the accuracy and the efficiency of segmenting infrared images with uneven gray scale.

Description

Level set infrared image segmentation method based on multi-feature information fusion
Technical Field
The invention relates to the technical field of infrared target segmentation, in particular to a level set infrared image segmentation method based on multi-feature information fusion.
Background
Image segmentation is an important part of the image processing pipeline; many application fields involving images, such as aerospace engineering, geological exploration and security monitoring, rely on image segmentation technology, and its quality directly affects the accuracy of later-stage image processing. In recent years, level-set-based image segmentation methods express the target contour as the zero level set of a higher-dimensional level set function: solving the level set evolution equation is a curve evolution process, and the zero level set at convergence is the target segmentation result. Because this approach yields smooth, closed segmentation contours and handles topological changes during curve evolution, it has received wide attention. Infrared images are characterized by low contrast, uneven gray scale and blurred boundaries; conventional level set segmentation methods usually drive curve evolution with only a single image feature, so they cannot obtain an ideal segmentation result on infrared images, and wrong or missed segmentation occurs from time to time. Therefore, how to correctly segment infrared images with uneven gray scale remains a research hotspot in the field of infrared image processing.
Traditional level set segmentation methods fall into two categories: methods based on edge information and methods based on region information. Edge-based methods use image gradient information to build an edge-stopping function so that the evolving curve stops near the target boundary, yielding the segmentation result; an example is the GAC model proposed by Caselles et al. (Caselles V, Kimmel R, Sapiro G. Geodesic active contours[C]//Proceedings of IEEE International Conference on Computer Vision. IEEE, 1995: 694-699.). Region-based methods use the gray mean information inside and outside the contour to build an energy function and drive the curve toward the target edge by minimizing it, as in the CV model (Chan T F, Vese L A. Active contours without edges[J]. IEEE Transactions on Image Processing, 2001, 10(2): 266-277.). Zhang et al. combined the advantages of the CV and GAC models to construct the SLGS model (Zhang K, Zhang L, Song H, et al. Active contours with selective local or global segmentation: a new formulation and level set method[J]. Image and Vision Computing, 2010, 28(4): 668-676.), which can effectively segment images with blurred boundaries and is insensitive to the initial contour position; however, that model is still based on the assumption of uniform gray scale and cannot achieve an ideal effect on images with non-uniform gray scale.
Disclosure of Invention
The invention aims to provide an infrared image segmentation method with high accuracy and segmentation efficiency.
The technical solution for realizing the purpose of the invention is as follows: a level set image segmentation method based on multi-feature information weighted fusion comprises the following steps:
step 1, inputting an infrared image to be segmented, and initializing an initial contour into a binary function;
step 2, solving the input image to obtain a gray characteristic image, a local entropy characteristic image and a standard deviation characteristic image;
step 3, solving global characteristic information and local characteristic information inside and outside the contour, and constructing a gray fitting image, an entropy fitting image and a standard deviation fitting image;
step 4, comparing similarity differences of the feature image in the step 2 and the feature fitting image in the step 3 to respectively obtain a symbol pressure function driven by gray information, a symbol pressure function driven by entropy information and a symbol pressure function driven by standard deviation information, and adding and normalizing the three functions to obtain a final symbol pressure function;
step 5, substituting the final symbol pressure function into the level set evolution equation for evolution, regularizing each evolution result with a Gaussian filter until the equation converges, and outputting a segmentation result.
Compared with the prior art, the invention has the following remarkable advantages: (1) the feature fitting images are constructed from both global and local feature fitting values; the global information keeps the curve evolution from falling into local minima, while the local information provides more detail for the curve in gray-uneven regions, improving the accuracy and robustness of infrared image segmentation; (2) the symbol pressure function is composed of gray information, entropy information and standard deviation information, so the feature information it contains is richer and it provides a more accurate driving force for curve evolution, improving the accuracy of image segmentation; (3) the contour is initialized as a binary function and the curve evolution result is regularized by a Gaussian kernel after each iteration, avoiding re-initialization calculations during evolution and improving the operating efficiency of the algorithm.
Drawings
FIG. 1 is a flow chart of a multi-feature information-driven level set infrared image segmentation method according to the present invention.
FIG. 2 is a comparison graph of the segmentation results of the infrared test image using the method of the present invention and the prior art level set method in the embodiment of the present invention.
FIG. 3 is a comparison graph of the results of binary segmentation of an infrared test image using the method of the present invention and a prior art level set method in an embodiment of the present invention.
Detailed Description
The invention is described in further detail below with reference to the figures and the specific embodiments.
With reference to fig. 1, the invention relates to a level set image segmentation method based on multi-feature information weighted fusion, which comprises the following steps:
step 1, inputting an infrared image to be segmented, and initializing the contour curve as a binary function, specifically as follows:
Initialize the level set function φ as a binary function with opposite signs inside and outside the contour:
φ(x, y) = -c0, (x, y) ∈ Ω0 - ∂Ω0;  φ(x, y) = 0, (x, y) ∈ ∂Ω0;  φ(x, y) = c0, (x, y) ∈ Ω - Ω0
where Ω represents the input two-dimensional image, Ω0 is a subset of the image domain, ∂Ω0 represents the boundary of Ω0, and c0 represents a constant greater than 0.
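As a concrete illustration, step 1 can be sketched in a few lines of Python; the rectangular seed region and the helper name below are illustrative assumptions, since the patent only requires opposite constant signs inside and outside the initial contour (the zeros on the boundary are omitted here for simplicity).

```python
import numpy as np

def init_binary_level_set(shape, rect, c0=1.0):
    # Binary initialization of the level set: -c0 inside the seed
    # region Omega_0, +c0 outside it. `rect` = (top, bottom, left,
    # right) is a hypothetical rectangular choice of Omega_0.
    phi = c0 * np.ones(shape, dtype=float)
    top, bottom, left, right = rect
    phi[top:bottom, left:right] = -c0
    return phi

phi = init_binary_level_set((120, 160), (20, 100, 40, 120))
```

Because this φ is piecewise constant rather than a signed distance function, no re-initialization is ever needed; the Gaussian regularization of step 5.2 keeps the evolution stable instead.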
Step 2, solving the input image to obtain a gray characteristic image I, a local entropy characteristic image I _ en and a local standard deviation characteristic image I _ std, wherein the specific steps are as follows:
step 2.1, obtaining a gray level image I (x, y) according to the gray level value of the input infrared image I;
step 2.2, for any pixel point (x, y) in the gray image I(x, y), set a 9×9 local window Wn centered at (x, y); the local entropy image I_en(x, y) is:
I_en(x, y) = -Σ_{i=0}^{L-1} p_i·log(p_i), with p_i = n_i / N
where L is the total number of gray levels in the local window, n_i is the number of pixels with gray value i, N is the total number of pixels in the window, and p_i is the probability that a pixel with gray value i occurs in the window. At the target boundary the gray values change most strongly and the window contains the most gray levels, so the local entropy there is larger; conversely, within the target or background regions the gray variation is relatively small, and the corresponding local entropy is also small.
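The local entropy of step 2.2 can be sketched as a direct looped reference implementation; base-2 logarithms and the 8-bit gray range are assumptions, since the text does not fix the log base, and the function name is illustrative.

```python
import numpy as np

def local_entropy(img, win=9, levels=256):
    # Local Shannon entropy over a win x win window centered at each
    # pixel: I_en(x, y) = -sum_i p_i * log2(p_i), with p_i = n_i / N.
    pad = win // 2
    padded = np.pad(img, pad, mode='reflect')
    out = np.zeros(img.shape, dtype=float)
    for r in range(img.shape[0]):
        for c in range(img.shape[1]):
            w = padded[r:r + win, c:c + win]
            counts = np.bincount(w.ravel().astype(np.int64), minlength=levels)
            p = counts[counts > 0] / w.size  # probabilities of the gray values present
            out[r, c] = -np.sum(p * np.log2(p))
    return out
```

A flat window scores 0, while a window straddling a boundary contains more gray levels and scores higher, matching the boundary behavior described above.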
Step 2.3, the local standard deviation image of the gray level image I (x, y) is as follows:
Figure BDA0002879267420000034
wherein mu is the gray average value of all pixel points in the local window, and the calculation formula is
Figure BDA0002879267420000035
When the target edge is approached, the gray level change in the local window is large, and the corresponding standard deviation is large; otherwise, when the standard deviation in the local window is smaller, the gray level change in the window is more smooth, and the probability of the edge is smaller.
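The local standard deviation image of step 2.3 can be computed without explicit loops via the identity var = E[I²] - E[I]²; the sketch below assumes SciPy is available and reuses the 9×9 window.

```python
import numpy as np
from scipy.ndimage import uniform_filter

def local_std(img, win=9):
    # Local standard deviation over a win x win window:
    # I_std = sqrt(mean(I^2) - mean(I)^2), clipped at 0 to absorb
    # small negative values caused by floating-point rounding.
    img = img.astype(float)
    mu = uniform_filter(img, size=win, mode='reflect')         # local mean
    mu2 = uniform_filter(img * img, size=win, mode='reflect')  # local mean of squares
    return np.sqrt(np.maximum(mu2 - mu * mu, 0.0))
```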
Step 3, solving the global characteristic information and the local characteristic information inside and outside the contour, and constructing a gray fitting image ILFTEntropy fitting image I _ enLFTSum standard deviation fitted image I _ stdLFTThe method comprises the following steps:
step 3.1, solving the global gray fitting value and the local gray fitting value inside and outside the contour, and constructing the gray fitting image:
For the original gray feature image I, the contour curve divides it into two regions; the gray mean constants c_in and c_out inside and outside the contour are computed as:
c_in = ∫ I(x, y)·H(φ) dx dy / ∫ H(φ) dx dy
c_out = ∫ I(x, y)·(1 - H(φ)) dx dy / ∫ (1 - H(φ)) dx dy
where c_in is the gray mean inside the contour, c_out is the gray mean outside the contour, and H(φ) is the Heaviside function:
H(φ) = 1 for φ ≥ 0,  H(φ) = 0 for φ < 0
According to the LBF model, for any pixel point (x, y) in the image, the local gray information of the image can be embedded with a Gaussian kernel; the gray means f_in and f_out inside and outside the contour within its neighborhood are computed as:
f_in = K_β * (I·H(φ)) / (K_β * H(φ))
f_out = K_β * (I·(1 - H(φ))) / (K_β * (1 - H(φ)))
where f_in is the gray mean inside the contour within the neighborhood of pixel point (x, y), f_out is the gray mean outside the contour within that neighborhood, K_β is a Gaussian kernel whose standard deviation β is set to 3.0, and * denotes the convolution operation.
Combining the global gray mean information and the local gray information with the weight coefficient ω, the gray fitting values C1 and C2 inside and outside the contour are calculated as:
C1 = ω·c_in + (1 - ω)·f_in
C2 = ω·c_out + (1 - ω)·f_out
where C1 is the gray fitting value inside the contour, C2 is the gray fitting value outside the contour, and ω is a constant that adjusts the weights of the global and local terms, set to 0.5.
By the property of the Heaviside function, H(φ) = 1 when φ > 0 and H(φ) = 0 when φ < 0, so the gray fitting image is defined as:
I_LFI = C1·H(φ) + C2·(1 - H(φ))
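Step 3.1 can be sketched as follows; the hard Heaviside H = (φ > 0), the use of `scipy.ndimage.gaussian_filter` as the kernel K_β, and the small eps guarding empty regions are implementation assumptions.

```python
import numpy as np
from scipy.ndimage import gaussian_filter

def gray_fitting_image(I, phi, omega=0.5, beta=3.0, eps=1e-10):
    # Global means c_in/c_out and Gaussian-weighted local means
    # f_in/f_out, blended by omega into C1/C2 and assembled into
    # I_LFI = C1*H(phi) + C2*(1 - H(phi)).
    H = (phi > 0).astype(float)
    c_in = (I * H).sum() / (H.sum() + eps)               # global mean inside
    c_out = (I * (1 - H)).sum() / ((1 - H).sum() + eps)  # global mean outside
    K = lambda x: gaussian_filter(x, beta)               # K_beta as a Gaussian blur
    f_in = K(I * H) / (K(H) + eps)                       # local mean inside
    f_out = K(I * (1 - H)) / (K(1 - H) + eps)            # local mean outside
    C1 = omega * c_in + (1 - omega) * f_in
    C2 = omega * c_out + (1 - omega) * f_out
    return C1 * H + C2 * (1 - H)
```

The entropy and standard deviation fitting images of steps 3.2 and 3.3 follow exactly the same pattern, with I replaced by I_en or I_std.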
step 3.2, solving the global entropy fitting value and the local entropy fitting value inside and outside the contour, and constructing the entropy feature fitting image:
For the entropy feature image I_en, the contour curve divides it into two regions; the entropy mean constants inside and outside the contour are computed as:
en_in = ∫ I_en(x, y)·H(φ) dx dy / ∫ H(φ) dx dy
en_out = ∫ I_en(x, y)·(1 - H(φ)) dx dy / ∫ (1 - H(φ)) dx dy
where en_in is the entropy mean inside the contour and en_out is the entropy mean outside the contour.
For any pixel point (x, y) in the image, the local entropy information of the image is embedded with the Gaussian kernel; the entropy means inside and outside the contour within its neighborhood are computed as:
entro_in = K_β * (I_en·H(φ)) / (K_β * H(φ))
entro_out = K_β * (I_en·(1 - H(φ))) / (K_β * (1 - H(φ)))
where entro_in is the local entropy mean inside the neighborhood contour of pixel point (x, y) and entro_out is the local entropy mean outside it.
Through the weight coefficient ω, the global and local entropy information are combined to calculate the entropy fitting values E1 and E2 inside and outside the contour:
E1 = ω·en_in + (1 - ω)·entro_in
E2 = ω·en_out + (1 - ω)·entro_out
where E1 is the entropy fitting value inside the contour and E2 is the entropy fitting value outside the contour; the entropy fitting image is defined as:
I_en_LFI = E1·H(φ) + E2·(1 - H(φ))
step 3.3, solving the global standard deviation fitting value and the local standard deviation fitting value inside and outside the contour, and constructing the local standard deviation feature fitting image:
For the local standard deviation feature image I_std, the contour curve divides it into two regions; the standard deviation mean constants inside and outside the contour are computed as:
s_in = ∫ I_std(x, y)·H(φ) dx dy / ∫ H(φ) dx dy
s_out = ∫ I_std(x, y)·(1 - H(φ)) dx dy / ∫ (1 - H(φ)) dx dy
where s_in is the mean of the standard deviations inside the contour and s_out is the mean of the standard deviations outside the contour.
For any pixel point (x, y) in the image, the local standard deviation information of the image is embedded with the Gaussian kernel; the standard deviation means inside and outside the contour within its neighborhood are computed as:
std_in = K_β * (I_std·H(φ)) / (K_β * H(φ))
std_out = K_β * (I_std·(1 - H(φ))) / (K_β * (1 - H(φ)))
where std_in is the standard deviation information inside the neighborhood contour of pixel point (x, y) and std_out is the standard deviation information outside it.
Combining the global and local standard deviation information through the weight coefficient ω, the standard deviation fitting values S1 and S2 inside and outside the contour are calculated as:
S1 = ω·s_in + (1 - ω)·std_in
S2 = ω·s_out + (1 - ω)·std_out
where S1 is the standard deviation fitting value inside the contour and S2 is the standard deviation fitting value outside the contour; the standard deviation fitting image is defined as:
I_std_LFI = S1·H(φ) + S2·(1 - H(φ))
step 4, comparing the similarity differences between the characteristic images in step 2 and the characteristic fitting images in step 3 to respectively obtain the symbol pressure function spf_i driven by gray information, the symbol pressure function spf_en driven by entropy information, and the symbol pressure function spf_std driven by standard deviation information, then adding and normalizing the three functions to obtain the final symbol pressure function spf_total, specifically as follows:
step 4.1, comparing the similarity difference between each characteristic image and its fitting image to determine the evolution direction of each pixel point on the evolving curve:
spf_i(x, y) = I(x, y) - I_LFI(x, y)
spf_en(x, y) = I_en(x, y) - I_en_LFI(x, y)
spf_std(x, y) = I_std(x, y) - I_std_LFI(x, y)
where spf_i, spf_en and spf_std are the symbol pressure functions driven by gray information, entropy information and standard deviation information, respectively;
step 4.2, performing a weighted combination of the three functions to obtain the final symbol pressure function:
spf_total(x, y) = spf_i(x, y) + spf_en(x, y) + spf_std(x, y)
where spf_total is normalized to the interval [-1, 1].
From the above equation, it can be seen that the newly constructed symbol pressure function spf_total is driven by three kinds of information: gray, entropy and standard deviation. The feature information it contains is richer, and it can provide a more accurate evolution direction for the curve in regions of uneven gray scale, so the method can obtain a more accurate segmentation result.
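In code, steps 4.1 and 4.2 reduce to three image differences, a sum, and a normalization; the sketch below assumes normalization by the maximum absolute value, since the patent states only that spf_total is normalized to [-1, 1].

```python
import numpy as np

def total_spf(I, I_lfi, I_en, I_en_lfi, I_std, I_std_lfi):
    # spf_total = spf_i + spf_en + spf_std, each term being the
    # difference between a feature image and its fitting image,
    # then scaled into [-1, 1].
    spf = (I - I_lfi) + (I_en - I_en_lfi) + (I_std - I_std_lfi)
    m = np.abs(spf).max()
    return spf / m if m > 0 else spf
```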
Step 5, substituting the final symbol pressure function spf_total into the level set evolution equation for iterative evolution, regularizing each evolution result with a Gaussian filter G_σφ until the equation converges, and outputting the segmentation result, specifically as follows:
Step 5.1, with the final symbol pressure function, the level set evolution equation is rewritten as:
∂φ/∂t = spf_total(x, y)·α·|∇φ|
φ_t = φ_{t-1} + Δt·(∂φ/∂t)
where φ_{t-1} represents the result of the (t-1)-th evolution and φ_t represents the state of the level set after the t-th evolution; Δt represents the time step, typically set to the constant 1; ∇ denotes the gradient operator; α is the balloon force required for the contour evolution: when α is larger the curve converges faster but with reduced precision, and when α is smaller the convergence precision is higher but the convergence speed is slow; it is generally set to the constant 400.
Step 5.2, to avoid re-initialization calculations during the iterative process, each evolution result is regularized with a Gaussian filter function, namely:
φ_{t+1} = φ_t * G_σφ
where G_σφ represents a Gaussian filter with standard deviation σ_φ, typically set between 0.8 and 1.5 and set to 1.0 in the present invention, and φ_{t+1} represents the initial state of the level set for the (t+1)-th evolution after Gaussian filtering.
Step 5.3, to stop the evolution process in time, a convergence threshold δ = 10^-5 is set as the curve evolution stopping condition: when |φ_t - φ_{t-1}| < δ, the curve stops evolving, and the zero level set at that moment is the image segmentation result.
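Putting step 5 together, a minimal sketch of the evolution loop is given below; for brevity the symbol pressure function is held fixed, whereas the full method would recompute the fitting images and spf_total from the current φ at each iteration. The parameter defaults follow the constants named above (Δt = 1, α = 400, σ_φ = 1.0, δ = 10^-5).

```python
import numpy as np
from scipy.ndimage import gaussian_filter

def evolve(phi, spf, alpha=400.0, dt=1.0, sigma_phi=1.0,
           delta=1e-5, max_iter=500):
    # phi_t = phi_{t-1} + dt * spf * alpha * |grad(phi)|, followed by
    # Gaussian regularization in place of re-initialization; stops
    # when the largest update falls below the threshold delta.
    for _ in range(max_iter):
        gy, gx = np.gradient(phi)
        grad_mag = np.sqrt(gx ** 2 + gy ** 2)
        phi_new = gaussian_filter(phi + dt * spf * alpha * grad_mag, sigma_phi)
        if np.abs(phi_new - phi).max() < delta:
            return phi_new
        phi = phi_new
    return phi

# The segmentation is the zero level set, e.g. the mask (phi > 0).
```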
Example 1
This example compares the present invention with four conventional level set segmentation methods: GAC, CV, LPF and LIC. The level set function is uniformly initialized as a binary function, with an 80×80 initial contour placed at the center of the image. The experiments were run on the Matlab 2016a platform under the Windows 10 operating system; the results are shown in FIG. 2.
Column 1 of FIG. 2 shows the original input test images, columns 2 to 6 show the results of GAC, CV, LPF, LIC and the method of the present invention in sequence, and column 7 of FIG. 2 shows the segmentation ground truth. The GAC model builds an edge-stopping function from image gradient information so that the curve stops evolving near the object edge, but it requires the contour to be initialized at a suitable position; otherwise an accurate segmentation cannot be obtained. In this simulation the contour is initialized in the middle of the image and does not fully contain the target boundary, so the segmentation effect is not ideal. The CV model minimizes an energy function using the gray mean information inside and outside the contour, but when the image contains gray-uneven regions it tends to over-segment; for example, the abdomen and leg regions of the human body in the test images are over-segmented. The LPF model is a classical segmentation model that builds its energy function from local gray information, but the lack of global information makes it prone to falling into local minima. As its segmentation results show, the curve stops evolving inside the target, so the target is wrongly split into several regions and no complete target contour is obtained. The LIC model estimates the gray level of each pixel neighborhood with a clustering criterion function and corrects the gray scale with the estimated bias field, but it is only suitable for images with simple backgrounds; when the background is complex the contour curve does not converge easily, as seen in column 5 of FIG. 2, where many evolution curves fail to converge in the background region, causing wrong segmentation. In addition, the LIC model does not consider clustering differences, so segmentation errors still occur in gray-uneven regions. The proposed method drives curve evolution with three kinds of feature information, gray, entropy and standard deviation, taking global information into account while containing more local feature information; therefore, compared with the other methods, it segments uneven infrared images more effectively.
FIG. 3 shows the binary segmentation results: columns 1 to 5 of FIG. 3 are GAC, CV, LPF, LIC and the method of the present invention in sequence, and column 6 of FIG. 3 is the image ground truth. To further compare the segmentation results of the different methods, the F value is introduced as a quantitative evaluation index, computed as:
F = (1 + β²)·P·R / (β²·P + R)
where P is the precision, R is the recall, and β is usually set to the constant 1.0; the larger the F value, the more accurate the segmentation result. As can be seen from Table 1, the GAC model has the smallest F value on each test image, because the method can accurately segment the object only when the contour is initialized at the target boundary, and that initial condition is not satisfied in this experiment. The CV model attains a more ideal F value than the GAC model, but when an object includes gray-uneven regions, boundary leakage and over-segmentation are likely to occur, and the F value then cannot satisfy the segmentation requirement. Because the LPF model builds its energy function from local gray information only, the evolving curve easily falls into local minima, so the object is wrongly split into several subregions and the F value is also small. Since the LIC model cannot handle images with complex backgrounds, its F value is larger on some test images and not ideal on others. The proposed method obtains an ideal F value on every test image, and its average F value over all test images is the largest. In general, the method has the highest accuracy for segmenting uneven infrared images.
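As a quick reference, the F value is the weighted harmonic mean of precision and recall:

```python
def f_measure(precision, recall, beta=1.0):
    # F = (1 + beta^2) * P * R / (beta^2 * P + R); beta = 1 gives the
    # usual F1 score, and F is taken as 0 when both P and R are 0.
    if precision + recall == 0:
        return 0.0
    return (1 + beta ** 2) * precision * recall / (beta ** 2 * precision + recall)
```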
Table 1: f value comparison table
Table 2 is a comparison table of the run times of the methods, and the GAC model has a run time of approximately 7 times that of the present invention due to the non-convergence of its energy function. The CV model utilizes the global mean gray value information to construct an energy function, which has a significantly increased operating speed compared to the GAC model, but still lower than the operating speed of the method of the present invention. Both the LPF model and the LIC model establish a local energy function by estimating a local gray value, and then construct a total energy functional by integrating the local energy function, so that the local energy functional needs more operation time to be minimized, and the efficiency is still not ideal. Compared with other methods, the method has the advantages that the running time of each test image is in the front row, the total running time is the shortest, and the efficiency is the highest.
Table 2: run time comparison table (Unit: second)
In summary, by using multi-feature information to jointly drive the evolution of the level set equation, the invention can effectively segment non-uniform infrared images, and both its segmentation accuracy and its segmentation efficiency are higher than those of traditional level set methods.

Claims (6)

1. A level set infrared image segmentation method based on multi-feature information fusion is characterized by comprising the following steps:
step 1, inputting an infrared image to be segmented, and initializing a contour curve into a binary function;
step 2, solving the input image to obtain a gray characteristic image I, a local entropy characteristic image I_en and a local standard deviation characteristic image I_std;
step 3, solving the global characteristic information and the local characteristic information inside and outside the contour, and constructing a gray fitting image I_LFI, an entropy fitting image I_en_LFI and a standard deviation fitting image I_std_LFI;
step 4, comparing the similarity differences between the characteristic images in step 2 and the characteristic fitting images in step 3 to respectively obtain the symbol pressure function spf_i driven by gray information, the symbol pressure function spf_en driven by entropy information and the symbol pressure function spf_std driven by standard deviation information, and adding and normalizing the three functions to obtain the final symbol pressure function spf_total;
step 5, substituting the final symbol pressure function spf_total into the level set evolution equation φ, regularizing each evolution result with a Gaussian filter G_σφ until the equation converges, and outputting the segmentation result.
2. The level set infrared image segmentation method based on multi-feature information fusion according to claim 1, wherein in step 1 the infrared image to be segmented is input and the contour curve is initialized as a binary function, specifically as follows:
the level set equation φ is initialized as a binary function with opposite signs inside and outside the contour, expressed as:
φ(x, y, t = 0) =
  −c0, (x, y) ∈ Ω0 − ∂Ω0
  0,   (x, y) ∈ ∂Ω0
  c0,  (x, y) ∈ Ω − Ω0
where Ω is the input two-dimensional image domain, Ω0 ⊂ Ω is a subset region of the image, ∂Ω0 is the boundary of Ω0, and c0 is a constant greater than 0.
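As a rough illustration of the binary initialization in claim 2, the following NumPy sketch builds such a level set function; the rectangular seed region, the inside-negative sign convention, and c0 = 2 are illustrative assumptions, since the claim only requires opposite signs inside and outside the contour and c0 > 0:

```python
import numpy as np

def init_level_set(shape, seed_box, c0=2.0):
    """Binary level set: -c0 inside the seed region Omega_0, +c0
    outside it (seed shape and sign convention are assumptions)."""
    phi = c0 * np.ones(shape, dtype=float)
    r0, r1, col0, col1 = seed_box
    phi[r0:r1, col0:col1] = -c0
    return phi

phi = init_level_set((64, 64), (16, 48, 16, 48))
print(phi[32, 32], phi[0, 0])  # -2.0 inside the seed, +2.0 outside
```

Because the initialization is a step function rather than a signed distance function, the later Gaussian regularization step is what keeps the evolution numerically stable.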
3. The level set infrared image segmentation method based on multi-feature information fusion according to claim 1, wherein the gray feature image I, the local entropy feature image I_en, and the local standard deviation feature image I_std are computed from the input image in step 2 as follows:
step 2.1, converting the input image to grayscale to obtain the gray feature image I(x, y);
step 2.2, for any pixel (x, y) in the gray image I(x, y), setting a 9 × 9 local window W_n centered on (x, y); the local entropy image I_en(x, y) is:
I_en(x, y) = −Σ_{i=0}^{L−1} p_i · log(p_i)
where L is the total number of gray levels in the local window, n_i is the number of pixels with gray value i, and p_i = n_i / N is the probability of a pixel with gray value i occurring in the window, N being the number of pixels in W_n;
step 2.3, the local standard deviation image I_std(x, y) of the gray image I(x, y) is:
I_std(x, y) = sqrt( (1/N) · Σ_{(s,t)∈W_n} ( I(s, t) − μ )² )
where μ is the mean gray value of all pixels in the local window, computed as:
μ = (1/N) · Σ_{(s,t)∈W_n} I(s, t)
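A brute-force NumPy sketch of steps 2.2 and 2.3; the 9 × 9 window and the entropy and standard deviation definitions follow the claim, while reflect padding at the image border is an assumption the patent does not specify:

```python
import numpy as np

def local_features(I, win=9):
    """Per-pixel local entropy and local standard deviation over a
    win x win window, computed naively for clarity (a practical
    implementation would use integral images or a filtering library)."""
    pad = win // 2
    Ip = np.pad(I, pad, mode='reflect')   # border handling: assumption
    H, W = I.shape
    ent = np.zeros((H, W))
    std = np.zeros((H, W))
    for y in range(H):
        for x in range(W):
            w = Ip[y:y + win, x:x + win]
            counts = np.bincount(w.astype(np.int64).ravel())
            p = counts[counts > 0] / w.size     # p_i = n_i / N
            ent[y, x] = -(p * np.log(p)).sum()  # local entropy
            std[y, x] = w.std()                 # local standard deviation
    return ent, std

I = np.zeros((20, 20), dtype=np.uint8)
I[:, 10:] = 200                        # vertical step edge
ent, std = local_features(I)
```

Both features vanish in flat regions and peak along the edge, which is what lets them discriminate target boundaries in non-uniform infrared images.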
4. The level set infrared image segmentation method based on multi-feature information fusion according to claim 1, wherein in step 3 the global and local feature information inside and outside the contour are solved and the gray fitting image I_LFI, the entropy fitting image I_en_LFI, and the standard deviation fitting image I_std_LFI are constructed as follows:
step 3.1, solving the global gray fitting value and the local gray fitting value inside and outside the contour, and constructing a gray fitting image:
for the gray image I, solving the gray mean values c_in and c_out inside and outside the contour by:
c_in = ∫_Ω I(x, y) · H(φ) dx dy / ∫_Ω H(φ) dx dy
c_out = ∫_Ω I(x, y) · (1 − H(φ)) dx dy / ∫_Ω (1 − H(φ)) dx dy
for each pixel (x, y) in the image, solving the local gray mean values f_in and f_out inside and outside the contour within its neighborhood by:
f_in(x, y) = K_σ * [ I(x, y) · H(φ) ] / K_σ * H(φ)
f_out(x, y) = K_σ * [ I(x, y) · (1 − H(φ)) ] / K_σ * [ 1 − H(φ) ]
where K_σ is a Gaussian window function and * denotes convolution;
fitting the gray values inside and outside the contour by combining the global gray means and the local gray means with a weight coefficient ω:
C1 = ω · c_in + (1 − ω) · f_in
C2 = ω · c_out + (1 − ω) · f_out
where C1 is the gray fitting value inside the contour and C2 is the gray fitting value outside the contour;
the grayscale fit image was constructed using the Heaviside function combination C1 and C2, as follows:
ILFI=C1·H(φ)+C2·(1-H(φ))
step 3.2, solving global entropy fitting values and local entropy fitting values inside and outside the contour, and constructing an entropy fitting image:
for the entropy image I_en, solving the entropy mean values en_in and en_out inside and outside the contour by:
en_in = ∫_Ω I_en(x, y) · H(φ) dx dy / ∫_Ω H(φ) dx dy
en_out = ∫_Ω I_en(x, y) · (1 − H(φ)) dx dy / ∫_Ω (1 − H(φ)) dx dy
for each pixel (x, y), solving the local entropy mean values entro_in and entro_out inside and outside the contour within its neighborhood by:
entro_in(x, y) = K_σ * [ I_en(x, y) · H(φ) ] / K_σ * H(φ)
entro_out(x, y) = K_σ * [ I_en(x, y) · (1 − H(φ)) ] / K_σ * [ 1 − H(φ) ]
fitting the entropy values inside and outside the contour by combining the global and local entropy means with the coefficient ω:
E1 = ω · en_in + (1 − ω) · entro_in
E2 = ω · en_out + (1 − ω) · entro_out
where E1 is the entropy fitting value inside the contour and E2 is the entropy fitting value outside the contour;
the entropy fitting image is defined as:
I_en_LFI = E1 · H(φ) + E2 · (1 − H(φ))
step 3.3, solving the global standard deviation fitting value and the local standard deviation fitting value inside and outside the contour, and constructing a standard deviation fitting image:
for the standard deviation image I_std, solving the standard deviation mean values s_in and s_out inside and outside the contour by:
s_in = ∫_Ω I_std(x, y) · H(φ) dx dy / ∫_Ω H(φ) dx dy
s_out = ∫_Ω I_std(x, y) · (1 − H(φ)) dx dy / ∫_Ω (1 − H(φ)) dx dy
for each pixel (x, y), solving the local standard deviation mean values std_in and std_out inside and outside the contour within its neighborhood by:
std_in(x, y) = K_σ * [ I_std(x, y) · H(φ) ] / K_σ * H(φ)
std_out(x, y) = K_σ * [ I_std(x, y) · (1 − H(φ)) ] / K_σ * [ 1 − H(φ) ]
fitting the standard deviation values inside and outside the contour by combining the global and local standard deviation means with the coefficient ω:
S1 = ω · s_in + (1 − ω) · std_in
S2 = ω · s_out + (1 − ω) · std_out
where S1 is the standard deviation fitting value inside the contour and S2 is the standard deviation fitting value outside the contour;
the standard deviation fitting image is defined as:
I_std_LFI = S1 · H(φ) + S2 · (1 − H(φ)).
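A sketch of the gray fitting image of step 3.1; the entropy and standard deviation fitting images of steps 3.2 and 3.3 follow the identical pattern with I_en or I_std in place of I. Here H(φ) is approximated by a hard inside/outside mask, the local means are Gaussian-weighted neighborhood averages via SciPy's `gaussian_filter`, and ω = 0.5 and σ = 3 are illustrative values not fixed by the patent:

```python
import numpy as np
from scipy.ndimage import gaussian_filter

def fitting_image(F, phi, omega=0.5, sigma=3.0):
    """Local-global fitting image for one feature map F:
    C1 = w*c_in + (1-w)*f_in inside, C2 = w*c_out + (1-w)*f_out
    outside, blended via a (hard) Heaviside of phi."""
    Hphi = (phi < 0).astype(float)      # 1 inside the contour, 0 outside
    eps = 1e-8                          # avoid division by zero
    c_in = (F * Hphi).sum() / (Hphi.sum() + eps)              # global means
    c_out = (F * (1 - Hphi)).sum() / ((1 - Hphi).sum() + eps)
    f_in = gaussian_filter(F * Hphi, sigma) / (gaussian_filter(Hphi, sigma) + eps)
    f_out = gaussian_filter(F * (1 - Hphi), sigma) / (gaussian_filter(1 - Hphi, sigma) + eps)
    C1 = omega * c_in + (1 - omega) * f_in    # fitted value inside
    C2 = omega * c_out + (1 - omega) * f_out  # fitted value outside
    return C1 * Hphi + C2 * (1 - Hphi)

F = np.zeros((64, 64)); F[16:48, 16:48] = 100.0  # bright synthetic target
phi = np.where(F > 0, -1.0, 1.0)                 # contour sitting on the target
fit = fitting_image(F, phi)
```

Far from the contour the fitting image approaches the feature values themselves, so the difference F − fit (the signed pressure force of claim 5) is large only where the contour misclassifies pixels.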
5. The level set infrared image segmentation method based on multi-feature information fusion according to claim 1, wherein in step 4 the similarity difference between the feature images of step 2 and the feature fitting images of step 3 is compared to obtain, respectively, the signed pressure force function spf_i driven by gray information, the signed pressure force function spf_en driven by entropy information, and the signed pressure force function spf_std driven by standard deviation information, and the three functions are added and normalized to obtain the final signed pressure force function spf_total, as follows:
step 4.1, measuring the similarity difference as the difference between each feature image and its feature fitting image, yielding the signed pressure force functions spf_i, spf_en, and spf_std driven by the gray, entropy, and standard deviation features respectively:
spf_i(x, y) = I(x, y) − I_LFI(x, y)
spf_en(x, y) = I_en(x, y) − I_en_LFI(x, y)
spf_std(x, y) = I_std(x, y) − I_std_LFI(x, y)
step 4.2, additively combining the signed pressure force functions spf_i, spf_en, and spf_std driven by the gray, entropy, and standard deviation features to obtain the final signed pressure force function:
spf_total(x, y) = spf_i(x, y) + spf_en(x, y) + spf_std(x, y)
where spf_total is normalized to the interval [−1, 1].
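Steps 4.1 and 4.2 reduce to elementwise differences, a sum, and a normalization. A minimal sketch follows; normalizing by the maximum absolute value is an assumption, as the claim only states that spf_total lies in [−1, 1]:

```python
import numpy as np

def combined_spf(I, I_fit, en, en_fit, sd, sd_fit):
    """Sum of the three feature-driven signed pressure force terms,
    normalized to [-1, 1] by the max absolute value (assumption)."""
    spf = (I - I_fit) + (en - en_fit) + (sd - sd_fit)
    m = np.abs(spf).max()
    return spf / m if m > 0 else spf

spf = combined_spf(
    np.array([[10.0, 0.0]]), np.array([[5.0, 5.0]]),  # gray, gray fit
    np.array([[1.0, 1.0]]),  np.array([[1.0, 1.0]]),  # entropy, entropy fit
    np.array([[0.0, 0.0]]),  np.array([[0.0, 0.0]]),  # std, std fit
)
print(spf)  # [[ 1. -1.]]
```

The sign of spf_total decides whether the curve locally expands or shrinks, while its magnitude modulates the evolution speed in claim 6.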
6. The level set infrared image segmentation method based on multi-feature information fusion according to claim 1, wherein in step 5 the final signed pressure force function spf_total is substituted into the level set evolution equation φ, each evolution result is regularized with a Gaussian filter G_σ_φ until the equation converges, and the segmentation result is output, as follows:
step 5.1, rewriting the level set evolution equation φ in terms of the final signed pressure force function as:
∂φ/∂t = spf_total(x, y) · α · |∇φ|
φ^t = φ^(t−1) + Δt · spf_total(x, y) · α · |∇φ^(t−1)|
wherein, Δ t is a time step and is set as a constant 1; alpha is balloon stretching force and is set as constant 400; phi is at-1The level set equation is the t-1 st order; phi is atThe level set equation for the t-th order;
step 5.2, based on scale space theory, performing Gaussian filtering on the result of each iteration to obtain the initial level set for the next iteration, expressed as:
φ^(t+1) = φ^t * G_σ_φ
where φ^(t+1) is the initial level set for the (t+1)-th evolution, G_σ_φ is a Gaussian kernel function with standard deviation σ_φ, and * denotes convolution;
step 5.3, when the difference between two consecutive evolution results is smaller than a threshold, the equation has converged and the image segmentation result is output.
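A toy version of the evolution and regularization loop of steps 5.1 to 5.3 on synthetic data. α is reduced from the patent's 400 to 5 to keep the example numerically tame, and σ_φ = 1, the stopping threshold, and the iteration cap are all illustrative choices:

```python
import numpy as np
from scipy.ndimage import gaussian_filter

def evolve(phi, spf, alpha=5.0, dt=1.0, sigma=1.0, tol=1e-3, max_iter=50):
    """phi_t = phi_{t-1} + dt * spf * alpha * |grad phi|, with Gaussian
    regularization of each result; stop when successive results differ
    by less than tol on average (threshold value is an assumption)."""
    for _ in range(max_iter):
        gy, gx = np.gradient(phi)
        grad = np.sqrt(gx ** 2 + gy ** 2)                       # |grad phi|
        phi_new = gaussian_filter(phi + dt * alpha * spf * grad, sigma)
        if np.abs(phi_new - phi).mean() < tol:
            return phi_new
        phi = phi_new
    return phi

# spf < 0 over the target pulls phi negative there; spf > 0 elsewhere
spf = np.ones((64, 64)); spf[20:44, 20:44] = -1.0
phi0 = 2.0 * np.ones((64, 64)); phi0[28:36, 28:36] = -2.0  # small seed
phi = evolve(phi0, spf)
seg = phi < 0   # final segmentation mask
```

Because the level set stays a smoothed binary function rather than a signed distance function, no re-initialization step is needed between iterations; this is what makes each iteration cheap relative to the LPF/LIC functional minimizations discussed in the description.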
CN202011638494.8A 2020-12-31 2020-12-31 Level set infrared image segmentation method based on multi-feature information fusion Active CN112700459B (en)

Priority Applications (1)

Application Number Priority Date Filing Date Title
CN202011638494.8A CN112700459B (en) 2020-12-31 2020-12-31 Level set infrared image segmentation method based on multi-feature information fusion

Publications (2)

Publication Number Publication Date
CN112700459A true CN112700459A (en) 2021-04-23
CN112700459B CN112700459B (en) 2023-10-17

Family

ID=75513949

Family Applications (1)

Application Number Title Priority Date Filing Date
CN202011638494.8A Active CN112700459B (en) 2020-12-31 2020-12-31 Level set infrared image segmentation method based on multi-feature information fusion

Country Status (1)

Country Link
CN (1) CN112700459B (en)

Citations (3)

* Cited by examiner, † Cited by third party
Publication number Priority date Publication date Assignee Title
CN102446357A (en) * 2011-11-23 2012-05-09 浙江工商大学 Level set SAR (Synthetic Aperture Radar) image segmentation method based on self-adaptive finite element
CN104123719A (en) * 2014-06-03 2014-10-29 南京理工大学 Method for carrying out infrared image segmentation by virtue of active outline
CN109472792A (en) * 2018-10-29 2019-03-15 石家庄学院 In conjunction with the local energy functional of local entropy and the image partition method of non-convex regular terms

Non-Patent Citations (1)

* Cited by examiner, † Cited by third party
Title
MINJIE WAN等: "A Level Set Method for Infrared Image Segmentation Using Global and Local Information", 《REMOTE SENSING》 *

Cited By (2)

* Cited by examiner, † Cited by third party
Publication number Priority date Publication date Assignee Title
CN117132667A (en) * 2023-10-26 2023-11-28 湖南半岛医疗科技有限公司 Thermal image processing method and related device based on environmental temperature feedback
CN117132667B (en) * 2023-10-26 2024-02-06 湖南半岛医疗科技有限公司 Thermal image processing method and related device based on environmental temperature feedback

Also Published As

Publication number Publication date
CN112700459B (en) 2023-10-17

Similar Documents

Publication Publication Date Title
CN108776969B (en) Breast ultrasound image tumor segmentation method based on full convolution network
CN109345508B (en) Bone age evaluation method based on two-stage neural network
CN108596053B (en) Vehicle detection method and system based on SSD and vehicle posture classification
WO2022166800A1 (en) Deep learning network-based automatic delineation method for mediastinal lymphatic drainage region
CN107516316B (en) Method for segmenting static human body image by introducing focusing mechanism into FCN
CN109472792B (en) Local energy functional and non-convex regular term image segmentation method combining local entropy
CN111652892A (en) Remote sensing image building vector extraction and optimization method based on deep learning
Cai et al. Saliency-guided level set model for automatic object segmentation
CN111191583A (en) Space target identification system and method based on convolutional neural network
CN112365514A (en) Semantic segmentation method based on improved PSPNet
Zhang et al. Level set evolution driven by optimized area energy term for image segmentation
CN111553873B (en) Automatic detection method for brain neurons based on multi-scale convolution neural network
CN113450397B (en) Image deformation registration method based on deep learning
CN102135606A (en) KNN (K-Nearest Neighbor) sorting algorithm based method for correcting and segmenting grayscale nonuniformity of MR (Magnetic Resonance) image
Wang Image segmentation by combining the global and local properties
CN107895379A (en) The innovatory algorithm of foreground extraction in a kind of video monitoring
CN112836820B (en) Deep convolution network training method, device and system for image classification task
CN112750106A (en) Nuclear staining cell counting method based on incomplete marker deep learning, computer equipment and storage medium
CN112837320A (en) Remote sensing image semantic segmentation method based on parallel hole convolution
Adoram et al. IRUS: image retrieval using shape
CN112435264A (en) 42CrMo single-phase metallographic structure segmentation method and system based on deep learning
CN112700459A (en) Level set infrared image segmentation method based on multi-feature information fusion
Lv et al. Robust active contour model using patch-based signed pressure force and optimized fractional-order edge
CN114998592A (en) Method, apparatus, device and storage medium for instance partitioning
CN112329716A (en) Pedestrian age group identification method based on gait characteristics

Legal Events

Date Code Title Description
PB01 Publication
SE01 Entry into force of request for substantive examination
GR01 Patent grant