CN111402284B - Image threshold value determination method and device based on three-dimensional connectivity - Google Patents

Image threshold value determination method and device based on three-dimensional connectivity

Info

Publication number
CN111402284B
CN111402284B (application CN202010188542.1A)
Authority
CN
China
Prior art keywords
threshold
voxels
total number
value
foreground region
Prior art date
Legal status (The legal status is an assumption and is not a legal conclusion. Google has not performed a legal analysis and makes no representation as to the accuracy of the status listed.)
Active
Application number
CN202010188542.1A
Other languages
Chinese (zh)
Other versions
CN111402284A
Inventor
汪昌健
郭凌超
Current Assignee (The listed assignees may be inaccurate. Google has not performed a legal analysis and makes no representation or warranty as to the accuracy of the list.)
National University of Defense Technology
Original Assignee
National University of Defense Technology
Priority date (The priority date is an assumption and is not a legal conclusion. Google has not performed a legal analysis and makes no representation as to the accuracy of the date listed.)
Filing date
Publication date
Application filed by National University of Defense Technology filed Critical National University of Defense Technology
Priority to CN202010188542.1A priority Critical patent/CN111402284B/en
Publication of CN111402284A publication Critical patent/CN111402284A/en
Application granted granted Critical
Publication of CN111402284B publication Critical patent/CN111402284B/en
Active legal-status Critical Current
Anticipated expiration legal-status Critical


Classifications

    • GPHYSICS
    • G06COMPUTING; CALCULATING OR COUNTING
    • G06TIMAGE DATA PROCESSING OR GENERATION, IN GENERAL
    • G06T7/00Image analysis
    • G06T7/10Segmentation; Edge detection
    • G06T7/136Segmentation; Edge detection involving thresholding
    • GPHYSICS
    • G06COMPUTING; CALCULATING OR COUNTING
    • G06TIMAGE DATA PROCESSING OR GENERATION, IN GENERAL
    • G06T2207/00Indexing scheme for image analysis or image enhancement
    • G06T2207/10Image acquisition modality
    • G06T2207/10072Tomographic images
    • G06T2207/10081Computed x-ray tomography [CT]
    • GPHYSICS
    • G06COMPUTING; CALCULATING OR COUNTING
    • G06TIMAGE DATA PROCESSING OR GENERATION, IN GENERAL
    • G06T2207/00Indexing scheme for image analysis or image enhancement
    • G06T2207/10Image acquisition modality
    • G06T2207/10072Tomographic images
    • G06T2207/10088Magnetic resonance imaging [MRI]
    • GPHYSICS
    • G06COMPUTING; CALCULATING OR COUNTING
    • G06TIMAGE DATA PROCESSING OR GENERATION, IN GENERAL
    • G06T2207/00Indexing scheme for image analysis or image enhancement
    • G06T2207/10Image acquisition modality
    • G06T2207/10072Tomographic images
    • G06T2207/10104Positron emission tomography [PET]

Abstract

The invention provides an image threshold determination method and device based on three-dimensional connectivity, comprising the following steps: import all the two-dimensional images acquired at one time; set a segmentation threshold interval ο and a discrimination parameter threshold ε; select a threshold search initial value μ0, the initial value μ0 being less than or equal to the segmentation threshold; separately calculate, under the thresholds μ0 − 2×ο, μ0 − ο and μ0, the total number N of segmented foreground region voxels, and/or the total number M of segmented voxels, and/or the total number L of segmented non-foreground region voxels; set a discrimination parameter β; if β > the discrimination parameter threshold ε, take the threshold μ0 corresponding to the current β and output μ0 − ο as the optimal threshold; otherwise set μ0 ← μ0 + ο and continue the threshold search. The method and device provided by the invention use the significant change of the voxel spatial characteristics to find the optimal segmentation pixel value, so that the segmentation of the foreground is more accurate.

Description

Image threshold value determination method and device based on three-dimensional connectivity
Technical Field
The invention belongs to the field of image processing, relates to an image threshold determination method and device, and particularly relates to an image threshold determination method and device based on three-dimensional connectivity.
Background
The threshold method is one of the common methods for image segmentation. It exploits the gray-value difference between the target and the background in the image to classify pixels by setting a threshold value, thereby separating the target from the background. Common thresholding methods include:
(1) Artificial experience selection method
According to prior knowledge, or by analyzing and summarizing the regularities of targets and backgrounds in images, the pixel value intervals of the target and the background are obtained, and a suitable threshold is found on that basis. This method cannot select a threshold automatically, so it is inefficient, and it is easily affected by image quality, which can cause obvious segmentation errors.
(2) Maximum inter-class variance method
The basic idea of the method is to divide an image into two parts, foreground and background, according to its gray-level characteristics; the threshold is optimal when the difference between the two parts is largest, and the criterion used to measure that difference is the inter-class variance.
Let M be the number of gray levels of the image, N the total number of pixels, N1 the total number of background pixels, N2 the total number of foreground pixels, and Pi the number of pixels with gray value i. Then, for a candidate threshold separating background from foreground:
Background pixel proportion: ω0 = N1/N
Foreground pixel proportion: ω1 = N2/N
Gray average of the background pixels: μ0 = (Σ i·Pi over background gray values i)/N1
Gray average of the foreground pixels: μ1 = (Σ i·Pi over foreground gray values i)/N2
Gray average of the image: μ = ω0×μ0 + ω1×μ1
Inter-class variance of the image: σ = ω0×(μ0 − μ)² + ω1×(μ1 − μ)²
Substituting the image mean into this formula gives:
σ = ω0×ω1×(μ0 − μ1)²
This is currently the most commonly used threshold calculation method. Since the same threshold is used for every pixel during segmentation, the method is only applicable to images whose foreground gray-level features are continuous.
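For concreteness, the following is a minimal sketch of the maximum inter-class variance criterion described above, written against the symbols defined in the text (Pi, N1, N2, ω0, ω1, μ0, μ1); the function name and the assumption of a non-negative integer grayscale image are illustrative choices, not part of the patent.

```python
import numpy as np

def otsu_threshold(image: np.ndarray, levels: int = 256) -> int:
    """Maximum inter-class variance: assumes a non-negative integer grayscale image (e.g., 0-255)."""
    flat = image.ravel().astype(np.int64)
    hist = np.bincount(flat, minlength=levels).astype(np.float64)   # Pi: count of pixels with value i
    n_total = flat.size                                             # N
    gray = np.arange(levels, dtype=np.float64)
    best_t, best_sigma = 0, -1.0
    for t in range(levels - 1):                     # candidate threshold: background = values <= t
        n1 = hist[: t + 1].sum()                    # N1
        n2 = n_total - n1                           # N2
        if n1 == 0 or n2 == 0:
            continue
        w0, w1 = n1 / n_total, n2 / n_total         # omega0, omega1
        mu0 = (gray[: t + 1] * hist[: t + 1]).sum() / n1     # background gray average
        mu1 = (gray[t + 1:] * hist[t + 1:]).sum() / n2       # foreground gray average
        sigma = w0 * w1 * (mu0 - mu1) ** 2          # inter-class variance
        if sigma > best_sigma:
            best_sigma, best_t = sigma, t
    return best_t
```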
(3) Iterative method
The basic idea is to divide the image into two parts, foreground and background; the threshold is optimal when the division of the two parts becomes stable, and the criterion used to measure that stability is the average of the gray centers of the two parts of pixels.
Let M be the number of gray levels of the image and Pi the number of pixels with gray value i. For the current threshold:
Gray center value of the background pixels: the mean gray value of the pixels classified as background, (Σ i·Pi over background gray values i)/(Σ Pi over background gray values i)
Gray center value of the foreground pixels: the mean gray value of the pixels classified as foreground, (Σ i·Pi over foreground gray values i)/(Σ Pi over foreground gray values i)
Average of the foreground and background centers: T = (background center + foreground center)/2
The value T is generated iteratively and used as the new threshold; when T_t = T_(t−1), the value T is taken as the optimal threshold. The method is applicable when the two portions of the image differ significantly.
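The following is a minimal sketch of this iterative scheme: the threshold is repeatedly replaced by the average of the background and foreground gray centers until it stops changing. The function name, the mean-based initialization and the stopping tolerance eps are assumptions of this sketch.

```python
import numpy as np

def iterative_threshold(image: np.ndarray, init=None, eps: float = 0.5) -> float:
    """Iterative method: T is repeatedly replaced by the mean of the background and foreground gray centers."""
    vals = image.astype(np.float64).ravel()
    t = vals.mean() if init is None else float(init)
    while True:
        back = vals[vals <= t]                      # pixels currently classified as background
        fore = vals[vals > t]                       # pixels currently classified as foreground
        if back.size == 0 or fore.size == 0:
            return t
        t_new = (back.mean() + fore.mean()) / 2.0   # average of the two gray centers
        if abs(t_new - t) < eps:                    # T_t == T_(t-1), up to the tolerance eps
            return t_new
        t = t_new
```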
Both the maximum inter-class variance method and the iterative method have to consider the pixel value distribution characteristics of the foreground and background regions at the same time, so the process of determining the threshold is interfered with by background information. Furthermore, both methods require the pixel value distribution of every part of the foreground to be continuous; large segmentation errors may occur if the foreground contains regions of strong pixel value contrast or is missing local information.
Disclosure of Invention
In order to overcome the above defects of the prior art, the inventors performed intensive research and provide an image threshold determination method and device based on three-dimensional connectivity, which combines the spatial distribution characteristics of foreground voxels with the distribution characteristics of pixel (voxel) values and searches for the optimal segmentation pixel value by using the significant change of the voxel spatial characteristics, thereby completing the invention.
The invention aims to provide the following technical solutions:
In a first aspect, an image threshold determination method based on three-dimensional connectivity comprises:
S100, import all the two-dimensional images acquired at one time to obtain a two-dimensional image group, wherein the two-dimensional images can be three-dimensionally reconstructed into a three-dimensional image of a target in the images;
S200, set a segmentation threshold interval ο and a discrimination parameter threshold ε;
S300, select a threshold search initial value μ0, the initial value μ0 being less than or equal to the segmentation threshold;
S400, separately calculate, under the thresholds μ0 − 2×ο, μ0 − ο and μ0, the total number N of segmented foreground region voxels, and/or the total number M of segmented voxels, and/or the total number L of segmented non-foreground region voxels;
S500, set a discrimination parameter β based on the parameters determined in S400, the discrimination parameter β being used to measure whether the total number N of segmented foreground region voxels increases suddenly;
S600, if the discrimination parameter β > the discrimination parameter threshold ε, jump to S800; otherwise, continue with S700;
S700, μ0 ← μ0 + ο, and return to S400;
S800, take the threshold μ0 corresponding to the current β and output μ0 − ο as the optimal threshold.
Further, when the search range of the threshold μ can be predicted, the method can be implemented by the following steps:
S100, import all the two-dimensional images acquired at one time to obtain a two-dimensional image group, wherein the two-dimensional images can be three-dimensionally reconstructed into a three-dimensional image of a target in the images;
S200, set a segmentation threshold interval ο and a discrimination parameter threshold ε;
S300, select a value smaller than the segmentation threshold as the threshold search lower bound μ0, and a value greater than the segmentation threshold as the threshold search upper bound μ1;
S400, calculate, at intervals of ο, for every threshold μ between μ0 and μ1, the total number N of segmented foreground region voxels, and/or the total number M of segmented voxels, and/or the total number L of segmented non-foreground region voxels, and add these parameters to the queues list_N, and/or list_M, and/or list_L;
S500, set a discrimination parameter β based on the parameters determined in S400, the discrimination parameter β being used to measure whether the total number N of segmented foreground region voxels increases suddenly; calculate the discrimination parameter β under each threshold and add it to the queue list_β;
S600, in order of increasing threshold μ, sequentially take the corresponding β value from the queue list_β and judge it, until β > ε;
S700, take the threshold μ corresponding to the current β and output μ − ο as the optimal threshold.
In a second aspect, an image thresholding device based on three-dimensional connectivity, for implementing the image thresholding method based on three-dimensional connectivity according to the first aspect, includes:
the importing module, which is used for importing all the two-dimensional images acquired at one time to obtain a two-dimensional image group, wherein a three-dimensional image of the target in the images can be obtained by three-dimensional reconstruction of the two-dimensional images;
the parameter setting module, which is used for inputting the values or calculation modes of the set parameters, including assigning the segmentation threshold interval ο, the discrimination parameter threshold ε and the threshold search initial value μ0, and selecting the calculation mode of the discrimination parameter β;
the voxel quantity determination module, which is used for determining, under the thresholds μ0 − 2×ο, μ0 − ο and μ0, the total number N of segmented foreground region voxels, and/or the total number M of segmented voxels, and/or the total number L of segmented non-foreground region voxels;
the discrimination parameter determination module, which determines the discrimination parameter β sequentially in order of increasing threshold μ;
the threshold judging module, which is used for judging the numerical relation between the discrimination parameter β and the discrimination parameter threshold ε: if the discrimination parameter β > the discrimination parameter threshold ε, it takes the threshold μ0 corresponding to the current β and outputs μ0 − ο as the optimal threshold; if the discrimination parameter β ≤ the discrimination parameter threshold ε, the current threshold is increased by one segmentation threshold interval and used as the new search threshold, and the voxel quantity determination module and the discrimination parameter determination module are started again to perform the operation under the next threshold.
Further, when the search range of the threshold μ can be predicted, the apparatus includes:
the importing module, which is used for importing all the two-dimensional images acquired at one time to obtain a two-dimensional image group, wherein a three-dimensional image of the target in the images can be obtained by three-dimensional reconstruction of the two-dimensional images;
the parameter setting module, which is used for inputting the values or calculation modes of the set parameters, including assigning the segmentation threshold interval ο, the discrimination parameter threshold ε, the threshold search lower bound μ0 and the threshold search upper bound μ1, and selecting the calculation mode of the discrimination parameter β;
the voxel quantity determination module, which calculates, at intervals of ο, for every threshold μ between μ0 and μ1, the total number N of segmented foreground region voxels, and/or the total number M of segmented voxels, and/or the total number L of segmented non-foreground region voxels, and adds these parameters to the queues list_N, and/or list_M, and/or list_L;
the discrimination parameter determination module, which calculates the discrimination parameter β under each threshold and adds it to the queue list_β;
the threshold judging module, which, in order of increasing threshold μ, sequentially takes the corresponding β value from the queue list_β and judges it, until the discrimination parameter β > the discrimination parameter threshold ε, and then takes the threshold μ corresponding to the current β and outputs μ − ο as the optimal threshold.
The image threshold determination method and device based on three-dimensional connectivity according to the invention bring the following beneficial technical effects:
Compared with the traditional methods, the new method can automatically determine the threshold according to the spatial distribution characteristics of the image voxels and the distribution characteristics of the pixel (voxel) values; only the characteristics of the foreground region need to be considered, so interference from the background can be avoided; and through continuous threshold adjustment, the connectivity of three-dimensional voxels is used to increase the effective pixels of regions with missing foreground information and to reduce the pixel value contrast inside the foreground, so that the segmentation of the foreground is more accurate.
The image threshold determination method and device based on three-dimensional connectivity are suitable for any two-dimensional image in a group of two-dimensional images from which a three-dimensional image can be formed by three-dimensional reconstruction, and are particularly suitable for the various tomographic images generated for a target human body part in medicine, such as CT (X-ray computed tomography) images, MRI (magnetic resonance imaging) images, PET (positron emission tomography) images, PET-CT images, PET-MRI images, digital breast tomography (3D molybdenum target) images and the like. This is because, in clinical observation, the physician focuses mainly on the features of the foreground lesion region in the relevant image, while the background region often serves only as a comparative reference or is even ignored directly. The threshold determination method based on three-dimensional connectivity provided by the invention is centered on the foreground features, can avoid interference of background features on the setting of the threshold, and retains as many pixels (voxels) of the foreground target as possible, making more accurate later analysis possible. By contrast, the existing maximum inter-class variance method and iterative method are more easily affected by background features and by high-contrast regions inside the foreground, which can cause defects in the foreground region or the loss of more details and is detrimental to later analysis and processing.
Drawings
FIG. 1 is a flow chart of a method in a preferred embodiment of the invention;
FIG. 2 is a flow chart of a method in another preferred embodiment of the invention;
FIG. 3 is a block flow diagram of the method of embodiment 1 of the present invention;
FIG. 4 is a lung CT image with a large-area high-density shadow;
FIG. 5 is the thresholding segmentation result (μ = −350) based on artificial empirical selection;
FIG. 6 is the thresholding segmentation result based on the maximum inter-class variance method;
FIG. 7 is the thresholding segmentation result based on the iterative method;
FIG. 8 is the segmentation result (μ = −280) of the threshold determination method based on three-dimensional connectivity.
Detailed Description
The invention is further described in detail below by means of the figures and examples. The features and advantages of the present invention will become more apparent from the description.
The methods mainly adopted in current image threshold calculation, the maximum inter-class variance method and the iterative method, both depend on pixel value distribution characteristics, and this approach has the following problems:
(1) The threshold calculation is based on the pixel values of both the foreground and the background regions, so the threshold is interfered with by background information, which affects the accuracy of foreground segmentation, whereas in fact usually only the foreground segmentation result is of interest;
(2) They are very sensitive to the pixel value distribution of the foreground and background regions, and can produce serious segmentation errors when there is a region of significant pixel value contrast in the foreground, or when local information is missing.
In fact, in addition to the pixel value distribution characteristics, the three-dimensional image reconstructed from the two-dimensional images has voxel spatial distribution characteristics, such as voxel connectivity, and these characteristics can be used to overcome the defects of image segmentation based on pixel values alone.
Aiming at the problems of the traditional thresholding method, the invention provides an image thresholding method and device based on three-dimensional connectivity, which combines the spatial distribution characteristics of foreground voxels with the distribution characteristics of pixel (voxel) values, observes the spatial characteristic statistics of the voxels through the adjustment of the segmentation threshold, searches the optimal segmentation threshold by using the obvious change of the statistics, and effectively improves the accuracy of foreground segmentation.
In the present invention, a pixel (voxel) value refers to the value of a pixel of a two-dimensional image or of a voxel in a three-dimensional image. For example, in an ordinary two-dimensional grayscale image the value range is 0 to 255, while the value range of the pixels in a CT image (i.e., the CT values) may be −1000 to 2000; correspondingly, the value range of each voxel in the three-dimensional image generated from a set of CT sectional images may be −1000 to 2000. Conventional thresholding methods, such as the maximum inter-class variance method and the iterative method, perform image segmentation with a single value taken per pixel or per voxel. In the present invention, unless otherwise specified, the pixel (voxel) value refers to the value taken by a pixel of a two-dimensional image or by a voxel of a three-dimensional image.
The image threshold determination method and device based on three-dimensional connectivity are suitable for any two-dimensional image in a group of two-dimensional images from which a three-dimensional image can be formed by three-dimensional reconstruction, including the various tomographic images generated for a target body part, such as CT images, MRI images, PET-CT images, PET-MRI images, digital breast tomography (3D molybdenum target) images and the like.
In medical practice, CT, MRI and similar techniques obtain imaging of a body part through continuous sectional scanning of that part, and are used to examine various diseases. The continuous sections overlap one another and can be processed into a three-dimensional image of the relevant part, so the sections are correlated and reflect the three-dimensional spatial distribution characteristics of that part. Combining these three-dimensional spatial distribution features with the pixel (voxel) value distribution features for thresholding is the main idea of the invention. Under the premise of controlling the signal-to-noise ratio, the method can add as many meaningful pixel (voxel) points as possible, thereby improving the precision of image segmentation. Although this approach may also increase noise and may not yield completely accurate results, it loses less detail information than conventional approaches, and its output can serve as a basis for subsequent image processing, providing the possibility of more accurate segmentation algorithm designs.
The flow chart of the threshold determination method of the present invention is shown in fig. 1.
S100, import all the two-dimensional images acquired at one time to obtain a two-dimensional image group, wherein the two-dimensional images can be three-dimensionally reconstructed into a three-dimensional image of a target in the images;
S200, set a segmentation threshold interval ο and a discrimination parameter threshold ε;
A smaller ο allows changes of the statistic to be observed at a finer granularity, but leads to a larger amount of computation and makes the statistic used for discrimination more easily disturbed by noise fluctuations, which affects the accuracy of the judgment. Therefore ο should be chosen as fine as possible while keeping noise interference under control; for lung CT images, ο = 10 can be chosen based on experience;
S300, select a threshold search initial value μ0, the initial value μ0 being less than or equal to the segmentation threshold;
The threshold search initial value μ0 can be selected from the empirical value range of the foreground; for example, in a lung CT image the window level of the lung tissue region is about −450 to −600, so a lower value in this interval (e.g., −600) can be selected as the search initial value;
S400, under the thresholds (μ0 − 2×ο), (μ0 − ο) and μ0, respectively calculate the total number N of voxels of the three-dimensional connected domains in the foreground region whose voxel count is greater than or equal to a set threshold υ (referred to simply as the total number of segmented foreground region voxels), and/or the total number M of segmented voxels, and/or the total number L of segmented non-foreground region voxels; the threshold υ is set to distinguish meaningful connected domains from those produced by noise, its value being no larger than the size of the identifiable three-dimensional connected domains in the image group;
S500, set a discrimination parameter β based on the parameters determined in S400, the discrimination parameter β being used to measure whether the total number N of segmented foreground region voxels increases suddenly;
S600, if the discrimination parameter β > the discrimination parameter threshold ε, jump to S800; otherwise, continue with S700;
S700, μ0 ← μ0 + ο, and return to S400;
S800, take the threshold μ0 corresponding to the current β and output μ0 − ο (i.e., the threshold μ0 corresponding to the current β minus the segmentation threshold interval ο) as the optimal threshold.
In the present invention, the two-dimensional images are grayscale images. If a two-dimensional image is an RGB color image, it needs to be converted into a grayscale image; conversion methods include, but are not limited to, the averaging method, the weighted averaging method, the maximum-minimum averaging method and the like. The method is designed for the case in which the pixel values of the foreground region are lower than those of the background region. If the foreground region pixel values are higher than the background region pixel values, take the maximum pixel value of the current image and subtract each pixel's current value from that maximum to obtain the pixel's new value, thereby constructing a new image; in this way the case in which the foreground pixel values are higher than the background pixel values is reduced to the case handled by the method.
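As an illustration of the conversions just described, the following is a minimal sketch (assuming NumPy arrays as input; the function names are not from the patent) of the grayscale averaging conversion and of the inversion used when the foreground is brighter than the background.

```python
import numpy as np

def rgb_to_gray_average(rgb: np.ndarray) -> np.ndarray:
    """Averaging method: mean of the R, G and B channels (one of the conversions mentioned above)."""
    return rgb.mean(axis=-1)

def invert_to_dark_foreground(image: np.ndarray) -> np.ndarray:
    """Replace each pixel by (maximum pixel value of the image) - (current value), so that a
    bright-foreground image is reduced to the dark-foreground case assumed by the method."""
    return image.max() - image
```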
The basic idea of the method consists of three steps, namely: determining the search range of the pixel (voxel) segmentation threshold; identifying the foreground region based on voxel connectivity; and determining the threshold based on voxel statistical characteristics.
(1) Determination of a search range for a pixel (voxel) segmentation threshold
A pixel (voxel) value smaller than the segmentation threshold is selected as the initial threshold μ0, which serves as the lower bound of the threshold search range; the threshold is then increased stepwise in units of the segmentation threshold interval ο. Obviously, as the threshold increases, the number of non-zero pixel (voxel) points in the binarized image also increases (the background area is set to zero and the non-background area to non-zero values); besides foreground pixels previously hidden in the foreground region, the added pixel (voxel) points also include noise points located partly in the foreground region and partly in the background region.
(2) Foreground region identification based on voxel connectivity
Since the foreground is a meaningful whole, its voxel distribution is connected, and because of its dominance in the three-dimensional image, the larger an interconnected region is, the higher the likelihood that it is a foreground region. Based on this assumption, the three-dimensional connected domains whose number of voxels is greater than or equal to υ under a given threshold μ can be marked as the foreground region corresponding to that threshold μ.
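A minimal sketch of this identification step is given below. It binarizes the reconstructed 3D volume at a threshold μ, labels the three-dimensional connected domains (26-connectivity is assumed), and keeps only the domains containing at least υ voxels. SciPy's ndimage module is used here as one possible implementation; the patent's own test environment uses ITK in C++, and the function and parameter names are illustrative.

```python
import numpy as np
from scipy import ndimage

def foreground_voxel_count(volume: np.ndarray, mu: float, upsilon: int) -> tuple:
    """Return (N, M): N = total voxels in 3D connected domains of size >= upsilon
    (the segmented foreground region), M = total number of segmented voxels."""
    binary = volume <= mu                              # foreground assumed darker than background
    m_total = int(binary.sum())                        # M: all segmented (non-zero) voxels
    structure = np.ones((3, 3, 3), dtype=bool)         # 26-connectivity in 3D
    labels, num = ndimage.label(binary, structure=structure)
    if num == 0:
        return 0, m_total
    sizes = np.bincount(labels.ravel())[1:]            # size of each connected domain (label 0 = background)
    n_total = int(sizes[sizes >= upsilon].sum())       # N: voxels in domains of size >= upsilon
    return n_total, m_total
```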
(3) Threshold determination based on voxel statistics
As the threshold increases, the number of foreground region voxels that can be segmented increases, and at the same time the amount of noise data increases. Early in the threshold iteration the noise data consists mainly of random noise in the image, and only the part of the noise near the foreground region is likely to become connected into it, so the influence on the foreground region is small. Once the threshold reaches the pixel (voxel) value interval of the background region, a large number of background pixels appear suddenly; together with some of the earlier noise points they form a continuous pixel (voxel) area that merges with the foreground region, so the number of foreground region voxels increases sharply. Clearly, the threshold immediately before such a steep increase occurs is the best threshold obtainable at the given segmentation threshold interval ο: at that threshold as many foreground region pixels as possible are identified, random noise remains within a relatively controllable range, and the interference of the background region is as small as possible.
A statistic, i.e., a discrimination parameter, can be designed to detect this steep increase and thereby determine the optimal threshold.
In a preferred embodiment, the discrimination parameter β may be the differential adjacent ratio A of the total number N of voxels of the three-dimensional connected domains that are segmented under the threshold μ and whose size is greater than or equal to the set threshold υ (referred to simply as the total number of segmented foreground region voxels). When A exceeds the threshold ε, the current threshold μ is considered to have passed the optimal segmentation threshold, and (μ − ο) is selected as the optimal segmentation threshold.
Let N0 be the total number of voxels of the three-dimensional connected domains whose voxel count is greater than υ under the t-th threshold, N−1 the corresponding total under the (t−1)-th threshold, and N−2 the corresponding total under the (t−2)-th threshold. The difference of the total number of foreground region voxels between iterations (t−2) and (t−1) is (N−1 − N−2), and the difference between iterations (t−1) and t is (N0 − N−1); thus the differential adjacent ratio of the total number of foreground region voxels over iterations (t−2), (t−1) and t is A = (N0 − N−1)/(N−1 − N−2). It has been found that in homogeneous regions, such as the image regions corresponding to organs, this statistic remains stable as long as the boundary condition has not been reached, and its value increases steeply once the threshold μ reaches the pixel (voxel) value interval of the background region.
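The following is a minimal sketch of the threshold search loop S300–S800 using this differential adjacent ratio A as the discrimination parameter β. It reuses the foreground_voxel_count() function from the previous sketch; the parameter names (o_step for the interval ο, eps for ε, upsilon for υ) and the additional upper bound mu_max, added here only as a safety stop, are assumptions of this sketch rather than part of the patent text.

```python
def find_threshold(volume, mu0, o_step, eps, upsilon, mu_max):
    """Search for the optimal threshold by detecting a steep increase of N (beta = A > eps)."""
    # S400: N under the thresholds mu0 - 2*o, mu0 - o and mu0
    n_m2, _ = foreground_voxel_count(volume, mu0 - 2 * o_step, upsilon)
    n_m1, _ = foreground_voxel_count(volume, mu0 - o_step, upsilon)
    mu = mu0
    while mu <= mu_max:                              # safety bound, not part of the original flow
        n_0, _ = foreground_voxel_count(volume, mu, upsilon)
        if n_m1 != n_m2:                             # guard against a zero previous increment
            beta = (n_0 - n_m1) / (n_m1 - n_m2)      # S500: A = (N0 - N-1) / (N-1 - N-2)
            if beta > eps:                           # S600: steep increase detected
                return mu - o_step                   # S800: output mu - o as the optimal threshold
        n_m2, n_m1 = n_m1, n_0                       # S700: shift history and raise the threshold
        mu += o_step
    return mu - o_step                               # fallback if no jump is found within the bound
```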
In another preferred embodiment, the discrimination parameter β may also be the differential adjacent ratio B of the total number M of voxels segmented under the different thresholds μ. The total number of voxels M includes the foreground region voxels as well as the other non-zero voxels not contained in them (i.e., the non-foreground region voxels).
Let M0 be the total number of voxels segmented under the t-th threshold, M−1 the corresponding total under the (t−1)-th threshold, and M−2 the corresponding total under the (t−2)-th threshold. The difference of the total number of segmented voxels between iterations (t−2) and (t−1) is (M−1 − M−2), and the difference between iterations (t−1) and t is (M0 − M−1); thus the differential adjacent ratio of the total number of segmented voxels over iterations (t−2), (t−1) and t is B = (M0 − M−1)/(M−1 − M−2).
In another preferred embodiment, the discrimination parameter β may also be the differential adjacent ratio C of the total number L of non-foreground region voxels (L = M − N) segmented under the threshold μ.
Let L0 be the total number of non-foreground region voxels segmented under the t-th threshold, L−1 the corresponding total under the (t−1)-th threshold, and L−2 the corresponding total under the (t−2)-th threshold. The difference of the total number of segmented non-foreground region voxels between iterations (t−2) and (t−1) is (L−1 − L−2), and the difference between iterations (t−1) and t is (L0 − L−1); thus the differential adjacent ratio of the total number of segmented non-foreground region voxels over iterations (t−2), (t−1) and t is C = (L0 − L−1)/(L−1 − L−2).
In another preferred embodiment, the discrimination parameter β may also be the differential adjacent ratio D of the absolute value of the ratio of the total number N of segmented foreground region voxels to the total number M of segmented voxels (R = |N/M|) under the threshold μ.
Let R0 be the value of |N/M| under the t-th threshold, R−1 the corresponding value under the (t−1)-th threshold, and R−2 the corresponding value under the (t−2)-th threshold. The difference of this ratio between iterations (t−2) and (t−1) is (R−1 − R−2), and the difference between iterations (t−1) and t is (R0 − R−1); thus the differential adjacent ratio of |N/M| over iterations (t−2), (t−1) and t is D = (R0 − R−1)/(R−1 − R−2).
In other preferred embodiments, the discrimination parameter β may also be: the differential adjacent ratio E of the absolute value of the ratio of the total number N of segmented foreground region voxels to the total number L = M − N of segmented non-foreground region voxels (S = |N/L|) under the threshold μ; or the differential adjacent ratio F of the absolute value of the ratio of the total number L of segmented non-foreground region voxels to the total number M of segmented voxels (T = |L/M|) under the threshold μ; or the differential adjacent ratio of the absolute value of the ratio of the total number M of segmented voxels to the total number N of segmented foreground region voxels (1/R = |M/N|) under the different thresholds μ; or the differential adjacent ratio of the absolute value of the ratio of the total number L = M − N of segmented non-foreground region voxels to the total number N of segmented foreground region voxels (1/S = |L/N|) under the threshold μ; or the differential adjacent ratio of the absolute value of the ratio of the total number M of segmented voxels to the total number L of segmented non-foreground region voxels (1/T = |M/L|) under the threshold μ.
The sensitivities of these statistics differ, so the discrimination results will differ somewhat, but they are very close; they are observations of the same steep increase from different viewing angles.
For descriptive convenience, we will refer to such statistics for threshold determination collectively as discrimination parameters.
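As a small illustration of this family of discrimination parameters, the helper below computes the differential adjacent ratio at the latest step from the last three recorded values of any of the quantities above (N, M, L, or one of the ratios R, S, T); the function name and the handling of a zero previous increment are assumptions of this sketch.

```python
def differential_adjacent_ratio(values):
    """values: a sequence whose last three entries are the statistic at thresholds t-2, t-1 and t."""
    x_m2, x_m1, x_0 = values[-3], values[-2], values[-1]
    prev_diff = x_m1 - x_m2
    if prev_diff == 0:
        return float("inf")        # treat a zero previous increment as an immediate jump
    return (x_0 - x_m1) / prev_diff

# e.g. beta = differential_adjacent_ratio(list_N) gives A,
#      beta = differential_adjacent_ratio(list_M) gives B, and so on.
```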
In the present invention, when the upper bound of the segmentation threshold can be determined, the search range of the threshold μ can be predicted. In this case the total number N of voxels of the three-dimensional connected domains whose voxel count is greater than or equal to υ under each threshold μ (i.e., the total number of segmented foreground region voxels), and/or the total number M of voxels segmented under each threshold μ, and/or the total number L of non-foreground region voxels segmented under each threshold μ, and the discrimination parameter β based on these parameters, can all be obtained by parallel computation.
In view of this, the flow of an accelerated threshold determination method is shown in fig. 2.
S100, import all the two-dimensional images acquired at one time to obtain a two-dimensional image group list_0, wherein the two-dimensional images can be three-dimensionally reconstructed into a three-dimensional image of a target in the images;
S200, set a segmentation threshold interval ο and a discrimination parameter threshold ε;
S300, select a value smaller than the segmentation threshold as the threshold search lower bound μ0, and a value greater than the segmentation threshold as the threshold search upper bound μ1. The lower bound μ0 can be selected from the empirical value range of the foreground; for example, in a lung CT image the window level of the lung tissue region ranges from about −450 to −600, so a lower value in this interval (e.g., −600) can be selected as the search lower bound. Similarly, the CT values of normal lung tissue regions are, empirically, generally less than 0, so 0 can be set as the threshold search upper bound μ1. Bounding the search range effectively reduces the amount of computation and avoids interference from values in other intervals;
S400, calculate, at intervals of ο, for every threshold μ between μ0 and μ1, the total number N of voxels of the three-dimensional connected domains in the foreground region whose voxel count is greater than or equal to the set threshold υ (referred to simply as the total number of segmented foreground region voxels), and/or the total number M of segmented voxels, and/or the total number L of segmented non-foreground region voxels, and add these parameters to the queues list_N, and/or list_M, and/or list_L;
S500, set a discrimination parameter β based on the parameters determined in S400, the discrimination parameter β being used to measure whether the total number N of segmented foreground region voxels increases suddenly; calculate the discrimination parameter β under each threshold and add it to the queue list_β;
S600, in order of increasing threshold μ, sequentially take the corresponding β value from the queue list_β and judge it, until β > ε;
S700, take the threshold μ corresponding to the current β and output μ − ο as the optimal threshold.
In this method, the setting of the discrimination parameter β is identical to the setting of the discrimination parameter β in the method when the search range of the threshold μ cannot be predicted.
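A minimal sketch of this accelerated flow is given below: with a known search range [μ0, μ1], the voxel totals at every candidate threshold can be computed independently (here, as one possible choice, in a process pool) before the discrimination parameters are scanned in order of increasing threshold. It reuses foreground_voxel_count() and differential_adjacent_ratio() from the earlier sketches; the use of concurrent.futures is an implementation assumption, not something specified by the patent.

```python
import numpy as np
from concurrent.futures import ProcessPoolExecutor

def find_threshold_bounded(volume, mu0, mu1, o_step, eps, upsilon):
    thresholds = list(np.arange(mu0, mu1 + o_step, o_step))
    # S400: the counts at each threshold are independent, so they can be computed in parallel
    with ProcessPoolExecutor() as pool:
        counts = list(pool.map(foreground_voxel_count,
                               [volume] * len(thresholds), thresholds,
                               [upsilon] * len(thresholds)))
    list_N = [n for n, _ in counts]                  # queue list_N of foreground voxel totals
    # S500-S600: scan beta in order of increasing threshold until it exceeds eps
    for t in range(2, len(list_N)):
        beta = differential_adjacent_ratio(list_N[t - 2:t + 1])
        if beta > eps:
            return thresholds[t] - o_step            # S700: output mu - o as the optimal threshold
    return thresholds[-1] - o_step                   # fallback if no jump is found in the range
```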
The method handles well the image segmentation problems in which the pixel value distribution of the background region is complex, local foreground information is missing, or the foreground contains regions of strong pixel value contrast. Compared with the traditional methods, it fully combines the spatial distribution characteristics of the foreground voxels in the image with the pixel (voxel) value distribution characteristics for image segmentation and uses the change of the spatial distribution characteristics of the foreground to find the optimal segmentation threshold; it can therefore avoid background interference and, by adjusting the threshold while keeping an appropriate signal-to-noise ratio, increase the effective pixels (voxels) of the foreground region, reduce the pixel value contrast inside the foreground and increase the usable information of information-missing regions, so that the result of foreground segmentation is more accurate.
According to the second aspect of the present invention, there is provided an image threshold determination device based on three-dimensional connectivity, the device comprising:
the importing module, which is used for importing all the two-dimensional images acquired at one time to obtain a two-dimensional image group, wherein a three-dimensional image of the target in the images can be obtained by three-dimensional reconstruction of the two-dimensional images;
the parameter setting module, which is used for inputting the values or calculation modes of the set parameters, including assigning the segmentation threshold interval ο, the discrimination parameter threshold ε and the threshold search initial value μ0, and selecting the calculation mode of the discrimination parameter β;
the voxel quantity determination module, which is used for determining, under the thresholds (μ0 − 2×ο), (μ0 − ο) and μ0, the total number N of voxels of the three-dimensional connected domains in the foreground region whose voxel count is greater than or equal to a set threshold υ (referred to simply as the total number of segmented foreground region voxels), and/or the total number M of segmented voxels, and/or the total number L of segmented non-foreground region voxels;
the discrimination parameter determination module, which determines the discrimination parameter β sequentially in order of increasing threshold μ;
the threshold judging module, which is used for judging the numerical relation between the discrimination parameter β and the discrimination parameter threshold ε: if the discrimination parameter β > the discrimination parameter threshold ε, it takes the threshold μ0 corresponding to the current β and outputs μ0 − ο as the optimal threshold; if the discrimination parameter β ≤ the discrimination parameter threshold ε, the current threshold is increased by one segmentation threshold interval (μ0 ← μ0 + ο) and used as the new search threshold, and the voxel quantity determination module and the discrimination parameter determination module are started again to perform the operation under the next threshold.
Further, when the upper bound of the segmentation threshold can be determined, the search range of the threshold μ can be predicted, and the image threshold determination device based on three-dimensional connectivity can be adjusted accordingly; the device then comprises:
the importing module, which is used for importing all the two-dimensional images acquired at one time to obtain a two-dimensional image group, wherein a three-dimensional image of the target in the images can be obtained by three-dimensional reconstruction of the two-dimensional images;
the parameter setting module, which is used for inputting the values or calculation modes of the set parameters, including assigning the segmentation threshold interval ο, the discrimination parameter threshold ε, the threshold search lower bound μ0 and the threshold search upper bound μ1, and selecting the calculation mode of the discrimination parameter β;
the voxel quantity determination module, which calculates, at intervals of ο, for every threshold μ between μ0 and μ1, the total number N of voxels of the three-dimensional connected domains in the foreground region whose voxel count is greater than or equal to the set threshold υ (referred to simply as the total number of segmented foreground region voxels), and/or the total number M of segmented voxels, and/or the total number L of segmented non-foreground region voxels, and adds these parameters to the queues list_N, and/or list_M, and/or list_L;
the discrimination parameter determination module, which calculates the discrimination parameter β under each threshold and adds it to the queue list_β;
the threshold judging module, which, in order of increasing threshold μ, sequentially takes the corresponding β value from the queue list_β and judges it, until the discrimination parameter β > the discrimination parameter threshold ε, and then takes the threshold μ corresponding to the current β and outputs μ − ο as the optimal threshold.
In the above device, the setting of the discrimination parameter β is identical to the setting of the discrimination parameter β in the corresponding method.
Preferably, the apparatus further comprises a conversion module for converting the RGB color image into a gray scale image.
The implementation principle and technical effects of the device of the invention are similar to those of the method, and the corresponding technical solution for executing the method is not repeated here.
Those skilled in the art will appreciate that: all or part of the steps for implementing the above method may be performed by hardware associated with program instructions. The foregoing program may be stored in a computer-readable storage medium. The program, when executed, performs steps comprising the method described above; and the aforementioned storage medium includes: various media that can store program code, such as ROM, RAM, magnetic or optical disks.
Examples
Test environment: a desktop computer with an Intel i9 CPU and 64 GB of memory, running the Windows 10 operating system, with the ITK package installed in a C++ environment.
Example 1
Lung segmentation is carried out on lung CT images with a large-area high-density shadow, and the difference between the present threshold determination method and the traditional thresholding methods is evaluated; the flow of the method is shown in fig. 3.
(1) Input a group of lung CT images with large-area high-density shadows; one of them, shown in fig. 4, has a noticeable high-density shadow region, and all CT images in the group are arranged according to the generation order of the corresponding original images during CT scanning. (2) Set the segmentation threshold interval ο (10 by default in this example) and the discrimination parameter threshold ε (100 in this example). (3) Select a value μ0 smaller than the segmentation threshold as the threshold search initial value; according to clinical experience the window level of human lung tissue is between −450 and −600, so −600 is selected as the initial search value. (4) Compute, in all CT images, the three-dimensional connected domains whose number of voxels is greater than or equal to the set threshold υ under the threshold (μ0 − 2×ο), and calculate their total number of voxels N−2. (5) Compute, in all CT images, the three-dimensional connected domains whose number of voxels is greater than or equal to υ under the threshold (μ0 − ο), and calculate their total number of voxels N−1. (6) Compute, in all CT images, the three-dimensional connected domains whose number of voxels is greater than or equal to υ under the threshold μ0, and calculate their total number of voxels N0. (7) Calculate the discrimination parameter β = (N0 − N−1)/(N−1 − N−2). (8) If β > ε, jump to (10); otherwise continue to (9). (9) N−2 ← N−1, N−1 ← N0, μ0 ← μ0 + ο, and return to (6). (10) Take the threshold μ0 corresponding to β and output μ0 − ο. For this group of lung CT images the search stops at μ0 = −270, and the output is −280. The binarization result of fig. 4 at μ0 = −280 is shown in fig. 8. Clearly, in fig. 8 the high-density shadow region has gained a number of pixel points, including both previously hidden lung tissue pixels and some noise data added by the increased threshold; these pixel points make the outline of the high-density shadow region clearer.
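For illustration only, the run above corresponds to a call such as the following to the find_threshold() sketch given earlier; ct_volume stands for the 3D array reconstructed from the imported CT image group, and the value of the connectivity threshold υ is a placeholder, since the example does not state it.

```python
# Hypothetical invocation with the example's parameters: o = 10, eps = 100,
# initial threshold -600, and 0 as a safety upper bound (normal lung CT values are below 0).
optimal = find_threshold(ct_volume, mu0=-600, o_step=10,
                         eps=100, upsilon=10000, mu_max=0)  # upsilon value is a placeholder
# In this example the search stops at mu0 = -270 and the output optimal threshold is -280.
```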
With the artificial experience selection method, according to clinical experience the CT window width of lung tissue is about 1500–2000 and the window level about −450 to −600; the value −350, which is above the maximum window level, is selected as the threshold, and the binarization result is shown in fig. 5. With the maximum inter-class variance method, the calculated threshold is μ0 = −400, and the binarization result is shown in fig. 6. With the iterative method, the calculated threshold is likewise μ0 = −400, and the binarization result is shown in fig. 7. Clearly, because these values are lower, the high-density shadow region contains fewer pixel points and its outline is not complete enough.
Comparing the lung segmentation results obtained by the threshold determination method based on three-dimensional connectivity, the artificial experience selection method, the maximum inter-class variance method and the iterative method, the result obtained by the method based on three-dimensional connectivity in segmenting a lung CT image with a high-density shadow region is clearly better than the results of the other methods. The artificial experience selection method depends on prior knowledge and cannot adjust the threshold automatically according to the actual condition of the image. Both the maximum inter-class variance method and the iterative method depend on the pixel characteristics of the image; when the image contains a region of large pixel value contrast, such as a large-area high-density shadow region in a lung CT image whose CT values are higher than those of normal lung tissue, this contrast region, i.e., the high-density shadow region, is segmented away by these two methods, causing a large segmentation error. Compared with these two methods, the threshold determination method based on three-dimensional connectivity can retain the meaningful pixels in the high-density shadow region, so that the outline is more complete, which facilitates later image analysis and processing; although a small amount of noise is added at the edge of the outline, its interference can be reduced by later denoising and similar processing, and compared with the local loss caused by missing pixel points this drawback is worthwhile.
The invention has been described above in connection with preferred embodiments, which are, however, exemplary only and for illustrative purposes. On this basis, the invention can be subjected to various substitutions and improvements, and all fall within the protection scope of the invention.

Claims (8)

1. An image threshold determination method based on three-dimensional connectivity, characterized by comprising the following steps:
S100, import all the two-dimensional images acquired at one time to obtain a two-dimensional image group, wherein the two-dimensional images can be three-dimensionally reconstructed into a three-dimensional image of a target in the images;
S200, set a segmentation threshold interval ο and a discrimination parameter threshold ε;
S300, select a threshold search initial value μ0, the initial value μ0 being less than or equal to the segmentation threshold;
S400, separately calculate, under the thresholds μ0 − 2×ο, μ0 − ο and μ0, the total number N of segmented foreground region voxels, and/or the total number M of segmented voxels, and/or the total number L of segmented non-foreground region voxels;
S500, set a discrimination parameter β based on the parameters determined in S400, the discrimination parameter β being used to measure whether the total number N of segmented foreground region voxels increases suddenly;
S600, if the discrimination parameter β > the discrimination parameter threshold ε, jump to S800; otherwise, continue with S700;
S700, μ0 ← μ0 + ο, and return to S400;
S800, take the threshold μ0 corresponding to the current β and output μ0 − ο as the optimal threshold.
2. The method of claim 1, wherein the two-dimensional image is a gray scale image and the foreground region pixel values in the image are lower than the background region pixel values;
if the two-dimensional image is an RGB color image, converting the RGB color image into a gray image;
if the foreground region pixel values are higher than the background region pixel values in the image, taking the maximum pixel value of the current image and subtracting the current value of each pixel from the maximum value to obtain the new value of the pixel, thereby constructing a new image, whereby the case in which the foreground region pixel values are higher than the background region pixel values is reduced to the case handled by the method.
3. The method according to claim 1, wherein the discrimination parameter β is the differential adjacent ratio A of the total number N of segmented foreground region voxels;
let N0 be the total number of voxels of the three-dimensional connected domains whose voxel count is greater than the set threshold υ under the t-th threshold, N−1 the corresponding total under the (t−1)-th threshold, and N−2 the corresponding total under the (t−2)-th threshold; the difference of the total number of foreground region voxels between iterations (t−2) and (t−1) is (N−1 − N−2), and the difference between iterations (t−1) and t is (N0 − N−1); thus the differential adjacent ratio of the total number of foreground region voxels over iterations (t−2), (t−1) and t is A = (N0 − N−1)/(N−1 − N−2); or
the discrimination parameter β is the differential adjacent ratio B of the total number M of segmented voxels;
let M0 be the total number of voxels segmented under the t-th threshold, M−1 the corresponding total under the (t−1)-th threshold, and M−2 the corresponding total under the (t−2)-th threshold; the difference of the total number of segmented voxels between iterations (t−2) and (t−1) is (M−1 − M−2), and the difference between iterations (t−1) and t is (M0 − M−1); thus the differential adjacent ratio of the total number of segmented voxels over iterations (t−2), (t−1) and t is B = (M0 − M−1)/(M−1 − M−2); or
the discrimination parameter β is the differential adjacent ratio C of the total number L of segmented non-foreground region voxels;
let L0 be the total number of non-foreground region voxels segmented under the t-th threshold, L−1 the corresponding total under the (t−1)-th threshold, and L−2 the corresponding total under the (t−2)-th threshold; the difference of the total number of segmented non-foreground region voxels between iterations (t−2) and (t−1) is (L−1 − L−2), and the difference between iterations (t−1) and t is (L0 − L−1); thus the differential adjacent ratio of the total number of segmented non-foreground region voxels over iterations (t−2), (t−1) and t is C = (L0 − L−1)/(L−1 − L−2).
4. The method according to claim 1, wherein the discrimination parameter β is a differential adjacent ratio D of the absolute value of the ratio of the total number of divided foreground region voxels N to the total number of divided voxels M;
let t-th threshold mu t The absolute ratio of the total number N of the foreground region voxels divided down to the total number M of the voxels divided down is R 0 The (t-1) th threshold value mu (t-1) The absolute ratio of the total number N of the foreground region voxels divided down to the total number M of the voxels divided down is R -1 The (t-2) th threshold μ (t-2) The absolute ratio of the total number N of the foreground region voxels divided down to the total number M of the voxels divided down is R -2 The difference value of the absolute value of the ratio of the total number N of the foreground region voxels divided and the total number M of the divided voxels in the iterations of the steps (t-2) to (t-1) is (R) -1 -R -2 ) Absolute value of ratio of total number of segmented foreground region voxels N to total number of segmented voxels M in the (t-1) th and t-step iterationsThe difference value of (2) is (R) 0 -R -1 ) Thus, the differential adjacent ratio of the absolute value of the ratio of the total number of foreground region voxels segmented in the iterations of (t-2), (t-1) and t steps to the total number of segmented voxelsOr alternatively
The discrimination parameter β is the differential adjacent ratio of the absolute value of the ratio of the total number M of segmented voxels to the total number N of segmented foreground region voxels, this differential adjacent ratio being the reciprocal of the differential adjacent ratio D.
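The ratios A-D above all follow the same pattern: the difference between the two most recent counts divided by the difference between the two preceding counts. Below is a minimal Python sketch of how N, M and L could be measured under one threshold and how such a differential adjacent ratio could be computed; the helper names (count_segmented_voxels, diff_adjacent_ratio), the use of scipy.ndimage.label, and the assumption that "segmented" means voxel value ≥ μ are illustrative choices and are not taken from the patent text.

```python
import numpy as np
from scipy import ndimage

def count_segmented_voxels(volume, mu, upsilon):
    """Count voxels segmented in a 3D volume under threshold mu.

    Returns (N, M, L):
      M -- total number of segmented voxels (value >= mu; the polarity is an assumption),
      N -- segmented voxels lying in 3D connected domains larger than upsilon voxels,
      L -- remaining segmented voxels, so that M = N + L.
    """
    mask = volume >= mu                       # binary segmentation at threshold mu
    labels, _ = ndimage.label(mask)           # 3D connected components (6-connectivity by default)
    sizes = np.bincount(labels.ravel())[1:]   # component sizes; index 0 (background) dropped
    M = int(mask.sum())
    N = int(sizes[sizes > upsilon].sum())     # voxels belonging to "large" connected domains
    return N, M, M - N

def diff_adjacent_ratio(x0, x_prev, x_prev2):
    """Differential adjacent ratio of three consecutive measurements,
    e.g. A = (N_0 - N_-1) / (N_-1 - N_-2); the caller should avoid a zero denominator."""
    return (x0 - x_prev) / (x_prev - x_prev2)
```

For example, with N_-2 = 4900, N_-1 = 5000 and N_0 = 5200, A = (5200 - 5000)/(5000 - 4900) = 2. Because β is a ratio of consecutive differences, it stays close to 1 while the counts change smoothly and grows sharply at a sudden jump, which is what the β > ε test is meant to detect.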
5. The method according to any one of claims 1 to 4, wherein, when the search range of the threshold μ can be predicted, the method is implemented by the following steps:
S100, importing all the two-dimensional images acquired at one time to obtain a two-dimensional image group, wherein a three-dimensional image of the target in the images can be obtained from all the two-dimensional images through three-dimensional reconstruction;
S200, setting a segmentation threshold interval ο and a discrimination parameter threshold ε;
S300, selecting a value smaller than the segmentation threshold as the threshold search lower bound μ_0, and selecting a value greater than the segmentation threshold as the threshold search upper bound μ_1;
S400, for every threshold μ from μ_0 to μ_1 at intervals of ο, calculating the above parameters (the total number N of segmented foreground region voxels, the total number M of segmented voxels, and/or the total number L of segmented non-foreground region voxels) and adding them to the queues list_N and/or list_M and/or list_L;
S500, setting a discrimination parameter β based on the parameters measured in S400, the discrimination parameter β being used for measuring whether the total number N of segmented foreground region voxels is suddenly increased; calculating the discrimination parameter β under each threshold and adding it to a queue list_β;
S600, taking the corresponding β values from the queue list_β sequentially, in order from the smaller threshold μ to the larger, and judging them until β > ε;
S700, taking the threshold μ corresponding to the current β and outputting μ - ο as the optimal threshold.
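As a rough illustration of steps S100-S700 for the bounded search, the sketch below reuses the helpers from the previous sketch; the function name find_threshold_bounded and the choice to drive β from list_N alone (β is undefined for the first two thresholds, which need two predecessors) are assumptions made for the example, not the patent's own implementation.

```python
def find_threshold_bounded(volume, mu0, mu1, step, epsilon, upsilon):
    """Sketch of S100-S700: scan thresholds from mu0 to mu1 at intervals of `step`
    and return mu - step for the first threshold mu at which beta > epsilon."""
    n_steps = int(round((mu1 - mu0) / step))
    mus = [mu0 + k * step for k in range(n_steps + 1)]                       # S300 grid
    list_N = [count_segmented_voxels(volume, mu, upsilon)[0] for mu in mus]  # S400

    for t in range(2, len(mus)):                                             # S500/S600
        beta = diff_adjacent_ratio(list_N[t], list_N[t - 1], list_N[t - 2])
        if beta > epsilon:
            return mus[t] - step                                             # S700: output mu - o
    return None                                                              # no sudden change found
```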
6. An image threshold value determination device based on three-dimensional connectivity for implementing the method according to any one of claims 1 to 4, characterized in that it comprises:
an importing module, used for importing all the two-dimensional images acquired at one time to obtain a two-dimensional image group, wherein a three-dimensional image of the target in the images can be obtained from all the two-dimensional images through three-dimensional reconstruction;
a parameter setting module, used for inputting the values or calculation modes of the set parameters, including assigning the segmentation threshold interval ο, the discrimination parameter threshold ε and the threshold search initial value μ_0, and selecting the calculation mode of the discrimination parameter β;
a voxel quantity measuring module, used for measuring, under the thresholds μ_0 - 2ο, μ_0 - ο and μ_0, the total number N of segmented foreground region voxels, the total number M of segmented voxels, and/or the total number L of segmented non-foreground region voxels;
a discrimination parameter measuring module, which measures the discrimination parameter β sequentially in order from the smaller threshold μ to the larger;
a threshold judging module, used for judging the numerical relation between the discrimination parameter β and the discrimination parameter threshold ε: if β > ε, it takes the threshold μ_0 corresponding to the current β and outputs μ_0 - ο as the optimal threshold; if β ≤ ε, it increases the current threshold by one segmentation threshold interval to obtain a new search threshold and restarts the voxel quantity measuring module and the discrimination parameter measuring module for the next threshold.
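This module pipeline mirrors the iterative search summarized in the abstract (measure N under μ_0 - 2ο, μ_0 - ο and μ_0, judge β, and either stop or advance by ο). A hedged sketch, again reusing the helpers introduced earlier and adding an assumed safety bound mu_max that is not part of the claims, might look as follows:

```python
def find_threshold_iterative(volume, mu0, step, epsilon, upsilon, mu_max):
    """Sketch of the iterative variant: judge beta at mu, stop when beta > epsilon,
    otherwise advance mu by one segmentation threshold interval."""
    mu = mu0
    while mu <= mu_max:                                       # mu_max is an assumed safety bound
        # voxel quantity measuring: N under mu - 2o, mu - o and mu
        n_2 = count_segmented_voxels(volume, mu - 2 * step, upsilon)[0]
        n_1 = count_segmented_voxels(volume, mu - step, upsilon)[0]
        n_0 = count_segmented_voxels(volume, mu, upsilon)[0]

        beta = diff_adjacent_ratio(n_0, n_1, n_2)             # discrimination parameter measuring
        if beta > epsilon:                                    # threshold judging
            return mu - step                                  # output mu - o as the optimal threshold
        mu += step                                            # mu <- mu + o, next sub-threshold
    return None
```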
7. An image threshold value determination device based on three-dimensional connectivity for implementing the method of claim 5, characterized in that it comprises:
an importing module, used for importing all the two-dimensional images acquired at one time to obtain a two-dimensional image group, wherein a three-dimensional image of the target in the images can be obtained from all the two-dimensional images through three-dimensional reconstruction;
a parameter setting module, used for inputting the values or calculation modes of the set parameters, including assigning the segmentation threshold interval ο, the discrimination parameter threshold ε, the threshold search lower bound μ_0 and the threshold search upper bound μ_1, and selecting the calculation mode of the discrimination parameter β;
a voxel quantity measuring module, used for calculating the above parameters for every threshold μ from μ_0 to μ_1 at intervals of ο and adding them to the queues list_N and/or list_M and/or list_L;
a discrimination parameter measuring module, which calculates the discrimination parameter β under each threshold and adds it to the queue list_β;
a threshold judging module, which takes the corresponding β values from the queue list_β sequentially, in order from the smaller threshold to the larger, and judges them until the discrimination parameter β > the discrimination parameter threshold ε, then takes the threshold μ corresponding to the current β and outputs μ - ο as the optimal threshold.
8. The device according to claim 6 or 7, further comprising a conversion module for converting an RGB color image into a grayscale image.
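For the conversion module of claim 8, a standard luminance-weighted RGB-to-grayscale conversion (ITU-R BT.601 weights) is one plausible realization; the function name below is illustrative and not specified by the patent.

```python
import numpy as np

def rgb_to_gray(rgb):
    """Convert an H x W x 3 RGB image to a single-channel grayscale image
    using ITU-R BT.601 luminance weights."""
    weights = np.array([0.299, 0.587, 0.114])
    return rgb[..., :3] @ weights
```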
CN202010188542.1A 2020-03-17 2020-03-17 Image threshold value determination method and device based on three-dimensional connectivity Active CN111402284B (en)

Priority Applications (1)

Application Number Priority Date Filing Date Title
CN202010188542.1A CN111402284B (en) 2020-03-17 2020-03-17 Image threshold value determination method and device based on three-dimensional connectivity

Publications (2)

Publication Number Publication Date
CN111402284A CN111402284A (en) 2020-07-10
CN111402284B true CN111402284B (en) 2023-07-25

Family

ID=71428870

Family Applications (1)

Application Number Title Priority Date Filing Date
CN202010188542.1A Active CN111402284B (en) 2020-03-17 2020-03-17 Image threshold value determination method and device based on three-dimensional connectivity

Country Status (1)

Country Link
CN (1) CN111402284B (en)

Citations (2)

* Cited by examiner, † Cited by third party
Publication number Priority date Publication date Assignee Title
CN109658424A (en) * 2018-12-07 2019-04-19 中央民族大学 An improved robust two-dimensional OTSU threshold image segmentation method
CN109859231A (en) * 2019-01-17 2019-06-07 电子科技大学 A leaf area index extraction threshold segmentation method based on optical imagery

Family Cites Families (15)

* Cited by examiner, † Cited by third party
Publication number Priority date Publication date Assignee Title
US5060276A (en) * 1989-05-31 1991-10-22 At&T Bell Laboratories Technique for object orientation detection using a feed-forward neural network
JP2004097535A (en) * 2002-09-10 2004-04-02 Toshiba Corp Method for region segmentation of three-dimensional medical image data
JP2005117504A (en) * 2003-10-09 2005-04-28 Canon Inc Image processor and image processing method
JP4529834B2 (en) * 2005-07-29 2010-08-25 ソニー株式会社 Solid-state imaging device, driving method of solid-state imaging device, and imaging device
CN101686338B (en) * 2008-09-26 2013-12-25 索尼株式会社 System and method for partitioning foreground and background in video
CN102915530B (en) * 2011-08-01 2015-11-25 佳能株式会社 For splitting the method and apparatus of input picture
CN102637253B (en) * 2011-12-30 2014-02-19 清华大学 Video foreground object extracting method based on visual saliency and superpixel division
CA3104723C (en) * 2013-04-29 2023-03-07 Intelliview Technologies Inc. Object detection
CN103778624A (en) * 2013-12-20 2014-05-07 中原工学院 Fabric defect detection method based on optical threshold segmentation
CN104331876B (en) * 2014-10-09 2020-12-08 北京配天技术有限公司 Method for detecting straight line and processing image and related device
CN104809730B (en) * 2015-05-05 2017-10-03 上海联影医疗科技有限公司 The method and apparatus that tracheae is extracted from chest CT image
CN104537669B (en) * 2014-12-31 2017-11-07 浙江大学 The arteriovenous Segmentation Method of Retinal Blood Vessels of eye fundus image
ITUA20161570A1 (en) * 2016-03-11 2017-09-11 Gruppo Cimbali Spa Method for the automatic diagnosis of the quality of a dispensed beverage.
CN106709928B (en) * 2016-12-22 2019-12-10 湖北工业大学 fast two-dimensional maximum inter-class variance threshold method for noisy images
US10062187B1 (en) * 2017-06-28 2018-08-28 Macau University Of Science And Technology Systems and methods for reducing computer resources consumption to reconstruct shape of multi-object image

Legal Events

Date Code Title Description
PB01 Publication
SE01 Entry into force of request for substantive examination
GR01 Patent grant