CN111815613B - Liver cirrhosis disease stage identification method based on envelope line morphological feature analysis - Google Patents
Liver cirrhosis disease stage identification method based on envelope line morphological feature analysis
- Publication number
- CN111815613B (application CN202010692369.9A)
- Authority
- CN
- China
- Prior art keywords
- liver
- slope
- segment
- identification
- membrane
- Prior art date
- Legal status (The legal status is an assumption and is not a legal conclusion. Google has not performed a legal analysis and makes no representation as to the accuracy of the status listed.)
- Active
Classifications
-
- G—PHYSICS
- G06—COMPUTING; CALCULATING OR COUNTING
- G06T—IMAGE DATA PROCESSING OR GENERATION, IN GENERAL
- G06T7/00—Image analysis
- G06T7/0002—Inspection of images, e.g. flaw detection
- G06T7/0012—Biomedical image inspection
-
- G—PHYSICS
- G06—COMPUTING; CALCULATING OR COUNTING
- G06F—ELECTRIC DIGITAL DATA PROCESSING
- G06F18/00—Pattern recognition
- G06F18/20—Analysing
- G06F18/23—Clustering techniques
- G06F18/232—Non-hierarchical techniques
- G06F18/2321—Non-hierarchical techniques using statistics or function optimisation, e.g. modelling of probability density functions
- G06F18/23213—Non-hierarchical techniques using statistics or function optimisation, e.g. modelling of probability density functions with fixed number of clusters, e.g. K-means clustering
-
- G—PHYSICS
- G06—COMPUTING; CALCULATING OR COUNTING
- G06F—ELECTRIC DIGITAL DATA PROCESSING
- G06F18/00—Pattern recognition
- G06F18/20—Analysing
- G06F18/24—Classification techniques
- G06F18/241—Classification techniques relating to the classification model, e.g. parametric or non-parametric approaches
- G06F18/2411—Classification techniques relating to the classification model, e.g. parametric or non-parametric approaches based on the proximity to a decision surface, e.g. support vector machines
-
- G—PHYSICS
- G06—COMPUTING; CALCULATING OR COUNTING
- G06Q—INFORMATION AND COMMUNICATION TECHNOLOGY [ICT] SPECIALLY ADAPTED FOR ADMINISTRATIVE, COMMERCIAL, FINANCIAL, MANAGERIAL OR SUPERVISORY PURPOSES; SYSTEMS OR METHODS SPECIALLY ADAPTED FOR ADMINISTRATIVE, COMMERCIAL, FINANCIAL, MANAGERIAL OR SUPERVISORY PURPOSES, NOT OTHERWISE PROVIDED FOR
- G06Q10/00—Administration; Management
- G06Q10/04—Forecasting or optimisation specially adapted for administrative or management purposes, e.g. linear programming or "cutting stock problem"
-
- G—PHYSICS
- G06—COMPUTING; CALCULATING OR COUNTING
- G06T—IMAGE DATA PROCESSING OR GENERATION, IN GENERAL
- G06T7/00—Image analysis
- G06T7/10—Segmentation; Edge detection
- G06T7/11—Region-based segmentation
-
- G—PHYSICS
- G06—COMPUTING; CALCULATING OR COUNTING
- G06T—IMAGE DATA PROCESSING OR GENERATION, IN GENERAL
- G06T7/00—Image analysis
- G06T7/90—Determination of colour characteristics
-
- G—PHYSICS
- G06—COMPUTING; CALCULATING OR COUNTING
- G06T—IMAGE DATA PROCESSING OR GENERATION, IN GENERAL
- G06T2207/00—Indexing scheme for image analysis or image enhancement
- G06T2207/10—Image acquisition modality
- G06T2207/10132—Ultrasound image
-
- G—PHYSICS
- G06—COMPUTING; CALCULATING OR COUNTING
- G06T—IMAGE DATA PROCESSING OR GENERATION, IN GENERAL
- G06T2207/00—Indexing scheme for image analysis or image enhancement
- G06T2207/20—Special algorithmic details
- G06T2207/20081—Training; Learning
-
- G—PHYSICS
- G06—COMPUTING; CALCULATING OR COUNTING
- G06T—IMAGE DATA PROCESSING OR GENERATION, IN GENERAL
- G06T2207/00—Indexing scheme for image analysis or image enhancement
- G06T2207/20—Special algorithmic details
- G06T2207/20084—Artificial neural networks [ANN]
-
- G—PHYSICS
- G06—COMPUTING; CALCULATING OR COUNTING
- G06T—IMAGE DATA PROCESSING OR GENERATION, IN GENERAL
- G06T2207/00—Indexing scheme for image analysis or image enhancement
- G06T2207/30—Subject of image; Context of image processing
- G06T2207/30004—Biomedical image processing
- G06T2207/30056—Liver; Hepatic
-
- Y—GENERAL TAGGING OF NEW TECHNOLOGICAL DEVELOPMENTS; GENERAL TAGGING OF CROSS-SECTIONAL TECHNOLOGIES SPANNING OVER SEVERAL SECTIONS OF THE IPC; TECHNICAL SUBJECTS COVERED BY FORMER USPC CROSS-REFERENCE ART COLLECTIONS [XRACs] AND DIGESTS
- Y02—TECHNOLOGIES OR APPLICATIONS FOR MITIGATION OR ADAPTATION AGAINST CLIMATE CHANGE
- Y02A—TECHNOLOGIES FOR ADAPTATION TO CLIMATE CHANGE
- Y02A90/00—Technologies having an indirect contribution to adaptation to climate change
- Y02A90/10—Information and communication technologies [ICT] supporting adaptation to climate change, e.g. for weather forecasting or climate simulation
Landscapes
- Engineering & Computer Science (AREA)
- Theoretical Computer Science (AREA)
- Physics & Mathematics (AREA)
- General Physics & Mathematics (AREA)
- Data Mining & Analysis (AREA)
- Computer Vision & Pattern Recognition (AREA)
- Business, Economics & Management (AREA)
- Economics (AREA)
- Human Resources & Organizations (AREA)
- Evolutionary Biology (AREA)
- General Engineering & Computer Science (AREA)
- Bioinformatics & Computational Biology (AREA)
- Bioinformatics & Cheminformatics (AREA)
- Quality & Reliability (AREA)
- Artificial Intelligence (AREA)
- Evolutionary Computation (AREA)
- Life Sciences & Earth Sciences (AREA)
- Strategic Management (AREA)
- Entrepreneurship & Innovation (AREA)
- General Business, Economics & Management (AREA)
- Development Economics (AREA)
- Marketing (AREA)
- Operations Research (AREA)
- Probability & Statistics with Applications (AREA)
- Tourism & Hospitality (AREA)
- Game Theory and Decision Science (AREA)
- Health & Medical Sciences (AREA)
- General Health & Medical Sciences (AREA)
- Medical Informatics (AREA)
- Nuclear Medicine, Radiotherapy & Molecular Imaging (AREA)
- Radiology & Medical Imaging (AREA)
- Image Processing (AREA)
- Image Analysis (AREA)
Abstract
The invention provides a liver cirrhosis disease stage identification method based on envelope line morphological feature analysis, which comprises the following steps: acquiring a predicted membrane and a real membrane of the liver capsule from an ultrasound image; obtaining the variance VoS of the segment slopes, the coefficient of variation CV of the adjacent-segment slope differences and the number of fluctuation changes NoF from the predicted membrane, and the number of line segments NoL from the real membrane; in the preliminary identification, VoS, NoF and CV are input as features to distinguish the two cases "normal-early" and "mid-late"; if the preliminary identification result is "normal-early", NoL and CV are input as features into a mild identification model to distinguish normal from mild liver cirrhosis; if the preliminary identification result is "mid-late", NoL and VoS are input as features into a mid-late identification model to distinguish moderate from severe liver cirrhosis. The method analyzes features of the predicted membrane and the real membrane of the liver capsule jointly, and can fully display the morphological characteristics of the liver capsule.
Description
Technical Field
The invention relates to the field of medical images, and in particular to a liver cirrhosis disease stage identification method based on envelope line morphological feature analysis.
Background
Cirrhosis is a clinically common chronic progressive liver disease, defined histologically as the development of fibrous bands around regenerative nodules caused by chronic liver injury. As cirrhosis progresses, portal hypertension and end-stage liver disease can result, with high mortality, so early detection and treatment of cirrhosis are essential. At present, the diagnosis of liver cirrhosis relies mainly on the subjective, manual judgment of physicians, and individual diagnoses can differ greatly owing to objective factors such as experience, so a computer-aided cirrhosis diagnosis system based on quantitative analysis is highly necessary.
Traditional computer-aided diagnosis methods for cirrhosis based on liver capsule images mainly extract features from a single real membrane or a single predicted membrane and then classify them with a machine learning algorithm. This approach may lose part of the features, so the classification accuracy is low, the disease stage cannot be judged accurately, and clinical auxiliary diagnosis is adversely affected. The current needs of clinical auxiliary diagnosis therefore cannot be met.
Disclosure of Invention
Aiming at the defects in the prior art, the invention aims to provide a liver cirrhosis disease stage identification method based on capsule line morphological feature analysis, which obtains a predicted membrane and a real membrane of a liver capsule from an ultrasonic image, obtains identification features from the predicted membrane and the real membrane, and realizes stage identification of liver cirrhosis.
The technical scheme provided by the invention is as follows:
a method for stage identification of cirrhosis disease based on envelope line morphological feature analysis, comprising:
(S1) obtaining a predicted membrane and a real membrane of a liver capsule according to an ultrasonic image;
(S2) obtaining variance VoS of the segment slope of the liver capsule, variation coefficient CV of the adjacent segment slope difference and the fluctuation change times NoF from the prediction film; and obtaining the number of segments NoL of the liver capsule from the real membrane;
(S3) inputting the variance VoS of the sectional slope, the fluctuation change times NoF and the variation coefficient CV of the adjacent section slope difference into a preliminary identification model as input features, wherein the two judgment types output by the preliminary identification model are normal-early stage and medium-late stage;
(S4) performing secondary classification according to the preliminary identification result; in the secondary classification,
if the primary recognition result is "normal-early stage", the number NoL of line segments and the variation coefficient CV of adjacent segment slope difference are taken as features to be input into a mild recognition model, and the two recognition types output by the mild recognition model are "normal" and "mild liver cirrhosis";
if the result of the preliminary identification is "mid-late stage", the number of line segments NoL and the variance VoS of the segment slope are input as features into a mid-late stage identification model, and the two identification types output by the mid-late stage identification model are "moderate liver cirrhosis" and "severe liver cirrhosis".
A further improvement of the invention is that in the process of obtaining the number of segments NoL, the number of break points in the real membrane is counted, and the number of segments NoL of the liver capsule is determined according to the number of break points.
The invention further improves that when the variance VoS of the segmented slope of the liver capsule is obtained, the prediction film of the liver capsule is divided into a plurality of segments according to a preset interval, the slope of each segment is calculated respectively, and the variance of the slope of each segment is calculated to obtain the variance VoS of the slope of the segment.
A further improvement of the invention is that, in the process of obtaining the coefficient of variation CV of the adjacent-segment slope differences, the slope difference between each pair of adjacent segments of the predicted membrane is calculated, and the mean K̄_d and the standard deviation STD_Kd of the slope differences are obtained; the coefficient of variation CV of the adjacent-segment slope difference is then obtained as the ratio of the standard deviation STD_Kd to the mean K̄_d.
A further improvement of the present invention is that in the process of obtaining the number of fluctuation changes NoF, the absolute value of the slope difference between each adjacent segment is calculated, and the number of times that the absolute value of the slope difference is larger than the fluctuation threshold is counted and taken as the number of fluctuation changes NoF.
A further improvement of the present invention is that each segment has the same number of pixels during the process of separating the predictive membrane of the liver capsule; when calculating the slope of the segment, the slope between two endpoints of the segment is used as the segment slope, or the slope of the straight line after fitting each pixel in the segment is used as the segment slope.
A further improvement of the present invention is that the ultrasound image is a superficial cut image of the liver, which includes a liver envelope at the upper edge of the liver.
The invention is further improved in that the preliminary recognition model is a support vector machine model, obtained by training in a mode of ten times of five-fold cross validation.
The invention further improves that the mild recognition model and the middle and later stage recognition model are both K-means clustering models and are obtained by training in a mode of ten-time five-fold cross validation.
Compared with the prior art, the invention has the following beneficial effects:
1) The characteristics of the liver capsule morphology can be fully displayed by combining the characteristics of the liver capsule predicted membrane and the real membrane.
2) The two-stage classification model of the support vector machine and the K-means clustering is utilized, so that the classification precision is improved, and reliable guarantee is provided for clinical auxiliary diagnosis.
Drawings
Other features, objects and advantages of the present invention will become more apparent upon reading of the detailed description of non-limiting embodiments, given with reference to the accompanying drawings in which:
FIG. 1 is a flow chart of a method for stage identification of cirrhosis disease based on envelope line morphology analysis;
FIG. 2 is a schematic diagram of an identification model employed in the present invention;
FIG. 3 is a flow chart of a liver cirrhosis ultrasound image liver capsule extraction method based on digital image processing techniques;
FIG. 4 is a schematic diagram of a sliding window detection principle;
FIG. 5 is a schematic diagram of a liver capsule traversal search algorithm without ascites images;
FIG. 6 is a schematic diagram of a liver capsule traversal search algorithm with ascites images;
fig. 7 is a schematic diagram of a real membrane extraction process.
Detailed Description
The present invention will be described in detail with reference to specific examples. The following examples will assist those skilled in the art in further understanding the present invention, but are not intended to limit the invention in any way. It should be noted that variations and modifications could be made by those skilled in the art without departing from the inventive concept. These are all within the scope of the present invention.
As shown in fig. 1, an embodiment of the present invention includes a method for identifying a stage of a liver cirrhosis disease based on analysis of morphological characteristics of an envelope line, the method including the steps of:
(S1) obtaining a predicted membrane and a real membrane of the liver capsule from the ultrasound image. The ultrasound image used in this embodiment is a superficial section image of the liver, whose upper edge is the liver capsule. As shown in fig. 5, the lower half of the superficial liver section image is liver parenchyma, the upper half shows other organs, and the liver capsule lies in the middle of the image. The predicted membrane contains the trend information of the liver capsule, while the real membrane contains the fluctuation condition of the liver capsule and therefore has rich detail; combining the predicted membrane and the real membrane provides abundant features for the automatic downstream judgment of liver disease.
(S2) obtaining variance VoS of the segment slope of the liver capsule, variation coefficient CV of the adjacent segment slope difference and the fluctuation change times NoF from the prediction film; and the number of segments NoL of the liver capsule is obtained from the real membrane. The above features are used as input features for subsequent recognition tasks.
Specifically, the predicted membrane of the liver capsule is the preliminary recognition result of the liver capsule obtained from the ultrasound image, while the number of line segments NoL of the liver capsule, which represents the continuity of the liver capsule, is calculated from the real membrane. In the process of acquiring the Number of Line segments (NoL), the number of break points in the real membrane is counted, and the number of line segments NoL of the liver capsule is determined according to the number of break points. The expression is as follows:
NoL = k
L = {l_i | i = 1, ..., k}   (1)
where L = {l_i | i = 1, ..., k} denotes the set of line segments that make up the entire liver capsule. In this embodiment, the "line segment" in "number of line segments NoL" is not the geometric straight line between two points; it refers to a segment of the real membrane of the liver capsule between two break points, which may be a straight line or a curve.
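As an illustration of how NoL could be computed, the following Python sketch assumes the real membrane has already been reduced to one y-coordinate per image column, with columns containing no capsule pixel marked as NaN; this representation and the function name are assumptions for the sketch, not part of the patent.

```python
import numpy as np

def count_line_segments(capsule_y):
    """Count the line segments (NoL) of the real membrane.

    capsule_y: 1-D array with one entry per image column; np.nan marks a
    column where the real membrane has no pixel (a break point).
    Returns the number of contiguous runs of valid columns, i.e. NoL = k.
    """
    valid = ~np.isnan(np.asarray(capsule_y, dtype=float))
    # a segment starts wherever a valid column follows an invalid one
    starts = valid & ~np.r_[False, valid[:-1]]
    return int(starts.sum())
```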
The variance (Variance of slope, voS) of the segment slope of the liver capsule and the coefficient of variation CV of the adjacent segment slope difference are used to represent the smoothness of the liver capsule. Smoothness mainly describes the fluctuation condition of the liver capsule, and can effectively describe the overall trend and fluctuation condition of the liver capsule. In order to avoid the influence of the intermittent region on the feature analysis, in the embodiment, the segments are divided by the same interval (the same number of pixels) respectively, and then the segment slope in each segment is comprehensively analyzed, so that the integral liver capsule feature is obtained.
When the variance VoS of the segment slope of the liver capsule is obtained, the prediction film of the liver capsule is divided into a plurality of segments according to a preset interval, the slope of each segment is calculated respectively, and the variance of the slope of each segment is calculated to obtain the variance VoS of the segment slope.
Specifically, assume that there are break points in the liver capsule image, so that the predicted membrane is divided by the break points into m sections (i = 1, 2, ..., m). Each section of the predicted membrane is then divided into segments at the same interval P, the i-th section containing n_i segments, so that the whole liver capsule image contains
N_L = n_1 + n_2 + ... + n_m   (2)
segments in total. The slope of each segment within each section is given by formula 3, and the slope variance of the whole liver capsule by formula 4:
K_ij = (y_Fij - y_Eij) / (x_Fij - x_Eij)   (3)
VoS = (1/N_L) × Σ (K_ij - K̄)²   (4)
where K_ij denotes the slope of the j-th segment in the i-th section, (x_Eij, y_Eij) and (x_Fij, y_Fij) are the coordinates of the first and last pixel points of the segment, and K̄ denotes the mean of all segment slopes over the whole liver capsule. In this embodiment the slope between the two end points of a segment is used as the segment slope; in an alternative implementation, the slope of a straight line fitted to all pixels in the segment may be used instead.
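A minimal sketch of the segmentation and of VoS, under the same assumed per-column representation of the predicted membrane as in the previous sketch; the handling of runs shorter than the interval P (they are simply skipped) is an assumption.

```python
import numpy as np

def segment_slopes(pred_y, interval_p):
    """Split the predicted membrane into segments of interval_p columns and
    return the end-point slope of each segment (formula (3)).
    pred_y: one y-coordinate per column, np.nan where the membrane is broken."""
    y = np.asarray(pred_y, dtype=float)
    slopes = []
    # break the column indices into sections at the break points (NaN columns)
    sections = np.split(np.arange(len(y)), np.where(np.isnan(y))[0])
    for sec in sections:
        sec = sec[~np.isnan(y[sec])]
        # cut each section into segments of interval_p columns
        for j in range(0, len(sec) - interval_p + 1, interval_p):
            x0, x1 = sec[j], sec[j + interval_p - 1]
            slopes.append((y[x1] - y[x0]) / (x1 - x0))
    return np.array(slopes)

def variance_of_slope(slopes):
    """VoS: variance of all segment slopes (formula (4))."""
    return float(np.var(slopes))
```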
The coefficient of variation is a statistic that measures the degree of variation of the observed values of an index. On the basis of the segment slopes obtained above, this embodiment therefore describes the fluctuation of the liver capsule more effectively by analyzing the degree of variation of the slope differences of adjacent segments.
In the process of calculating the coefficient of variation CV of the adjacent-segment slope differences, the slope difference K_d between each pair of adjacent segments of the predicted membrane is calculated, and the mean K̄_d and the standard deviation STD_Kd of these slope differences are obtained (formula 5). The coefficient of variation CV of the adjacent-segment slope difference is then the ratio of the standard deviation to the mean, as shown in formula 6:
CV = STD_Kd / K̄_d   (6)
where K_d denotes the slope difference between adjacent segments, STD_Kd the standard deviation of the slope differences, and K̄_d their mean.
In the process of acquiring the number of fluctuation changes NoF, the absolute value of the slope difference between each pair of adjacent segments is calculated, and the number of times this absolute value exceeds the fluctuation threshold is counted and taken as the number of fluctuation changes NoF. In this embodiment the fluctuation threshold is 0.3: whenever the absolute slope difference |K_d| between adjacent segments is larger than 0.3, it is counted as one fluctuation. The total number of fluctuations in the predicted membrane is the number of fluctuations (Number of Fluctuations, NoF).
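Continuing the sketch above, CV and NoF could be obtained from the array of segment slopes as follows; the patent does not state how a near-zero mean slope difference should be handled, so that caveat is left to the caller.

```python
import numpy as np

def cv_and_nof(slopes, fluctuation_threshold=0.3):
    """CV of the adjacent-segment slope differences (formula (6)) and the
    number of fluctuations NoF (count of |K_d| above the threshold)."""
    k_d = np.diff(slopes)                     # slope differences of adjacent segments
    cv = float(np.std(k_d) / np.mean(k_d))    # CV = STD_Kd / mean(K_d)
    nof = int(np.sum(np.abs(k_d) > fluctuation_threshold))
    return cv, nof
```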
(S3) inputting the variance VoS of the sectional slope, the fluctuation change times NoF and the variation coefficient CV of the adjacent section slope difference into a preliminary recognition model as input features, wherein the two judgment types outputted by the preliminary recognition model are 'normal-early stage' and 'medium-late stage'.
In the implementation, as shown in fig. 2, the three features, namely the variance VoS of the segment slopes, the number of fluctuation changes NoF and the coefficient of variation CV of the adjacent-segment slope differences, form a three-dimensional vector that is input into the preliminary recognition model. The preliminary recognition model is a support vector machine model trained in a mode of ten times of five-fold cross validation, and it outputs one of the two coarse judgment classes "normal-early" and "mid-late". The recognition results over the cross-validation runs are averaged to obtain the recognition accuracy of the model.
(S4) performing secondary classification according to the preliminary identification result. As shown in fig. 2, in the secondary classification the corresponding model must be chosen according to the preliminary recognition result; specifically:
if the primary recognition result is "normal-early stage", the number of line segments NoL and the variance coefficient CV of the adjacent segment slope difference are taken as features to be input into a mild recognition model, and the two recognition types output by the mild recognition model are "normal" and "mild liver cirrhosis". The mild recognition model can analyze continuity and smoothness of liver capsule to obtain a final recognition result.
If the result of the preliminary identification is "mid-late stage", the number of line segments NoL and the variance VoS of the segment slope are input as features into the mid-late stage identification model, and the two identification types output by the mid-late stage identification model are "moderate liver cirrhosis" and "severe liver cirrhosis".
The mild recognition model and the mid-late stage recognition model are both K-means clustering models, obtained by training in a mode of ten times of five-fold cross validation. The recognition results of each model are averaged over the cross-validation runs and taken as its recognition accuracy.
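The two-stage classifier could be assembled roughly as in the following sketch. The feature ordering [VoS, NoF, CV, NoL], the label coding, the RBF kernel and the use of scikit-learn are assumptions; since K-means is unsupervised, mapping its two clusters onto "normal"/"mild" (or "moderate"/"severe") would still require a separate labeling step.

```python
import numpy as np
from sklearn.svm import SVC
from sklearn.cluster import KMeans
from sklearn.model_selection import RepeatedStratifiedKFold, cross_val_score

def train_two_stage(features, labels):
    """Two-stage classifier sketch: an SVM separates "normal-early" from
    "mid-late"; two K-means models refine each coarse class.
    features: (n, 4) array ordered [VoS, NoF, CV, NoL] (assumed ordering).
    labels:   0 = normal, 1 = mild, 2 = moderate, 3 = severe (assumed coding)."""
    coarse = (labels >= 2).astype(int)                 # normal-early vs mid-late
    svm = SVC(kernel="rbf")
    cv = RepeatedStratifiedKFold(n_splits=5, n_repeats=10, random_state=0)
    acc = cross_val_score(svm, features[:, :3], coarse, cv=cv).mean()
    svm.fit(features[:, :3], coarse)

    early = features[coarse == 0][:, [3, 2]]           # NoL, CV
    late = features[coarse == 1][:, [3, 0]]            # NoL, VoS
    mild_model = KMeans(n_clusters=2, n_init=10).fit(early)
    late_model = KMeans(n_clusters=2, n_init=10).fit(late)
    return svm, mild_model, late_model, acc
```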
Step (S1) of the present embodiment is for processing a liver-superficial-section image, which is an ultrasonic image acquired at a specific position of a patient. The image of the superficial section of the liver is shown in fig. 5, which includes the liver capsule at the upper edge of the liver, the lower half of the image being liver parenchyma and the upper half being other organs.
As shown in fig. 3, in step (S1) of this embodiment, extracting the predicted and actual membranes of the liver capsule includes the steps of:
(S11) traversing an upper portion of the ultrasound image using a sliding window algorithm to identify whether there is a liver ascites region in the ultrasound image. The method specifically comprises the following steps:
(S111) traversing the upper half of the ultrasound image with an L×L square window that slides from left to right and from top to bottom with a step of L; during the traversal, the average gray level of each window is obtained and compared with a window threshold, and if the average gray level is smaller than the window threshold, it is judged that the liver-ascites feature is present in that window. The sliding-window detection principle is shown in fig. 4.
Specifically, if the size of the ultrasound image is M×N, the size of the sliding-window traversal area (the upper half of the ultrasound image) is M×N/2. When the window traverses one row or one column with step L, the numbers of windows given by formula 7 are obtained, and when the whole upper half of the ultrasound image has been traversed, the total number of windows given by formula 8 is obtained.
S_r = M/L;
S_c = (N/2)/L   (7)
S_a = S_r × S_c   (8)
where S_r, S_c and S_a denote the number of sliding windows in a row, in a column, and in the entire upper half of the ultrasound image, respectively. The window threshold used to judge whether the liver-ascites feature is present in a window is 60.
(S112) counting the number S_f of windows that contain the liver-ascites feature and dividing it by the total number S_a of windows in the upper half of the ultrasound image to obtain the liver-ascites feature coefficient; when the liver-ascites feature coefficient is larger than a fixed threshold P, it is judged that the ultrasound image contains a liver-ascites region. The criterion for judging whether an ascites region is present is shown in Table 1.
TABLE 1 Criterion for judging whether there is an ascites area in an ultrasound image

Condition | Judgment
---|---
S_f / S_a > P | Liver-ascites region present
S_f / S_a ≤ P | No liver-ascites region
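A sketch of the sliding-window check, assuming the image is a grayscale NumPy array; the window size L = 16 and the ratio threshold standing in for P are illustrative values only (the patent fixes the gray threshold at 60 but does not disclose P).

```python
import numpy as np

def detect_ascites(image, window=16, gray_threshold=60, ratio_threshold=0.1):
    """Sliding-window check for a liver-ascites region in the upper half of
    the ultrasound image (steps S111-S112)."""
    upper = image[: image.shape[0] // 2]
    dark_windows = total = 0
    for r in range(0, upper.shape[0] - window + 1, window):
        for c in range(0, upper.shape[1] - window + 1, window):
            total += 1
            if upper[r:r + window, c:c + window].mean() < gray_threshold:
                dark_windows += 1        # this window shows the ascites feature
    return total > 0 and dark_windows / total > ratio_threshold
```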
(S12) processing the ultrasonic image by using a Gaussian blur algorithm to reduce image noise and detail level.
In this step, the ultrasound image is processed with a Gaussian blur filter, defined as shown in formula (9):
G(r) = 1/(2πσ²) · e^(-r²/(2σ²))   (9)
where r is the blur radius and σ is the standard deviation of the normal distribution. In two dimensions, a convolution kernel built from this distribution is applied to the original image, so that the value of each pixel becomes a weighted average of its own value and the values of the surrounding neighboring pixels. The original pixel has the largest Gaussian weight, and the weights of neighboring pixels decrease as their distance from the original pixel increases, so denoising of the ultrasound image is achieved while the edges are preserved.
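For reference, the blur step might look like the following with OpenCV; the kernel size and σ are illustrative, and the explicit kernel is shown only to make the weighted-average interpretation of formula (9) concrete.

```python
import cv2
import numpy as np

def gaussian_denoise(image, sigma=1.0, ksize=5):
    """Step (S12): Gaussian blur of the grayscale ultrasound image.
    sigma and ksize are illustrative values, not taken from the patent."""
    return cv2.GaussianBlur(image, (ksize, ksize), sigmaX=sigma)

# explicit 2-D Gaussian weights for a 5x5 neighbourhood
sigma = 1.0
xx, yy = np.meshgrid(np.arange(-2, 3), np.arange(-2, 3))
kernel = np.exp(-(xx**2 + yy**2) / (2 * sigma**2)) / (2 * np.pi * sigma**2)
kernel /= kernel.sum()   # normalise so the weights sum to 1
```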
(S13) carrying out image enhancement, first locally and then globally, on the Gaussian-blurred ultrasound image using a multi-scale detail enhancement and fuzzy set transformation method. The method specifically comprises the following steps:
(S131) three Gaussian kernel functions of different scales are each convolved with the Gaussian-blurred ultrasound image I to obtain three Gaussian-blurred images B_1, B_2, B_3, expressed as:
B_1 = G_1 * I, B_2 = G_2 * I, B_3 = G_3 * I   (10)
where G_1, G_2, G_3 are Gaussian kernels with standard deviations of 1, 2 and 4, respectively;
(S132) difference images are formed between the ultrasound image I and the Gaussian-blurred images B_1, B_2, B_3 to obtain three detail images D_1, D_2, D_3, expressed as:
D_1 = I - B_1, D_2 = B_1 - B_2, D_3 = B_2 - B_3   (11)
(S133) different weights are set to fuse the detail images into a final detail image D', expressed as:
D' = (1 - ω_1 × sgn(D_1)) × D_1 + ω_2 × D_2 + ω_3 × D_3   (12)
where ω_1, ω_2, ω_3 are the weights; normally ω_1, ω_2 and ω_3 are set to 0.5, 0.5 and 0.25, respectively.
(S134) the detail-enhanced image is enhanced globally using the fuzzy set transformation. With this algorithm, dark pixels in the image become darker and bright pixels become brighter, which makes it well suited to the global enhancement of liver ultrasound images and better meets the requirements of the subsequent morphological processing.
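A sketch of the multi-scale detail enhancement of formulas (10)-(12); adding the fused detail image back onto the input and clipping to [0, 255] is an assumption, and the fuzzy-set global enhancement of step (S134) is not reproduced here.

```python
import cv2
import numpy as np

def multiscale_detail_enhance(img, w1=0.5, w2=0.5, w3=0.25):
    """Formulas (10)-(12): three Gaussian blurs with sigma 1, 2, 4,
    three difference images, then weighted fusion into a detail image."""
    I = img.astype(np.float32)
    b1 = cv2.GaussianBlur(I, (0, 0), sigmaX=1)
    b2 = cv2.GaussianBlur(I, (0, 0), sigmaX=2)
    b3 = cv2.GaussianBlur(I, (0, 0), sigmaX=4)
    d1, d2, d3 = I - b1, b1 - b2, b2 - b3
    detail = (1 - w1 * np.sign(d1)) * d1 + w2 * d2 + w3 * d3   # formula (12)
    return np.clip(I + detail, 0, 255).astype(np.uint8)        # assumed combination
```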
(S14) carrying out binarization processing on the ultrasonic image after image enhancement, and removing isolated noise points in the binary image by using an island removing algorithm to obtain a binary ultrasonic image.
In this step, the denoised and enhanced ultrasound image is first binarized with an adaptive threshold algorithm. Because fibrous cord-like structures in the liver parenchyma cause the binarized image to contain some isolated noise points, an area threshold must be set before the morphological processing, and isolated noise regions smaller than this threshold are removed, giving a cleaner binary ultrasound image.
(S15) processing the binary ultrasound image with a morphological closing operation; in the processed binary ultrasound image, the lower part is the liver image region and the upper part is a single-color independent connected region. A morphological closing operation consists of a dilation followed by an erosion. The resulting binary ultrasound image is shown in fig. 5.
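Steps (S14) and (S15) could be realized along these lines with OpenCV; the adaptive-threshold block size, the island area threshold and the closing kernel size are illustrative values, not taken from the patent.

```python
import cv2
import numpy as np

def binarize_and_clean(enhanced, min_area=50, close_size=15):
    """Steps S14-S15: adaptive threshold, removal of small isolated regions
    ("islands"), then a morphological closing (dilation followed by erosion)."""
    binary = cv2.adaptiveThreshold(enhanced, 255, cv2.ADAPTIVE_THRESH_MEAN_C,
                                   cv2.THRESH_BINARY, 31, 5)
    n, lbl, stats, _ = cv2.connectedComponentsWithStats(binary, connectivity=8)
    for i in range(1, n):                          # label 0 is the background
        if stats[i, cv2.CC_STAT_AREA] < min_area:
            binary[lbl == i] = 0                   # drop isolated noise points
    kernel = cv2.getStructuringElement(cv2.MORPH_RECT, (close_size, close_size))
    return cv2.morphologyEx(binary, cv2.MORPH_CLOSE, kernel)
```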
(S16) searching the processed binary ultrasound image with a traversal algorithm to find the set of pixel points corresponding to the liver capsule, which forms the predicted membrane of the liver capsule; the predicted membrane is then fused into the original ultrasound image, and the pseudo-membrane pixels are removed from the predicted membrane with a gray-level difference algorithm to obtain the real membrane of the liver capsule. This step consists of two parts: obtaining the predicted membrane of the liver capsule and obtaining the real membrane of the liver capsule. For the predicted membrane, the two cases with and without liver ascites must be considered. Specifically:
as shown in fig. 5, searching for a pixel point corresponding to the liver capsule in an ultrasound image without liver ascites includes the following steps:
(S1601) selecting a plurality of starting points at the top of the binary ultrasound image;
(S1602) searching downward from each starting point for a pixel whose color is opposite to that of the single-color connected region at the top of the binary ultrasound image, and taking that pixel as the pixel corresponding to the liver capsule.
In the binary ultrasound image, the pixel value of the liver parenchyma in the lower part of the image is 0 and the pixel value of the single connected region in the upper part is 1. In the implementation, every pixel at the top edge of the binary ultrasound image is taken in turn as a starting point. From a given starting point, the search traverses downward along the positive direction of the ordinate; when a pixel with gray value 0 is detected, the search in that column stops and the point is defined as a point P_1* on the predicted membrane of the liver capsule. The starting point P_1 is then translated along the x-axis from its initial position in steps of 1 pixel to obtain the starting points P_1, P_2, P_3, ..., P_m in turn, and the traversal search is repeated from each of them, finally yielding the m pixel points P_1*, P_2*, P_3*, ..., P_m* that form the predicted membrane of the liver capsule.
As shown in fig. 6, for an ultrasound image with liver ascites, finding a pixel point corresponding to the liver capsule includes the steps of:
(S1611) selecting a plurality of starting points at the top of the binary ultrasound image;
(S1612) searching downward from each starting point to find a pixel point corresponding to the liver capsule;
in the process of searching downward from a given starting point, the first pixel whose color is opposite to that of the single-color connected region at the top of the binary ultrasound image is found; the search then continues downward until a pixel with the same color as the top connected region is found again, and that pixel is taken as the pixel corresponding to the liver capsule. If no such pixel is found, the starting point is skipped.
In the case of liver ascites, the starting points are selected in the same way as in the case without ascites; the difference lies in how each column of pixels is traversed. In the specific implementation, during the traversal of each column the gray value of the starting point and of the single connected region containing it is 1. When a pixel with gray value 0 is detected, the search continues in the y-axis direction until a pixel with gray value 1 is detected again; that pixel is defined as a pixel on the liver capsule. All the pixel points found in this way, P_f1*, P_f2*, P_f3*, ..., P_fm*, are stored and form the predicted membrane of the liver capsule for an ultrasound image with ascites. Furthermore, because the liver capsule normally has break points when ascites is present, a column passing through a broken part will not yield a pixel with gray value 1 after the pixel with gray value 0 has been detected; in that case the column is simply skipped and the search continues with the next column, so the predicted membrane contains no pixel in that column.
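The column-wise traversal search for the predicted membrane, covering both the ascites-free case (take the first 0-valued pixel) and the ascites case (continue until a 1-valued pixel reappears, skipping broken columns), could be sketched as follows; the per-column output representation is an assumption.

```python
import numpy as np

def predicted_membrane(binary, has_ascites):
    """Column-wise traversal search (steps S1601-S1612).
    binary: 2-D array with 1 for the top connected region and 0 for liver
    parenchyma; returns one y-coordinate per column, np.nan where none is found."""
    h, w = binary.shape
    ys = np.full(w, np.nan)
    for x in range(w):
        col = binary[:, x]
        zeros = np.where(col == 0)[0]
        if zeros.size == 0:
            continue
        if not has_ascites:
            ys[x] = zeros[0]                       # first 0 below the top region
        else:
            # ascites: after the first 0, search on until gray value 1 reappears
            ones_below = np.where(col[zeros[0]:] == 1)[0]
            if ones_below.size:                    # otherwise: break point, skip column
                ys[x] = zeros[0] + ones_below[0]
    return ys
```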
As shown in fig. 7, in the process of obtaining the predicted membrane of the liver capsule, after the image has been processed, in particular by dilation and erosion, a liver capsule that is discontinuous in the original ultrasound image becomes continuous, so that a pseudo membrane, as shown in fig. 7c, is formed at the discontinuities. In the process of obtaining the real membrane, the pseudo-membrane pixels must therefore be removed from the predicted membrane, and the remaining pixels are used as the pixels of the real membrane. This specifically comprises the following steps:
(S1621) fusing the pixels of the dilated predicted membrane into the original ultrasound image;
(S1622) traversing the pixels of each column of the predicted membrane in the fused image; during the traversal of a given column of the predicted membrane, the average of the pixel values within the vertical range of the liver capsule predicted membrane is calculated. If the average is larger than the pseudo-membrane threshold, the pixel of the predicted membrane in that column is a pixel of the real membrane; otherwise it is not a pixel of the real membrane.
in the implementation process, referring to fig. 7, assume that the coordinates of the pixel points in the expanded "predicted film" region are (x, y), where x∈ (1, m), and when the expanded "predicted film" is fused onto the original ultrasound image, the existence of the "pseudo film" is sequentially searched column by column from the first column. After expansion, the predicted film occupies a certain range in the vertical direction, and the width of the expanded line is the range for calculating the average value of pixels in the traversal process. According to the principle of formula 13, if the average value of the pixel values of each column in the original ultrasonic image in the y value range of the 'prediction film' is larger than the threshold value P, the original liver envelope is proved, so that the original pixel values of the column in the 'prediction film' are saved; conversely, if the in-range average value is smaller than the pseudo-membrane threshold P, the liver capsule predicted here is proved to be "pseudo-membrane", and therefore the pixel value of this column in "predicted membrane" should be set to 0. After all the rows of the whole image are traversed, the residual pixel points in the prediction film can form a real liver envelope which is finally needed to reflect the original ultrasonic image.
G_3(i, j) = G_1(i, j) if the average of G_2(x, y) within the predicted-membrane range of that column is larger than P, and G_3(i, j) = 0 otherwise   (13)
where G_1(i, j) denotes the gray value of the liver capsule pixel at coordinates (i, j) in the "predicted membrane" image, G_2(x, y) denotes the gray value of the original ultrasound image at the pixel with coordinates (x, y), and G_3(i, j) denotes the pixel gray value at coordinates (i, j) in the real liver capsule.
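Finally, the pseudo-membrane removal of formula (13) could be sketched as follows, operating on a dilated predicted-membrane mask and the original grayscale image; the threshold value is illustrative, since the patent does not state the value of P.

```python
import numpy as np

def real_membrane(pred_mask, original, threshold_p=60):
    """Remove pseudo-membrane columns from the dilated predicted membrane
    (in the spirit of formula (13)): keep a column only if the mean gray value
    of the original image inside the predicted-membrane band exceeds threshold_p."""
    real = pred_mask.copy()
    for x in range(pred_mask.shape[1]):
        band = np.where(pred_mask[:, x] > 0)[0]    # vertical extent of the band
        if band.size == 0:
            continue
        if original[band, x].mean() < threshold_p:
            real[band, x] = 0                      # pseudo membrane: discard column
    return real
```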
In the description of the present application, it should be understood that the terms "upper," "lower," "front," "rear," "left," "right," "vertical," "horizontal," "top," "bottom," "inner," "outer," and the like indicate orientations or positional relationships based on the orientations or positional relationships illustrated in the drawings, merely to facilitate description of the present application and simplify the description, and do not indicate or imply that the devices or elements being referred to must have a specific orientation, be configured and operated in a specific orientation, and are not to be construed as limiting the present application.
The foregoing describes specific embodiments of the present invention. It is to be understood that the invention is not limited to the particular embodiments described above, and that various changes or modifications may be made by those skilled in the art within the scope of the appended claims without affecting the spirit of the invention. The embodiments of the present application and features in the embodiments may be combined with each other arbitrarily without conflict.
Claims (6)
1. The liver cirrhosis disease stage identification method based on envelope line morphological feature analysis is characterized by comprising the following steps of:
(S1) obtaining a predicted membrane and a real membrane of a liver capsule according to an ultrasonic image;
(S2) obtaining variance VoS of the segment slope of the liver capsule, variation coefficient CV of the adjacent segment slope difference and the fluctuation change times NoF from the prediction film; and obtaining the number of segments NoL of the liver capsule from the real membrane;
when the variance VoS of the segment slope of the liver capsule is obtained, dividing a prediction film of the liver capsule into a plurality of segments according to a preset interval, respectively calculating the slope of each segment, and calculating the variance of the slope of each segment to obtain the variance VoS of the segment slope;
in the process of obtaining the coefficient of variation CV of the adjacent-segment slope differences, calculating the slope difference between each pair of adjacent segments of the predicted membrane, and obtaining the mean K̄_d and the standard deviation STD_Kd of the slope differences; the coefficient of variation CV of the adjacent-segment slope difference is obtained as the ratio of the standard deviation STD_Kd to the mean K̄_d;
in the process of acquiring the fluctuation change times NoF, calculating the absolute value of the slope difference between every two adjacent segments, and counting the times that the absolute value of the slope difference is larger than the fluctuation threshold value and taking the times as fluctuation change times NoF;
(S3) inputting the variance VoS of the sectional slope, the fluctuation change times NoF and the variation coefficient CV of the adjacent section slope difference into a preliminary identification model as input features, wherein the two judgment types output by the preliminary identification model are normal-early stage and medium-late stage;
(S4) performing secondary classification according to the preliminary identification result; in the secondary classification,
if the primary recognition result is "normal-early stage", the number NoL of line segments and the variation coefficient CV of adjacent segment slope difference are taken as features to be input into a mild recognition model, and the two recognition types output by the mild recognition model are "normal" and "mild liver cirrhosis";
if the result of the preliminary identification is "mid-late stage", the number NoL of line segments and the variance VoS of the slope of the segment are used as features to be input into a mid-late stage identification model, and the two identification types output by the mid-late stage identification model are "moderate cirrhosis" and "severe cirrhosis".
2. The method for identifying liver cirrhosis disease stage based on envelope line morphological feature analysis according to claim 1, wherein the number of break points in the real membrane is counted in the process of obtaining the number of line segments NoL, and the number of line segments NoL of liver envelope is determined according to the number of break points.
3. The method for identifying liver cirrhosis disease stage based on envelope line morphology analysis according to claim 1, wherein each segment has the same number of pixels during the process of separating the predictive membranes of the liver envelope; when calculating the slope of the segment, the slope between two endpoints of the segment is used as the segment slope, or the slope of the straight line after fitting each pixel in the segment is used as the segment slope.
4. A method for the staged identification of cirrhosis disease based on morphological feature analysis of the envelope lines according to any of claims 1-3, wherein the ultrasound images are superficial liver section images including the liver envelope at the upper edge of the liver.
5. The liver cirrhosis disease stage identification method based on envelope line morphological feature analysis according to claim 1, wherein the preliminary identification model is a support vector machine model and is obtained by training in a mode of ten-time five-fold cross validation.
6. The liver cirrhosis disease stage identification method based on envelope line morphological feature analysis according to claim 1, wherein the mild identification model and the middle and later stage identification model are both K-means clustering models, and are obtained by training in a mode of ten-time five-fold cross validation.
Priority Applications (1)
Application Number | Priority Date | Filing Date | Title |
---|---|---|---|
CN202010692369.9A CN111815613B (en) | 2020-07-17 | 2020-07-17 | Liver cirrhosis disease stage identification method based on envelope line morphological feature analysis |
Applications Claiming Priority (1)
Application Number | Priority Date | Filing Date | Title |
---|---|---|---|
CN202010692369.9A CN111815613B (en) | 2020-07-17 | 2020-07-17 | Liver cirrhosis disease stage identification method based on envelope line morphological feature analysis |
Publications (2)
Publication Number | Publication Date |
---|---|
CN111815613A CN111815613A (en) | 2020-10-23 |
CN111815613B true CN111815613B (en) | 2023-06-27 |
Family
ID=72866103
Family Applications (1)
Application Number | Title | Priority Date | Filing Date |
---|---|---|---|
CN202010692369.9A Active CN111815613B (en) | 2020-07-17 | 2020-07-17 | Liver cirrhosis disease stage identification method based on envelope line morphological feature analysis |
Country Status (1)
Country | Link |
---|---|
CN (1) | CN111815613B (en) |
Families Citing this family (1)
Publication number | Priority date | Publication date | Assignee | Title |
---|---|---|---|---|
CN113436728B (en) * | 2021-07-05 | 2022-10-28 | 复旦大学附属儿科医院 | Method and equipment for automatically analyzing electroencephalogram of clinical video of neonate |
-
2020
- 2020-07-17 CN CN202010692369.9A patent/CN111815613B/en active Active
Patent Citations (11)
Publication number | Priority date | Publication date | Assignee | Title |
---|---|---|---|---|
CN101458223A (en) * | 2008-12-26 | 2009-06-17 | 江南大学 | Preparation of quantitative rapid detecting sensor of microcapsule algae toxin and applications |
KR101162605B1 (en) * | 2011-03-21 | 2012-07-05 | 인하대학교 산학협력단 | Texture feature extraction method in ct images |
CN103561659A (en) * | 2011-05-24 | 2014-02-05 | 株式会社东芝 | Ultrasound diagnostic apparatus and image processing apparatus |
WO2014011925A2 (en) * | 2012-07-11 | 2014-01-16 | University Of Mississippi Medical Center | Method for the detection and staging of liver fibrosis from image acquired data |
CN105631885A (en) * | 2016-01-06 | 2016-06-01 | 复旦大学 | Method for extracting glisson's capsule line and describing characteristics based on superficial tangent plane ultrasonic image |
WO2018208091A2 (en) * | 2017-05-10 | 2018-11-15 | 서울대학교산학협력단 | Biomarker for monitoring or diagnosing onset of liver cancer in high-risk group for liver cancer and use thereof |
CN107292312A (en) * | 2017-06-19 | 2017-10-24 | 中国科学院苏州生物医学工程技术研究所 | Tumour recognition methods |
CN108038513A (en) * | 2017-12-26 | 2018-05-15 | 北京华想联合科技有限公司 | A kind of tagsort method of liver ultrasonic |
CN108154517A (en) * | 2017-12-26 | 2018-06-12 | 北京华想联合科技有限公司 | A kind of Glisson's capsule line extraction method based on liver ultrasonic |
CN109754007A (en) * | 2018-12-27 | 2019-05-14 | 武汉唐济科技有限公司 | Peplos intelligent measurement and method for early warning and system in operation on prostate |
CN110555827A (en) * | 2019-08-06 | 2019-12-10 | 上海工程技术大学 | Ultrasonic imaging information computer processing system based on deep learning drive |
Non-Patent Citations (3)
Title |
---|
Computer-aided cirrhosis diagnosis via automatic liver capsule extraction and combined geometry-texture features; Xiang Liu et al.; 2017 IEEE International Conference on Multimedia and Expo (ICME); pp. 865-870 *
Liver cirrhosis analysis system based on ultrasound image features (基于超声图像特征的肝硬化分析系统); 周国辉 et al.; Technical Acoustics (声学技术), No. 4; pp. 288-291 *
Quantitative evaluation of the degree of liver cirrhosis in patients using geometric features of the liver capsule in high-frequency ultrasound images (高频超声影像肝脏包膜几何特征定量评价患者肝硬化程度); 宋家琳 et al.; Chinese Journal of Medical Imaging Technology (中国医学影像技术), Vol. 31, No. 12; pp. 1907-1910 *
Also Published As
Publication number | Publication date |
---|---|
CN111815613A (en) | 2020-10-23 |
Similar Documents
Publication | Publication Date | Title |
---|---|---|
CN110533684B (en) | Chromosome karyotype image cutting method | |
JP6710135B2 (en) | Cell image automatic analysis method and system | |
CN109800824B (en) | Pipeline defect identification method based on computer vision and machine learning | |
Khoshelham et al. | Performance evaluation of automated approaches to building detection in multi-source aerial data | |
CN110570389B (en) | Vehicle damage identification method and device | |
US8073233B2 (en) | Image processor, microscope system, and area specifying program | |
US8340420B2 (en) | Method for recognizing objects in images | |
CN102426649B (en) | Simple steel seal digital automatic identification method with high accuracy rate | |
US10121245B2 (en) | Identification of inflammation in tissue images | |
US20160188954A1 (en) | Systems and methods for segmentation and processing of tissue images and feature extraction from same for treating, diagnosing, or predicting medical conditions | |
JP5926937B2 (en) | Image processing apparatus, image processing method, and image processing program | |
CN110569837A (en) | Method and device for optimizing damage detection result | |
CN106780522B (en) | A kind of bone marrow fluid cell segmentation method based on deep learning | |
US9626583B2 (en) | Automated epithelial nuclei segmentation for computational disease detection algorithms | |
CN110648322A (en) | Method and system for detecting abnormal cervical cells | |
CN115082466B (en) | PCB surface welding spot defect detection method and system | |
EP3140778B1 (en) | Method and apparatus for image scoring and analysis | |
CN110348307B (en) | Path edge identification method and system for crane metal structure climbing robot | |
CN111815613B (en) | Liver cirrhosis disease stage identification method based on envelope line morphological feature analysis | |
CN116682109B (en) | Pathological microscopic image analysis method, device, equipment and storage medium | |
CN114080644A (en) | System and method for diagnosing small bowel cleanliness | |
CN110889418A (en) | Gas contour identification method | |
Khan et al. | Segmentation of single and overlapping leaves by extracting appropriate contours | |
CN113850762A (en) | Eye disease identification method, device, equipment and storage medium based on anterior segment image | |
US20240071057A1 (en) | Microscopy System and Method for Testing a Sensitivity of an Image Processing Model |
Legal Events
Date | Code | Title | Description |
---|---|---|---|
PB01 | Publication | ||
PB01 | Publication | ||
SE01 | Entry into force of request for substantive examination | ||
SE01 | Entry into force of request for substantive examination | ||
GR01 | Patent grant | ||
GR01 | Patent grant |