CN110136115B - Neural network integration method for automatically detecting vulnerable plaque of IVOCT image - Google Patents

Neural network integration method for automatically detecting vulnerable plaque of IVOCT image

Info

Publication number
CN110136115B
CN110136115B CN201910402166.9A
Authority
CN
China
Prior art keywords
network
detection
integration
vulnerable plaque
ivoct
Prior art date
Legal status (The legal status is an assumption and is not a legal conclusion. Google has not performed a legal analysis and makes no representation as to the accuracy of the status listed.)
Expired - Fee Related
Application number
CN201910402166.9A
Other languages
Chinese (zh)
Other versions
CN110136115A (en)
Inventor
刘然
张艳珍
田逢春
钱君辉
郑杨婷
刘亚琼
赵洋
陈希
崔珊珊
王斐斐
Current Assignee (The listed assignees may be inaccurate. Google has not performed a legal analysis and makes no representation or warranty as to the accuracy of the list.)
Chongqing University
Original Assignee
Chongqing University
Priority date (The priority date is an assumption and is not a legal conclusion. Google has not performed a legal analysis and makes no representation as to the accuracy of the date listed.)
Filing date
Publication date
Application filed by Chongqing University filed Critical Chongqing University
Priority to CN201910402166.9A priority Critical patent/CN110136115B/en
Publication of CN110136115A publication Critical patent/CN110136115A/en
Application granted granted Critical
Publication of CN110136115B publication Critical patent/CN110136115B/en
Expired - Fee Related legal-status Critical Current
Anticipated expiration legal-status Critical

Classifications

    • G: PHYSICS
    • G06: COMPUTING; CALCULATING OR COUNTING
    • G06T: IMAGE DATA PROCESSING OR GENERATION, IN GENERAL
    • G06T 7/00: Image analysis
    • G06T 7/0002: Inspection of images, e.g. flaw detection
    • G06T 7/0012: Biomedical image inspection
    • G06T 2207/00: Indexing scheme for image analysis or image enhancement
    • G06T 2207/20: Special algorithmic details
    • G06T 2207/20081: Training; Learning
    • G06T 2207/20084: Artificial neural networks [ANN]
    • G06T 2207/30: Subject of image; Context of image processing
    • G06T 2207/30004: Biomedical image processing
    • G06T 2207/30101: Blood vessel; Artery; Vein; Vascular

Landscapes

  • Engineering & Computer Science (AREA)
  • Health & Medical Sciences (AREA)
  • General Health & Medical Sciences (AREA)
  • Medical Informatics (AREA)
  • Nuclear Medicine, Radiotherapy & Molecular Imaging (AREA)
  • Radiology & Medical Imaging (AREA)
  • Quality & Reliability (AREA)
  • Computer Vision & Pattern Recognition (AREA)
  • Physics & Mathematics (AREA)
  • General Physics & Mathematics (AREA)
  • Theoretical Computer Science (AREA)
  • Image Analysis (AREA)

Abstract

The invention provides a neural network integration method for the automatic detection of vulnerable plaque in IVOCT images, comprising the following steps: selecting the networks to be integrated, where the selected networks comprise the traditional target detection network Haar-Adaboost, the regression-based target detection networks YOLO and SSD, and the region-proposal-based target detection network Faster R-CNN; network training and detection, namely setting data indices and evaluation indices that reflect each network's detection of vulnerable plaque according to the characteristics of IVOCT images, training each network to generate a test model, and detecting a predetermined number of IVOCT test images with each network; and a two-step integration method, which first integrates the class labels of the IVOCT detection images produced by each network using the designed integration strategy, and then performs region integration according to the class-integration result to obtain the final detection result. The method improves the accuracy of vulnerable plaque region detection, reduces missed detections and false detections, and increases the overlap between the detected region and the real region as well as the overall quality of the final detection result.

Description

Neural network integration method for automatically detecting vulnerable plaque of IVOCT image
Technical Field
The invention relates to the technical field of IVOCT image detection, in particular to a neural network integration method for automatically detecting vulnerable plaques of IVOCT images.
Background
Vulnerable plaque detection relies on intravascular imaging techniques (intravascular imaging modalities). Currently, two imaging techniques, intravascular ultrasound (IVUS) and intravascular optical coherence tomography (IVOCT), are commonly used clinically to detect vulnerable plaque. IVOCT is a high-resolution (10-20 μm) imaging modality, with roughly ten times the resolution of IVUS, and offers better sensitivity and specificity for vulnerable plaque detection than IVUS. In addition, IVOCT imaging can be repeated, and the results remain stable over multiple acquisitions. The IVOCT technique is therefore better suited to the detection of vulnerable plaque.
In the traditional approach to vulnerable plaque detection based on IVOCT images, physicians judge by eye according to their own experience; the process is time-consuming and labor-intensive, and the results are highly subjective. Against this background, automatic detection of vulnerable plaque based on a single network, such as the Faster R-CNN network, has emerged. However, the inventors have found that detecting vulnerable plaque with a single network is prone to missed detections and false detections, so the accuracy of the detection result is low and the assessment of vulnerable plaque regions is not objective.
Disclosure of Invention
The invention provides a neural network integration method for the automatic detection of vulnerable plaque in IVOCT images, aimed at the technical problem that existing single-network vulnerable plaque detection is prone to missed detections and false detections, so that the accuracy of the detection result is low and the assessment of vulnerable plaque regions is not objective.
In order to solve the technical problem, the invention adopts the following technical scheme:
an IVOCT image vulnerable plaque automatic detection neural network integration method comprises the following steps:
selecting the networks to be integrated, namely selecting different representative networks for vulnerable plaque detection, wherein the selected networks comprise the traditional target detection network Haar-Adaboost and deep neural networks, the deep neural networks comprising the regression-based target detection networks YOLO and SSD and the region-proposal-based target detection network Faster R-CNN;
network training and detection, namely setting data indices and evaluation indices that reflect each network's detection of vulnerable plaque according to the characteristics of the IVOCT images, training each network to generate its own test model, running a predetermined number of IVOCT test images through each network to obtain its output, i.e. the data indices of each network's detection result, and computing from these the evaluation indices of each network's detection result;
the two-step integration method, which comprises designing an integration strategy according to the data indices and evaluation indices of each network's detection result, integrating the class labels of the IVOCT detection images produced by each network with this strategy to decide whether the IVOCT detection image contains a vulnerable plaque region, then merging all regions containing vulnerable plaque in the IVOCT detection image according to the result of the first-step class integration, and taking the merged result as the final output detection result.
Further, the data indices of each network's vulnerable plaque detection result comprise the true positives TP, false positives FP, false negatives FN and true negatives TN, where

TP = Σ_{i=1}^{n} Σ_{j=1}^{m} (DSC(A_i, B_j) > T)

FP = Σ_{j=1}^{m} (((DSC(A_i, B_j) > 0) ∧ (DSC(A_i, B_j) ≤ T)) ∨ (∀i ∈ [1, n]: A_i ∩ B_j = ∅))

FN = Σ_{i=1}^{n} (∀j ∈ [1, m]: A_i ∩ B_j = ∅)

where T is a threshold and DSC(A_i, B_j) is the Dice similarity coefficient, used to measure the degree of overlap between a detected vulnerable plaque region and the real vulnerable plaque region,

DSC(A_i, B_j) = 2|A_i ∩ B_j| / (|A_i| + |B_j|)

A_i denotes the i-th real vulnerable plaque region in the set of real vulnerable plaque regions A = {A_1, A_2, ..., A_n}, B_j denotes the j-th detected vulnerable plaque region in the set of detected vulnerable plaque regions B = {B_1, B_2, ..., B_m}, i ∈ [1, n], j ∈ [1, m], and |·| denotes the width of a region;

the evaluation indices of each network's vulnerable plaque detection result comprise the precision P, recall R, overlap ratio D and detection quality score S, where

P = TP / (TP + FP)

R = TP / (TP + FN)

D = (1 / TP) Σ_{DSC(A_i, B_j) > T} DSC(A_i, B_j)

S = w_1 · 2PR / (P + R) + w_2 · D

where w_1 is the weight factor for precision and recall, w_2 is the weight factor for the overlap ratio, w_1, w_2 ∈ [0, 1], and w_1 + w_2 = 1.
Further, the threshold T =0.5.
Further, there are four integration strategies, specifically:
in the first integration strategy, among the three detection networks Faster R-CNN, YOLO and SSD, if any one network detects vulnerable plaque, the corresponding image is judged to be a positive sample;
the second integration strategy integrates the outputs of the three detection networks Faster R-CNN, YOLO and SSD with a simple voting method in which the minority obeys the majority;
the third integration strategy adopts simple voting overall, but when Faster R-CNN or SSD among the three detection networks Faster R-CNN, YOLO and SSD detects vulnerable plaque, a weighted voting method gives these two networks higher weights, so that the final judgment classifies the corresponding image as a positive sample based on the Faster R-CNN or SSD detection result;
in the fourth integration strategy, when the Haar-Adaboost network judges an image to be a negative sample, a weighted voting method is used and, among the four detection networks Faster R-CNN, YOLO, SSD and Haar-Adaboost, the Haar-Adaboost network is assigned a weight of 0.4-0.7 in the class integration while the other three networks are each assigned a weight of 0.1-0.2; when the Haar-Adaboost network judges the image to be a positive sample, the Haar-Adaboost network is given no weight in the class integration and the class judgment follows the third integration strategy.
Further, the simple voting method is calculated by using the following formula:
C(x) = sign(Σ_{i=1}^{N} C_i(x))

where C(x) denotes the final voting output, sign is the sign function, N is the number of classifiers, and C_i(x) is the voting result of the i-th classifier.
Further, the weighted voting method is calculated by using the following formula:
C(x) = sign(Σ_{i=1}^{N} w_i C_i(x))

where C(x) denotes the final voting output, sign is the sign function, N is the number of classifiers, w_i is the weight of the i-th classifier, and C_i(x) is the voting result of the i-th classifier.
Further, in the fourth integration strategy, when the Haar-Adaboost network judges an image to be a negative sample, the weight of the Haar-Adaboost network in the class integration is set to 0.4 and the weights of the other three networks are each set to 0.2.
Furthermore, in the second step (region integration) of the two-step integration method, regions detected by individual networks should be retained as far as possible; therefore, when several detection networks output vulnerable plaque, all detected vulnerable plaque regions are merged, and the integration result is the union of all regions.
Further, the method also comprises the step of evaluating the output of each integration strategy using the aforementioned evaluation indices.
Compared with the prior art, the neural network integration method for automatically detecting vulnerable plaque in IVOCT images provided by the invention, which applies neural network integration to vulnerable plaque detection, has the following advantages:
1. detecting vulnerable plaque with a neural network integration method improves the accuracy of vulnerable plaque region detection, reduces missed detections and false detections, increases the overlap between the detected region and the real region, and improves the quality of the final detection result;
2. the method helps realize fully automatic vulnerable plaque detection in IVOCT images and makes the assessment of vulnerable plaque regions more objective, while saving manpower and material resources and reducing physicians' workload;
3. results on the test set show that precision, recall and the overlap ratio all improve to some extent and the detection quality score is higher than that of any single network; in particular, after the Haar-Adaboost detection network is added to the integration strategy to correct the negative-sample class, the precision of vulnerable plaque detection improves markedly and the detection quality score increases substantially, remaining higher than that of any single network.
Drawings
Fig. 1 is a schematic flow chart of an IVOCT image vulnerable plaque automatic detection neural network integration method provided by the present invention.
FIG. 2 is a schematic diagram of a conventional Faster R-CNN network structure provided by the present invention.
Fig. 3 is a schematic diagram of a conventional YOLO network training principle provided by the present invention.
FIG. 4 is a schematic diagram of a cardiovascular expert manually marking vulnerable plaque and integrated test results provided by the present invention.
Detailed Description
In order to make the technical means, creative features, objectives and effects of the invention easy to understand, the invention is further described below with reference to the accompanying drawings.
Referring to fig. 1 to fig. 3, the present invention provides an IVOCT image vulnerable plaque automatic detection neural network integration method, which includes the following steps:
Selecting the networks to be integrated: different representative networks are selected for vulnerable plaque detection, comprising the traditional target detection network Haar-Adaboost and deep neural networks, the deep neural networks comprising the regression-based target detection networks YOLO and SSD and the region-proposal-based target detection network Faster R-CNN. For a better understanding of the present invention, the detection principles of the four detection networks Haar-Adaboost, Faster R-CNN, YOLO and SSD are described as follows:
the Haar characteristic is based on the characteristic of a block, the calculation amount can be reduced, the characteristic value is obtained by calculating the difference between the pixel sum of a black area and a white area, and the Haar characteristic reflects the gray level change condition of a certain characteristic point area; the Adaboost classifier principle is to train different weak classifiers and then cascade the weak classifiers together to form a strong classifier.
The region-proposal-based Faster R-CNN algorithm can be divided into two steps: candidate regions that may contain the target are first generated by a region proposal algorithm, and a convolutional neural network then extracts features and performs classification and bounding-box regression. Specifically, Faster R-CNN first uses a convolutional neural network to extract features from the image to be detected and produce a feature map, then processes the feature map with a Region Proposal Network (RPN) to generate high-quality, multi-scale target candidate regions, and finally lets the detection network learn the features of these high-quality candidate regions for detection and classification; the Faster R-CNN network structure is shown in FIG. 2. The feature extraction network in Faster R-CNN is a convolutional neural network whose convolutional features are shared by the region proposal network and the detection network. The region proposal network amounts to a coarse detection of the target: it takes the extracted features as input and uses an anchor mechanism to generate proposal boxes with different scales and aspect ratios, which reduces the burden on the classification network and yields higher detection accuracy with fewer detection windows. The loss function of the region proposal network is a multi-task loss that combines the class confidence and the correction parameters of the candidate regions; it consists mainly of a classification loss and a regression loss, and once candidate regions are obtained they are classified and regressed. The feature map output by the feature extraction network and the candidate regions output by the region proposal network are fed together into the detection network, which outputs the class confidence and correction parameters of each candidate region.
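As a minimal sketch of this two-stage pipeline (shared convolutional features, RPN proposals, then classification and box regression), a generic torchvision Faster R-CNN can be run on an IVOCT frame as below; the checkpoint file, image path and score threshold are assumptions, and this is not the exact network configuration trained in this work.

```python
import torch
import torchvision
from torchvision.transforms.functional import to_tensor
from PIL import Image

# Two classes: background + vulnerable plaque (assumed labeling scheme).
model = torchvision.models.detection.fasterrcnn_resnet50_fpn(num_classes=2)
model.load_state_dict(torch.load("frcnn_ivoct.pth"))  # hypothetical trained weights
model.eval()

img = to_tensor(Image.open("ivoct_frame.png").convert("RGB"))

with torch.no_grad():
    out = model([img])[0]          # dict with 'boxes', 'labels', 'scores'

keep = out["scores"] > 0.5         # assumed confidence threshold
plaque_boxes = out["boxes"][keep]  # candidate vulnerable-plaque regions
```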
The regression-based target detection algorithm YOLO has no region proposal stage: positions and classes are predicted directly from CNN features, redundant boxes are merged by non-maximum suppression, low-probability boxes are removed, and the prediction result is produced. YOLO treats target detection as a regression problem, trains directly on whole images, and generates predictions with a fully connected layer; each predicted box uses global features, no candidate regions need to be extracted in advance, target regions are generated in the forward pass and corrected in the backward pass, the time cost is low, and the position and class of the target are predicted directly from the whole image, achieving end-to-end detection. The YOLO training principle is shown in FIG. 3. Specifically, in the forward pass the YOLO CNN divides the input image into an S × S grid; each cell detects targets whose center falls inside that cell and predicts B bounding boxes together with a confidence for each box, the confidence combining the probability that the box contains a target and the accuracy of the box. Each cell also predicts C class probabilities, so the forward output is a tensor of size S × S × (B × 5 + C). The target regions are corrected in the backward pass by minimizing a loss function; the YOLO loss consists mainly of three parts, the coordinate error between predicted and ground-truth boxes, the IOU error and the classification error, which are updated in the backward pass to minimize the loss and improve detection accuracy. In the experiments, YOLO was trained from scratch on the IVOCT training set with different training schemes and hyper-parameters, using a small-scale network (tiny-yolo-voc.cfg); during training, the weights were saved together with the iteration number.
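To make the grid-based forward pass concrete, the sketch below decodes an S × S × (B × 5 + C) output tensor into candidate boxes; the values S = 7, B = 2, C = 1 (a single vulnerable-plaque class), the random stand-in tensor and the score threshold are assumptions for illustration rather than the exact tiny-yolo-voc configuration.

```python
import numpy as np

S, B, C = 7, 2, 1                      # grid size, boxes per cell, classes (assumed)
pred = np.random.rand(S, S, B * 5 + C) # stand-in for the network's forward output

boxes = []
for row in range(S):
    for col in range(S):
        cell = pred[row, col]
        class_prob = cell[B * 5:]                 # C class probabilities
        for b in range(B):
            x, y, w, h, conf = cell[b * 5:(b + 1) * 5]
            score = conf * class_prob[0]          # box confidence * class probability
            if score > 0.25:                      # assumed threshold
                # (x, y) are offsets inside the cell; (w, h) are relative to the image
                cx, cy = (col + x) / S, (row + y) / S
                boxes.append((cx, cy, w, h, score))
# Non-maximum suppression would then merge redundant boxes (omitted here).
```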
The SSD is based on a feed-forward convolutional network. It generates default boxes of fixed sizes together with scores for the target classes in those boxes, and performs detection on multi-scale feature maps: each convolutional layer outputs feature maps of different sizes with different receptive fields, and position and class predictions are made for the default boxes on these feature maps of different scales. When predicting bounding boxes, the SSD imitates the anchor mechanism of the Faster R-CNN network: prior boxes with different scales and aspect ratios are placed on each cell and the bounding boxes are predicted relative to these priors, which reduces training effort and saves time. Finally, non-maximum suppression removes redundant, low-probability bounding boxes from the predictions of each class to produce the final detection result. The SSD uses local features to adapt its predictions to targets of different sizes.
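The multi-scale default-box idea can be sketched as below, generating prior boxes for a single feature map; the feature map size, scale and aspect ratios are illustrative assumptions, not the SSD configuration used here.

```python
import itertools
import math

def default_boxes(fmap_size, scale, aspect_ratios=(1.0, 2.0, 0.5)):
    """Place one set of prior boxes on every cell of an fmap_size x fmap_size map.

    Boxes are returned as (cx, cy, w, h) in relative image coordinates.
    """
    boxes = []
    for i, j in itertools.product(range(fmap_size), repeat=2):
        cx, cy = (j + 0.5) / fmap_size, (i + 0.5) / fmap_size
        for ar in aspect_ratios:
            boxes.append((cx, cy, scale * math.sqrt(ar), scale / math.sqrt(ar)))
    return boxes

# e.g. a 19x19 feature map with prior scale 0.2 (values assumed for illustration)
priors = default_boxes(19, 0.2)
```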
Network training and detection: data indices and evaluation indices that reflect each network's detection of vulnerable plaque are set according to the characteristics of the IVOCT images; each network is trained to generate a test model, a predetermined number of IVOCT test images are run through each network to obtain its output, i.e. the data indices of its detection result, and the evaluation indices of each network's detection result are computed from those data indices. The IVOCT image data consist of positive samples, which contain a vulnerable plaque region and are labeled "1", and negative samples, which contain no vulnerable plaque and are labeled "0". IVOCT images have their own characteristics, and typical target detection metrics such as mean average precision (MAP) do not reflect the quality of vulnerable plaque detection well; for medical images, indices such as the Dice similarity coefficient, sensitivity and specificity are generally considered. The Dice similarity coefficient can be regarded as a similarity measure between sets, and in the present application it is used to measure the degree of overlap between a detected vulnerable plaque region and the real vulnerable plaque region.
Suppose the set of real vulnerable plaque regions is A = {A_1, A_2, ..., A_n} and the set of detected vulnerable plaque regions is B = {B_1, B_2, ..., B_m}. Two detected regions B_i (1 ≤ i ≤ m) and B_j (1 ≤ j ≤ m) may intersect; in that case the application treats B_i and B_j as erroneous output (because the intersections between the elements of set A are all empty), and such regions are excluded from set B using equation (1):

B ← B \ {B_i, B_j | B_i ∩ B_j ≠ ∅, i ≠ j} (1)
Then the Dice similarity coefficient is:
DSC(A_i, B_j) = 2|A_i ∩ B_j| / (|A_i| + |B_j|) (2)

where |·| denotes the width of a region (unit: pixels), DSC(A_i, B_j) ∈ [0, 1], A_i denotes the i-th real vulnerable plaque region in the set of real vulnerable plaque regions A = {A_1, A_2, ..., A_n}, and B_j denotes the j-th detected vulnerable plaque region in the set of detected vulnerable plaque regions B = {B_1, B_2, ..., B_m}.
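Equation (2) can be transcribed directly as below; here a region is represented simply as the set of pixel columns it spans (an assumption about the representation), so that |·| corresponds to the region width in pixels.

```python
def dsc(a, b):
    """Dice similarity coefficient of two regions given as sets of pixel indices."""
    if not a and not b:
        return 0.0
    return 2.0 * len(a & b) / (len(a) + len(b))

# Example: a real plaque spanning columns 100-180 and a detection spanning 120-200.
A1 = set(range(100, 181))
B1 = set(range(120, 201))
print(round(dsc(A1, B1), 3))  # 0.753 > 0.5, so this pair would count as a true positive
```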
As a specific embodiment, the data indices of each network's vulnerable plaque detection result include the true positives TP, false positives FP, false negatives FN and true negatives TN; the value of TP is calculated according to the following formula (3):
TP = Σ_{i=1}^{n} Σ_{j=1}^{m} (DSC(A_i, B_j) > T) (3)

where i ∈ [1, n], j ∈ [1, m], and the expression DSC(A_i, B_j) > T is a logical expression whose value is 1 when true and 0 when false. T is a threshold: if DSC(A_i, B_j) > T, then B_j is considered a vulnerable plaque region. According to the experiments of the present application and previous studies, for medical images T = 0.5 gives a good trade-off between specificity and sensitivity.
Also, the values for false positive FP and false negative FN in this application are calculated as follows:
FP = Σ_{j=1}^{m} (((DSC(A_i, B_j) > 0) ∧ (DSC(A_i, B_j) ≤ T)) ∨ (∀i ∈ [1, n]: A_i ∩ B_j = ∅)) (4)

FN = Σ_{i=1}^{n} (∀j ∈ [1, m]: A_i ∩ B_j = ∅) (5)

Equation (4) states that if A_i and B_j are not similar (0 < DSC(A_i, B_j) ≤ T), or there is no element of set A intersecting B_j, then B_j is not considered a vulnerable plaque region. Equation (5) states that if no element of set B intersects A_i, then the vulnerable plaque region A_i is considered missed.
The true negative value TN is the number of samples that the cardiovascular expert labeled negative and that were also detected as negative.
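A sketch of formulas (3)-(5), reusing the dsc helper above with the same set-of-pixel-indices representation (an assumption); TN is counted at the image level as described in the text.

```python
def confusion_counts(real, detected, T=0.5):
    """Per-image TP, FP and FN following formulas (3)-(5).

    real, detected: lists of regions, each region a set of pixel indices.
    """
    tp = sum(1 for a in real for b in detected if dsc(a, b) > T)
    # A detection is a false positive if it is not similar (DSC <= T) to,
    # or does not intersect, any real plaque region.
    fp = sum(1 for b in detected if not any(dsc(a, b) > T for a in real))
    # A real region is missed if no detection intersects it.
    fn = sum(1 for a in real if all(not (a & b) for b in detected))
    return tp, fp, fn

# TN is counted per image: a negative-sample image (no real plaque) on which
# the network also reports no detection contributes one true negative.
```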
The evaluation indices of each network's vulnerable plaque detection result comprise the precision P, recall R, overlap ratio D and detection quality score S, which are defined from formulas (3), (4) and (5) as follows.
the Precision P (Precision rate) is defined as follows:
P = TP / (TP + FP) (6)

where P ∈ [0, 1].
The Recall ratio R (Recall rate) is defined as follows:
R = TP / (TP + FN) (7)

where R ∈ [0, 1].
The overlap ratio D (overlap rate) is defined as follows:

D = (1 / TP) Σ_{DSC(A_i, B_j) > T} DSC(A_i, B_j) (8)

The overlap ratio D is the mean of the DSC values greater than 0.5, and D ∈ [0, 1].
The detection quality score S is defined as follows:

S = w_1 · 2PR / (P + R) + w_2 · D (9)

where w_1 is the weight factor for precision and recall and w_2 is the weight factor for the overlap ratio, with w_1, w_2 ∈ [0, 1] and w_1 + w_2 = 1. S ∈ [0, 1], and the larger S is, the better the vulnerable plaque detection quality.
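The evaluation indices can then be computed over a test set as sketched below, reusing the helpers above; the default weights w1 = w2 = 0.5 and the harmonic-mean form of the precision/recall term in S are assumptions consistent with the constraints stated here (w_1 + w_2 = 1), not values fixed by the patent.

```python
def evaluation_indices(real_sets, detected_sets, T=0.5, w1=0.5, w2=0.5):
    """Precision P, recall R, overlap ratio D and quality score S over a test set.

    real_sets / detected_sets hold one list of regions per test image.
    """
    TP = FP = FN = 0
    matched_dsc = []
    for real, detected in zip(real_sets, detected_sets):
        tp, fp, fn = confusion_counts(real, detected, T)
        TP, FP, FN = TP + tp, FP + fp, FN + fn
        matched_dsc += [dsc(a, b) for a in real for b in detected if dsc(a, b) > T]

    P = TP / (TP + FP) if TP + FP else 0.0
    R = TP / (TP + FN) if TP + FN else 0.0
    D = sum(matched_dsc) / len(matched_dsc) if matched_dsc else 0.0
    PR = 2 * P * R / (P + R) if P + R else 0.0   # combined precision/recall term (assumed form)
    return P, R, D, w1 * PR + w2 * D
```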
As a specific embodiment, training each network to generate a test model can be carried out by training each network on existing IVOCT data to produce a model for detection; the specific training methods are well known to those skilled in the art and are therefore not repeated here. Afterwards, 300 IVOCT test images (198 positive samples and 102 negative samples) were used to test each network; each network produces its own output, and the test results of each network, namely TP, FP, FN and TN, are shown in Table 1.
TABLE 1 data indices (TP, FP, FN and TN)
[table provided as an image in the original patent; values not reproduced]
As can be seen from Table 1, the Haar-Adaboost network detects negative samples accurately: among the 300 test images, all negative samples are detected correctly, but it produces many false detections and missed detections elsewhere. The YOLO network has a large number of false detections, with many negative samples detected incorrectly. The SSD network misses many detections, leaving several vulnerable plaque regions undetected. The Faster R-CNN network detects significantly more vulnerable plaque regions correctly than the other networks, but still classifies a large number of negative samples as images containing vulnerable plaque. The evaluation indices of the detection results computed from the data indices of Table 1 are shown in Table 2.
TABLE 2 Evaluation indices (precision, recall, overlap ratio and detection quality score)
[table provided as an image in the original patent; values not reproduced]
In medical diagnosis, the consequences of missed diagnosis (false negatives, FN) and misdiagnosis (false positives, FP) are severe. If a patient who is ill is diagnosed as disease-free, this is a missed diagnosis: the disease cannot be found and treated in time, treatment is delayed, and the best window for treatment may be lost, which is the most serious outcome. Conversely, if a person who is not ill is diagnosed as ill, this is a misdiagnosis, which places a heavy psychological burden on the patient and wastes medical resources; either case causes great harm. Therefore, through research and analysis, and in order to reduce misdiagnosis and missed diagnosis and improve detection accuracy in vulnerable plaque detection on IVOCT images, the inventors decided to integrate the detection results of multiple networks, so that even if one classifier makes a wrong prediction, the other classifiers can correct the error through the integration rules.
Accordingly, the present application adopts a two-step integration method. In the first step, an integration strategy is designed according to the data indices and evaluation indices of each network's detection result, and the class labels of the IVOCT detection images produced by each network are integrated with this strategy to decide whether an IVOCT detection image contains a vulnerable plaque region. In the second step, all regions containing vulnerable plaque in the IVOCT detection image are merged according to the result of the first-step class integration, and the merged result is taken as the final output detection result.
In particular, in regression prediction of machine learning, a common ensemble learning strategy is to obtain an average value of outputs of a plurality of individual learners as an output of a final classifier, and methods for obtaining the average value include arithmetic mean and weighted mean.
Arithmetic mean method:
C(x) = (1/N) Σ_{i=1}^{N} C_i(x) (10)
weighted average method:
C(x) = Σ_{i=1}^{N} w_i C_i(x) (11)

In equations (10) and (11), C(x) denotes the final output, N is the number of individual learners, w_i is the weight of the i-th classifier, and C_i(x) is the output of the i-th classifier.
For classification problems, a voting method is generally used: the decisions given by the individual learners are integrated by voting, and the voting result is taken as the output of the final classifier. Voting methods include simple voting and weighted voting. In simple voting all classifiers have the same weight, i.e. it is a relative majority vote in which the minority obeys the majority and the final output is the class that receives more than half of the votes; in weighted voting the classifiers have different weights, the weighted votes are summed, and the class with the largest weighted sum is the output of the final classifier.
The simple voting method uses the following formula for calculation:
C(x) = sign(Σ_{i=1}^{N} C_i(x)) (12)
the weighted voting method is calculated using the following formula:
C(x) = sign(Σ_{i=1}^{N} w_i C_i(x)) (13)

In equations (12) and (13), C(x) denotes the final voting output, sign is the sign function, N is the number of classifiers, w_i is the weight of the i-th classifier, and C_i(x) is the voting result of the i-th classifier.
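Equations (12) and (13) can be written out as below, with each classifier's decision encoded as +1 (positive sample) or -1 (negative sample); this +1/-1 encoding is an assumption made so that the sign function applies directly.

```python
def simple_vote(outputs):
    """Relative-majority vote, equation (12); outputs are +1/-1 decisions."""
    return 1 if sum(outputs) > 0 else -1

def weighted_vote(outputs, weights):
    """Weighted vote, equation (13)."""
    return 1 if sum(w * c for w, c in zip(weights, outputs)) > 0 else -1

# Faster R-CNN and SSD vote positive, YOLO negative -> majority is positive.
print(simple_vote([+1, -1, +1]))                               # 1
# Haar-Adaboost (weight 0.4) votes negative, only Faster R-CNN positive -> negative.
print(weighted_vote([-1, +1, -1, -1], [0.4, 0.2, 0.2, 0.2]))   # -1
```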
In the first step of the integration method, class integration, the class of the image is determined first, i.e. whether it is a positive or a negative sample. According to the detection results of the individual learners (i.e. the detection networks), YOLO, SSD and Faster R-CNN perform slightly better overall, while Haar-Adaboost detects negative samples with high accuracy; therefore, the final class output is integrated with a strategy that combines simple voting and weighted voting.
As a specific embodiment, the present application designs four integration strategies. Based on the results of Tables 1 and 2, the first three strategies integrate only the three detection networks YOLO, SSD and Faster R-CNN, specifically:
in the first integration strategy, in the diagnosis process, the missed diagnosis cost is higher, so in order to prevent missed detection in detection, in three detection networks of fast R-CNN, YOLO and SSD, as long as one network detects vulnerable plaques, the corresponding image is judged as a positive sample, and the specific category integration result is shown in table 3.
The second integration strategy integrates the outputs of the three detection networks Faster R-CNN, YOLO and SSD directly with a simple voting method in which the minority obeys the majority; the class integration results are shown in Table 4.
The third integration strategy follows the principle of simple voting overall, but when Faster R-CNN or SSD among the three detection networks Faster R-CNN, YOLO and SSD detects vulnerable plaque, weighted voting gives these two networks higher weights, so that the final judgment classifies the corresponding image as a positive sample based on the Faster R-CNN or SSD detection result; the class integration results are shown in Table 5.
The three strategies above correct missed detections, but many false detections remain, with many negative samples wrongly detected as positive. To correct these errors, and given that Table 1 shows that the Haar-Adaboost network helps in judging negative samples but locates vulnerable plaque regions with low accuracy, the application uses only the Haar-Adaboost network's class judgment as part of the ensemble learning strategy. The fourth integration strategy therefore corrects the judgment of negative samples through the Haar-Adaboost network: when the Haar-Adaboost network judges an image to be a negative sample, weighted voting is used and, among the four detection networks Faster R-CNN, YOLO, SSD and Haar-Adaboost, the Haar-Adaboost network is assigned a weight of 0.4-0.7 in the class integration while the other three networks are each assigned a weight of 0.1-0.2; when the Haar-Adaboost network judges the image to be a positive sample, the Haar-Adaboost network is given no weight in the class integration and the class judgment follows the third integration strategy. The class integration results are shown in Table 6.
In the fourth integration strategy, when the Haar-Adaboost network judges an image to be a negative sample, the weight of the Haar-Adaboost network in the class integration is set to 0.4 and the weights of the other three networks are each set to 0.2, so that the judgment of negative samples is corrected well through the Haar-Adaboost network.
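The class-integration rule of the fourth strategy can be sketched as follows, using 0/1 class labels from the individual networks and the 0.4/0.2 weights chosen above; the fallback to the third strategy when Haar-Adaboost reports a positive sample follows the description given here.

```python
def integrate_class(haar, frcnn, yolo, ssd):
    """Fourth integration strategy: 0/1 class labels in, 0/1 integrated label out."""
    pm = [2 * c - 1 for c in (haar, frcnn, yolo, ssd)]   # map 0/1 labels to -1/+1 votes
    if haar == 0:
        # Weighted vote over the four networks: Haar-Adaboost 0.4, the others 0.2 each.
        score = 0.4 * pm[0] + 0.2 * (pm[1] + pm[2] + pm[3])
        return 1 if score > 0 else 0
    # Haar-Adaboost reports a positive sample: fall back to the third strategy.
    if frcnn == 1 or ssd == 1:
        return 1                                  # Faster R-CNN / SSD carry the higher weight
    return 1 if frcnn + yolo + ssd >= 2 else 0    # otherwise a simple majority vote
```

With these weights, a negative vote from Haar-Adaboost is overturned only when all three of the other networks report a positive sample (0.6 > 0.4).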
After the class integration of the image is completed, the second integration step, integration of the detected vulnerable plaque regions, is performed for the images whose class output is a positive sample. Since the false detection of negative samples has largely been eliminated in the class integration, when the class output is positive the region integration should retain the regions detected by the individual networks as far as possible; therefore, when several detection networks (individual learners) output vulnerable plaque, all detected vulnerable plaque regions are merged, and the integration result is the union of all regions.
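For the second step, a minimal sketch of the region union is given below, keeping the same set-of-pixel-indices representation used in the earlier sketches (an assumption); the example detections are illustrative.

```python
def merge_regions(regions_per_network):
    """Second integration step: the union of all vulnerable plaque regions
    detected by the individual networks, each region a set of pixel indices."""
    merged = set()
    for regions in regions_per_network:       # one list of detected regions per network
        for region in regions:
            merged |= region
    return merged

# e.g. Merge(F & S): union of illustrative Faster R-CNN and SSD detections
frcnn_regions = [set(range(100, 181))]
ssd_regions = [set(range(150, 221))]
combined = merge_regions([frcnn_regions, ssd_regions])   # spans columns 100-220
```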
Tables 3 to 6 below show the image class labels detected by the individual networks: when an image detected by a network contains a vulnerable plaque region it is a positive sample and labeled "1"; when it contains no vulnerable plaque region it is labeled "0"; "-" indicates that the class output of that network may be either a positive or a negative sample. The class label is output after the first-step class integration, and the vulnerable plaque regions are then integrated on the basis of that class output. If the integrated class label is "0", no region integration is output; if it is "1", the region integration is output according to the region integration strategy. Here "F" means the output detection region is that of the Faster R-CNN network; "S" means it is that of the SSD network; "Y" means it is that of the YOLO network; "Merge(Y & S)" means the detected vulnerable plaque is the union of the vulnerable plaque regions detected by the YOLO and SSD networks; similarly, "Merge(F & S)" and "Merge(F & Y & S)" denote the union of the vulnerable plaque regions detected by the corresponding networks.
TABLE 3 output of the first integration strategy
[table provided as an image in the original patent; values not reproduced]
TABLE 4 output of the second integration strategy
[table provided as an image in the original patent; values not reproduced]
TABLE 5 output results of the third integration strategy
[table provided as an image in the original patent; values not reproduced]
TABLE 6 output results of the fourth integration strategy
[table provided as an image in the original patent; values not reproduced]
As a specific embodiment, the neural network integration method provided by the present application further includes a step of evaluating the output of each integration strategy using the aforementioned evaluation indices. Specifically, the network outputs are integrated according to the four integration strategies designed above; each integration strategy produces a corresponding output, and Table 7 shows the TP, FP, FN and TN values of the final output of each integration strategy.
TABLE 7 Integrated output results calculation indices (TP, FP, FN and TN)
[table provided as an image in the original patent; values not reproduced]
Evaluating the detection performance with these data indices and comparing with the results in Table 1, the TP values increase while the FP and FN values decrease; moreover, the fourth integration strategy adds the Haar-Adaboost network to judge the image class, which substantially corrects the negative sample detections, so the TN value increases and the FP value decreases. The evaluation indices of the detection results computed from the data indices of Table 7 are shown in Table 8.
TABLE 8 Evaluation indices (precision, recall, overlap ratio and detection quality score) of the integrated outputs
[table provided as an image in the original patent; values not reproduced]
Compared with the evaluation indices of the single networks in Table 2, the first three integration strategies, which integrate the YOLO, SSD and Faster R-CNN networks, improve precision, recall and the overlap ratio to some extent, and their detection quality scores are higher than that of any single network. After the Haar-Adaboost network is added in the fourth integration strategy to correct the negative sample class, the precision of vulnerable plaque detection improves by about 10 percentage points relative to Table 2, and the detection quality score increases substantially, again exceeding that of any single network. Since the fourth integration strategy achieves the highest detection quality score of the four strategies, it is the preferred choice for class integration.
In a specific embodiment, referring to FIG. 4, the first row shows the manual labels of a cardiovascular expert, and the second to fifth rows show the outputs of the first to fourth integration strategies respectively; the outputs of the different integration strategies may differ. The first column shows that the output of the first integration strategy agrees well with the expert's manual labeling, and that the third strategy outputs the correct class but its region integration is not accurate. The second column shows that the outputs of the second and fourth integration strategies agree with the expert labeling. The third column shows that only the fourth integration strategy, which adds the Haar-Adaboost network to correct the negative-sample class output, agrees with the expert's manual labeling.
In summary, compared with the prior art, the neural network integration method for automatically detecting vulnerable plaque in IVOCT images provided by the invention, which applies neural network integration to vulnerable plaque detection, has the following advantages:
1. detecting vulnerable plaque with a neural network integration method improves the accuracy of vulnerable plaque region detection, reduces missed detections and false detections, increases the overlap between the detected region and the real region, and improves the quality of the final detection result;
2. the method helps realize fully automatic vulnerable plaque detection in IVOCT images and makes the assessment of vulnerable plaque regions more objective, while saving manpower and material resources and reducing physicians' workload;
3. results on the test set show that precision, recall and the overlap ratio all improve to some extent and the detection quality score is higher than that of any single network; in particular, after the Haar-Adaboost detection network is added to the integration strategy to correct the negative-sample class, the precision of vulnerable plaque detection improves markedly and the detection quality score increases substantially, remaining higher than that of any single network.
Finally, although the present invention has been described in detail with reference to the preferred embodiments, it should be understood by those skilled in the art that various changes and modifications may be made therein without departing from the spirit and scope of the invention as defined by the appended claims.

Claims (8)

  1. An IVOCT image vulnerable plaque automatic detection neural network integration method is characterized by comprising the following steps:
    selecting the networks to be integrated, namely selecting different representative networks for vulnerable plaque detection, wherein the selected networks comprise the traditional target detection network Haar-Adaboost and deep neural networks, the deep neural networks comprising the regression-based target detection networks YOLO and SSD and the region-proposal-based target detection network Faster R-CNN;
    network training and detection, namely setting data indices and evaluation indices that reflect each network's detection of vulnerable plaque according to the characteristics of the IVOCT images; training each network to generate its own test model, running a predetermined number of IVOCT test images through each network to obtain its output, i.e. the data indices of each network's detection result, and computing from these the evaluation indices of each network's detection result;
    designing an integration strategy according to the data indices and evaluation indices of each network's detection result, integrating the class labels of the IVOCT detection images produced by each network with the integration strategy to decide whether the IVOCT detection image contains a vulnerable plaque region, then merging all regions containing vulnerable plaque in the IVOCT detection image according to the result of the first-step class integration, and taking the merged result as the final output detection result; wherein,
    the integration strategies include four integration strategies, specifically:
    in the first integration strategy, among the three detection networks Faster R-CNN, YOLO and SSD, if any one network detects vulnerable plaque, the corresponding image is judged to be a positive sample;
    the second integration strategy integrates the outputs of the three detection networks Faster R-CNN, YOLO and SSD with a simple voting method in which the minority obeys the majority;
    the third integration strategy adopts simple voting overall, but when Faster R-CNN or SSD among the three detection networks Faster R-CNN, YOLO and SSD detects vulnerable plaque, a weighted voting method gives these two networks higher weights, so that the final judgment classifies the corresponding image as a positive sample based on the Faster R-CNN or SSD detection result;
    in the fourth integration strategy, when the Haar-Adaboost network judges an image to be a negative sample, a weighted voting method is used and, among the four detection networks Faster R-CNN, YOLO, SSD and Haar-Adaboost, the Haar-Adaboost network is assigned a weight of 0.4-0.7 in the class integration while the other three networks are each assigned a weight of 0.1-0.2; when the Haar-Adaboost network judges the image to be a positive sample, the Haar-Adaboost network is given no weight in the class integration and the class judgment follows the third integration strategy.
  2. The IVOCT image vulnerable plaque automatic detection neural network integration method of claim 1, wherein the data indices of each network's vulnerable plaque detection result comprise the true positives TP, false positives FP, false negatives FN and true negatives TN; wherein,
    TP = Σ_{i=1}^{n} Σ_{j=1}^{m} (DSC(A_i, B_j) > T)
    FP = Σ_{j=1}^{m} (((DSC(A_i, B_j) > 0) ∧ (DSC(A_i, B_j) ≤ T)) ∨ (∀i ∈ [1, n]: A_i ∩ B_j = ∅))
    FN = Σ_{i=1}^{n} (∀j ∈ [1, m]: A_i ∩ B_j = ∅)
    wherein T is a threshold, and the expression DSC(A_i, B_j) > T is a logical expression whose value is represented by the number 1 when true and by the number 0 when false; if DSC(A_i, B_j) > T, then B_j is considered a vulnerable plaque region; ((DSC(A_i, B_j) > 0) ∧ (DSC(A_i, B_j) ≤ T)) and (∀i ∈ [1, n]: A_i ∩ B_j = ∅) are likewise logical expressions whose values are represented by the number 1 when true and by the number 0 when false; the formula for FP states that if 0 < DSC(A_i, B_j) ≤ T, i.e. A_i and B_j are not similar, or set A has no element intersecting B_j, then B_j is considered not to be a vulnerable plaque region; the formula for FN states that if no element of set B intersects A_i, then the vulnerable plaque region A_i is considered missed; DSC(A_i, B_j) is the Dice similarity coefficient, used to measure the degree of overlap between a detected vulnerable plaque region and the real vulnerable plaque region,
    DSC(A_i, B_j) = 2|A_i ∩ B_j| / (|A_i| + |B_j|)
    A_i denotes the i-th real vulnerable plaque region in the set of real vulnerable plaque regions A = {A_1, A_2, ..., A_n}, B_j denotes the j-th detected vulnerable plaque region in the set of detected vulnerable plaque regions B = {B_1, B_2, ..., B_m}, i ∈ [1, n], j ∈ [1, m], and |·| denotes the width of a region;
    the evaluation indices of each network's vulnerable plaque detection result comprise the precision P, recall R, overlap ratio D and detection quality score S, wherein
    P = TP / (TP + FP)
    R = TP / (TP + FN)
    D = (1 / TP) Σ_{DSC(A_i, B_j) > T} DSC(A_i, B_j)
    S = w_1 · 2PR / (P + R) + w_2 · D
    wherein w_1 is the weight factor for precision and recall, w_2 is the weight factor for the overlap ratio, w_1, w_2 ∈ [0, 1], and w_1 + w_2 = 1.
  3. The IVOCT image vulnerable plaque automatic detection neural network integration method of claim 2, wherein the threshold T = 0.5.
  4. The IVOCT image vulnerable plaque automatic detection neural network integration method of claim 1, wherein the simple voting method is calculated using the following formula:
    C(x) = sign(Σ_{i=1}^{N} C_i(x))
    where C(x) denotes the final voting output, sign is the sign function, N is the number of classifiers, and C_i(x) is the voting result of the i-th classifier.
  5. The IVOCT image vulnerable plaque automatic detection neural network integration method of claim 1, wherein the weighted voting method is calculated using the following formula:
    C(x) = sign(Σ_{i=1}^{N} w_i C_i(x))
    where C(x) denotes the final voting output, sign is the sign function, N is the number of classifiers, w_i is the weight of the i-th classifier, and C_i(x) is the voting result of the i-th classifier.
  6. The IVOCT image vulnerable plaque automatic detection neural network integration method of claim 1, wherein in the fourth integration strategy, when the Haar-Adaboost network judges an image to be a negative sample, the weight of the Haar-Adaboost network in the class integration is set to 0.4 and the weights of the other three networks are each set to 0.2.
  7. The IVOCT image vulnerable plaque automatic detection neural network integration method of claim 1, wherein in the second step (region integration) of the two-step integration method, regions detected by individual networks should be retained as far as possible, so that when several detection networks output vulnerable plaque, all detected vulnerable plaque regions are merged and the integration result is the union of all regions.
  8. The IVOCT image vulnerable plaque automatic detection neural network integration method of claim 1, further comprising the step of evaluating the output of each integration strategy using the aforementioned evaluation indices.
CN201910402166.9A 2019-05-14 2019-05-14 Neural network integration method for automatically detecting vulnerable plaque of IVOCT image Expired - Fee Related CN110136115B (en)

Priority Applications (1)

Application Number Priority Date Filing Date Title
CN201910402166.9A CN110136115B (en) 2019-05-14 2019-05-14 Neural network integration method for automatically detecting vulnerable plaque of IVOCT image

Applications Claiming Priority (1)

Application Number Priority Date Filing Date Title
CN201910402166.9A CN110136115B (en) 2019-05-14 2019-05-14 Neural network integration method for automatically detecting vulnerable plaque of IVOCT image

Publications (2)

Publication Number Publication Date
CN110136115A CN110136115A (en) 2019-08-16
CN110136115B true CN110136115B (en) 2022-11-08

Family

ID=67574096

Family Applications (1)

Application Number Title Priority Date Filing Date
CN201910402166.9A Expired - Fee Related CN110136115B (en) 2019-05-14 2019-05-14 Neural network integration method for automatically detecting vulnerable plaque of IVOCT image

Country Status (1)

Country Link
CN (1) CN110136115B (en)

Family Cites Families (7)

* Cited by examiner, † Cited by third party
Publication number Priority date Publication date Assignee Title
CN103366374B (en) * 2013-07-12 2016-04-20 重庆大学 Based on the passageway for fire apparatus obstacle detection method of images match
CN108416769B (en) * 2018-03-02 2021-06-04 成都斯斐德科技有限公司 IVOCT image vulnerable plaque automatic detection method based on preprocessing
CN108416394B (en) * 2018-03-22 2019-09-03 河南工业大学 Multi-target detection model building method based on convolutional neural networks
CN108875595A (en) * 2018-05-29 2018-11-23 重庆大学 A kind of Driving Scene object detection method merged based on deep learning and multilayer feature
CN108986073A (en) * 2018-06-04 2018-12-11 东南大学 A kind of CT image pulmonary nodule detection method based on improved Faster R-CNN frame
CN108961229A (en) * 2018-06-27 2018-12-07 东北大学 Cardiovascular OCT image based on deep learning easily loses plaque detection method and system
CN109598290A (en) * 2018-11-22 2019-04-09 上海交通大学 A kind of image small target detecting method combined based on hierarchical detection

Also Published As

Publication number Publication date
CN110136115A (en) 2019-08-16

Similar Documents

Publication Publication Date Title
CN110992382B (en) Fundus image optic cup optic disc segmentation method and system for assisting glaucoma screening
CN110766051A (en) Lung nodule morphological classification method based on neural network
US20230230241A1 (en) System and method for detecting lung abnormalities
CN114387201B (en) Cytopathic image auxiliary diagnosis system based on deep learning and reinforcement learning
CN110111895A (en) A kind of method for building up of nasopharyngeal carcinoma far-end transfer prediction model
CN113763340B (en) Automatic grading method based on multitask deep learning ankylosing spondylitis
CN111798976A (en) DDH artificial intelligence auxiliary diagnosis method and device
CN113610118B (en) Glaucoma diagnosis method, device, equipment and method based on multitasking course learning
US20220121902A1 (en) Method and apparatus for quality prediction
CN112819821A (en) Cell nucleus image detection method
CN116883768B (en) Lung nodule intelligent grading method and system based on multi-modal feature fusion
Merone et al. A computer-aided diagnosis system for HEp-2 fluorescence intensity classification
WO2023160666A1 (en) Target detection method and apparatus, and target detection model training method and apparatus
CN114512240A (en) Gout prediction model system, equipment and storage medium
Davis et al. Automated bone age assessment using feature extraction
CN111833321A (en) Window-adjusting optimization-enhanced intracranial hemorrhage detection model and construction method thereof
CN115187566A (en) Intracranial aneurysm detection method and device based on MRA image
CN114098779A (en) Intelligent pneumoconiosis grade judging method
AU2021100007A4 (en) Deep Learning Based System for the Detection of COVID-19 Infections
CN114972153A (en) Bridge vibration displacement visual measurement method and system based on deep learning
CN110136115B (en) Neural network integration method for automatically detecting vulnerable plaque of IVOCT image
CN113052227A (en) Pulmonary tuberculosis identification method based on SE-ResNet
Soda et al. A multi-expert system to classify fluorescent intensity in antinuclear autoantibodies testing
CN116504406A (en) Method and system for constructing lung cancer postoperative risk model based on image combination pathology
CN116452592A (en) Method, device and system for constructing brain vascular disease AI cognitive function evaluation model

Legal Events

Date Code Title Description
PB01 Publication
SE01 Entry into force of request for substantive examination
GR01 Patent grant
CF01 Termination of patent right due to non-payment of annual fee

Granted publication date: 20221108