CN117198514B - A vulnerable plaque identification method and system based on CLIP model - Google Patents


Info

Publication number: CN117198514B
Application number: CN202311473361.3A
Authority: CN (China)
Prior art keywords: image, vulnerable plaque, text, clip
Legal status: Active (application granted)
Other languages: Chinese (zh)
Other versions: CN117198514A
Inventors: 王怡宁, 易妍, 徐橙, 钱真, 刘晓鸣
Current Assignee: Beijing Lianying Intelligent Imaging Technology Research Institute; Peking Union Medical College Hospital Chinese Academy of Medical Sciences
Original Assignee: Beijing Lianying Intelligent Imaging Technology Research Institute; Peking Union Medical College Hospital Chinese Academy of Medical Sciences
Application filed by Beijing Lianying Intelligent Imaging Technology Research Institute and Peking Union Medical College Hospital Chinese Academy of Medical Sciences
Priority to CN202311473361.3A
Publication of CN117198514A, followed by grant and publication of CN117198514B

Landscapes

  • Image Analysis (AREA)

Abstract

The invention relates to the field of medical engineering and provides a vulnerable plaque identification method and system based on the CLIP model, in view of the impact of vulnerable plaques on major adverse cardiovascular events and the current scarcity of image-based research on automatic vulnerable plaque identification. The vulnerable plaque identification network model constructed by the invention builds on the CLIP model and introduces a BN layer and a Dropout layer to process the text features and image features respectively, reducing overfitting. In addition, considering that the characterization of some vulnerable plaques is highly subjective and the gold-standard labels are easily contaminated with noise, a bootstrapping loss is adopted in place of the standard cross-entropy loss function. The bootstrapping loss formula incorporates the predicted label, which lowers the loss value of noisy samples and thereby indirectly dampens the updates they drive; if the prediction matches the true value, the loss is 0, so normal samples can still be trained effectively.

Description

A vulnerable plaque identification method and system based on the CLIP model

Technical Field

The invention relates to the field of medical engineering, and specifically to a method and system for identifying vulnerable plaques based on the CLIP model.

Background Art

Patients with acute coronary syndrome have a high risk of death; the main pathological basis is thrombosis secondary to the rupture or surface erosion of unstable plaques. Plaques prone to rupture or surface erosion (also called vulnerable or high-risk plaques) are closely associated with major adverse cardiovascular events (MACE). Therefore, early identification of vulnerable plaques, followed by intensified intervention, is of great significance for reducing the occurrence of MACE. Traditional invasive coronary angiography can only display the vessel lumen and cannot directly show plaques and their characteristics. Coronary CTA (CCTA), by contrast, can not only reliably evaluate luminal stenosis and its functional significance, but also accurately assess plaque morphology and composition and identify vulnerable plaques, which is extremely important for guiding the clinical management of patients with coronary heart disease.

In recent years, research on radiomics and machine learning in the cardiovascular field has been increasing. Radiomics reduces human measurement error through automated plaque segmentation and quantification, and can integrate clinical and imaging data for a comprehensive analysis of disease, greatly improving the practical value of high-dimensional quantitative plaque analysis on CCTA. On the value of machine learning for coronary plaque, recent studies have shown that, compared with the predictive power of current cardiovascular risk scores, coronary calcium scores, and luminal stenosis severity, lesion features extracted by machine learning (including minimal lumen area, percent atheroma volume, fibrofatty and necrotic core volume, plaque volume, left anterior descending artery involvement, and the remodeling index) show higher prognostic value for predicting MACE within 5 years. Machine learning can also be used to extract high-risk plaque features and identify patients at risk of plaque progression. However, there is currently little research that automatically identifies vulnerable plaques from images and uses the result to guide clinical treatment decisions and prognostic assessment; the topic merits in-depth exploration so that it can deliver greater value.

Summary of the Invention

In view of the impact of vulnerable plaques on major adverse cardiovascular events and the current state of image-based research on automatic vulnerable plaque identification, the present invention provides a method and system for identifying vulnerable plaques based on the CLIP model.

To achieve the above objects, the present invention adopts the following technical solutions:

In a first aspect, the present invention provides a vulnerable plaque identification method based on the CLIP model, comprising the following steps:

S1: image preprocessing;

S2: construction and training of a vulnerable plaque identification network model;

S3: inputting the preprocessed image and text into the trained model to complete the identification and classification of vulnerable plaques.

Further, the image preprocessing in S1 proceeds as follows: from an MPR (multiplanar reconstruction) image with a known vessel centerline and lesion extent, generate a CPR (curved planar reconstruction) image with a spacing of 0.3*0.3*0.3. Because the signs of vulnerable plaque are mainly lipid components, a wide window is needed; the normalization is defined with a window level of 350 HU and a window width of 1500 HU. From the normalized CPR image, crop a patch at the lesion bounding box and resample it to 64*64*64.

Further, the vulnerable plaque identification network model in S2 comprises an image encoding module, a text encoding module, an image feature processing layer, a text feature processing layer, a feature concatenation layer, and a classifier. The image encoding module extracts image features; the text encoding module extracts text features; the image feature processing layer processes the image features extracted by the image encoding module; the text feature processing layer processes the text features extracted by the text encoding module; the feature concatenation layer concatenates the processed image and text features; and the classifier outputs the identification and classification result.

Further, the image encoding module is a vision transformer or a ResNet50 network; the text feature processing layer and the image feature processing layer are a BN (batch normalization) layer and a Dropout layer, respectively.

Further, the model training in S2 uses the AdamW parameter optimization method.

Further, the training specifically comprises:

Considering that the characterization of some vulnerable plaques is highly subjective and the gold-standard labels are easily contaminated with noise, a bootstrapping loss is adopted in place of the standard cross-entropy loss function;

The bootstrapping loss is given by:

$$L_{boot} = -\frac{1}{N}\sum_{i=1}^{N} \left[\beta\, y_i + (1-\beta)\,\hat{y}_i\right] \log p_i$$

where $y_i$ is the true label, $\hat{y}_i$ is the predicted label, $p_i$ is the predicted probability, $\beta$ is the noise weight, and $N$ is the number of samples.

Further, the text in S3 is the description of the plaque in a routine report, e.g. plaque location, type, morphology, and degree of stenosis.

In a second aspect, the present invention provides a vulnerable plaque identification system based on the CLIP model, used to implement the CLIP-based vulnerable plaque identification method described above. The system comprises an image preprocessing unit and a vulnerable plaque identification unit. The image preprocessing unit generates, from an MPR image with a known vessel centerline and lesion extent, a CPR image with a spacing of 0.3*0.3*0.3, normalizes it with a window level of 350 HU and a window width of 1500 HU, crops a patch at the lesion bounding box of the normalized CPR image, and resamples it to 64*64*64. The vulnerable plaque identification unit contains the vulnerable plaque identification network model and mainly performs feature extraction, feature processing, and feature concatenation to obtain the vulnerable plaque identification and classification result.

In a third aspect, the present invention provides an electronic device comprising a memory, a processor, and a computer program stored in the memory and executable on the processor; when the processor executes the computer program, it implements the CLIP-based vulnerable plaque identification method described above.

In a fourth aspect, the present invention provides a non-transitory computer-readable storage medium storing a computer program which, when executed by a processor, implements the CLIP-based vulnerable plaque identification method described above.

Compared with the prior art, the present invention has the following advantages:

In view of the impact of vulnerable plaques on major adverse cardiovascular events and the current state of image-based research on automatic vulnerable plaque identification, the present invention provides a method and system for identifying vulnerable plaques based on the CLIP model. The vulnerable plaque identification network model constructed by the invention builds on the CLIP model and introduces a BN layer and a Dropout layer to process the text features and image features respectively, reducing overfitting. In addition, considering that the characterization of some vulnerable plaques is highly subjective and the gold-standard labels are easily contaminated with noise, the invention adopts a bootstrapping loss in place of the standard cross-entropy loss function. The bootstrapping loss formula incorporates the predicted label, which lowers the loss value of noisy samples and thereby indirectly dampens the updates they drive; if the prediction matches the true value, the loss is 0, so normal samples can still be trained effectively.

Description of the Drawings

Figure 1 is a flow chart of the method of the present invention.

Figure 2 is a flow chart of the image preprocessing.

Figure 3 is a structural diagram of the vulnerable plaque identification network model of the present invention.

Figure 4 is a schematic diagram of the CNN architectures considered for the CLIP model's image encoding module.

Figure 5 is a schematic diagram of the CLIP model's text encoder architecture.

Detailed Description of Embodiments

The technical solution of the present invention is described in detail below with reference to the embodiments and the accompanying drawings. It should be noted that the following examples are only used to illustrate the invention and are not intended to limit its scope of protection. Unless otherwise specified, the technical means used in the examples are conventional means well known to those skilled in the art.

As shown in Figure 1, a vulnerable plaque identification method based on the CLIP model includes the following steps:

1. Image preprocessing: from an MPR image with a known vessel centerline and lesion extent, generate a CPR image with a spacing of 0.3*0.3*0.3. Because the signs of vulnerable plaque are mainly lipid components, a wide window is needed; here the normalization is defined with a window level of 350 HU and a window width of 1500 HU. From the normalized CPR image, crop a patch at the lesion bounding box and resample it to 64*64*64 (preserving the original aspect ratio and zero-padding the remainder), as shown in Figure 2.
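The windowing, cropping, and resampling steps above can be sketched in a few lines of numpy. This is a minimal illustration under stated assumptions: the function names, the crop-box format, and the nearest-neighbour resampling are mine, not from the patent, and a production pipeline would interpolate properly when resampling.

```python
import numpy as np

def window_normalize(volume, level=350.0, width=1500.0):
    """Clip a HU volume to the window [level - width/2, level + width/2] and scale to [0, 1]."""
    lo, hi = level - width / 2.0, level + width / 2.0
    return (np.clip(volume, lo, hi) - lo) / (hi - lo)

def crop_pad_resample(volume, box, out_shape=(64, 64, 64)):
    """Crop the lesion bounding box, zero-pad to a cube (preserving the aspect ratio),
    then resample to out_shape with nearest-neighbour sampling."""
    z0, z1, y0, y1, x0, x1 = box
    patch = volume[z0:z1, y0:y1, x0:x1]
    side = max(patch.shape)                        # cube edge that preserves the aspect ratio
    cube = np.zeros((side, side, side), dtype=patch.dtype)
    cube[:patch.shape[0], :patch.shape[1], :patch.shape[2]] = patch   # zero-pad the remainder
    idx = [np.floor(np.linspace(0, side - 1e-6, n)).astype(int) for n in out_shape]
    return cube[np.ix_(*idx)]

# Toy CPR volume in Hounsfield units and a hypothetical lesion box (z0, z1, y0, y1, x0, x1).
vol = np.random.uniform(-1000.0, 2000.0, size=(40, 50, 60)).astype(np.float32)
patch = crop_pad_resample(window_normalize(vol), (5, 35, 10, 45, 20, 55))
```

The zero padding mirrors the patent's note that the original aspect ratio is kept and the extra space is filled with 0.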

2. Construction and training of the vulnerable plaque identification network model.

As shown in Figure 3, the vulnerable plaque identification network model of the present invention comprises an image encoding module, a text encoding module, an image feature processing layer, a text feature processing layer, a feature concatenation layer, and a classifier composed of two fully connected layers. The image encoding module extracts image features; the text encoding module extracts text features; the image feature processing layer processes the image features extracted by the image encoding module; the text feature processing layer processes the text features extracted by the text encoding module; the feature concatenation layer concatenates the processed image and text features; and the two-layer fully connected classifier outputs the identification and classification result.

The CLIP model uses the supervision signal of text to train a visual model with strong transfer ability. It consists of two encoding modules, which encode the text data and the image data respectively. For the image encoding module, a variety of model architectures have been explored; two options based on traditional CNN architectures are shown in Figure 4, but CLIP's ViT variant is about three times more computationally efficient to train, so the present invention selects CLIP's ViT variant as the image encoder architecture. The text encoder is a decoder-only Transformer, meaning that masked self-attention is used in every layer; masked self-attention ensures that the Transformer's representation of each token in the sequence depends only on the tokens before it. Figure 5 gives a basic description of the text encoder architecture.
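The masked self-attention constraint described here (each token attends only to itself and earlier tokens) can be illustrated with a bare-bones numpy sketch. This is a single head with no learned projections, purely to show the causal mask; it is not CLIP's actual implementation.

```python
import numpy as np

def causal_self_attention(x):
    """Single-head self-attention where token i may only attend to tokens 0..i."""
    seq_len, dim = x.shape
    scores = x @ x.T / np.sqrt(dim)               # queries, keys, and values are all x here
    future = np.triu(np.ones((seq_len, seq_len), dtype=bool), k=1)
    scores = np.where(future, -1e9, scores)       # mask out attention to future tokens
    weights = np.exp(scores - scores.max(axis=-1, keepdims=True))
    weights /= weights.sum(axis=-1, keepdims=True)
    return weights @ x, weights

tokens = np.random.randn(5, 8)                    # 5 token embeddings of width 8
out, attn = causal_self_attention(tokens)         # attn is lower-triangular
```

Because the mask sets future-token scores to a large negative value before the softmax, the attention matrix is lower-triangular: each output position is a mixture of the current and earlier tokens only.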

The image encoding module and the text encoding module of the present invention are, respectively, the image encoding module (CLIP's ViT variant) and the text encoding module of the CLIP model. The image encoding module can be a network such as a vision transformer or ResNet50. The input text for the text encoding module is the description of the plaque in the report, which may include the plaque's location, type, morphology, degree of stenosis, and so on.

In addition, to reduce overfitting, the present invention introduces a BN layer and a Dropout layer to process the text features and the image features respectively, then concatenates the two resulting features and passes them through a classifier consisting of two fully connected layers.
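A minimal numpy sketch of this fusion head follows, assuming each encoder emits a 512-dimensional feature vector. The hidden width, ReLU activation, dropout rate, and weight initialisation are illustrative choices of mine, not values from the patent.

```python
import numpy as np

rng = np.random.default_rng(0)

def batchnorm(x, eps=1e-5):
    """Batch normalization over the batch dimension (training-mode statistics)."""
    return (x - x.mean(axis=0)) / np.sqrt(x.var(axis=0) + eps)

def dropout(x, p=0.5, training=True):
    """Inverted dropout: zero activations with probability p and rescale the survivors."""
    if not training:
        return x
    mask = rng.random(x.shape) >= p
    return x * mask / (1.0 - p)

def fusion_head(img_feat, txt_feat, w1, b1, w2, b2, training=True):
    """Dropout on image features, BN on text features, concatenate, 2-layer classifier."""
    fused = np.concatenate([dropout(img_feat, training=training),
                            batchnorm(txt_feat)], axis=1)
    hidden = np.maximum(fused @ w1 + b1, 0.0)     # first fully connected layer + ReLU
    return hidden @ w2 + b2                       # second fully connected layer -> logits

batch, d_img, d_txt, hidden_dim, n_classes = 8, 512, 512, 128, 2
img_feat = rng.standard_normal((batch, d_img))
txt_feat = rng.standard_normal((batch, d_txt))
w1 = rng.standard_normal((d_img + d_txt, hidden_dim)) * 0.01
b1 = np.zeros(hidden_dim)
w2 = rng.standard_normal((hidden_dim, n_classes)) * 0.01
b2 = np.zeros(n_classes)
logits = fusion_head(img_feat, txt_feat, w1, b1, w2, b2)
```

In a real implementation the two encoders would be CLIP modules and the head would be trained end to end; the sketch only shows how the two regularized feature streams are fused before classification.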

Model training uses the AdamW parameter optimization method, which is more efficient than the traditional Adam optimizer.
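For reference, a single AdamW update can be sketched as follows; the distinguishing feature versus Adam is the decoupled weight-decay term applied directly to the weights rather than folded into the gradient. The hyperparameter values and the toy quadratic loss are illustrative assumptions.

```python
import numpy as np

def adamw_step(w, grad, m, v, t, lr=1e-3, beta1=0.9, beta2=0.999,
               eps=1e-8, weight_decay=1e-2):
    """One AdamW update: Adam moment estimates plus decoupled weight decay."""
    m = beta1 * m + (1 - beta1) * grad
    v = beta2 * v + (1 - beta2) * grad ** 2
    m_hat = m / (1 - beta1 ** t)               # bias-corrected first moment
    v_hat = v / (1 - beta2 ** t)               # bias-corrected second moment
    w = w - lr * (m_hat / (np.sqrt(v_hat) + eps) + weight_decay * w)
    return w, m, v

w = np.array([1.0, -2.0])
m = np.zeros_like(w)
v = np.zeros_like(w)
for t in range(1, 101):
    grad = 2 * w                               # gradient of the toy quadratic loss ||w||^2
    w, m, v = adamw_step(w, grad, m, v, t)
```

On this toy objective both weights move steadily toward zero, and the `weight_decay * w` term shrinks them independently of the gradient magnitude.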

The training specifically comprises:

Considering that the characterization of some vulnerable plaques is highly subjective and the gold-standard labels are easily contaminated with noise, a bootstrapping loss is adopted in place of the standard cross-entropy loss function;

The cross-entropy loss is:

$$L_{CE} = -\frac{1}{N}\sum_{i=1}^{N} y_i \log p_i$$

The bootstrapping loss is given by:

$$L_{boot} = -\frac{1}{N}\sum_{i=1}^{N} \left[\beta\, y_i + (1-\beta)\,\hat{y}_i\right] \log p_i$$

where $y_i$ is the true label, $\hat{y}_i$ is the predicted label, $p_i$ is the predicted probability, $\beta$ is the noise weight, and $N$ is the number of samples.
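This loss can be sketched in numpy in its hard-bootstrapping form, mixing the one-hot gold label with the model's own predicted label using the noise weight beta. The function name and the toy batches are mine.

```python
import numpy as np

def bootstrapping_loss(y_true, probs, beta=0.8, eps=1e-12):
    """Hard bootstrapping loss: mix the one-hot true label with the model's own
    predicted label (weighted by beta), then apply cross-entropy."""
    y_pred = np.eye(probs.shape[1])[probs.argmax(axis=1)]   # one-hot predicted label
    target = beta * y_true + (1.0 - beta) * y_pred
    return float(-np.mean(np.sum(target * np.log(probs + eps), axis=1)))

y_true = np.array([[1.0, 0.0], [0.0, 1.0]])          # one-hot gold labels
confident = np.array([[0.99, 0.01], [0.01, 0.99]])   # correct, confident predictions
noisy_gold = np.array([[0.01, 0.99], [0.01, 0.99]])  # sample 0 disagrees with its gold label
loss_clean = bootstrapping_loss(y_true, confident)
loss_noisy = bootstrapping_loss(y_true, noisy_gold)
```

When prediction and gold label agree with high confidence, the loss approaches 0; when the model confidently disagrees with a (possibly noisy) gold label, the (1 - beta) term on the predicted label makes the loss smaller than plain cross-entropy, which is the intended dampening of noisy samples.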

The bootstrapping loss of the present invention applies a loss correction on top of the cross-entropy above. The intuition is that noisy samples produce large loss values, so they have a large influence on the model; introducing the predicted label into the bootstrapping loss formula lowers the loss value of noisy samples, which indirectly dampens the updates they drive. If the prediction matches the true value, the loss is 0, and normal samples can still be trained effectively.

3. Input the preprocessed image and text into the trained vulnerable plaque identification network model to complete the identification and classification of vulnerable plaques.

Another embodiment of the present invention provides a vulnerable plaque identification system based on the CLIP model, used to implement the CLIP-based vulnerable plaque identification method described above. The system comprises an image preprocessing unit and a vulnerable plaque identification unit. The image preprocessing unit generates, from an MPR image with a known vessel centerline and lesion extent, a CPR image with a spacing of 0.3*0.3*0.3, normalizes it with a window level of 350 HU and a window width of 1500 HU, crops a patch at the lesion bounding box of the normalized CPR image, and resamples it to 64*64*64. The vulnerable plaque identification unit contains the vulnerable plaque identification network model and performs feature extraction, feature processing, and feature concatenation to obtain the vulnerable plaque identification and classification result.

A third embodiment of the present invention provides an electronic device comprising a memory, a processor, and a computer program stored in the memory and executable on the processor; when the processor executes the computer program, it implements the CLIP-based vulnerable plaque identification method described above.

A fourth embodiment of the present invention provides a non-transitory computer-readable storage medium storing a computer program which, when executed by a processor, implements the CLIP-based vulnerable plaque identification method described above.

Although the present invention has been described in detail above by way of general description and specific embodiments, it is obvious to those skilled in the art that modifications or improvements can be made on the basis of the invention. Such modifications or improvements, made without departing from the spirit of the invention, all fall within the scope of protection claimed by the invention.

Claims (7)

1. A vulnerable plaque identification method based on the CLIP model, characterized by comprising the following steps:

S1: image preprocessing;

S2: construction and training of a vulnerable plaque identification network model;

S3: inputting the preprocessed image and text into the trained model to complete the identification and classification of vulnerable plaques;

wherein the image preprocessing in S1 specifically comprises: generating a curved planar reconstruction (CPR) image from a multiplanar reconstruction (MPR) image with a known vessel centerline and lesion extent; normalizing with a window level and window width; taking, from the normalized CPR image, an image patch corresponding to the cuboid region containing the lesion; and resampling it;

wherein the vulnerable plaque identification network model in S2 comprises an image encoding module, a text encoding module, an image feature processing layer, a text feature processing layer, a feature concatenation layer, and a classifier; the image encoding module extracts image features, the text encoding module extracts text features, the image feature processing layer processes the image features extracted by the image encoding module, the text feature processing layer processes the text features extracted by the text encoding module, the feature concatenation layer concatenates the processed image and text features, and the classifier outputs the identification and classification result;

wherein the model training in S2 uses the AdamW parameter optimization method;

wherein the training in S2 specifically comprises: adopting a bootstrapping loss in place of the standard cross-entropy loss function, the bootstrapping loss being

$$L_{boot} = -\frac{1}{N}\sum_{i=1}^{N} \left[\beta\, y_i + (1-\beta)\,\hat{y}_i\right] \log p_i$$

where $y_i$ is the true label, $\hat{y}_i$ is the predicted label, $p_i$ is the predicted probability, $\beta$ is the noise weight, and $N$ is the number of samples.

2. The CLIP-based vulnerable plaque identification method according to claim 1, characterized in that the image preprocessing in S1 specifically comprises: generating, from the MPR image with a known vessel centerline and lesion extent, a CPR image with a pixel spacing of 0.3*0.3*0.3; normalizing with a window level of 350 HU and a window width of 1500 HU; taking, from the normalized CPR image, an image patch corresponding to the cuboid region containing the lesion; and resampling it to 64*64*64.

3. The CLIP-based vulnerable plaque identification method according to claim 1, characterized in that the image encoding module is a vision transformer or a ResNet50 network; the text feature processing layer and the image feature processing layer are a BN layer and a Dropout layer, respectively.

4. The CLIP-based vulnerable plaque identification method according to claim 1, characterized in that the text in S3 is the description of the plaque in a report: plaque location, type, morphology, and degree of stenosis.

5. A vulnerable plaque identification system based on the CLIP model, characterized in that the system is used to implement the CLIP-based vulnerable plaque identification method according to any one of claims 1-4, and comprises an image preprocessing unit and a vulnerable plaque identification unit; the image preprocessing unit generates a CPR image from an MPR image with a known vessel centerline and lesion extent, normalizes it with a window level and window width, takes from the normalized CPR image an image patch corresponding to the cuboid region containing the lesion, and resamples it; the vulnerable plaque identification unit contains the vulnerable plaque identification network model and mainly performs feature extraction, feature processing, and feature concatenation to obtain the vulnerable plaque identification and classification result.

6. An electronic device, characterized by comprising a memory, a processor, and a computer program stored in the memory and executable on the processor, wherein the processor implements the CLIP-based vulnerable plaque identification method according to any one of claims 1-4 when executing the computer program.

7. A non-transitory computer-readable storage medium, characterized in that a computer program is stored on the medium, the computer program implementing the CLIP-based vulnerable plaque identification method according to any one of claims 1-4 when executed by a processor.
CN202311473361.3A 2023-11-08 2023-11-08 A vulnerable plaque identification method and system based on CLIP model Active CN117198514B (en)

Priority Applications (1)

Application Number Priority Date Filing Date Title
CN202311473361.3A CN117198514B (en) 2023-11-08 2023-11-08 A vulnerable plaque identification method and system based on CLIP model


Publications (2)

Publication Number, Publication Date
CN117198514A (en), 2023-12-08
CN117198514B (en), 2024-01-30

Family

ID=88989102

Family Applications (1)

Application Number Title Priority Date Filing Date
CN202311473361.3A Active CN117198514B (en) 2023-11-08 2023-11-08 A vulnerable plaque identification method and system based on CLIP model

Country Status (1)

Country Link
CN (1) CN117198514B (en)

Families Citing this family (2)

* Cited by examiner, † Cited by third party
Publication number Priority date Publication date Assignee Title
CN117932404A (en) * 2024-01-05 2024-04-26 华南理工大学 A dual-modal scoring method and system for coupling park social media review text and images
CN118429665B (en) * 2024-07-03 2024-10-11 杭州倍佐健康科技有限公司 Method for identifying coronary CTA atheromatous plaque and vulnerable plaque based on AI model

Citations (11)

* Cited by examiner, † Cited by third party
Publication number Priority date Publication date Assignee Title
CN103917166A (en) * 2011-08-17 2014-07-09 Vp诊断公司 A method and system of characterization of carotid plaque
CN106056126A (en) * 2015-02-13 2016-10-26 西门子公司 Plaque vulnerability assessment in medical imaging
CN108492272A (en) * 2018-03-26 2018-09-04 西安交通大学 Cardiovascular vulnerable plaque recognition methods based on attention model and multitask neural network and system
WO2019238976A1 (en) * 2018-06-15 2019-12-19 Université de Liège Image classification using neural networks
WO2021257893A1 (en) * 2020-06-19 2021-12-23 Cleerly, Inc. Systems, methods, and devices for medical image analysis, diagnosis, risk stratification, decision making and/or disease tracking
CN114387464A (en) * 2021-12-01 2022-04-22 杭州脉流科技有限公司 Vulnerable plaque identification method, computer equipment, readable storage medium and program product based on IVUS images
WO2023004451A1 (en) * 2021-07-28 2023-02-02 Artrya Limited A coronary artery disease analysis system
CN115688028A (en) * 2023-01-05 2023-02-03 Hangzhou Huadesen Biotechnology Co., Ltd. Tumor cell growth state detection equipment
WO2023029817A1 (en) * 2021-08-31 2023-03-09 Beijing ByteDance Network Technology Co., Ltd. Medical report generation method and apparatus, model training method and apparatus, and device
CN116524593A (en) * 2023-04-23 2023-08-01 Beijing University of Civil Engineering and Architecture A dynamic gesture recognition method, system, device and medium
CN116796251A (en) * 2023-08-25 2023-09-22 Jiangsu Internet Industry Administration Service Center Harmful website classification method, system and device based on image-text multimodal features

Family Cites Families (4)

* Cited by examiner, † Cited by third party
Publication number Priority date Publication date Assignee Title
US11087459B2 (en) * 2015-08-14 2021-08-10 Elucid Bioimaging Inc. Quantitative imaging for fractional flow reserve (FFR)
US11200666B2 (en) * 2018-07-03 2021-12-14 Caroline Choi Method for diagnosing, predicting, determining prognosis, monitoring, or staging disease based on vascularization patterns
WO2022051211A1 (en) * 2020-09-02 2022-03-10 The General Hospital Corporation System for and method of deep learning diagnosis of plaque erosion through optical coherence tomography
US20220391755A1 (en) * 2021-05-26 2022-12-08 Salesforce.Com, Inc. Systems and methods for vision-and-language representation learning

Non-Patent Citations (3)

* Cited by examiner, † Cited by third party
Title
Weakly Supervised Vulnerable Plaques Detection by IVOCT Image; Peiwen Shi et al.; 2020 IEEE 17th International Symposium on Biomedical Imaging (ISBI); full text *
Application of artificial intelligence-based imaging technology in the diagnosis of coronary heart disease; Men Tingting et al.; Chinese Journal of Laboratory Diagnosis; Vol. 27, No. 8; full text *
Research on deep learning-based segmentation and identification of intracranial atherosclerotic plaques in HRMR images; Wan Liming; China Masters' Theses Full-text Database; full text *

Also Published As

Publication number Publication date
CN117198514A (en) 2023-12-08

Similar Documents

Publication Publication Date Title
CN117198514B (en) A vulnerable plaque identification method and system based on CLIP model
CN113012172B (en) AS-UNet-based medical image segmentation method and system
WO2024104035A1 (en) Long short-term memory self-attention model-based three-dimensional medical image segmentation method and system
CN111462049A (en) Method for automatically labeling the shape of the lesion area in breast contrast-enhanced ultrasound video
Yuan et al. A multi-scale convolutional neural network with context for joint segmentation of optic disc and cup
CN115440346B (en) Acne grading method, system, equipment and storage medium based on semi-supervised learning
CN116862877A (en) Scanning image analysis system and method based on convolutional neural network
CN113610118A (en) Fundus image classification method, device, equipment and medium based on multitask course learning
CN116934747B (en) Fundus image segmentation model training method, equipment and glaucoma auxiliary diagnosis system
CN117726814A (en) Retinal blood vessel segmentation method based on cross-attention and dual-branch pooling fusion
CN115082388A (en) Diabetic retinopathy image detection method based on attention mechanism
CN118397273A (en) Polyp image segmentation method of multi-scale shared network based on boundary perception
CN117197166A (en) Polyp image segmentation method and imaging method based on edge and neighborhood information
Kong et al. Data enhancement based on M2-Unet for liver segmentation in Computed Tomography
CN112633416A (en) Brain CT image classification method fusing multi-scale superpixels
Tian et al. Learning discriminative representations for fine-grained diabetic retinopathy grading
Tan et al. Lightweight pyramid network with spatial attention mechanism for accurate retinal vessel segmentation
Yan et al. MRSNet: Joint consistent optic disc and cup segmentation based on large kernel residual convolutional attention and self-attention
CN111340794A (en) Method and device for quantifying coronary artery stenosis
CN116740041B (en) CTA scanning image analysis system and method based on machine vision
CN117593317A (en) Retinal blood vessel image segmentation method based on multi-scale dilated convolutional residual network
CN117576383A (en) Attention decoding-based polyp segmentation method and system
CN117314935A (en) Diffusion model-based low-quality fundus image enhancement and segmentation method and system
CN116934683A (en) Artificial Intelligence-Assisted Ultrasound Diagnosis of Splenic Trauma
CN116228731A (en) Multi-contrast learning coronary artery high-risk plaque detection method, system and terminal

Legal Events

Date Code Title Description
PB01 Publication
SE01 Entry into force of request for substantive examination
GR01 Patent grant