CN102663446A - Method for constructing a bag-of-words model of medical lesion images - Google Patents

Method for constructing a bag-of-words model of medical lesion images

Info

Publication number: CN102663446A (application CN2012101232473A; granted as CN102663446B)
Authority: CN (China)
Other languages: Chinese (zh)
Inventors: 冯前进, 阳维, 黄美燕
Assignee (original and current): Southern Medical University
Application filed by Southern Medical University
Legal status: Granted; Expired - Fee Related

Landscapes

  • Magnetic Resonance Imaging Apparatus (AREA)
  • Measuring And Recording Apparatus For Diagnosis (AREA)

Abstract

The present invention relates to the identification of medical lesion images, and in particular to a method for constructing a bag-of-words model of medical lesion images. The method first divides a medical lesion image into two parts, the lesion region and the lesion boundary region, and builds a bag of words for each part; the two bags are then combined into a joint bag-of-words model. Compared with a general bag-of-words model, the model obtained by this construction method adds the relative spatial position information of the local features of the lesion region, and thus helps improve the accuracy of clinical diagnosis.

Description

A method for constructing a bag-of-words model of medical lesion images

Technical Field

The invention relates to the identification of medical lesion images, and in particular to a bag-of-words model for identifying medical lesion images.

Background Art

The basic idea of content-based image retrieval (CBIR) is to extract visual features from images to form an image representation, and to use that representation to retrieve images. CBIR technology can support image data management, clinical diagnosis, medical teaching, and related tasks. In particular, retrieving similar lesions in medical images can improve the reliability of clinical diagnosis and the completeness of the relevant information.

Generally, a CBIR system stores database images together with their image representations. When a user submits a query image, the system extracts the visual features of that image to form its representation, compares it with the representations of the database images, and returns the images most similar to the query. Visual features commonly used for image representation in CBIR systems include color, texture, shape, and edges. Texture is one of the most important image features; it can describe properties such as smoothness, sparseness, and regularity. In medical images, texture features are widely used for retrieval because they can capture detail hidden in the image structure that the human eye cannot observe. Traditional texture feature extraction methods include the gray-level co-occurrence matrix, the wavelet transform, and the Gabor transform. The gray-level co-occurrence matrix represents the spatial dependence of pixel gray levels under a consistent texture pattern, but it does not fully capture local gray-level characteristics, and it performs poorly when extracting texture features from larger local regions. The wavelet transform provides a clear mathematical framework for multi-scale analysis and can therefore effectively extract multi-scale features of texture images; however, the choice of filter bank remains an open problem that affects texture analysis. The Gabor transform matches human vision most closely and plays an important role in image analysis, but its fixed window size makes it insensitive to subtle variations in texture orientation and frequency, which falls short of practical needs. The bag-of-visual-words (BoW) model extracts local detail features over the whole image and then quantizes them into an image representation, capturing both subtle variations and overall characteristics.

However, because the general bag-of-words model ignores the relative spatial positions of features when extracting local features, the resulting model lacks effective spatial information, which reduces its descriptive power. To address this, researchers have proposed combining a spatial pyramid with the bag-of-words model for natural image classification (S. Lazebnik, C. Schmid, and J. Ponce, "Beyond bags of features: spatial pyramid matching for recognizing natural scene categories," IEEE Conference on Computer Vision and Pattern Recognition, pp. 2169-2178, 2006). This method subdivides the image into multiple sub-regions in a spatial pyramid, builds a bag-of-words model for each sub-region, and finally combines the sub-region models into one large bag-of-words model. Other researchers have proposed adding the spatial coordinates of features to the bag-of-words feature vector to improve the classification and retrieval of medical X-ray images (U. Avni, H. Greenspan, E. Konen, M. Sharon, and J. Goldberger, "X-ray categorization and retrieval on the organ and pathology level, using patch-based visual words," IEEE Transactions on Medical Imaging, vol. 30, pp. 733-746, 2011). Medical lesion images are generally grayscale images in which the texture inside the lesion tissue differs from that of nearby normal tissue; the texture of different lesion types also differs, as do the tissue structures adjacent to the lesion. For example, in brain tumor images, gliomas are usually surrounded by a ring of edema; meningiomas lie adjacent to the skull, gray matter, and cerebrospinal fluid; and pituitary tumors often appear near the sphenoid sinus, the optic chiasm, and the internal carotid arteries. Therefore, although the prior art above can represent the texture features within the lesion region well, it lacks the gray-level information of the lesion boundary region and of the transition from the lesion region to its boundary, so its accuracy in distinguishing different lesion types is not yet ideal.

Summary of the Invention

The technical problem addressed by the present invention is to provide a method for constructing a bag-of-words model of a medical lesion image. The bag-of-words model obtained by this method can reflect the spatial position of the lesion region, which helps improve the accuracy of clinical diagnosis.

The object of the present invention can be achieved by the following technical measures:

A method for constructing a bag-of-words model of a medical lesion image, consisting of the following steps:

(1) Read medical lesion images from a database in which the lesion contours have already been delineated; the images read must include at least 50 images of each lesion type. Process each medical lesion image as follows:

(1.1) First apply one-dimensional Gaussian smoothing to the lesion contour line. Then, starting from each pixel where the contour intersects its normal, take the same number of pixels along the contour normal both inward into the lesion and outward from it. Arrange the pixels taken along each contour normal as a column, ordering the columns clockwise or counterclockwise along the contour, and arrange the pixels at the same position on each normal as a row, ordered either from inside the contour outward or from outside the contour inward; this yields a transformed image of the boundary region of the medical lesion image. Next, centered on pixels spaced 0 to 5 pixels apart in the transformed image, extract a series of small square patches of pixel-array size 5×5, 7×7, or 9×9. Rotate each column of pixels in a patch 90° counterclockwise and arrange them from top to bottom into a single row queue; then, in each row queue, replace every pixel lying inside the transformed image with its gray value and assign zero to the gray values of pixels lying outside the transformed image, obtaining the gray-value vector A1 of the lesion boundary region;

(1.2) Inside the lesion region, starting from the pixels on the edge of the lesion contour line, expand outward from centers spaced 0 to 5 pixels apart and extract a series of small square patches of pixel-array size 5×5, 7×7, or 9×9. Rotate each column of pixels in a patch 90° counterclockwise and arrange them from top to bottom into a single row queue; then, in each row queue, replace every pixel lying inside the lesion contour with its gray value and assign zero to the gray values of pixels lying outside the contour, obtaining the gray-value vector A2 of the lesion region;
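The patch-to-vector operation shared by steps (1.1) and (1.2) can be sketched as follows. This is an illustrative NumPy reimplementation of the description, not the patent's reference code; the function names `patch_vector` and `region_vectors` are hypothetical. Each patch is flattened row by row into a one-dimensional vector, with pixels outside the region mask set to zero, as the steps describe.

```python
import numpy as np

def patch_vector(image, mask, center, size=7):
    """Flatten a size x size patch around `center` into a 1-D gray-value
    vector; pixels outside `mask` (e.g. outside the lesion contour) or
    outside the image are assigned zero, as in steps (1.1)/(1.2)."""
    h = size // 2
    r, c = center
    vec = np.zeros(size * size, dtype=float)
    k = 0
    for i in range(r - h, r + h + 1):
        for j in range(c - h, c + h + 1):
            inside = (0 <= i < image.shape[0] and
                      0 <= j < image.shape[1] and mask[i, j])
            vec[k] = image[i, j] if inside else 0.0
            k += 1
    return vec

def region_vectors(image, mask, step=5, size=7):
    """Collect patch vectors at centers on a grid spaced `step` pixels
    apart inside the region mask (the vectors A1 or A2 of the method);
    step <= 1 means every pixel inside the mask."""
    rows, cols = np.nonzero(mask)
    vecs = [patch_vector(image, mask, (r, c), size)
            for r, c in zip(rows, cols)
            if step <= 1 or (r % step == 0 and c % step == 0)]
    return np.array(vecs)
```

With a 7×7 patch each vector has 49 entries, which matches the preferred patch size mentioned later in the description.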

(1.3) Group the gray-value vectors A1 of the lesion boundary regions of all medical lesion images obtained in step (1.1) into one class, and the gray-value vectors A2 of the lesion regions of all medical lesion images obtained in step (1.2) into another class. Then apply k-means clustering to each class separately, obtaining a lesion-region dictionary and a lesion-boundary-region dictionary whose words are the cluster centers;
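The dictionary learning in step (1.3) is plain k-means over the pooled vectors. A minimal NumPy sketch follows (the name `build_dictionary` is hypothetical, and the deterministic first-k initialization is a simplification; a production system would likely use a library implementation with k-means++ initialization and k around 1000, as the preferred embodiment suggests):

```python
import numpy as np

def build_dictionary(vectors, k, iters=20):
    """Cluster the pooled gray-value vectors (all A1, or all A2) with
    k-means; the returned cluster centers are the dictionary 'words'."""
    # simple deterministic init: the first k vectors
    # (k-means++ would be a better choice in practice)
    centers = vectors[:k].astype(float)
    for _ in range(iters):
        # assign each vector to its nearest center (Euclidean distance)
        d = np.linalg.norm(vectors[:, None, :] - centers[None, :, :], axis=2)
        labels = d.argmin(axis=1)
        # move each center to the mean of the vectors assigned to it
        for j in range(k):
            members = vectors[labels == j]
            if len(members):
                centers[j] = members.mean(axis=0)
    return centers
```

Two dictionaries are learned independently, one from the A1 vectors and one from the A2 vectors, mirroring the two classes formed in step (1.3).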

(2) Take any medical lesion image in the database and construct the bag of words B2 of its lesion region and the bag of words B1 of its lesion boundary region, where:

(2.1) The bag of words B2 of the lesion region is constructed as follows: expand outward from every pixel in the lesion region as a center and extract a series of patches of the same size as the patches in step (1.2); then obtain, by the same method as step (1.2), the recomputed gray-value vectors A2′ of the lesion region of this image. Next, in the order of the row queues, compute the Euclidean distance between each vector A2′ and every word in the lesion-region dictionary, and quantize each A2′ to the word at minimum Euclidean distance. This yields the frequency with which each word of the lesion-region dictionary occurs; the one-dimensional vector formed by these frequencies is the bag of words B2 of the lesion region of this image;

(2.2) The bag of words B1 of the lesion boundary region is constructed as follows: first process the lesion boundary region of the medical lesion image by the same method as step (1.1) to obtain the transformed image of the boundary region; expand outward from every pixel in the transformed image as a center and extract a series of patches of the same size as the patches in step (1.1); then obtain, by the same method as step (1.1), the recomputed gray-value vectors A1′ of the lesion boundary region of this image. Next, in the order of the row queues, compute the Euclidean distance between each vector A1′ and every word in the lesion-boundary-region dictionary, and quantize each A1′ to the word at minimum Euclidean distance. This yields the frequency with which each word of the lesion-boundary-region dictionary occurs; the one-dimensional vector formed by these frequencies is the bag of words B1 of the lesion boundary region of this image;

(2.3) Concatenate the two vectors representing bag of words B1 and bag of words B2 end to end into one vector; this is the bag-of-words model of the medical lesion image.
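Steps (2.1) through (2.3) amount to nearest-word vector quantization, histogramming, and concatenation. A hedged NumPy sketch (function names hypothetical, assuming the dictionaries from step (1.3) are arrays of cluster centers):

```python
import numpy as np

def bag_of_words(vectors, dictionary):
    """Quantize each gray-value vector to its nearest dictionary word
    (minimum Euclidean distance) and return the word-frequency histogram,
    i.e. the bag of words B1 or B2 of steps (2.1)/(2.2)."""
    d = np.linalg.norm(vectors[:, None, :] - dictionary[None, :, :], axis=2)
    nearest = d.argmin(axis=1)
    return np.bincount(nearest, minlength=len(dictionary)).astype(float)

def joint_model(vecs_boundary, vecs_lesion, dict_boundary, dict_lesion):
    """Step (2.3): concatenate the boundary-region bag B1 and the
    lesion-region bag B2 end to end into one joint descriptor."""
    b1 = bag_of_words(vecs_boundary, dict_boundary)
    b2 = bag_of_words(vecs_lesion, dict_lesion)
    return np.concatenate([b1, b2])
```

With 1000-word dictionaries, as in the preferred embodiment, the joint descriptor would be a 2000-dimensional frequency vector.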

In the construction method of the present invention, the preferred patch size is 7×7, and the k value of the k-means clustering can be chosen according to the number of images: the more images, the larger the value of k; the fewer images, the smaller the value of k. In general, k = 1000 is preferred.

The construction method of the present invention is applicable to building bag-of-words models of MRI, CT, and ultrasound images.

Compared with the prior art, the method of the present invention has the following beneficial effects:

Since the image database consists of images whose lesion contours have already been delineated, the lesion image is divided, according to its characteristics, into two parts, the lesion region and the lesion boundary region. A bag of words is obtained for each part, and a joint bag-of-words model combining the two is constructed. Compared with a general bag-of-words model, this adds the relative spatial position information of the local features of the lesion region and thus helps improve the accuracy of clinical diagnosis.

Description of the Drawings

Figure 1 is a flowchart of the method of the present invention for constructing a bag-of-words model based on a medical lesion image;

Figure 2 shows recall-precision curves: curve I was obtained with the bag-of-words model of the present invention; curve II was obtained using only the bag of words B2 constructed on the lesion region; curve III was obtained with the gray-level co-occurrence matrix method; curve IV was obtained with the wavelet transform method; curve V was obtained with the Gabor transform method;

Figure 3 shows recall-precision curves: curve I was obtained with the bag-of-words model of the present invention; curve II was obtained with a bag-of-words model built by the spatial pyramid method; curve III was obtained with a bag-of-words model built by adding the spatial coordinates of features to the feature vector.

Detailed Description

Example 1 (constructing the bag-of-words model)

The database used in this example contains 800 brain T1-weighted contrast-enhanced MRI images each of meningioma, glioma, and pituitary tumor, and the lesion contour has already been delineated on every MRI image. The construction of the bag-of-words model for any MRI image in this database is described in detail below with reference to Figure 1.

(1) Read the MRI images in the database and process each of them as follows:

(1.1) First apply one-dimensional Gaussian smoothing to the lesion contour line. Then, starting from each pixel where the contour intersects its normal, take 15 pixels along the contour normal inward into the lesion and 15 outward from it. Arrange the pixels taken along each contour normal as a column, ordering the columns clockwise or counterclockwise, and arrange the pixels at the same position on each normal as a row, ordered from inside the contour outward or from outside the contour inward, obtaining the transformed image of the boundary region of the medical lesion image. Next, centered on pixels spaced 5 pixels apart in the transformed image, extract a series of patches of pixel-array size 7×7, rotate each column of pixels in a patch 90° counterclockwise, and arrange them from top to bottom into a single row queue; then replace every pixel of each row queue lying inside the transformed image with its gray value and assign zero to the gray values of pixels lying outside, obtaining the gray-value vector A1 of the lesion boundary region. The purpose of the one-dimensional Gaussian smoothing is to prevent noise on the lesion contour from changing the direction of the contour normal; a Gaussian kernel is used to smooth the contour. The one-dimensional Gaussian smoothing proceeds as follows:

The one-dimensional Gaussian kernel is

$$G_{1D}(X;\sigma)=\frac{1}{\sqrt{2\pi}\,\sigma}\,e^{-\frac{X^{2}}{2\sigma^{2}}},$$

where σ denotes the standard deviation and X the position of any point in space. Let b(x, y) be a point on the lesion contour line; the corresponding point on the contour smoothed by the Gaussian kernel is

$$B(x',y')=b(x',y')*G_{1D}(X;\sigma)',$$

where σ and X are the same as σ and X in the Gaussian kernel expression. Therefore, the angle θ between the normal direction of the lesion contour line and the tangent direction of the contour line satisfies

$$\theta=\arctan\!\left(\frac{y'}{x'}\right).$$

Using the angle θ, the coordinates of the points taken along the normal direction of each lesion contour are

$$x_1=x+l\cos\theta,$$

$$y_1=y+l\sin\theta,$$

where l is the distance from the point taken along the contour normal to the point on the contour line. Since the coordinates (x₁, y₁) do not necessarily fall on an exact pixel of the image, linear interpolation is used to obtain the pixel value corresponding to each point taken along the contour normal.
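The contour smoothing and normal sampling above can be sketched as follows. This is an illustrative approximation, not the patent's implementation: the normal angle is obtained here by rotating a central-difference tangent by 90° rather than via the θ = arctan(y′/x′) formula, and bilinear interpolation stands in for the "linear interpolation" the text mentions; all function names are hypothetical.

```python
import numpy as np

def smooth_contour(points, sigma=2.0, radius=6):
    """1-D Gaussian smoothing of a closed contour, applied separately
    to the x and y coordinate sequences (with wrap-around padding)."""
    t = np.arange(-radius, radius + 1)
    g = np.exp(-t**2 / (2 * sigma**2))
    g /= g.sum()
    out = np.empty_like(points, dtype=float)
    for dim in range(2):
        padded = np.concatenate([points[-radius:, dim],
                                 points[:, dim],
                                 points[:radius, dim]])
        out[:, dim] = np.convolve(padded, g, mode='valid')
    return out

def normal_samples(contour, idx, dists, image):
    """Sample the image along the normal of the smoothed contour at
    vertex `idx`, at the signed distances in `dists` (negative values
    sample toward the other side), interpolating between pixels."""
    prev_p, next_p = contour[idx - 1], contour[(idx + 1) % len(contour)]
    tx, ty = next_p - prev_p                # tangent by central difference
    theta = np.arctan2(ty, tx) + np.pi / 2  # normal = tangent rotated 90 deg
    samples = []
    for l in dists:
        x = contour[idx, 0] + l * np.cos(theta)
        y = contour[idx, 1] + l * np.sin(theta)
        x0, y0 = int(np.floor(x)), int(np.floor(y))
        fx, fy = x - x0, y - y0
        # bilinear interpolation of the four neighbouring pixels
        v = ((1 - fx) * (1 - fy) * image[y0, x0] +
             fx * (1 - fy) * image[y0, x0 + 1] +
             (1 - fx) * fy * image[y0 + 1, x0] +
             fx * fy * image[y0 + 1, x0 + 1])
        samples.append(v)
    return np.array(samples)
```

In the embodiment, 15 samples would be taken on each side of every contour pixel, and the resulting columns assembled into the transformed boundary image.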

(1.2) Inside the lesion region, starting from the pixels on the edge of the lesion contour line, expand outward from centers spaced 5 pixels apart and extract a series of patches of pixel-array size 7×7. Rotate each column of pixels in a patch 90° counterclockwise and arrange them from top to bottom into a single row queue; then replace every pixel of each row queue lying inside the lesion contour with its gray value and assign zero to the gray values of pixels lying outside the contour, obtaining the gray-value vector A2 of the lesion region.

(1.3) Group the gray-value vectors A1 of the lesion boundary regions of all medical lesion images obtained in step (1.1) into one class, and the gray-value vectors A2 of the lesion regions obtained in step (1.2) into another class. Then apply k-means clustering to each class separately, obtaining a lesion-region dictionary and a lesion-boundary-region dictionary of 1000 words each, formed by the cluster centers.

(2) Take any medical lesion image in the database and construct the bag of words B2 of its lesion region and the bag of words B1 of its lesion boundary region, where:

(2.1) The bag of words B2 of the lesion region is constructed as follows: expand outward from every pixel in the lesion region as a center and extract a series of patches of the same size as the patches in step (1.2); then obtain, by the same method as step (1.2), the recomputed gray-value vectors A2′ of the lesion region of this image. Next, in the order of the row queues, compute the Euclidean distance between each vector A2′ and every word in the lesion-region dictionary, and quantize each A2′ to the word at minimum Euclidean distance. This yields the frequency with which each word of the lesion-region dictionary occurs; the one-dimensional vector formed by these frequencies is the bag of words B2 of the lesion region of this image.

(2.2) The bag of words B1 of the lesion boundary region is constructed as follows: first process the lesion boundary region of the medical lesion image by the same method as step (1.1) to obtain the transformed image of the boundary region; expand outward from every pixel in the transformed image as a center and extract a series of patches of the same size as the patches in step (1.1); then obtain, by the same method as step (1.1), the recomputed gray-value vectors A1′ of the lesion boundary region of this image. Next, in the order of the row queues, compute the Euclidean distance between each vector A1′ and every word in the lesion-boundary-region dictionary, and quantize each A1′ to the word at minimum Euclidean distance. This yields the frequency with which each word of the lesion-boundary-region dictionary occurs; the one-dimensional vector formed by these frequencies is the bag of words B1 of the lesion boundary region of this image.

(2.3) Concatenate the two vectors representing bag of words B1 and bag of words B2 end to end into one vector; this is the bag-of-words model of the medical lesion image.

Example 2 (verification of the effect)

1. Building the CBIR systems

(1) Using the construction method of Example 1, build the bag-of-words model of every image in the database of Example 1, then use the resulting bag-of-words models of the 2400 MRI images to build CBIR system 1 for querying meningioma, glioma, and pituitary tumor images;

(2) Using the same 2400 MRI images, build CBIR system 2 using only the lesion-region bag of words B2 of the present invention as the bag-of-words model, and CBIR systems 3, 4, and 5 using texture features obtained with the gray-level co-occurrence matrix, the wavelet transform, and the Gabor transform, respectively.

(3) Also using the same 2400 MRI images, build CBIR system 6 with a bag-of-words model constructed by the spatial pyramid method and CBIR system 7 with a bag-of-words model constructed by adding the spatial coordinates of features to the feature vector.

2. Retrieval

(1) Submit 100 brain T1-weighted contrast-enhanced MRI images of each of meningioma, glioma, and pituitary tumor as query images to CBIR system 1 built in step 1. System 1 first constructs the bag-of-words models of these query images by the method of Example 1, then uses a distance measure to compute the similarity between each query image and every image in the system, and sorts the system's images by similarity to the query from high to low. Setting the number of images returned at a time to 1, 2, ..., n, ..., 2400 gives the precision and recall at each cutoff; averaging the precision and recall over the 300 query images yields curve I of Figure 2 and of Figure 3.

(2) Submit the same 300 query images to CBIR systems 2, 3, 4, and 5 built in step 1. System 2 constructs the query images' bag-of-words models from the lesion-region bag of words B2 of the present invention, while systems 3 to 5 extract the query images' texture features using the gray-level co-occurrence matrix, the wavelet transform, and the Gabor transform, respectively. Each system uses a distance measure to compare the query images with every image it holds and sorts its images by similarity to the corresponding query from high to low. Setting the number of images returned at a time to 1, 2, ..., n, ..., 2400 gives the precision and recall at each cutoff; averaging over the 300 query images yields curves II, III, IV, and V of Figure 2. Figure 2 shows that, over the range in which each CBIR system returns up to 90% of the relevant images in the database, the precision obtained with the bag-of-words model of the present invention is far higher than that obtained with the other texture feature extraction methods.

(3) The same 300 query images were also submitted to CBIR systems 6 and 7 built in step 1. System 6 constructed the bag-of-words models of the query images with the spatial-pyramid method, and system 7 by appending the spatial coordinates of each feature to its feature vector. Each system then compared the similarity between the query images and every image it holds using a distance measure and ranked its images from the highest to the lowest similarity to the corresponding query image. With the number of returned images set to 1, 2, ..., n, ..., 2400, the precision and recall at each setting were obtained and averaged over the 300 query images, yielding the curves numbered II and III in Fig. 3. Fig. 3 shows that, over the range in which each CBIR system returns up to 90% of the relevant images in the database, the precision obtained with the bag-of-words model of the present invention is higher than that of the other two bag-of-words methods that incorporate spatial information.

Claims (3)

1. A method for constructing a bag-of-words model of a medical lesion image, comprising the following steps:
(1) reading medical lesion images with lesion contours delineated from a database, the images read comprising at least 50 images of each lesion type, and processing each medical lesion image as follows:
(1.1) first performing one-dimensional Gaussian smoothing on the lesion contour line; then, along the normal of the contour at each contour pixel, taking an equal number of pixels toward the inside and toward the outside of the lesion; arranging the pixels taken along each contour normal as columns, ordered clockwise or counterclockwise along the contour, and arranging the pixels at the same position along the normals as rows, ordered from the inside of the contour outward or from the outside of the contour inward, to obtain a transformed image of the boundary region of the medical lesion image; then, taking as centers pixels spaced 0-5 pixels apart in the transformed image, extracting a series of small squares of 5 × 5, 7 × 7, or 9 × 9 pixels; rotating each row of pixels in a small square by 90 degrees counterclockwise and arranging the rows top to bottom into a single queue; replacing each pixel of the queue lying inside the transformed image with its gray value and setting the gray value of each pixel lying outside the transformed image to zero, to obtain the gray-value vectors A1 of the lesion boundary region;
(1.2) within the lesion area, starting from the pixels at the edge of the lesion contour line, taking as centers pixels spaced 0-5 pixels apart, extracting a series of small squares of 5 × 5, 7 × 7, or 9 × 9 pixels; rotating each row of pixels in a small square by 90 degrees counterclockwise and arranging the rows top to bottom into a single queue; replacing each pixel of the queue lying inside the lesion contour line with its gray value and setting the gray value of each pixel lying outside the lesion contour line to zero, to obtain the gray-value vectors A2 of the lesion area;
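Steps (1.1) and (1.2) both reduce to flattening a small square around a center pixel into a gray-value vector, zeroing any position that falls outside the region of interest. A minimal Python sketch (the image, mask, and function names are illustrative, not part of the patent):

```python
def patch_vector(image, mask, cy, cx, size=7):
    """Flatten a size×size square centered at (cy, cx) into one vector,
    taking the gray value where the mask is True and zero elsewhere
    (including positions that fall outside the image)."""
    half = size // 2
    vec = []
    for y in range(cy - half, cy + half + 1):
        for x in range(cx - half, cx + half + 1):
            inside = 0 <= y < len(image) and 0 <= x < len(image[0])
            vec.append(image[y][x] if inside and mask[y][x] else 0)
    return vec

# Toy 4×4 image with a 2×2 'lesion' mask in the upper-left corner.
img = [[10, 20, 30, 40],
       [50, 60, 70, 80],
       [11, 21, 31, 41],
       [51, 61, 71, 81]]
msk = [[True, True, False, False],
       [True, True, False, False],
       [False, False, False, False],
       [False, False, False, False]]
v = patch_vector(img, msk, cy=0, cx=0, size=3)
print(v)  # → [0, 0, 0, 0, 10, 20, 0, 50, 60]
```

The claimed method uses 5 × 5, 7 × 7, or 9 × 9 squares (7 × 7 per claim 2); the 3 × 3 square here only keeps the example short.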
(1.3) taking the gray-value vectors A1 of the lesion boundary regions of all the medical lesion images obtained in step (1.1) as one class and the gray-value vectors A2 of the lesion areas of all the medical lesion images obtained in step (1.2) as another class, and performing k-means clustering on each class separately, to obtain a lesion-area dictionary and a lesion-boundary-region dictionary whose words are the cluster centers;
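Step (1.3) builds each dictionary by k-means clustering of the gray-value vectors, the cluster centers becoming the "words". A toy sketch with a plain-Python k-means (a real run would cluster the patch vectors with k = 1000, per claim 3; the data and names here are illustrative):

```python
import random

def kmeans(vectors, k, iters=20, seed=0):
    """Minimal k-means over tuples: returns the k cluster centers ('words')."""
    rng = random.Random(seed)
    centers = rng.sample(vectors, k)
    for _ in range(iters):
        # Assign each vector to its nearest center (squared Euclidean distance).
        clusters = [[] for _ in range(k)]
        for v in vectors:
            d = [sum((a - b) ** 2 for a, b in zip(v, c)) for c in centers]
            clusters[d.index(min(d))].append(v)
        # Recompute each center as the mean of its cluster.
        for i, cl in enumerate(clusters):
            if cl:
                centers[i] = tuple(sum(col) / len(cl) for col in zip(*cl))
    return centers

# Toy gray-value vectors (flattened patches) forming two obvious clusters.
patches = [(0, 0, 0), (1, 0, 1), (9, 10, 9), (10, 10, 10)]
dictionary = kmeans(patches, k=2)  # converges to the two cluster means
```

Each dictionary word has the same dimensionality as the flattened patches (49 for the 7 × 7 squares of claim 2).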
(2) taking any medical lesion image in the database and constructing a bag of words B2 of its lesion area and a bag of words B1 of its lesion boundary region; wherein,
(2.1) the bag of words B2 of the lesion area is constructed as follows: taking each pixel in the lesion area as a center, extracting a series of small squares of the same size as in step (1.2); obtaining the gray-value vectors A2' of the lesion area of the medical lesion image by the same method as in step (1.2); then computing, in the order of the queue, the Euclidean distance between each vector A2' and each word in the lesion-area dictionary, and quantizing each vector A2' to the word at the smallest Euclidean distance, to obtain the frequency of each word of the lesion-area dictionary; the one-dimensional vector formed by these frequencies is the bag of words B2 of the lesion area of the medical lesion image;
(2.2) the bag of words B1 of the lesion boundary region is constructed as follows: processing the lesion boundary region of the medical lesion image by the same method as in step (1.1) to obtain the transformed image of the boundary region; taking each pixel in the transformed image as a center, extracting a series of small squares of the same size as in step (1.1); obtaining the gray-value vectors A1' of the lesion boundary region of the medical lesion image by the same method as in step (1.1); then computing, in the order of the queue, the Euclidean distance between each vector A1' and each word in the lesion-boundary-region dictionary, and quantizing each vector A1' to the word at the smallest Euclidean distance, to obtain the frequency of each word of the lesion-boundary-region dictionary; the one-dimensional vector formed by these frequencies is the bag of words B1 of the lesion boundary region of the medical lesion image;
(2.3) concatenating the two vectors representing the bag of words B1 and the bag of words B2 end to end into a single vector, to obtain the bag-of-words model of the medical lesion image.
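Steps (2.1)-(2.3) quantize each patch vector to its nearest dictionary word (smallest Euclidean distance), count the word frequencies, and concatenate the two histograms. A toy sketch (the dictionaries and vectors are illustrative and far smaller than the 1000-word dictionaries of claim 3):

```python
def nearest_word(vector, dictionary):
    """Index of the dictionary word at the smallest Euclidean distance."""
    dists = [sum((a - b) ** 2 for a, b in zip(vector, w)) for w in dictionary]
    return dists.index(min(dists))

def bag_of_words(vectors, dictionary):
    """Histogram of word frequencies over the quantized vectors."""
    bag = [0] * len(dictionary)
    for v in vectors:
        bag[nearest_word(v, dictionary)] += 1
    return bag

# Toy example: 2-word dictionaries for the lesion area and its boundary.
area_dict = [(0.0, 0.0), (10.0, 10.0)]
boundary_dict = [(5.0, 0.0), (0.0, 5.0)]
B2 = bag_of_words([(1, 1), (9, 9), (10, 11)], area_dict)    # lesion area
B1 = bag_of_words([(4, 1), (1, 4), (0, 6)], boundary_dict)  # boundary region
model = B1 + B2  # step (2.3): concatenate end to end
print(model)     # → [1, 2, 1, 2]
```

The concatenated vector is what the CBIR systems of the embodiments compare with a distance measure when ranking database images.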
2. The method as claimed in claim 1, wherein the small square is a 7 × 7 pixel array.
3. The method as claimed in claim 1 or 2, wherein the k value of the k-means clustering is 1000.
CN 201210123247 2012-04-24 2012-04-24 Building method of bag-of-word model of medical focus image Expired - Fee Related CN102663446B (en)

Priority Applications (1)

Application Number Priority Date Filing Date Title
CN 201210123247 CN102663446B (en) 2012-04-24 2012-04-24 Building method of bag-of-word model of medical focus image

Publications (2)

Publication Number Publication Date
CN102663446A true CN102663446A (en) 2012-09-12
CN102663446B CN102663446B (en) 2013-04-24

Family

ID=46772929

Family Applications (1)

Application Number Title Priority Date Filing Date
CN 201210123247 Expired - Fee Related CN102663446B (en) 2012-04-24 2012-04-24 Building method of bag-of-word model of medical focus image

Country Status (1)

Country Link
CN (1) CN102663446B (en)

Cited By (10)

* Cited by examiner, † Cited by third party
Publication number Priority date Publication date Assignee Title
CN102968618A (en) * 2012-10-24 2013-03-13 浙江鸿程计算机系统有限公司 Static hand gesture recognition method fused with BoF model and spectral clustering algorithm
CN103310208A (en) * 2013-07-10 2013-09-18 西安电子科技大学 Identifiability face pose recognition method based on local geometrical visual phrase description
CN103345645A (en) * 2013-06-27 2013-10-09 复旦大学 Commodity image category forecasting method based on online shopping platform
CN103399870A (en) * 2013-07-08 2013-11-20 华中科技大学 Visual word bag feature weighting method and system based on classification drive
CN104598881A (en) * 2015-01-12 2015-05-06 中国科学院信息工程研究所 Feature compression and feature selection based skew scene character recognition method
CN105787494A (en) * 2016-02-24 2016-07-20 中国科学院自动化研究所 Brain disease identification method based on multistage partition bag-of-words model
CN106170799A (en) * 2014-01-27 2016-11-30 皇家飞利浦有限公司 From image zooming-out information and information is included in clinical report
CN108510497A (en) * 2018-04-10 2018-09-07 四川和生视界医药技术开发有限公司 The display methods and display device of retinal images lesion information
CN113506313A (en) * 2021-07-07 2021-10-15 上海商汤智能科技有限公司 Image processing method and related device, electronic equipment and storage medium
CN114119578A (en) * 2021-12-01 2022-03-01 数坤(北京)网络科技股份有限公司 Image processing method and device, computer equipment and storage medium

Families Citing this family (1)

* Cited by examiner, † Cited by third party
Publication number Priority date Publication date Assignee Title
CN107180422A (en) * 2017-04-02 2017-09-19 南京汇川图像视觉技术有限公司 A kind of labeling damage testing method based on bag of words feature

Citations (3)

* Cited by examiner, † Cited by third party
Publication number Priority date Publication date Assignee Title
CN102117337A (en) * 2011-03-31 2011-07-06 西北工业大学 Space information fused Bag of Words method for retrieving image
CN102129477A (en) * 2011-04-23 2011-07-20 山东大学 Multimode-combined image reordering method
US20110219360A1 (en) * 2010-03-05 2011-09-08 Microsoft Corporation Software debugging recommendations

Also Published As

Publication number Publication date
CN102663446B (en) 2013-04-24

Similar Documents

Publication Publication Date Title
CN102663446B (en) Building method of bag-of-word model of medical focus image
Dou et al. Unpaired multi-modal segmentation via knowledge distillation
Khan et al. Cascading handcrafted features and Convolutional Neural Network for IoT-enabled brain tumor segmentation
Cheng et al. CNNs based multi-modality classification for AD diagnosis
Oktay et al. Stratified decision forests for accurate anatomical landmark localization in cardiac images
Oghli et al. Automatic fetal biometry prediction using a novel deep convolutional network architecture
Arbeláez et al. Multiscale combinatorial grouping
Huang et al. Retrieval of brain tumors with region‐specific bag‐of‐visual‐words representations in contrast‐enhanced MRI images
CN105740833B (en) A kind of Human bodys' response method based on depth sequence
Prasad et al. Classifying computer generated charts
CN104881680A (en) Alzheimer's disease and mild cognitive impairment identification method based on two-dimension features and three-dimension features
CN105243139A (en) Deep learning based three-dimensional model retrieval method and retrieval device thereof
Agarwal et al. Automatic view classification of echocardiograms using histogram of oriented gradients
CN115330813A (en) An image processing method, apparatus, device and readable storage medium
CN108427913A (en) The Hyperspectral Image Classification method of combined spectral, space and hierarchy information
CN103310208B (en) The distinctive human face posture recognition methods of describing based on local geometric vision phrase
Carneiro et al. Semantic-based indexing of fetal anatomies from 3-D ultrasound data using global/semi-local context and sequential sampling
Yu et al. Fetal facial standard plane recognition via very deep convolutional networks
Epifanio et al. Hippocampal shape analysis in Alzheimer's disease using functional data analysis
Pan et al. VcaNet: Vision Transformer with fusion channel and spatial attention module for 3D brain tumor segmentation
Saber et al. Multi-center, multi-vendor, and multi-disease cardiac image segmentation using scale-independent multi-gate UNET
Li et al. Automatic lumen border detection in IVUS images using deep learning model and handcrafted features
CN105975940A (en) Palm print image identification method based on sparse directional two-dimensional local discriminant projection
CN108197641A (en) A kind of spatial pyramid based on interest domain detection matches image classification method
Lei et al. Multi-modal and multi-layout discriminative learning for placental maturity staging

Legal Events

Date Code Title Description
C06 Publication
PB01 Publication
C10 Entry into substantive examination
SE01 Entry into force of request for substantive examination
C14 Grant of patent or utility model
GR01 Patent grant
CF01 Termination of patent right due to non-payment of annual fee

Granted publication date: 20130424

Termination date: 20200424

CF01 Termination of patent right due to non-payment of annual fee