WO2022001571A1 - 一种基于超像素图像相似度的计算方法 - Google Patents


Info

Publication number
WO2022001571A1
Authority
WO
WIPO (PCT)
Prior art keywords
image
similarity
key frame
superpixel
images
Prior art date
Application number
PCT/CN2021/098184
Other languages
English (en)
French (fr)
Inventor
王卫
Original Assignee
南京巨鲨显示科技有限公司
Priority date (The priority date is an assumption and is not a legal conclusion. Google has not performed a legal analysis and makes no representation as to the accuracy of the date listed.)
Filing date
Publication date
Application filed by 南京巨鲨显示科技有限公司 filed Critical 南京巨鲨显示科技有限公司
Publication of WO2022001571A1 publication Critical patent/WO2022001571A1/zh

Classifications

    • GPHYSICS
    • G06COMPUTING; CALCULATING OR COUNTING
    • G06FELECTRIC DIGITAL DATA PROCESSING
    • G06F18/00Pattern recognition
    • G06F18/20Analysing
    • G06F18/22Matching criteria, e.g. proximity measures
    • GPHYSICS
    • G06COMPUTING; CALCULATING OR COUNTING
    • G06FELECTRIC DIGITAL DATA PROCESSING
    • G06F18/00Pattern recognition
    • G06F18/20Analysing
    • G06F18/21Design or setup of recognition systems or techniques; Extraction of features in feature space; Blind source separation
    • G06F18/214Generating training patterns; Bootstrap methods, e.g. bagging or boosting
    • GPHYSICS
    • G06COMPUTING; CALCULATING OR COUNTING
    • G06FELECTRIC DIGITAL DATA PROCESSING
    • G06F18/00Pattern recognition
    • G06F18/20Analysing
    • G06F18/24Classification techniques
    • G06F18/241Classification techniques relating to the classification model, e.g. parametric or non-parametric approaches
    • G06F18/2415Classification techniques relating to the classification model, e.g. parametric or non-parametric approaches based on parametric or probabilistic models, e.g. based on likelihood ratio or false acceptance rate versus a false rejection rate
    • GPHYSICS
    • G06COMPUTING; CALCULATING OR COUNTING
    • G06VIMAGE OR VIDEO RECOGNITION OR UNDERSTANDING
    • G06V10/00Arrangements for image or video recognition or understanding
    • G06V10/20Image preprocessing
    • G06V10/26Segmentation of patterns in the image field; Cutting or merging of image elements to establish the pattern region, e.g. clustering-based techniques; Detection of occlusion
    • GPHYSICS
    • G06COMPUTING; CALCULATING OR COUNTING
    • G06VIMAGE OR VIDEO RECOGNITION OR UNDERSTANDING
    • G06V10/00Arrangements for image or video recognition or understanding
    • G06V10/40Extraction of image or video features
    • G06V10/56Extraction of image or video features relating to colour

Definitions

  • the invention belongs to the field of image recognition, and in particular relates to a calculation method based on superpixel image similarity.
  • Computer-aided diagnosis method refers to extracting effective features from one or more modal image data, and using machine learning method to classify and identify the extracted effective feature samples.
  • the current computer-aided diagnostic imaging methods have the following problems: (1) the network model is trained only for a certain or specific body part and is not applicable to other parts; (2) the selection of key frames and regions of interest requires manual intervention, which reduces convenience.
  • the purpose of the present invention is to provide a calculation method based on superpixel image similarity that can discriminate the similarity of the pixel blocks obtained by segmenting a selected key frame image, so as to solve the problems of low practicability and low accuracy in the prior art.
  • the technical scheme adopted in the present invention is:
  • a computing method based on superpixel image similarity comprising the following steps:
  • the phase consistency algorithm is used to compare the similarity between the superpixel segmentation map and the corresponding pixel blocks in the superimposed image, and the similarity of each pair of pixel blocks is obtained respectively.
  • the training method of the network model is as follows:
  • a network model with the highest accuracy in recognizing key frame images on the test set is trained.
  • the clusters are continuously iteratively optimized until the difference between each pixel point and the cluster center no longer changes, and superpixel segmentation maps of different pixel blocks are obtained.
  • the method further includes calculating the color distance and the spatial distance of the searched pixel points according to the distance metric.
  • i and j represent the key frame image x and key frame image y, respectively
  • l, a and b are the feature vectors in the Lab color space
  • x and y are the feature vectors in XY coordinates
  • d_c and d_s represent the color distance and the spatial distance, respectively
  • D_i is the distance between the pixel point and the seed point
  • m is a constant
  • S is the maximum spatial distance.
  • the method for obtaining the similarity includes:
  • Metric weighting is performed on the similarity to obtain the similarity of each pair of superpixel blocks.
  • Pc represents the phase consistency information of the images x, y
  • G represents the gradient amplitude
  • S_PC(x, y) is the feature similarity of the two images x, y
  • S_G(x, y) is the gradient similarity of the two images x, y; T_1 and T_2 are constants
  • x, y are the key frame image x and key frame image y
  • Ω is the entire spatial domain
  • S_PC(x, y) is the feature similarity of the two images x, y
  • S_G(x, y) is the gradient similarity of the two images x, y
  • α and β are positive integers
  • Pc represents the phase consistency information of images x, y
  • n represents the label of each pair of superpixel blocks to be analyzed.
  • e is the natural constant
  • j is the classification category
  • k is the total number of categories to be classified
  • z i is the i-th dimension component of the k-dimensional vector
  • P i is the image classification predicted probability of class i.
  • a computing system based on superpixel image similarity includes:
  • Screening module used to select key frame images from the image data through the network model
  • Segmentation module: used to divide the key frame image into a superpixel segmentation map of different pixel blocks through an image segmentation algorithm
  • Flip module used to perform horizontal flip processing on the key frame image
  • Extraction module for extracting segmentation boundaries from the superpixel segmentation map
  • Superimposing module used to superimpose the segmentation boundary on the key frame image after horizontal flip processing to obtain a superimposed image
  • Comparison module: used to compare the similarity between the superpixel segmentation map and the corresponding pixel blocks in the superimposed image through the phase consistency algorithm, obtaining the similarity of each pair of segmented pixel blocks.
  • a computing system based on superpixel image similarity includes a processor and a storage medium;
  • the storage medium is used for storing instructions
  • the processor is adapted to operate in accordance with the instructions to perform steps in accordance with the method described above.
  • a computer-readable storage medium having a computer program stored thereon, the program implementing the steps of the above-described method when executed by a processor.
  • the invention selects the key frame images from the image data through the network model, and improves the convenience, practicability and accuracy of the method by processing and identifying the similarity of the key frame images;
  • the network model with the highest accuracy in identifying key frame images of different parts of the image is trained, which solves the problem that the existing method is single and only suitable for image-assisted diagnosis of specific parts.
  • FIG. 1 is a schematic diagram of an image process used to assist in diagnosing symmetrical parts of a human body in a method embodiment of the present invention
  • FIG. 2 is an effect diagram of superpixel segmentation of a brain MRI image in a method embodiment of the present invention
  • FIG. 3 is the superpixel segmentation boundary extracted from a superpixel-segmented brain MRI image in a method embodiment of the present invention.
  • the specific example of the present invention provides a calculation method based on superpixel image similarity and applies the method to an aided-diagnosis process; however, the method of the present invention is not limited to the application field given in the specific example and can equally be applied to fields other than aided diagnosis.
  • FIG. 1 it is a schematic diagram of the process of calculating the similarity of image pixel blocks in the method embodiment of the present invention.
  • the method is applied to auxiliary diagnosis.
  • the image data is input into the network model, and the key frame image x is obtained.
  • the key frame image y is obtained by flipping; the key frame image x and the key frame image y are segmented separately, similarity analysis is performed on each pair of superpixel blocks, and the lesion area is finally located.
  • the specific steps of the method are as follows:
  • Step 1 according to the different key frames used by doctors to diagnose different parts of the human body, train the network model that identifies key frame images of that part with the highest accuracy. Select the body part that needs image analysis, and automatically screen key frame images from the acquired image data through the network model.
  • the specific implementation steps are:
  • Step 1.1 according to the parts to be identified, select the key frames of the corresponding parts to label, and create a training set for identifying key frames;
  • Step 1.2 for the training set in Step 1.1, create a test set and a validation set that do not overlap with the training set;
  • Step 1.3 using the training set and verification set produced in Step 1.1 and Step 1.2, select a network model whose neural network depth is suitable for the data volume of the data set, and train the network model with the highest accuracy on the test set.
  • the network model classifies and identifies the input image data, the key frame image label is 0, and the image with the highest probability of the 0th class predicted by the network model is the key frame image.
  • the Softmax function used for the output of the final result of the network model is calculated as follows:
  • e is the natural constant
  • j is the classification category
  • k is the total number of categories to be classified
  • z i is the i-th dimension component of the k-dimensional vector
  • P i is the image classification predicted probability of class i.
  • step 2 a simple linear iterative clustering (SLIC) image segmentation algorithm (superpixel segmentation) is used to label each pixel of the key frame image and divide the pixels into sets. In this way, pixels with similar characteristics, such as texture, information entropy and brightness, are subdivided into irregular blocks.
  • This method is compatible with the grayscale and color images commonly segmented, runs fast, and preserves relatively complete contours, which suits the segmentation of regions of interest.
  • the specific implementation steps are:
  • Step 2.1 convert the colored key frame image into a 5-dimensional feature vector in Lab color space and XY coordinates.
  • the Lab color space is device-independent and consists of three elements: luminance L and color channels a and b; X and Y are plane coordinates used to locate position;
  • step 2.2 a new distance metric is constructed from the feature vector transformed in step 2.1, and then the local image pixels are clustered.
  • the color distance and spatial distance of the searched pixels are calculated as follows:
  • i and j represent the key frame image x and the key frame image y, respectively.
  • l, a and b are feature vectors in the Lab color space
  • x and y are feature vectors in XY coordinates.
  • d_c and d_s represent the color distance and the spatial distance, respectively, giving the distance D_i between the pixel point and the seed point.
  • m is a constant with value range [1, 40], typically 10
  • S is the maximum spatial distance. Because each pixel is searched multiple times in this process, the minimum of its distances to the surrounding seed points is taken, and the corresponding seed point is the cluster center of that pixel;
  • Step 2.3 iterate and optimize continuously until the difference between each pixel point and the cluster center no longer changes; experience from many segmentation runs shows that superpixel segmentation is most satisfactory after at most 20 iterations.
  • step 3 the key frame image is horizontally flipped, the segmentation boundary of this superpixel pass is extracted from the superpixel segmentation map obtained in step 2, and the extracted superpixel segmentation boundary is superimposed on the horizontally flipped key frame image,
  • so that within the same segmentation block the two images show the corresponding parts of their horizontally mirrored regions.
  • Step 3.1 on the basis of step 1 and step 2, the segmented region-of-interest parts and the superpixel segmentation boundary can be obtained, and the key frame image is horizontally flipped;
  • Step 3.2 superimpose the superpixel segmentation boundary on the horizontally flipped key frame image, and segment the flipped key frame image, so that the two images are respectively the corresponding superpixel blocks of the horizontal part in the same segmentation block;
  • Step 4 using the phase consistency algorithm, compare the similarity of corresponding pixel blocks in the superpixel-segmented key frame image and the flipped key frame image with the superimposed segmentation boundary, and obtain the similarity of each segmented superpixel block respectively.
  • the specific implementation steps are:
  • Step 4.1 transform the superpixel-segmented key frame image and the boundary-superimposed flipped key frame image into YIQ color space images, where the Y component represents the brightness information of the image and the I and Q components represent the chrominance information; the YIQ color space can separate the luminance and chrominance of a color image;
  • Step 4.2 calculate the PC value of the two images; PC is a measure of the phase consistency information of the images, and together with the similarity between the chrominance features and the gradient magnitude, the similarity between corresponding points of the images is obtained; the calculation formula is as follows:
  • Pc represents the phase consistency information of the image x, y
  • G represents the gradient magnitude.
  • S_PC(x, y) is the feature similarity of the two images x, y
  • S_G(x, y) is the gradient similarity of the two images x, y; the constants T_1 and T_2 keep the denominators from being zero and are set to 0.001;
  • Step 4.3 on the basis of step 4.2, combine the feature similarity and gradient magnitude of the image, weight the chrominance feature similarity measure, obtain the similarity at each point, and further obtain the similarity between the two images; the calculation formula is as follows:
  • x, y are the key frame image x and key frame image y
  • Ω is the entire spatial domain
  • S_PC(x, y) is the feature similarity of the two images x, y
  • S_G(x, y) is the gradient similarity of the two images x, y; α and β are positive integers, mainly used to adjust the weight between feature similarity and gradient similarity
  • Pc represents the phase consistency information of images x, y
  • n represents the label of each pair of superpixel blocks analyzed
  • Pc_n(x, y) = max[Pc(x), Pc(y)], which is used to weight the overall similarity of the two images.
  • the similarity FSIM between the two images is obtained by calculation; the smaller the FSIM, the lower the similarity between the two images.
  • Step 5 set a threshold and analyze the similarity of each pair of superpixel blocks. Because diseased tissue changes in nutrients, density and other aspects compared with normal tissue, cancerous tissue and normal tissue appear as different pixels in the image. The lower the similarity of a superpixel block pair in the key frame image, the greater the possibility that that part is cancerous; conversely, it is normal tissue. The coordinates of the superpixel blocks are obtained according to the set threshold. Extensive experiments show that when the similarity is between 0.15 and 0.48, the located pathological part matches the expected result most closely; the threshold is therefore set to 0.15 to 0.48, i.e., when the similarity falls within this range, the location of a suspected lesion differing from the normal part is identified and the pathological location can be accurately determined.
  • the specific implementation steps are:
  • Step 5.1 analyze the similarity of each pair of superpixel blocks, and sort all superpixel blocks in ascending order according to the similarity;
  • step 5.2 the superpixel blocks sorted in step 5.1 are analyzed, and the coordinates of the superpixel block are obtained by setting a threshold value, and the position of the suspected lesion that is different from the normal part can be located, and the pathological part that may have the lesion can be accurately located.
  • step 1 According to the parts that need to be identified, select the key frames of the corresponding parts to label, and make training sets, test sets and validation sets for identifying key frames. Select a network model with a neural network depth suitable for the amount of data in the data set, and train the network model with the highest accuracy on the test set to filter out key frame images.
  • step 2 is performed, and a simple linear iterative clustering image segmentation algorithm is used to label each pixel of the key frame image and divide it into a set of multiple pixels. This will subdivide pixels with similar features, such as texture, information entropy, brightness, etc., into an irregular block.
  • the effect of superpixel segmentation image is shown in Figure 2.
  • the extracted superpixel segmentation boundaries are shown in Figure 3.
  • step 3 perform horizontal flip processing on the key frame image, and superimpose the extracted superpixel segmentation boundary (shown in Figure 3) on the key frame image after horizontal flip processing.
  • after segmenting the flipped key frame image, within the same segmentation block the two images show the corresponding parts of their horizontally mirrored regions;
  • step 4 is performed, and the similarity of corresponding pixel blocks in the superpixel-segmented key frame image and the flipped key frame image with the superimposed segmentation boundary is calculated by the phase consistency algorithm, obtaining the similarity of each segmented superpixel block;
  • step 5 sort each pair of superpixel blocks according to the similarity, and obtain the coordinates of superpixel blocks whose similarity is within the threshold by setting a threshold range of 0.15 to 0.48, which can accurately locate the pathological part of the patient.
  • a computing system based on superpixel image similarity includes:
  • Screening module used to select key frame images from the image data through the network model
  • Segmentation module: used to divide the key frame image into a superpixel segmentation map of different pixel blocks through an image segmentation algorithm
  • Flip module used to perform horizontal flip processing on the key frame image
  • Extraction module for extracting segmentation boundaries from the superpixel segmentation map
  • Superimposing module used to superimpose the segmentation boundary on the key frame image after horizontal flip processing to obtain a superimposed image
  • Comparison module: used to compare the similarity between the superpixel segmentation map and the corresponding pixel blocks in the superimposed image through the phase consistency algorithm, obtaining the similarity of each pair of segmented pixel blocks.
  • a computing system based on superpixel image similarity includes a processor and a storage medium;
  • the storage medium is used for storing instructions
  • the processor is adapted to operate in accordance with the instructions to perform steps in accordance with the method described above.
  • a computer-readable storage medium having a computer program stored thereon, the program implementing the steps of the above-described method when executed by a processor.
  • the embodiments of the present application may be provided as a method, a system, or a computer program product. Accordingly, the present application may take the form of an entirely hardware embodiment, an entirely software embodiment, or an embodiment combining software and hardware aspects. Furthermore, the present application may take the form of a computer program product embodied on one or more computer-usable storage media (including, but not limited to, disk storage, CD-ROM, optical storage, etc.) having computer-usable program code embodied therein.
  • computer-usable storage media including, but not limited to, disk storage, CD-ROM, optical storage, etc.
  • These computer program instructions may also be stored in a computer-readable memory capable of directing a computer or other programmable data processing apparatus to function in a particular manner, such that the instructions stored in the computer-readable memory result in an article of manufacture comprising instruction means that implement the functions specified in one or more flows of the flowcharts and/or one or more blocks of the block diagrams.

Landscapes

  • Engineering & Computer Science (AREA)
  • Theoretical Computer Science (AREA)
  • Physics & Mathematics (AREA)
  • Data Mining & Analysis (AREA)
  • General Physics & Mathematics (AREA)
  • Bioinformatics & Cheminformatics (AREA)
  • Computer Vision & Pattern Recognition (AREA)
  • Evolutionary Biology (AREA)
  • Evolutionary Computation (AREA)
  • Bioinformatics & Computational Biology (AREA)
  • General Engineering & Computer Science (AREA)
  • Artificial Intelligence (AREA)
  • Life Sciences & Earth Sciences (AREA)
  • Multimedia (AREA)
  • Probability & Statistics with Applications (AREA)
  • Image Analysis (AREA)

Abstract

A calculation method based on superpixel image similarity, comprising the following steps: selecting key frame images from image data through a network model; dividing the key frame image into a superpixel segmentation map of different pixel blocks using an image segmentation algorithm; performing horizontal flip processing on the key frame image; extracting the segmentation boundary from the superpixel segmentation map; superimposing the segmentation boundary on the horizontally flipped key frame image to obtain a superimposed image; and comparing the similarity of corresponding pixel blocks in the superpixel segmentation map and the superimposed image through a phase consistency algorithm to obtain the similarity of each pair of segmented pixel blocks. The method automatically screens key frame images from the acquired image/video stream and recognizes them, achieving intelligent computer-aided diagnosis.

Description

A Calculation Method Based on Superpixel Image Similarity

Technical Field

The present invention belongs to the field of image recognition, and in particular relates to a calculation method based on superpixel image similarity.

Background Art

In recent years, the application of deep learning in image processing has developed rapidly, but classifying and recognizing imaging data remains an important challenge for deep learning. As data volumes keep growing, reliable computer-aided diagnosis methods are increasingly needed, and the application of artificial intelligence to computer-aided diagnosis has already been studied extensively. A computer-aided diagnosis method extracts effective features from image data of one or more modalities and uses machine learning to classify and identify the extracted feature samples. However, current computer-aided diagnostic imaging methods have the following problems: (1) a network model is trained only for a certain or specific body part and is not applicable to other parts; (2) the selection of key frames and regions of interest requires manual intervention, which reduces convenience.

Summary of the Invention

In view of the problems in the prior art, the purpose of the present invention is to provide a calculation method based on superpixel image similarity that can discriminate the similarity of the pixel blocks obtained by segmenting a selected key frame image, so as to solve the problems of low practicability and low accuracy in the prior art.

To solve the above technical problems, the technical scheme adopted by the present invention is:
A calculation method based on superpixel image similarity, comprising the following steps:
selecting key frame images from image data through a network model;
dividing the key frame image into a superpixel segmentation map of different pixel blocks using an image segmentation algorithm;
performing horizontal flip processing on the key frame image;
extracting the segmentation boundary from the superpixel segmentation map;
superimposing the segmentation boundary on the key frame image after horizontal flip processing to obtain a superimposed image;
comparing the similarity of corresponding pixel blocks in the superpixel segmentation map and the superimposed image through a phase consistency algorithm, and obtaining the similarity of each pair of segmented pixel blocks respectively.
Further, the network model is trained as follows:
selecting key frame images of the corresponding part according to the part to be identified;
creating a training set from the key frame images;
creating a test set and a validation set that do not cross-overlap with the training set;
training, from the training set and the validation set, the network model with the highest accuracy in identifying key frame images on the test set.
Further, the key frame image is segmented as follows:
converting the key frame image into feature vectors;
constructing a distance metric from the feature vectors;
clustering local image pixels according to the distance metric;
iteratively optimizing the clusters until the difference between each pixel point and the cluster center no longer changes, to obtain a superpixel segmentation map of different pixel blocks.
Further, the method also includes calculating the color distance and the spatial distance of the searched pixel points according to the distance metric.
Further, the color distance and the spatial distance are calculated as follows:

d_c = sqrt((l_j - l_i)^2 + (a_j - a_i)^2 + (b_j - b_i)^2)
d_s = sqrt((x_j - x_i)^2 + (y_j - y_i)^2)
D_i = sqrt(d_c^2 + (d_s / S)^2 · m^2)

where i and j represent the key frame image x and the key frame image y respectively; l, a and b are the feature vectors in the Lab color space; x and y are the feature vectors in XY coordinates; d_c and d_s represent the color distance and the spatial distance respectively; D_i is the distance between the pixel point and the seed point; m is a constant; S is the maximum spatial distance.
Further, the similarity is obtained by:
converting the superpixel segmentation map and the superimposed image into YIQ color space images;
calculating the PC values of the two color space images;
obtaining the similarity between points on the images from the PC values;
weighting the similarity measure to obtain the similarity of each pair of superpixel blocks.
Further, the point-wise similarity is calculated as follows:

S_PC(x, y) = (2·Pc(x)·Pc(y) + T_1) / (Pc(x)^2 + Pc(y)^2 + T_1)
S_G(x, y) = (2·G(x)·G(y) + T_2) / (G(x)^2 + G(y)^2 + T_2)

where Pc represents the phase consistency information of the images x, y; G represents the gradient magnitude; S_PC(x, y) is the feature similarity of the two images x, y; S_G(x, y) is the gradient similarity of the two images x, y; T_1 and T_2 are constants.
Further, the overall similarity is calculated as follows:

FSIM = Σ_{n∈Ω} [S_PC(x, y)]^α · [S_G(x, y)]^β · Pc_n(x, y) / Σ_{n∈Ω} Pc_n(x, y),
Pc_n(x, y) = max[Pc(x), Pc(y)],

where x, y are the key frame image x and the key frame image y; Ω represents the entire spatial domain; S_PC(x, y) is the feature similarity of the two images x, y; S_G(x, y) is the gradient similarity of the two images x, y; α and β are positive integers; Pc represents the phase consistency information of the images x, y; n represents the label of each pair of superpixel blocks analyzed.
Further, the result output of the network model uses the Softmax function, calculated as follows:

P_i = e^{z_i} / Σ_{j=1}^{k} e^{z_j}

where e is the natural constant, j is the classification category, k is the total number of categories to be classified, z_i is the i-th dimensional component of the k-dimensional vector, and P_i is the predicted probability of class i in image classification.
A computing system based on superpixel image similarity, the system comprising:
a screening module, used to select key frame images from image data through the network model;
a segmentation module, used to divide the key frame image into a superpixel segmentation map of different pixel blocks through an image segmentation algorithm;
a flip module, used to perform horizontal flip processing on the key frame image;
an extraction module, used to extract the segmentation boundary from the superpixel segmentation map;
a superimposing module, used to superimpose the segmentation boundary on the key frame image after horizontal flip processing to obtain a superimposed image;
a comparison module, used to compare the similarity of corresponding pixel blocks in the superpixel segmentation map and the superimposed image through the phase consistency algorithm, obtaining the similarity of each pair of segmented pixel blocks respectively.
A computing system based on superpixel image similarity, the system comprising a processor and a storage medium;
the storage medium is used to store instructions;
the processor is used to operate according to the instructions to perform the steps of the method described above.
A computer-readable storage medium having a computer program stored thereon, the program implementing the steps of the method described above when executed by a processor.
Compared with the prior art, the beneficial effects achieved by the present invention are:
The invention screens key frame images from image data through a network model and improves the convenience, practicability and accuracy of the method by processing the key frame images and discriminating their similarity; according to the different key frames of different body parts, the method trains the network model with the highest accuracy in identifying key frame images of different parts, solving the problem that existing methods are single-purpose and applicable only to image-aided diagnosis of a specific part.

Brief Description of the Drawings

FIG. 1 is a schematic diagram of the process of aiding diagnosis of symmetrical parts of the human body in a method embodiment of the present invention;
FIG. 2 is an effect diagram of superpixel segmentation of a brain MRI image in a method embodiment of the present invention;
FIG. 3 shows the superpixel segmentation boundary extracted from the superpixel-segmented brain MRI image in a method embodiment of the present invention.

Detailed Description

The present invention is further detailed below through specific embodiments. The following embodiments are descriptive, not restrictive, and do not limit the scope of protection of the present invention.
The specific operation of the present invention is first introduced:
The specific examples of the present invention provide a calculation method based on superpixel image similarity and apply the method to an aided-diagnosis process; however, the method of the present invention is not limited to the application field given in the specific examples and can equally be applied to fields other than aided diagnosis.
As shown in FIG. 1, which is a schematic diagram of the process of calculating the similarity of image pixel blocks in a method embodiment applied to aided diagnosis: the image data is input into the network model to obtain key frame image x; key frame image x is horizontally flipped to obtain key frame image y; key frame image x and key frame image y are segmented separately; similarity analysis is performed on each pair of superpixel blocks; and the lesion area is finally located. The specific steps of the method are as follows:
Step 1: according to the different key frames used by doctors to diagnose different parts of the human body, train the network model that identifies key frame images of that part with the highest accuracy. Select the body part that needs image analysis, and automatically screen key frame images from the acquired image data through the network model. The specific implementation steps are:
Step 1.1: according to the part to be identified, select and label the key frames of the corresponding part, and create a training set for key frame identification;
Step 1.2: for the training set of step 1.1, create a test set and a validation set that do not cross-overlap with the training set;
Step 1.3: using the training set and validation set created in steps 1.1 and 1.2, select a network model whose neural network depth suits the data volume of the data set, and train the model with the highest accuracy on the test set. The network model classifies the input image data; the key frame label is 0, and the image predicted with the highest probability of class 0 is the key frame image. The final output of the network model uses the Softmax function:

P_i = e^{z_i} / Σ_{j=1}^{k} e^{z_j}

where e is the natural constant, j is the classification category, k is the total number of categories to be classified, z_i is the i-th dimensional component of the k-dimensional vector, and P_i is the predicted probability of class i in image classification.
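The Softmax output above can be sketched in a few lines of numpy; this is a generic illustration of the formula (the logits are invented for the example), not the patent's actual network code:

```python
import numpy as np

def softmax(z):
    """P_i = e^{z_i} / sum_j e^{z_j} over a k-dimensional score vector z.

    Shifting by max(z) avoids overflow without changing the result.
    """
    z = np.asarray(z, dtype=float)
    e = np.exp(z - z.max())
    return e / e.sum()

logits = np.array([2.0, 1.0, 0.1])  # example scores for k = 3 classes
probs = softmax(logits)
# the patent labels key frames as class 0: an image is kept as a key frame
# when class 0 has the highest predicted probability
is_key_frame = probs.argmax() == 0
```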
Step 2: use the simple linear iterative clustering (SLIC, superpixel segmentation) image segmentation algorithm to label each pixel of the key frame image and divide the pixels into sets. This subdivides pixels with similar features, such as texture, information entropy and brightness, into irregular blocks. The method is compatible with the grayscale and color images commonly segmented, runs fast, and preserves relatively complete contours, which suits the segmentation of regions of interest. The specific implementation steps are:
Step 2.1: convert the color key frame image into a 5-dimensional feature vector in the Lab color space and XY coordinates. The Lab color space is device-independent and consists of three elements: luminance L and color channels a and b; X and Y are plane coordinates used to locate position;
Step 2.2: construct a new distance metric from the feature vectors obtained in step 2.1 and cluster the local image pixels accordingly. First initialize the data: in the neighborhood around each seed point, assign every pixel a label, compute the cluster center it belongs to, and store the distance from the pixel to that center. The color distance and spatial distance of the searched pixels are calculated by the new distance metric as follows:

d_c = sqrt((l_j - l_i)^2 + (a_j - a_i)^2 + (b_j - b_i)^2)
d_s = sqrt((x_j - x_i)^2 + (y_j - y_i)^2)
D_i = sqrt(d_c^2 + (d_s / S)^2 · m^2)

where i and j represent the key frame image x and the key frame image y respectively; l, a and b are the feature vectors in the Lab color space and x and y the feature vectors in XY coordinates; d_c and d_s represent the color distance and the spatial distance respectively, giving the distance D_i between a pixel point and a seed point; m is a constant with value range [1, 40], typically 10; S is the maximum spatial distance. Because every pixel is searched multiple times in this process, the minimum of its distances to the surrounding seed points is taken, and the corresponding seed point is the cluster center of that pixel;
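The distance computation of step 2.2 can be sketched as follows; this is a minimal illustration assuming the standard SLIC combination of color and spatial distance (the pixel and seed values are invented for the example):

```python
import numpy as np

def slic_distance(p_i, p_j, m=10.0, S=8.0):
    """SLIC-style distance between a pixel and a seed.

    p_i, p_j are 5-vectors (l, a, b, x, y): d_c is the Euclidean distance
    in Lab, d_s the Euclidean distance in the image plane; m (typically 10,
    range [1, 40]) weights color against space and S is the maximum
    spatial distance (the seed grid interval).
    """
    l1, a1, b1, x1, y1 = p_i
    l2, a2, b2, x2, y2 = p_j
    d_c = np.sqrt((l2 - l1) ** 2 + (a2 - a1) ** 2 + (b2 - b1) ** 2)
    d_s = np.sqrt((x2 - x1) ** 2 + (y2 - y1) ** 2)
    return np.sqrt(d_c ** 2 + (d_s / S) ** 2 * m ** 2)

# each pixel keeps the nearest of the surrounding seeds as its cluster center
pixel = (50.0, 10.0, -5.0, 12.0, 20.0)
seeds = [(52.0, 9.0, -4.0, 10.0, 22.0), (20.0, 0.0, 0.0, 30.0, 40.0)]
center = min(range(len(seeds)), key=lambda j: slic_distance(pixel, seeds[j]))
```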
Step 2.3: iterate and optimize continuously until the difference between each pixel point and the cluster center no longer changes; experience from many segmentation runs shows that superpixel segmentation is most satisfactory after at most 20 iterations.
Step 3: horizontally flip the key frame image, extract the segmentation boundary of this superpixel pass from the superpixel segmentation map obtained in step 2, and superimpose the extracted superpixel segmentation boundary on the horizontally flipped key frame image, so that within the same segmentation block the two images show the corresponding parts of their horizontally mirrored regions. The specific implementation steps are:
Step 3.1: on the basis of steps 1 and 2, obtain the segmented region-of-interest parts and the superpixel segmentation boundary, and horizontally flip the key frame image;
Step 3.2: superimpose the superpixel segmentation boundary on the horizontally flipped key frame image and segment the flipped image, so that within the same segmentation block the two images contain the corresponding superpixel blocks of horizontally mirrored parts;
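Steps 3.1 and 3.2 amount to mirroring the key frame and reusing the superpixel labels so that each label pairs a region with its horizontal counterpart; a minimal numpy sketch (shapes, label layout and values are illustrative):

```python
import numpy as np

rng = np.random.default_rng(0)
key_frame = rng.random((6, 8))                       # stand-in key frame image x
labels = np.repeat(np.arange(4), 12).reshape(6, 8)   # stand-in superpixel map

flipped = np.fliplr(key_frame)   # key frame image y (horizontal flip)

# superimposing the (unflipped) segmentation boundary on the flipped image
# means block n of `labels` now covers the horizontally mirrored region of y,
# so each pair (x restricted to block n, y restricted to block n) compares
# a part with its mirror counterpart
pairs = [(key_frame[labels == n], flipped[labels == n])
         for n in np.unique(labels)]
```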
Step 4: use the phase consistency algorithm to compare the similarity of corresponding pixel blocks in the superpixel-segmented key frame image and the flipped key frame image with the superimposed segmentation boundary, obtaining the similarity of each segmented superpixel block respectively. The specific implementation steps are:
Step 4.1: transform the superpixel-segmented key frame image and the boundary-superimposed flipped key frame image into YIQ color space images, where the Y component represents the luminance information of the image and the I and Q components represent the chrominance information; the YIQ color space can separate the luminance and chrominance of a color image;
Step 4.2: calculate the PC value of the two images; PC is a measure of the phase consistency information of the images, and together with the similarity between the chrominance features and the gradient magnitude, the similarity between corresponding points of the images is obtained; the calculation formulas are as follows:

S_PC(x, y) = (2·Pc(x)·Pc(y) + T_1) / (Pc(x)^2 + Pc(y)^2 + T_1)
S_G(x, y) = (2·G(x)·G(y) + T_2) / (G(x)^2 + G(y)^2 + T_2)

where Pc represents the phase consistency information of the images x, y and G represents the gradient magnitude; S_PC(x, y) is the feature similarity of the two images x, y and S_G(x, y) is their gradient similarity; the constants T_1 and T_2 keep the denominators from being zero and are set to 0.001;
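The RGB-to-YIQ conversion of step 4.1 is a fixed linear transform (the NTSC matrix); a sketch:

```python
import numpy as np

# NTSC RGB -> YIQ matrix: Y carries luminance, I and Q carry chrominance
RGB_TO_YIQ = np.array([[0.299,  0.587,  0.114],
                       [0.596, -0.274, -0.322],
                       [0.211, -0.523,  0.312]])

def rgb_to_yiq(image):
    """Convert an (H, W, 3) RGB image with values in [0, 1] to YIQ."""
    return image @ RGB_TO_YIQ.T

white = np.ones((1, 1, 3))
yiq = rgb_to_yiq(white)  # white: full luminance, (near) zero chrominance
```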
Step 4.3: on the basis of step 4.2, combine the feature similarity and gradient magnitude of the images and weight the chrominance feature similarity measure to obtain the similarity at each point and, further, the similarity between the two images; the calculation formula is as follows:

FSIM = Σ_{n∈Ω} [S_PC(x, y)]^α · [S_G(x, y)]^β · Pc_n(x, y) / Σ_{n∈Ω} Pc_n(x, y),
Pc_n(x, y) = max[Pc(x), Pc(y)],

where x, y are the key frame image x and the key frame image y; Ω represents the entire spatial domain; S_PC(x, y) is the feature similarity and S_G(x, y) the gradient similarity of the two images x, y; α and β are positive integers, mainly used to adjust the weight between feature similarity and gradient similarity; Pc represents the phase consistency information of the images x, y; n represents the label of each pair of superpixel blocks analyzed; and Pc_n(x, y) = max[Pc(x), Pc(y)] weights the overall similarity of the two images. The similarity FSIM between the two images is thus computed; the smaller the FSIM, the lower the similarity between the two images.
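Steps 4.2 and 4.3 can be sketched given precomputed phase-consistency maps Pc and gradient-magnitude maps G for the compared regions; computing Pc itself requires a log-Gabor filter bank and is omitted here, so the maps below are random stand-ins:

```python
import numpy as np

def fsim(pc_x, pc_y, g_x, g_y, T1=0.001, T2=0.001, alpha=1, beta=1):
    """FSIM over two images from their phase-consistency (pc) and
    gradient-magnitude (g) maps, following the formulas above."""
    s_pc = (2 * pc_x * pc_y + T1) / (pc_x**2 + pc_y**2 + T1)
    s_g = (2 * g_x * g_y + T2) / (g_x**2 + g_y**2 + T2)
    pc_m = np.maximum(pc_x, pc_y)           # Pc_n(x, y) = max[Pc(x), Pc(y)]
    s_l = s_pc**alpha * s_g**beta           # per-point similarity
    return (s_l * pc_m).sum() / pc_m.sum()  # weighted over the whole domain

rng = np.random.default_rng(1)
pc = rng.random((32, 32))
g = rng.random((32, 32))
identical = fsim(pc, pc, g, g)  # identical maps give similarity 1
```

Since both S_PC and S_G lie in (0, 1] for non-negative maps, the weighted average FSIM also lies in (0, 1], matching its use as a similarity score below.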
Step 5: set a threshold and analyze the similarity of each pair of superpixel blocks. Because diseased tissue changes in nutrients, density and other aspects compared with normal tissue, cancerous tissue and normal tissue appear as different pixels in the image; therefore the lower the similarity of a superpixel block pair in the key frame image, the greater the possibility that that part is cancerous, and conversely it is normal tissue. The superpixel block coordinates are obtained according to the set threshold. Extensive experiments show that when the similarity lies between 0.15 and 0.48, the located pathological part matches the expected result most closely; the threshold is therefore set to 0.15 to 0.48, i.e., when the similarity of a superpixel block falls within this range, a suspected lesion location differing from the normal part is identified and the pathological part can be located accurately. The specific implementation steps are:
Step 5.1: analyze the similarity of each pair of superpixel blocks and sort all superpixel blocks in ascending order of similarity;
Step 5.2: analyze the superpixel blocks sorted in step 5.1, obtain the coordinates of superpixel blocks via the set threshold, locate suspected lesion positions differing from the normal part, and accurately locate the pathological part where a lesion may occur.
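Steps 5.1 and 5.2 reduce to an ascending sort of the per-block similarities followed by keeping the blocks whose similarity falls in the 0.15 to 0.48 window; the block list and coordinates below are invented for illustration:

```python
# (block coordinates, FSIM similarity) for each superpixel block pair
blocks = [((3, 7), 0.91), ((10, 2), 0.22), ((5, 5), 0.47), ((8, 1), 0.60)]

LOW, HIGH = 0.15, 0.48  # threshold range stated in the text

ranked = sorted(blocks, key=lambda b: b[1])    # step 5.1: ascending similarity
suspect = [coord for coord, sim in ranked
           if LOW <= sim <= HIGH]              # step 5.2: suspected lesions
```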
The following is a specific embodiment of applying the method of the present invention to image recognition for aided diagnosis of symmetrical parts of the human body, taking aided diagnostic analysis of brain MRI images as the example; the implementation process of the present invention is described in detail.
Because magnetic resonance imaging has a high soft-tissue resolution, MRI is widely used clinically to evaluate brain lesions. However, with ever-growing data volumes and the empirical errors that visual inspection may introduce, automatic and reliable methods for locating pathological parts of the brain are increasingly needed.
First, step 1 is performed: according to the part to be identified, the key frames of the corresponding part are selected and labeled, and the training, test and validation sets for key frame identification are created. A network model whose neural network depth suits the data volume of the data set is selected, and the model with the highest accuracy on the test set is trained and used to screen out key frame images.
Next, step 2 is performed: the simple linear iterative clustering image segmentation algorithm labels each pixel of the key frame image and divides the pixels into sets, subdividing pixels with similar features, such as texture, information entropy and brightness, into irregular blocks. The superpixel segmentation result is shown in Figure 2, and the extracted superpixel segmentation boundary in Figure 3.
Then, step 3 is performed: the key frame image is horizontally flipped and the extracted superpixel segmentation boundary (shown in Figure 3) is superimposed on the horizontally flipped key frame image; the flipped key frame image is segmented so that within the same segmentation block the two images show the corresponding parts of their horizontally mirrored regions;
Next, step 4 is performed: the phase consistency algorithm computes the similarity of corresponding pixel blocks in the superpixel-segmented key frame image and the boundary-superimposed flipped key frame image, obtaining the similarity of each segmented superpixel block;
Finally, step 5 is performed: each pair of superpixel blocks is sorted by similarity, and with the threshold range set to 0.15 to 0.48 the coordinates of superpixel blocks whose similarity falls within the threshold are obtained, accurately locating the patient's pathological part.
A computing system based on superpixel image similarity, the system comprising:
a screening module, used to select key frame images from image data through the network model;
a segmentation module, used to divide the key frame image into a superpixel segmentation map of different pixel blocks through an image segmentation algorithm;
a flip module, used to perform horizontal flip processing on the key frame image;
an extraction module, used to extract the segmentation boundary from the superpixel segmentation map;
a superimposing module, used to superimpose the segmentation boundary on the key frame image after horizontal flip processing to obtain a superimposed image;
a comparison module, used to compare the similarity of corresponding pixel blocks in the superpixel segmentation map and the superimposed image through the phase consistency algorithm, obtaining the similarity of each pair of segmented pixel blocks respectively.
A computing system based on superpixel image similarity, the system comprising a processor and a storage medium;
the storage medium is used to store instructions;
the processor is used to operate according to the instructions to perform the steps of the method described above.
A computer-readable storage medium having a computer program stored thereon, the program implementing the steps of the method described above when executed by a processor.
Those skilled in the art should understand that embodiments of the present application may be provided as a method, a system, or a computer program product. Accordingly, the present application may take the form of an entirely hardware embodiment, an entirely software embodiment, or an embodiment combining software and hardware aspects. Furthermore, the present application may take the form of a computer program product implemented on one or more computer-usable storage media (including but not limited to disk storage, CD-ROM and optical storage) containing computer-usable program code.
The present application is described with reference to flowcharts and/or block diagrams of methods, devices (systems) and computer program products according to embodiments of the present application. It should be understood that each flow and/or block in the flowcharts and/or block diagrams, and combinations of flows and/or blocks therein, can be implemented by computer program instructions. These computer program instructions may be provided to the processor of a general-purpose computer, special-purpose computer, embedded processor or other programmable data processing device to produce a machine, such that the instructions executed by the processor of the computer or other programmable data processing device produce an apparatus for implementing the functions specified in one or more flows of the flowcharts and/or one or more blocks of the block diagrams.
These computer program instructions may also be stored in a computer-readable memory capable of directing a computer or other programmable data processing device to work in a particular manner, such that the instructions stored in the computer-readable memory produce an article of manufacture including instruction means that implement the functions specified in one or more flows of the flowcharts and/or one or more blocks of the block diagrams.
These computer program instructions may also be loaded onto a computer or other programmable data processing device, so that a series of operational steps is executed on the computer or other programmable device to produce computer-implemented processing; the instructions executed on the computer or other programmable device thus provide steps for implementing the functions specified in one or more flows of the flowcharts and/or one or more blocks of the block diagrams.
Finally, it should be noted that the above embodiments are intended only to illustrate, not to limit, the technical solution of the present invention. Although the present invention has been described in detail with reference to the above embodiments, those of ordinary skill in the art should understand that the specific implementations of the present invention may still be modified or equivalently replaced, and any modification or equivalent replacement that does not depart from the spirit and scope of the present invention shall be covered by the protection scope of the claims of the present invention.

Claims (9)

  1. 一种基于超像素图像相似度的计算方法,其特征在于,包括以下步骤:
    通过网络模型从影像数据中选出关键帧图像;
    通过图像分割算法将所述关键帧图像划分为不同像素块的超像素分割图;
    对所述关键帧图像进行水平翻转处理;
    从所述超像素分割图中提取分割边界;
    将所述分割边界叠加在水平翻转处理后的关键帧图像上,得到叠加图像;
    通过相位一致性算法对超像素分割图和叠加图像中对应的像素块进行相似性比较,分别得到分割出来的每一对像素块的相似度。
  2. 根据权利要求1所述的一种基于超像素图像相似度的计算方法,其特征在于,所述网络模型的训练方法如下:
    根据需要识别的部位选取对应部位的关键帧图像;
    制作所述关键帧图像的训练集;
    制作与所述训练集不交叉重叠的测试集和验证集;
    根据所述训练集和验证集训练出在测试集上识别关键帧图像准确率最高的网络模型。
  3. 根据权利要求1所述的一种基于超像素图像相似度的计算方法,其特征在于,所述关键帧图像的分割过程如下:
    将所述关键帧图像转化为特征向量;
    根据所述特征向量构造出距离度量标准;
    根据所述距离度量标准对局部的图像像素进行聚类;
    对所述聚类进行不断地迭代优化直到每个像素点到聚类中心的差不再发 生变化,得到不同像素块的超像素分割图。
  4. 根据权利要求3所述的一种基于超像素图像相似度的计算方法,其特征在于,所述方法还包括根据距离度量标准计算所搜索的像素点的颜色距离和空间距离。
  5. 根据权利要求4所述的一种基于超像素图像相似度的计算方法,其特征在于,所述颜色距离和空间距离的计算公式如下:
    Figure PCTCN2021098184-appb-100001
    Figure PCTCN2021098184-appb-100002
    Figure PCTCN2021098184-appb-100003
    式中,i和j分别代表关键帧图像x和关键帧图像y,l,a和b为Lab颜色空间下的特征向量,x和y为XY坐标下的特征向量,d c和d s分别表示为颜色距离和空间距离,D i为像素点和种子点的距离,m是常数,S为最大空间距离。
  6. 根据权利要求1所述的一种基于超像素图像相似度的计算方法,其特征在于,所述相似度的获取方法包括:
    将所述超像素分割图和叠加图像转化为YIQ色彩空间图像;
    计算两幅色彩空间图像的PC值;
    根据所述PC值得到图像上每一点间的相似性;
    对所述相似性进行度量加权,获得每对超像素块的相似度。
  7. The calculation method based on superpixel image similarity according to claim 6, characterized in that the similarity is calculated as follows:
    S_{PC}(x, y) = \frac{2\,Pc(x)\,Pc(y) + T_1}{Pc(x)^2 + Pc(y)^2 + T_1},
    S_G(x, y) = \frac{2\,G(x)\,G(y) + T_2}{G(x)^2 + G(y)^2 + T_2},
    where Pc denotes the phase congruency information of images x and y; G denotes the gradient magnitude; S_{PC}(x, y) is the feature similarity of the two images x, y; S_G(x, y) is the gradient similarity of the two images x, y; and T_1 and T_2 are constants.
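Both per-point similarity terms in claim 7 share the same form. A sketch using the claim's symbols; the default values of T1 and T2 shown here are taken from the wider FSIM literature, not from the patent, and are assumptions:

```python
def feature_similarity(pc_x, pc_y, t1=0.85):
    """S_PC: phase-congruency similarity at one point of the two images."""
    return (2 * pc_x * pc_y + t1) / (pc_x ** 2 + pc_y ** 2 + t1)

def gradient_similarity(g_x, g_y, t2=160.0):
    """S_G: gradient-magnitude similarity at one point of the two images."""
    return (2 * g_x * g_y + t2) / (g_x ** 2 + g_y ** 2 + t2)

print(feature_similarity(0.5, 0.5))    # identical inputs give 1.0
print(gradient_similarity(10.0, 10.0)) # identical inputs give 1.0
```

The constants keep the ratio stable when both inputs are near zero; identical inputs always score exactly 1.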
  8. The calculation method based on superpixel image similarity according to claim 6, characterized in that the similarity is calculated as follows:
    S_n = \frac{\sum_{(x, y) \in \Omega} [S_{PC}(x, y)]^{\alpha} \, [S_G(x, y)]^{\beta} \, Pc_n(x, y)}{\sum_{(x, y) \in \Omega} Pc_n(x, y)},
    Pc_n(x, y) = \max[Pc(x), Pc(y)],
    where x, y are key frame image x and key frame image y; \Omega denotes the whole spatial domain; S_{PC}(x, y) is the feature similarity of the two images x, y; S_G(x, y) is the gradient similarity of the two images x, y; \alpha and \beta are positive integers; Pc denotes the phase congruency information of images x and y; and n indexes each pair of superpixel blocks analyzed.
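The aggregation in claim 8 is a phase-congruency-weighted mean of the per-point similarity products over the domain Ω. A sketch with α = β = 1 for simplicity; the list arguments hold per-point values and all names are illustrative:

```python
def weighted_similarity(s_pc, s_g, pc_x, pc_y, alpha=1, beta=1):
    """Pc-weighted mean of per-point similarity products over the domain.
    All arguments are equal-length lists of per-point values; the weight
    at each point is Pc_n(x, y) = max(Pc(x), Pc(y))."""
    num = den = 0.0
    for spc, sg, px, py in zip(s_pc, s_g, pc_x, pc_y):
        w = max(px, py)                      # Pc_n(x, y)
        num += (spc ** alpha) * (sg ** beta) * w
        den += w
    return num / den

# Perfect per-point similarity everywhere yields an overall score of 1.0.
print(weighted_similarity([1.0, 1.0], [1.0, 1.0], [0.3, 0.7], [0.5, 0.2]))
```

Weighting by the larger phase congruency value emphasizes points that are perceptually significant in at least one of the two images.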
  9. The calculation method based on superpixel image similarity according to claim 1, characterized in that the result of the network model is output by a Softmax function, calculated as follows:
    P_i = \frac{e^{z_i}}{\sum_{j=1}^{k} e^{z_j}},
    where e is the natural constant; j is the classification category; k is the total number of categories to classify; z_i is the i-th component of the k-dimensional vector; and P_i is the predicted probability of class i in image classification.
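The Softmax output of claim 9 can be sketched directly. The max-shift below is a standard numerical-stability step and does not change the result; it is not part of the claim:

```python
import math

def softmax(z):
    """Softmax over a k-dimensional score vector:
    P_i = e^{z_i} / sum_j e^{z_j}. Scores are shifted by their maximum
    for numerical stability, which leaves the probabilities unchanged."""
    m = max(z)
    exps = [math.exp(v - m) for v in z]
    total = sum(exps)
    return [e / total for e in exps]

probs = softmax([2.0, 1.0, 0.1])
print(probs)        # probabilities; largest for the largest score
print(sum(probs))   # sums to 1
```

The class with the largest score z_i receives the largest predicted probability P_i.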
PCT/CN2021/098184 2020-06-29 2021-06-03 Calculation method based on superpixel image similarity WO2022001571A1 (zh)

Applications Claiming Priority (2)

Application Number Priority Date Filing Date Title
CN202010607158.0 2020-06-29
CN202010607158.0A CN111931811B (zh) 2020-06-29 Calculation method based on superpixel image similarity

Publications (1)

Publication Number Publication Date
WO2022001571A1 true WO2022001571A1 (zh) 2022-01-06

Family

ID=73317721

Family Applications (1)

Application Number Title Priority Date Filing Date
PCT/CN2021/098184 WO2022001571A1 (zh) 2020-06-29 2021-06-03 Calculation method based on superpixel image similarity

Country Status (2)

Country Link
CN (1) CN111931811B (zh)
WO (1) WO2022001571A1 (zh)


Families Citing this family (2)

* Cited by examiner, † Cited by third party
Publication number Priority date Publication date Assignee Title
CN111931811B (zh) 2020-06-29 2024-03-29 南京巨鲨显示科技有限公司 Calculation method based on superpixel image similarity
CN112669346B (zh) * 2020-12-25 2024-02-20 浙江大华技术股份有限公司 Road surface emergency determination method and device

Citations (5)

* Cited by examiner, † Cited by third party
Publication number Priority date Publication date Assignee Title
US20180012365A1 (en) * 2015-03-20 2018-01-11 Ventana Medical Systems, Inc. System and method for image segmentation
CN108600865A (zh) * 2018-05-14 2018-09-28 西安理工大学 Video summary generation method based on superpixel segmentation
CN109712153A (zh) * 2018-12-25 2019-05-03 杭州世平信息科技有限公司 Superpixel segmentation method for urban areas in remote sensing images
CN109712143A (zh) * 2018-12-27 2019-05-03 北京邮电大学世纪学院 Fast image segmentation method based on superpixel multi-feature fusion
CN111931811A (zh) * 2020-06-29 2020-11-13 南京巨鲨显示科技有限公司 Calculation method based on superpixel image similarity

Cited By (11)

* Cited by examiner, † Cited by third party
Publication number Priority date Publication date Assignee Title
CN115294131A (zh) * 2022-10-08 2022-11-04 南通海发水处理工程有限公司 Sewage treatment quality detection method and system
CN115641327A (zh) * 2022-11-09 2023-01-24 浙江天律工程管理有限公司 Big-data-based construction engineering quality supervision and early-warning system
CN115880295A (zh) * 2023-02-28 2023-03-31 吉林省安瑞健康科技有限公司 Computer-aided tumor ablation navigation system with precise positioning function
CN115914649A (zh) * 2023-03-01 2023-04-04 广州高通影像技术有限公司 Data transmission method and system for medical video
CN116863469A (zh) * 2023-06-27 2023-10-10 首都医科大学附属北京潞河医院 Deep-learning-based surgical anatomical site recognition and labeling method
CN116863469B (zh) * 2023-06-27 2024-05-14 首都医科大学附属北京潞河医院 Deep-learning-based surgical anatomical site recognition and labeling method
CN116630311B (zh) * 2023-07-21 2023-09-19 聊城市瀚格智能科技有限公司 Road surface damage recognition and alarm method for expressway road administration
CN116823811A (zh) * 2023-08-25 2023-09-29 汶上县誉诚制衣有限公司 Surface quality detection method for functional outdoor jackets
CN116823811B (zh) * 2023-08-25 2023-12-01 汶上县誉诚制衣有限公司 Surface quality detection method for functional outdoor jackets
CN117173175A (zh) * 2023-11-02 2023-12-05 湖南格尔智慧科技有限公司 Superpixel-based image similarity detection method
CN117173175B (zh) * 2023-11-02 2024-02-09 湖南格尔智慧科技有限公司 Superpixel-based image similarity detection method

Also Published As

Publication number Publication date
CN111931811A (zh) 2020-11-13
CN111931811B (zh) 2024-03-29

Similar Documents

Publication Publication Date Title
WO2022001571A1 (zh) Calculation method based on superpixel image similarity
CN106056595B (zh) Auxiliary diagnosis system for automatically identifying benign and malignant thyroid nodules based on a deep convolutional neural network
CN107194937B (zh) Traditional Chinese medicine tongue image segmentation method in an open environment
WO2019104767A1 (zh) Fabric defect detection method based on a deep convolutional neural network and visual saliency
CN108364288A (zh) Segmentation method and apparatus for breast cancer pathological images
Zhang et al. Automated semantic segmentation of red blood cells for sickle cell disease
CN108537751B (zh) Automatic thyroid ultrasound image segmentation method based on a radial basis function neural network
Pan et al. Cell detection in pathology and microscopy images with multi-scale fully convolutional neural networks
Pan et al. Mitosis detection techniques in H&E stained breast cancer pathological images: A comprehensive review
US20230005140A1 (en) Automated detection of tumors based on image processing
CN107622280B (zh) Modular prescription-style image saliency detection method based on scene classification
Abdullah et al. Multi-sectional views textural based SVM for MS lesion segmentation in multi-channels MRIs
Yonekura et al. Improving the generalization of disease stage classification with deep CNN for glioma histopathological images
CN112348059A (zh) Deep-learning-based method and system for classifying pathological images with multiple stains
Zhang et al. TUnet-LBF: Retinal fundus image fine segmentation model based on transformer Unet network and LBF
Banerjee et al. A CADe system for gliomas in brain MRI using convolutional neural networks
Zaaboub et al. Optic disc detection and segmentation using saliency mask in retinal fundus images
Rachmad et al. Classification of mycobacterium tuberculosis based on color feature extraction using adaptive boosting method
Fazilov et al. Patch-based lesion detection using deep learning method on small mammography dataset
CN110910497B (zh) Method and system for implementing an augmented reality map
Cheng et al. Superpixel classification based optic disc segmentation
CN111415350B (zh) Colposcopic image recognition method for detecting cervical lesions
Khomairoh et al. Segmentation system of acute myeloid leukemia (AML) subtypes on microscopic blood smear image
Pan et al. Preferential image segmentation using trees of shapes
Srikanth et al. Analysis and Detection of Multi Tumor from MRI of Brain using Advance Adaptive Feature Fuzzy C-means (AAFFCM) Algorithm

Legal Events

Date Code Title Description
121 Ep: the epo has been informed by wipo that ep was designated in this application

Ref document number: 21832854

Country of ref document: EP

Kind code of ref document: A1

NENP Non-entry into the national phase

Ref country code: DE

122 Ep: pct application non-entry in european phase

Ref document number: 21832854

Country of ref document: EP

Kind code of ref document: A1