CN117541580A - A method for establishing a thyroid cancer image comparison model based on deep neural network - Google Patents
- Publication number
- CN117541580A (application CN202410022905.2A)
- Authority
- CN
- China
- Prior art keywords
- comparison
- area
- grayscale
- overall
- distribution
- Prior art date
- Legal status (The legal status is an assumption and is not a legal conclusion. Google has not performed a legal analysis and makes no representation as to the accuracy of the status listed.)
- Granted
Classifications
-
- G—PHYSICS
- G06—COMPUTING; CALCULATING OR COUNTING
- G06T—IMAGE DATA PROCESSING OR GENERATION, IN GENERAL
- G06T7/00—Image analysis
- G06T7/0002—Inspection of images, e.g. flaw detection
- G06T7/0012—Biomedical image inspection
-
- G—PHYSICS
- G06—COMPUTING; CALCULATING OR COUNTING
- G06N—COMPUTING ARRANGEMENTS BASED ON SPECIFIC COMPUTATIONAL MODELS
- G06N3/00—Computing arrangements based on biological models
- G06N3/02—Neural networks
- G06N3/04—Architecture, e.g. interconnection topology
-
- G—PHYSICS
- G06—COMPUTING; CALCULATING OR COUNTING
- G06N—COMPUTING ARRANGEMENTS BASED ON SPECIFIC COMPUTATIONAL MODELS
- G06N3/00—Computing arrangements based on biological models
- G06N3/02—Neural networks
- G06N3/08—Learning methods
-
- G—PHYSICS
- G06—COMPUTING; CALCULATING OR COUNTING
- G06T—IMAGE DATA PROCESSING OR GENERATION, IN GENERAL
- G06T7/00—Image analysis
- G06T7/10—Segmentation; Edge detection
- G06T7/13—Edge detection
-
- G—PHYSICS
- G06—COMPUTING; CALCULATING OR COUNTING
- G06V—IMAGE OR VIDEO RECOGNITION OR UNDERSTANDING
- G06V10/00—Arrangements for image or video recognition or understanding
- G06V10/40—Extraction of image or video features
- G06V10/44—Local feature extraction by analysis of parts of the pattern, e.g. by detecting edges, contours, loops, corners, strokes or intersections; Connectivity analysis, e.g. of connected components
-
- G—PHYSICS
- G06—COMPUTING; CALCULATING OR COUNTING
- G06V—IMAGE OR VIDEO RECOGNITION OR UNDERSTANDING
- G06V10/00—Arrangements for image or video recognition or understanding
- G06V10/70—Arrangements for image or video recognition or understanding using pattern recognition or machine learning
- G06V10/74—Image or video pattern matching; Proximity measures in feature spaces
- G06V10/75—Organisation of the matching processes, e.g. simultaneous or sequential comparisons of image or video features; Coarse-fine approaches, e.g. multi-scale approaches; using context analysis; Selection of dictionaries
-
- G—PHYSICS
- G06—COMPUTING; CALCULATING OR COUNTING
- G06V—IMAGE OR VIDEO RECOGNITION OR UNDERSTANDING
- G06V10/00—Arrangements for image or video recognition or understanding
- G06V10/70—Arrangements for image or video recognition or understanding using pattern recognition or machine learning
- G06V10/82—Arrangements for image or video recognition or understanding using pattern recognition or machine learning using neural networks
-
- G—PHYSICS
- G06—COMPUTING; CALCULATING OR COUNTING
- G06T—IMAGE DATA PROCESSING OR GENERATION, IN GENERAL
- G06T2207/00—Indexing scheme for image analysis or image enhancement
- G06T2207/20—Special algorithmic details
- G06T2207/20081—Training; Learning
-
- G—PHYSICS
- G06—COMPUTING; CALCULATING OR COUNTING
- G06T—IMAGE DATA PROCESSING OR GENERATION, IN GENERAL
- G06T2207/00—Indexing scheme for image analysis or image enhancement
- G06T2207/20—Special algorithmic details
- G06T2207/20084—Artificial neural networks [ANN]
-
- G—PHYSICS
- G06—COMPUTING; CALCULATING OR COUNTING
- G06T—IMAGE DATA PROCESSING OR GENERATION, IN GENERAL
- G06T2207/00—Indexing scheme for image analysis or image enhancement
- G06T2207/30—Subject of image; Context of image processing
- G06T2207/30004—Biomedical image processing
- G06T2207/30096—Tumor; Lesion
-
- Y—GENERAL TAGGING OF NEW TECHNOLOGICAL DEVELOPMENTS; GENERAL TAGGING OF CROSS-SECTIONAL TECHNOLOGIES SPANNING OVER SEVERAL SECTIONS OF THE IPC; TECHNICAL SUBJECTS COVERED BY FORMER USPC CROSS-REFERENCE ART COLLECTIONS [XRACs] AND DIGESTS
- Y02—TECHNOLOGIES OR APPLICATIONS FOR MITIGATION OR ADAPTATION AGAINST CLIMATE CHANGE
- Y02A—TECHNOLOGIES FOR ADAPTATION TO CLIMATE CHANGE
- Y02A90/00—Technologies having an indirect contribution to adaptation to climate change
- Y02A90/10—Information and communication technologies [ICT] supporting adaptation to climate change, e.g. for weather forecasting or climate simulation
Abstract
Description
Technical Field
The present invention relates to the field of image recognition, and specifically to a method for establishing a thyroid cancer image comparison model based on a deep neural network.
Background
Image recognition refers to the use of computers to process, analyze, and understand images in order to identify targets and objects of various patterns, and is a practical application of deep learning algorithms. The image recognition workflow is usually divided into four steps: image acquisition, image preprocessing, feature extraction, and image recognition. Deep neural networks are a technique in the field of machine learning; using a deep neural network to extract features from training samples during image recognition can improve the accuracy of image recognition and comparison.
In the existing art, image recognition techniques have been applied to feature extraction and comparison of thyroid cancer images. For example, the Chinese patent with application publication number CN112233106A discloses a method for analyzing thyroid cancer ultrasound images based on a residual capsule network. That method analyzes ultrasound images of thyroid cancer through a residual capsule network to obtain classification and recognition results corresponding to the ultrasound images of papillary thyroid carcinoma to be identified. However, it only discloses the residual-capsule-network technique for analyzing and classifying images. In its step S1, each image in the original papillary thyroid carcinoma ultrasound image data set includes one or more of the following attributes: irregular shape, unclear boundary, uneven echo, calcification, and normal. The method merely lists these features of thyroid cancer images without a concrete scheme for identifying them, so it cannot effectively compare and identify features in thyroid cancer images. A method capable of effectively extracting and comparing features in thyroid cancer images is therefore needed to solve the above problems.
Summary of the Invention
The present invention aims to solve, at least to a certain extent, one of the technical problems in the prior art. By extracting features from a number of thyroid cancer images and establishing an image comparison model based on the extracted features, it helps improve the feature comparison accuracy of image screening, thereby solving the problem that existing thyroid cancer image recognition methods lack concrete feature extraction means and therefore cannot perform effective feature comparison and recognition.
To achieve the above objective, in a first aspect the present application provides a method for establishing a thyroid cancer image comparison model based on a deep neural network, comprising: acquiring a number of training images and marking the thyroid cancer region in each training image, the training images including thyroid cancer regions;
performing grayscale feature extraction on the thyroid region, and performing grayscale feature training on the thyroid regions of the training images to obtain grayscale comparison parameters;
dividing the thyroid region into overall regions and point-like regions, and performing shape feature training on the overall regions and point-like regions of the training images to obtain shape comparison parameters;
establishing an image comparison model based on the grayscale comparison parameters and the shape comparison parameters.
Further, acquiring a number of training images and marking the thyroid cancer region in the training images comprises: dividing each training image into pixels and establishing a two-dimensional coordinate system based on the pixels;
marking the coordinates of the pixels of the thyroid cancer region in the two-dimensional coordinate system and setting the pixels of the thyroid cancer region as comparison pixels.
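As an informal illustration only (not part of the patent text), the marking step can be sketched in Python, assuming the annotated cancer region is supplied as a binary mask; the function and variable names here are hypothetical:

```python
def mark_comparison_pixels(cancer_mask):
    # cancer_mask: 2-D list of 0/1 values, where 1 marks a pixel of the
    # annotated thyroid cancer region. Returns the set of (x, y)
    # coordinates of the comparison pixels in the image coordinate system.
    return {(x, y)
            for y, row in enumerate(cancer_mask)
            for x, value in enumerate(row) if value}

# A toy 4x4 image whose central 2x2 block is marked as the cancer region.
mask = [[0, 0, 0, 0],
        [0, 1, 1, 0],
        [0, 1, 1, 0],
        [0, 0, 0, 0]]
print(sorted(mark_comparison_pixels(mask)))  # [(1, 1), (1, 2), (2, 1), (2, 2)]
```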
Further, performing grayscale feature extraction on the thyroid region and performing grayscale feature training on the thyroid regions of the training images to obtain grayscale comparison parameters comprises: setting the pixels other than the comparison pixels in the two-dimensional coordinate system as peripheral pixels;
setting the comparison pixels adjacent to peripheral pixels as comparison contour pixels, and setting the peripheral pixels adjacent to comparison contour pixels as peripheral adjoining pixels;
calculating the average grayscale of the comparison contour pixels in the training image and setting it as the comparison contour grayscale; calculating the average grayscale of the peripheral adjoining pixels in the training image and setting it as the peripheral adjoining grayscale; and calculating the absolute value of the difference between the comparison contour grayscale and the peripheral adjoining grayscale and setting it as the grayscale comparison value;
performing grayscale feature training on the training images one by one through the above steps to obtain a number of comparison contour grayscales, peripheral adjoining grayscales, and grayscale comparison values;
processing the comparison contour grayscales, the peripheral adjoining grayscales, and the grayscale comparison values respectively through a comparison parameter extraction method to obtain the grayscale comparison parameters, which include a comparison contour grayscale range, a peripheral adjoining grayscale range, and a grayscale comparison value range.
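A rough sketch of this grayscale feature step follows; 4-neighbour adjacency is an assumption (the text only says "adjacent"), and all names are illustrative:

```python
def grayscale_features(gray, comparison):
    # gray: 2-D list of grayscale values; comparison: set of (x, y)
    # comparison pixels. Returns (comparison contour grayscale,
    # peripheral adjoining grayscale, grayscale comparison value).
    h, w = len(gray), len(gray[0])

    def neighbors(x, y):
        for dx, dy in ((1, 0), (-1, 0), (0, 1), (0, -1)):
            nx, ny = x + dx, y + dy
            if 0 <= nx < w and 0 <= ny < h:
                yield nx, ny

    # Comparison pixels that touch a peripheral pixel are contour pixels.
    contour = {p for p in comparison
               if any(n not in comparison for n in neighbors(*p))}
    # Peripheral pixels that touch a contour pixel are adjoining pixels.
    adjoining = {n for p in contour for n in neighbors(*p)
                 if n not in comparison}

    def mean(points):
        return sum(gray[y][x] for x, y in points) / len(points)

    contour_gray = mean(contour)
    adjoining_gray = mean(adjoining)
    return contour_gray, adjoining_gray, abs(contour_gray - adjoining_gray)

# Toy image: a bright 2x2 region (grayscale 100) on a darker background (40).
gray = [[40, 40, 40, 40],
        [40, 100, 100, 40],
        [40, 100, 100, 40],
        [40, 40, 40, 40]]
print(grayscale_features(gray, {(1, 1), (2, 1), (1, 2), (2, 2)}))
# (100.0, 40.0, 60.0)
```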
Further, the comparison parameter extraction method comprises: obtaining the maximum and minimum of an input group of grayscale values and setting them as the grayscale maximum and the grayscale minimum respectively, a group of grayscale values being one of the group of comparison contour grayscales, the group of peripheral adjoining grayscales, or the group of grayscale comparison values;
subtracting the grayscale minimum from the grayscale maximum to obtain a grayscale difference, dividing the grayscale difference by a first preset number to obtain a first tentative division value, and taking the integer part of the first tentative division value plus one as a second tentative division value;
starting the division from the grayscale minimum and using the second tentative division value as the division unit, dividing the range into the first preset number of grayscale groups, assigning each of the input grayscale values to its corresponding grayscale group, and selecting the grayscale group containing the most values as the grayscale comparison parameter group;
setting the grayscale range of the grayscale comparison parameter group as the grayscale comparison parameter.
Further, dividing the thyroid region into overall regions and point-like regions comprises: obtaining the comparison pixels in the two-dimensional coordinate system and setting each region of mutually connected comparison pixels as a tentative overall region;
obtaining the number of tentative overall regions in the two-dimensional coordinate system; when the number of tentative overall regions is less than or equal to a first distribution quantity threshold, setting the tentative overall regions as overall regions, and when the number of tentative overall regions is greater than the first distribution quantity threshold, setting the tentative overall regions as point-like regions.
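The tentative-overall-region step is essentially connected-component grouping followed by a count threshold. A minimal sketch, assuming 4-connectivity (the patent does not specify connectivity) and illustrative names:

```python
from collections import deque

def classify_regions(comparison, threshold=3):
    # comparison: set of (x, y) comparison pixels. Groups mutually
    # connected pixels (4-connectivity) into tentative overall regions,
    # then classifies them by count against the first distribution
    # quantity threshold.
    remaining = set(comparison)
    regions = []
    while remaining:
        seed = remaining.pop()
        queue, region = deque([seed]), {seed}
        while queue:
            x, y = queue.popleft()
            for n in ((x + 1, y), (x - 1, y), (x, y + 1), (x, y - 1)):
                if n in remaining:
                    remaining.remove(n)
                    region.add(n)
                    queue.append(n)
        regions.append(region)
    kind = "overall" if len(regions) <= threshold else "point-like"
    return kind, regions

print(classify_regions({(0, 0), (1, 0), (2, 0)})[0])          # overall
print(classify_regions({(0, 0), (3, 0), (6, 0), (9, 0)})[0])  # point-like
```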
Further, performing shape feature training on the overall regions of the training images to obtain shape comparison parameters comprises: setting a peripheral circle that frames the overall region, the peripheral circle being the smallest circle that can completely enclose the overall region;
reducing the radius of the peripheral circle by a first unit length each time to obtain an updated circle, the center of the updated circle coinciding with the center of the peripheral circle;
setting the part of the overall region inside each updated circle as an interior region to be divided, and setting the part of the overall region between each updated circle and the adjacent outer updated circle or peripheral circle as a cutting region;
setting each region of mutually connected comparison pixels in a cutting region as an independent cut region, counting the number of independent cut regions in each cutting region, and setting it as the edge divergence number; when the edge divergence number is less than or equal to a first independent quantity threshold, stopping the reduction of the radius of the updated circle or peripheral circle;
obtaining the maximum of the resulting edge divergence numbers and setting it as the overall divergence distribution number;
processing the overall divergence distribution numbers corresponding to the training images through a shape comparison extraction method to obtain overall comparison parameters, which include an overall divergence distribution number range.
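The circle-shrinking procedure above can be sketched as follows. This is an illustrative reading only: the enclosing circle's center and radius are assumed to be given (computing the minimum enclosing circle is omitted), annuli are taken between successive radii, and 4-connectivity is assumed for the independent cut regions:

```python
import math
from collections import deque

def count_pieces(pixels):
    # Number of 4-connected components among a set of (x, y) pixels.
    remaining, pieces = set(pixels), 0
    while remaining:
        pieces += 1
        queue = deque([remaining.pop()])
        while queue:
            x, y = queue.popleft()
            for n in ((x + 1, y), (x - 1, y), (x, y + 1), (x, y - 1)):
                if n in remaining:
                    remaining.remove(n)
                    queue.append(n)
    return pieces

def edge_divergence(comparison, center, radius, unit=1.0, stop=3):
    # Shrink the enclosing circle by `unit` per iteration; the comparison
    # pixels in each annulus form the cutting region, and its connected
    # pieces are the independent cut regions. Stops when the edge
    # divergence number drops to `stop` or below; returns the per-annulus
    # counts and their maximum (the overall divergence distribution number).
    cx, cy = center
    counts, r = [], radius
    while r > unit:
        inner = r - unit
        annulus = {p for p in comparison
                   if inner < math.hypot(p[0] - cx, p[1] - cy) <= r}
        counts.append(count_pieces(annulus))
        if counts[-1] <= stop:
            break
        r = inner
    return counts, (max(counts) if counts else 0)

# Toy region: a small central disk with four spikes at distance 3; the
# outermost annulus cuts the four spikes into four independent regions.
spikes = {(3, 0), (-3, 0), (0, 3), (0, -3)}
disk = {(0, 0), (1, 0), (-1, 0), (0, 1), (0, -1)}
print(edge_divergence(spikes | disk, (0.0, 0.0), 3.0))  # ([4, 0], 4)
```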
Further, performing shape feature training on the point-like regions of the training images to obtain shape comparison parameters comprises: obtaining the number of point-like regions in each training image and setting it as the point distribution number;
processing the point distribution numbers corresponding to the training images through the shape comparison extraction method to obtain point comparison parameters, which include a point distribution number range.
Further, the shape comparison extraction method comprises: obtaining the maximum and minimum of an input group of distribution quantities and setting them as the distribution quantity maximum and the distribution quantity minimum respectively, a group of distribution quantities being one of the group of overall divergence distribution numbers or the group of point distribution numbers corresponding to the training images;
subtracting the distribution quantity minimum from the distribution quantity maximum to obtain a distribution quantity difference, dividing the distribution quantity difference by a second preset number to obtain a first distribution quantity division value, and taking the integer part of the first distribution quantity division value plus one as a second distribution quantity division value;
starting the division from the distribution quantity minimum and using the second distribution quantity division value as the division unit, dividing the range into the second preset number of distribution quantity groups, assigning each of the input distribution quantities to its corresponding group, and selecting the group containing the most distribution quantities as the distribution quantity comparison parameter group;
setting the range of the distribution quantity comparison parameter group as the shape comparison parameter, the shape comparison parameters including the overall comparison parameters and the point comparison parameters.
Beneficial effects of the present invention: by acquiring a number of training images and marking the thyroid cancer region in the training images, the present invention enables precise region correspondence during feature training and improves the accuracy of data acquisition; by performing grayscale feature extraction on the thyroid region and grayscale feature training on the thyroid regions of the training images, grayscale comparison parameters are obtained, which establish the feature comparison framework within the image comparison model and improve the efficiency and effectiveness of preliminary feature comparison and recognition.
By dividing the thyroid region into overall regions and point-like regions and performing shape feature training on the overall regions and point-like regions of the training images, shape comparison parameters are obtained; through the shape comparison parameters, the candidate regions obtained by grayscale screening are further screened by feature comparison, thereby improving the effectiveness and accuracy of thyroid cancer image comparison and recognition. Finally, an image comparison model is established based on the grayscale comparison parameters and the shape comparison parameters, which improves the accuracy of image comparison in practical use.
Additional features and advantages of the application will be set forth in the description that follows, and in part will be apparent from the description or may be learned by practice of the application. The objectives and other advantages of the application may be realized and attained by the structure particularly pointed out in the written description, the claims, and the appended drawings.
Brief Description of the Drawings
Figure 1 is a flow chart of the steps of the method of the present invention;
Figure 2 is a schematic diagram of obtaining independent cut regions according to the present invention;
Figure 3 is a schematic diagram of a training image containing an overall region according to the present invention;
Figure 4 is a schematic diagram of a training image containing point-like regions according to the present invention.
Detailed Description
The technical solutions in the embodiments of the present invention will be described clearly and completely below with reference to the accompanying drawings. Obviously, the described embodiments are only some, rather than all, of the embodiments of the present invention. All other embodiments obtained by those of ordinary skill in the art based on the embodiments of the present invention without creative effort fall within the scope of protection of the present invention.
Embodiment 1. Referring to Figure 1, a method for establishing a thyroid cancer image comparison model based on a deep neural network extracts features from a number of thyroid cancer images and establishes an image comparison model based on the extracted features, which helps improve the feature comparison accuracy of image screening and solves the problem that existing thyroid cancer image recognition methods lack concrete feature extraction means and therefore cannot perform effective feature comparison and recognition.
Specifically, the method comprises the following steps. Step S1: acquire a number of training images and mark the thyroid cancer region in each training image, the training images including thyroid cancer regions. Step S1 further includes the following sub-steps. Step S101: divide the training image into pixels and establish a two-dimensional coordinate system based on the pixels; in a specific implementation, the image is divided at a width of 1280 pixels and a height of 720 pixels.
Step S102: mark the coordinates of the pixels of the thyroid cancer region in the two-dimensional coordinate system and set the pixels of the thyroid cancer region as comparison pixels.
Step S2: perform grayscale feature extraction on the thyroid region and perform grayscale feature training on the thyroid regions of the training images to obtain grayscale comparison parameters. Step S2 further includes the following sub-steps. Step S2011: set the pixels other than the comparison pixels in the two-dimensional coordinate system as peripheral pixels.
Step S2012: set the comparison pixels adjacent to peripheral pixels as comparison contour pixels, and set the peripheral pixels adjacent to comparison contour pixels as peripheral adjoining pixels.
Step S2013: calculate the average grayscale of the comparison contour pixels in the training image and set it as the comparison contour grayscale; calculate the average grayscale of the peripheral adjoining pixels in the training image and set it as the peripheral adjoining grayscale; calculate the absolute value of the difference between the comparison contour grayscale and the peripheral adjoining grayscale and set it as the grayscale comparison value.
Step S2014: perform grayscale feature training on the training images one by one through steps S2011 to S2013 to obtain a number of comparison contour grayscales, peripheral adjoining grayscales, and grayscale comparison values.
Step S2015: process the comparison contour grayscales, peripheral adjoining grayscales, and grayscale comparison values respectively through the comparison parameter extraction method to obtain the grayscale comparison parameters, which include the comparison contour grayscale range, the peripheral adjoining grayscale range, and the grayscale comparison value range. These ranges make it convenient, during actual image comparison, to delimit initial feature regions by the grayscale comparison parameters, which improves the efficiency of preliminary feature extraction, reducing the data processing load of the subsequent shape comparison while improving the accuracy of data comparison.
The comparison parameter extraction method includes the following steps. Step S2021: obtain the maximum and minimum of an input group of grayscale values and set them as the grayscale maximum and the grayscale minimum respectively; a group of grayscale values is one of the group of comparison contour grayscales, peripheral adjoining grayscales, or grayscale comparison values.
Step S2022: subtract the grayscale minimum from the grayscale maximum to obtain the grayscale difference, divide the grayscale difference by the first preset number to obtain the first tentative division value, and take the integer part of the first tentative division value plus one as the second tentative division value. The first preset number is set to 10; for example, when the grayscale difference is 55, the first tentative division value is 5.5 and the second tentative division value is 6.
Step S2023: starting from the grayscale minimum and using the second tentative division value as the division unit, divide the range into the first preset number of grayscale groups, assign each of the input grayscale values to its corresponding grayscale group, and select the grayscale group containing the most values as the grayscale comparison parameter group.
Step S2024: set the grayscale range of the grayscale comparison parameter group as the grayscale comparison parameter.
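Steps S2021 to S2024 can be sketched as below, matching the worked numbers in the text (a difference of 55 divided by 10 gives 5.5, whose integer part plus one is 6); the function name and the half-open-range return convention are illustrative assumptions:

```python
def extract_comparison_parameter(values, preset_count=10):
    # values: one input group of grayscale values (or, for the shape
    # comparison extraction method, distribution quantities).
    g_min, g_max = min(values), max(values)
    diff = g_max - g_min
    second_div = int(diff / preset_count) + 1  # integer part plus one
    # Since second_div * preset_count always exceeds diff, every value
    # falls into one of the preset_count groups.
    groups = [[] for _ in range(preset_count)]
    for v in values:
        groups[int((v - g_min) // second_div)].append(v)
    densest = max(range(preset_count), key=lambda i: len(groups[i]))
    low = g_min + densest * second_div
    return (low, low + second_div)  # grayscale range of the densest group

# Example: maximum 65, minimum 10, difference 55, division unit 6; the
# densest group covers [10, 16) and is returned as the comparison parameter.
print(extract_comparison_parameter([10, 12, 13, 30, 65]))  # (10, 16)
```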
Referring to Figures 2 to 4: in step S3, the thyroid region is divided into an overall region and point-like regions; in Figure 3 the region indicated by the gray arrow is the overall region, and in Figure 4 the region indicated by the white arrow is a point-like region. Shape-feature training is performed on the overall and point-like regions of the training images to obtain the shape comparison parameters. Step S3 further includes: step S3011, obtaining the comparison pixels in the two-dimensional coordinate system and setting each region of interconnected comparison pixels as a provisional overall region;
Step S3012: obtain the number of provisional overall regions in the two-dimensional coordinate system. When that number is less than or equal to the first distribution-count threshold, the provisional overall regions are set as the overall region; when it is greater, they are set as point-like regions. The first distribution-count threshold is set to 3: since the overall region is normally connected together, the value does not need to be large.
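The connected-component classification of steps S3011–S3012 can be sketched as below. The binary-grid input and 4-connectivity are assumptions; the patent does not fix the connectivity rule:

```python
from collections import deque

def classify_regions(grid, first_distribution_threshold=3):
    """Label connected components of comparison pixels (1s) in a binary grid.

    Per step S3012: components count <= threshold means "overall" region(s),
    otherwise the regions are treated as point-like. 4-connectivity assumed.
    """
    rows, cols = len(grid), len(grid[0])
    seen = [[False] * cols for _ in range(rows)]
    components = 0
    for r in range(rows):
        for c in range(cols):
            if grid[r][c] and not seen[r][c]:
                components += 1
                q = deque([(r, c)])          # breadth-first flood fill
                seen[r][c] = True
                while q:
                    y, x = q.popleft()
                    for dy, dx in ((1, 0), (-1, 0), (0, 1), (0, -1)):
                        ny, nx = y + dy, x + dx
                        if 0 <= ny < rows and 0 <= nx < cols \
                                and grid[ny][nx] and not seen[ny][nx]:
                            seen[ny][nx] = True
                            q.append((ny, nx))
    return "overall" if components <= first_distribution_threshold else "point"
```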
Step S3 further includes: step S3021, framing the overall region with a peripheral circle, the peripheral circle being the smallest circle that can completely enclose the overall region;
Step S3022: each iteration, reduce the radius of the peripheral circle by the first unit length to obtain an updated circle whose center coincides with the center of the peripheral circle. The first unit length is set according to the side length of a pixel; as shown in Figure 2, the radii of the updated circle and the peripheral circle differ by one pixel side length, so the first unit length is set to the side length of one pixel;
Step S3023: set the portion of the overall region inside each updated circle as the internal region still to be divided, and set the portion of the overall region between each updated circle and the adjacent outer updated circle (or the peripheral circle) as a cutting region;
Step S3024: set each region of interconnected comparison pixels within a cutting region as an independent cut region, count the independent cut regions of each cutting region, and record the count as the edge-divergence number. When the edge-divergence number is less than or equal to the first independence threshold, stop shrinking the updated or peripheral circle. The first independence threshold is set to 3, so shrinking stops once the edge-divergence number is 3 or fewer;
Step S3025: take the maximum of the edge-divergence numbers over all cutting regions and set it as the overall divergence-distribution count. A larger edge-divergence number means the edge of the overall region is more irregular, with more needle-like or otherwise shaped edge structures;
Step S3026: process the overall divergence-distribution counts of the training images with the shape-comparison extraction method to obtain the overall comparison parameter, which includes the range of the overall divergence-distribution count.
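A rough Python sketch of the shrinking-circle procedure of steps S3021–S3025. The one-pixel annulus width, the stop rule, and the maximum-over-annuli result follow the text; the circle centered on the pixel centroid with radius to the farthest pixel is only an approximation of the true minimum enclosing circle, and everything else here is illustrative:

```python
import math

def overall_divergence_count(pixels, stop_threshold=3):
    """Shrink an enclosing circle one pixel at a time and count connected
    fragments of the overall region in each one-pixel-wide annulus.

    `pixels` is a set of (x, y) integer coordinates. Returns the maximum
    fragment count (the overall divergence-distribution count, step S3025).
    """
    cx = sum(p[0] for p in pixels) / len(pixels)
    cy = sum(p[1] for p in pixels) / len(pixels)
    radius = max(math.hypot(x - cx, y - cy) for x, y in pixels)

    def components(subset):
        # Count 4-connected components among a set of pixel coordinates.
        seen, count = set(), 0
        for p in subset:
            if p in seen:
                continue
            count += 1
            stack = [p]
            seen.add(p)
            while stack:
                x, y = stack.pop()
                for n in ((x + 1, y), (x - 1, y), (x, y + 1), (x, y - 1)):
                    if n in subset and n not in seen:
                        seen.add(n)
                        stack.append(n)
        return count

    max_divergence, r = 0, radius
    while r > 1:
        # Cutting region: pixels between the updated circle and the previous one.
        annulus = {p for p in pixels
                   if r - 1 < math.hypot(p[0] - cx, p[1] - cy) <= r}
        n = components(annulus)               # edge-divergence number (step S3024)
        max_divergence = max(max_divergence, n)
        if n and n <= stop_threshold:         # stop once 3 or fewer fragments
            break
        r -= 1                                # first unit length: one pixel side
    return max_divergence
```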
Step S3 further includes: step S3031, obtaining the number of point-like regions in a training image and setting it as the point-distribution count; a larger point-distribution count indicates more calcification points in the training image;
Step S3032: process the point-distribution counts of the training images with the shape-comparison extraction method to obtain the point comparison parameter, which includes the range of the point-distribution count.
The shape-comparison extraction method includes: step S3041, obtaining the maximum and minimum of an input set of distribution counts and setting them as the distribution maximum and distribution minimum; the input set is either the overall divergence-distribution counts of the training images or their point-distribution counts;
Step S3042: subtract the distribution minimum from the distribution maximum to obtain the distribution difference; divide the difference by the second preset number to obtain the first distribution division value; take its integer part and add one to obtain the second distribution division value. The second preset number is set to 5; for example, when the distribution difference is 99, the first distribution division value is 19.8 and the second distribution division value is 20;
Step S3043: using the distribution minimum as the starting point and the second distribution division value as the division unit, divide the counts into the second preset number of distribution groups, assign each input count to its group, and select the group containing the most counts as the distribution comparison parameter group;
Step S3044: set the range of the distribution comparison parameter group as the shape comparison parameter; the shape comparison parameters comprise the overall comparison parameter and the point comparison parameter.
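The shape-comparison extraction method of steps S3041–S3044 is the same binning scheme as the grayscale case, with five groups instead of ten. A self-contained sketch (tie-break again an assumption), reproducing the worked example where a difference of 99 gives a division value of 20:

```python
def shape_comparison_parameter(distribution_counts, num_groups=5):
    """Bin per-image distribution counts (either overall divergence counts or
    point-region counts, steps S3041-S3044) and return the range of the most
    populated group as the shape comparison parameter."""
    d_min, d_max = min(distribution_counts), max(distribution_counts)
    # Second distribution division value: integer part of diff/5, plus one.
    width = (d_max - d_min) // num_groups + 1   # e.g. difference 99 -> 99//5 + 1 = 20
    counts = [0] * num_groups
    for d in distribution_counts:
        counts[min((d - d_min) // width, num_groups - 1)] += 1
    best = counts.index(max(counts))            # most populated distribution group
    lo = d_min + best * width
    return (lo, lo + width)
```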
Step S4: build the image comparison model from the grayscale and shape comparison parameters. In practice, an image to be identified is fed into the model, whose processing proceeds as follows. First, the input image is compared against the grayscale comparison parameters (the comparison-contour gray range, the peripheral-adjacent gray range, and the gray comparison-value range) to delimit the regions that preliminarily require identification, which are set as regions to be identified. If no region to be identified is extracted by the grayscale comparison, a no-identifying-feature signal is output, indicating that the input image contains no features similar to the model's parameters. The shape of each region to be identified is then compared against the overall comparison parameter and the point comparison parameter to determine its shape classification. If the region matches neither parameter during shape comparison, a no-identifying-feature-pending signal is output, indicating that the image may contain a risk area requiring further manual review.
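The decision flow of step S4 can be sketched as below. Feature extraction is assumed to have happened upstream; the dictionary keys, half-open range checks, and output labels are illustrative stand-ins for the signals the patent describes:

```python
def compare_image(region, gray_range, overall_range, point_range):
    """Step S4 decision flow (a sketch). `region` is None when the grayscale
    comparison extracted no candidate region; otherwise it carries the region's
    representative gray value and its divergence and point-distribution counts."""
    if region is None or not (gray_range[0] <= region["gray"] < gray_range[1]):
        return "no-identifying-feature"      # nothing matched the grayscale parameters
    in_overall = overall_range[0] <= region["divergence"] < overall_range[1]
    in_point = point_range[0] <= region["points"] < point_range[1]
    if not (in_overall or in_point):
        return "pending-manual-review"       # possible risk area: human check needed
    return "overall-match" if in_overall else "point-match"
```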
Embodiment 2. In a second aspect, the present application provides an electronic device comprising a processor and a memory, the memory storing computer-readable instructions which, when executed by the processor, perform the steps of any of the above methods. In this solution, the processor and the memory are interconnected and communicate through a communication bus and/or another connection mechanism, and the memory stores a computer program executable by the processor. When the electronic device runs, the processor executes the program, carrying out the method of any optional implementation of the above embodiment to achieve the following functions: first, acquire a number of training images and mark the thyroid cancer regions in them; then extract grayscale features from the thyroid region and perform grayscale-feature training on the thyroid regions of the training images to obtain the grayscale comparison parameters; next, divide the thyroid region into an overall region and point-like regions and perform shape-feature training on them to obtain the shape comparison parameters; finally, build the image comparison model from the grayscale and shape comparison parameters.
Embodiment 3. In a third aspect, the present application provides a storage medium storing a computer program which, when executed by a processor, performs the steps of any of the above methods, carrying out the method of any optional implementation of the above embodiment to achieve the following functions: first, acquire a number of training images and mark the thyroid cancer regions in them; then extract grayscale features from the thyroid region and perform grayscale-feature training on the thyroid regions of the training images to obtain the grayscale comparison parameters; next, divide the thyroid region into an overall region and point-like regions and perform shape-feature training on them to obtain the shape comparison parameters; finally, build the image comparison model from the grayscale and shape comparison parameters.
Those skilled in the art will appreciate that embodiments of the present invention may be provided as a method, a system, or a computer program product. Accordingly, the invention may take the form of an entirely hardware embodiment, an entirely software embodiment, or an embodiment combining software and hardware aspects. Furthermore, the invention may take the form of a computer program product embodied on one or more computer-usable storage media containing computer-usable program code. The storage medium may be implemented by any type of volatile or non-volatile storage device, or a combination thereof, such as static random-access memory (SRAM), electrically erasable programmable read-only memory (EEPROM), erasable programmable read-only memory (EPROM), programmable read-only memory (PROM), read-only memory (ROM), magnetic memory, flash memory, a magnetic disk, or an optical disc. These computer program instructions may also be stored in a computer-readable memory capable of directing a computer or other programmable data-processing device to operate in a particular manner, such that the instructions stored in that memory produce an article of manufacture including instruction means that implement the functions specified in one or more flows of the flowchart and/or one or more blocks of the block diagram.
In the embodiments provided in this application, it should be understood that the disclosed apparatus and methods may be implemented in other ways. The apparatus embodiments described above are merely illustrative; for example, the division into units is only a logical functional division, and other divisions are possible in actual implementation: multiple units or components may be combined or integrated into another system, and some features may be omitted or not executed. Furthermore, the mutual couplings, direct couplings, or communication connections shown or discussed may be indirect couplings or communication connections through interfaces, devices, or units, and may be electrical, mechanical, or in other forms.
Claims (8)
Priority Applications (1)
Application Number | Priority Date | Filing Date | Title |
---|---|---|---|
CN202410022905.2A CN117541580B (en) | 2024-01-08 | 2024-01-08 | Thyroid cancer image comparison model establishment method based on deep neural network |
Publications (2)
Publication Number | Publication Date |
---|---|
CN117541580A true CN117541580A (en) | 2024-02-09 |
CN117541580B CN117541580B (en) | 2024-03-19 |
Family
ID=89782644
Family Applications (1)
Application Number | Title | Priority Date | Filing Date |
---|---|---|---|
CN202410022905.2A Active CN117541580B (en) | 2024-01-08 | 2024-01-08 | Thyroid cancer image comparison model establishment method based on deep neural network |
Country Status (1)
Country | Link |
---|---|
CN (1) | CN117541580B (en) |
Citations (5)
Publication number | Priority date | Publication date | Assignee | Title |
---|---|---|---|---|
US20110181614A1 (en) * | 2010-01-25 | 2011-07-28 | King Jen Chang | Quantification method of the feature of a tumor and an imaging method of the same |
CN111598862A (en) * | 2020-05-13 | 2020-08-28 | 北京推想科技有限公司 | Breast molybdenum target image segmentation method, device, terminal and storage medium |
CN113034426A (en) * | 2019-12-25 | 2021-06-25 | 飞依诺科技(苏州)有限公司 | Ultrasonic image focus description method, device, computer equipment and storage medium |
CN116452464A (en) * | 2023-06-09 | 2023-07-18 | 天津市肿瘤医院(天津医科大学肿瘤医院) | A chest image enhancement processing method based on deep learning |
CN116485623A (en) * | 2023-06-21 | 2023-07-25 | 齐鲁工业大学(山东省科学院) | Multi-spectral image grayscale feature watermarking method based on fast and accurate sedenion moments |
Non-Patent Citations (2)
Title |
---|
ZULFANAHRI ET AL.: "Classification of Thyroid Ultrasound Images Based on Shape Features Analysis", The 2017 Biomedical Engineering International Conference, 31 December 2017 (2017-12-31) * |
ZHAO Lingkun et al.: "Research progress on methods for tracing and locating cancer of unknown primary", Chinese Journal of Clinical Oncology, 31 December 2023 (2023-12-31) * |
Also Published As
Publication number | Publication date |
---|---|
CN117541580B (en) | 2024-03-19 |
Similar Documents
Publication | Publication Date | Title |
---|---|---|
Saeedi et al. | Automatic identification of human blastocyst components via texture | |
CN109086711B (en) | Face feature analysis method and device, computer equipment and storage medium | |
CN107679466B (en) | Information output method and device | |
CN108510499B (en) | A kind of image threshold segmentation method and device based on fuzzy set and Otsu | |
CN112907576B (en) | Vehicle damage grade detection method and device, computer equipment and storage medium | |
CN107481252A (en) | Dividing method, device, medium and the electronic equipment of medical image | |
CN109614900B (en) | Image detection method and device | |
CN110969046B (en) | Face recognition method, face recognition device and computer-readable storage medium | |
CN111738351A (en) | Model training method and device, storage medium and electronic equipment | |
CN112102230B (en) | Ultrasonic section identification method, system, computer device and storage medium | |
CN116403094B (en) | Embedded image recognition method and system | |
US11615515B2 (en) | Superpixel merging | |
CN113706564A (en) | Meibomian gland segmentation network training method and device based on multiple supervision modes | |
CN111932552B (en) | Aorta modeling method and device | |
CN110889437B (en) | Image processing method and device, electronic equipment and storage medium | |
CN112241952A (en) | Method and device for recognizing brain central line, computer equipment and storage medium | |
CN117576131A (en) | Weakly supervised cell nucleus segmentation method and device based on edge optimization and feature denoising | |
CN117635615A (en) | Defect detection method and system for realizing punching die based on deep learning | |
CN115393351B (en) | Method and device for judging cornea immune state based on Langerhans cells | |
CN114757908B (en) | Image processing method, device, equipment and storage medium based on CT image | |
CN117541580B (en) | Thyroid cancer image comparison model establishment method based on deep neural network | |
CN115860067B (en) | Method, device, computer equipment and storage medium for generating countermeasure network training | |
CN110288604B (en) | Image segmentation method and device based on K-means | |
CN112907503A (en) | Penaeus vannamei Boone quality detection method based on adaptive convolutional neural network | |
CN114757953B (en) | Medical ultrasonic image recognition method, equipment and storage medium |
Legal Events
Date | Code | Title | Description |
---|---|---|---|
PB01 | Publication | ||
SE01 | Entry into force of request for substantive examination | ||
GR01 | Patent grant | ||