WO2021227295A1 - CNN-based multi-scale zoom positioning and detection method for cancer cells - Google Patents

CNN-based multi-scale zoom positioning and detection method for cancer cells

Info

Publication number
WO2021227295A1
WO2021227295A1 (PCT/CN2020/110812)
Authority
WO
WIPO (PCT)
Prior art keywords
cancer cells
convolution
cnn
image
data set
Prior art date
Application number
PCT/CN2020/110812
Other languages
English (en)
French (fr)
Inventor
黄敏
肖仲喆
吴振宁
江均均
Original Assignee
苏州大学
Priority date (The priority date is an assumption and is not a legal conclusion. Google has not performed a legal analysis and makes no representation as to the accuracy of the date listed.)
Filing date
Publication date
Application filed by 苏州大学
Publication of WO2021227295A1 publication Critical patent/WO2021227295A1/zh

Classifications

    • GPHYSICS
    • G06COMPUTING; CALCULATING OR COUNTING
    • G06TIMAGE DATA PROCESSING OR GENERATION, IN GENERAL
    • G06T7/00Image analysis
    • G06T7/70Determining position or orientation of objects or cameras
    • GPHYSICS
    • G06COMPUTING; CALCULATING OR COUNTING
    • G06FELECTRIC DIGITAL DATA PROCESSING
    • G06F18/00Pattern recognition
    • G06F18/20Analysing
    • G06F18/21Design or setup of recognition systems or techniques; Extraction of features in feature space; Blind source separation
    • G06F18/214Generating training patterns; Bootstrap methods, e.g. bagging or boosting
    • GPHYSICS
    • G06COMPUTING; CALCULATING OR COUNTING
    • G06NCOMPUTING ARRANGEMENTS BASED ON SPECIFIC COMPUTATIONAL MODELS
    • G06N3/00Computing arrangements based on biological models
    • G06N3/02Neural networks
    • G06N3/04Architecture, e.g. interconnection topology
    • G06N3/045Combinations of networks
    • GPHYSICS
    • G06COMPUTING; CALCULATING OR COUNTING
    • G06NCOMPUTING ARRANGEMENTS BASED ON SPECIFIC COMPUTATIONAL MODELS
    • G06N3/00Computing arrangements based on biological models
    • G06N3/02Neural networks
    • G06N3/08Learning methods
    • GPHYSICS
    • G06COMPUTING; CALCULATING OR COUNTING
    • G06TIMAGE DATA PROCESSING OR GENERATION, IN GENERAL
    • G06T3/00Geometric image transformations in the plane of the image
    • G06T3/40Scaling of whole images or parts thereof, e.g. expanding or contracting
    • G06T3/4046Scaling of whole images or parts thereof, e.g. expanding or contracting using neural networks
    • GPHYSICS
    • G06COMPUTING; CALCULATING OR COUNTING
    • G06TIMAGE DATA PROCESSING OR GENERATION, IN GENERAL
    • G06T7/00Image analysis
    • G06T7/0002Inspection of images, e.g. flaw detection
    • G06T7/0012Biomedical image inspection
    • GPHYSICS
    • G06COMPUTING; CALCULATING OR COUNTING
    • G06TIMAGE DATA PROCESSING OR GENERATION, IN GENERAL
    • G06T2207/00Indexing scheme for image analysis or image enhancement
    • G06T2207/30Subject of image; Context of image processing
    • G06T2207/30004Biomedical image processing
    • G06T2207/30024Cell structures in vitro; Tissue sections in vitro
    • YGENERAL TAGGING OF NEW TECHNOLOGICAL DEVELOPMENTS; GENERAL TAGGING OF CROSS-SECTIONAL TECHNOLOGIES SPANNING OVER SEVERAL SECTIONS OF THE IPC; TECHNICAL SUBJECTS COVERED BY FORMER USPC CROSS-REFERENCE ART COLLECTIONS [XRACs] AND DIGESTS
    • Y02TECHNOLOGIES OR APPLICATIONS FOR MITIGATION OR ADAPTATION AGAINST CLIMATE CHANGE
    • Y02ATECHNOLOGIES FOR ADAPTATION TO CLIMATE CHANGE
    • Y02A90/00Technologies having an indirect contribution to adaptation to climate change
    • Y02A90/10Information and communication technologies [ICT] supporting adaptation to climate change, e.g. for weather forecasting or climate simulation

Definitions

  • The invention relates to the technical field of cell detection, and more particularly to a CNN-based multi-scale zoom positioning and detection method for cancer cells.
  • As an important means of cancer prevention and control, cancer cell detection technology has many applications in both cancer prevention and cancer treatment.
  • Current cancer cell image detection relies mainly on classical image processing methods and deep neural networks for judgment, and has achieved good results.
  • A variety of detection methods have appeared, such as threshold segmentation, gray-level co-occurrence matrices, K-means clustering, and convolutional neural networks, but these methods all suffer from complex operation, low accuracy and frequent misjudgment, low efficiency, high cost, and an inability to accurately locate cancer cells.
  • In view of this, the present invention provides a CNN-based multi-scale zoom positioning and detection method for cancer cells that is convenient to operate and can accurately locate cancer cells.
  • To achieve this, the present invention provides the following technical solution; the method includes the following steps:
  • Step 1: Acquire a cancer cell image that meets the requirements through the sampling needle and zoom it several times at a fixed ratio, scaling the images of adherent cells down to the size of a normal single cell so as to suit the convolution kernel and ensure that the kernel's convolution window can effectively cover the entire adherent-cell region; four images of different scales are obtained at this point;
  • Step 2: Build a data set from manually labelled cancer cell images. The data set label is "is it a cancer cell", with "yes" recorded as "True" and "no" as "False"; the images in the data set are unified to the size of the kernel's convolution window;
  • Step 3: Perform a sliding convolution over the obtained images of different scales using the trained convolutional neural network.
  • During training, the data set obtained in Step 2 is added to the training process and expanded by rotation, flipping, mirroring and similar operations; the expanded data set is divided in a fixed proportion into a "training set" and a "test set". The training-set data undergoes multiple training iterations so that the network parameters are continually updated; after every fixed number of training cycles, the network's judgment accuracy is checked on the test set until training is complete. After each check, the model parameters are saved in the ".ckpt" file format;
  • Step 4: When performing the sliding convolution, reload the model file saved at the specified path and carry out the convolution computation to obtain the two-dimensional probability matrix corresponding to each image scale;
  • Step 5: Using the information in the two-dimensional probability matrix, set a threshold and verify the coordinate points that exceed it, recovering the location of each region from the probability matrix. The window size and stride with which the convolution traversed the image define a mapping; with this mapping, the specific position of each region in the image is computed from the coordinates of the points of the two-dimensional matrix, achieving accurate localization of the cancer cells;
  • Step 6: Finally, the location of the cancer cells can be returned directly through the network and quickly marked.
  • Preferably, the size of the convolution window is 40×40.
  • Preferably, the division ratio of the data set is 0.2 or 0.3.
  • Preferably, the threshold is set between 0.7 and 0.8.
  • Compared with the prior art, the present disclosure provides a CNN-based multi-scale zoom positioning and detection method for cancer cells.
  • Multiple images are obtained through multi-scale zooming, which avoids missed detections caused by overly large areas of adherent cancer cells when judging cancer cells and improves detection accuracy.
  • The two-dimensional matrix generated after CNN processing both reflects the probability that each region contains cancer cells and allows the location of the cancer cells to be inferred directly through the network, so the present invention is convenient to operate, accurate in localization, and efficient to run.
  • Figure 1 is a schematic diagram of the multi-scale zooming of the present invention.
  • Figure 2 is a schematic diagram of the correspondence between the original image and the two-dimensional matrix coordinates of the present invention.
  • Figure 3 is a schematic diagram of the overall design flow of the present invention.
  • Referring to Figures 1-4, the present invention discloses a CNN-based multi-scale zoom positioning and detection method for cancer cells.
  • The method includes the following steps:
  • Step 1: Acquire a cancer cell image that meets the requirements through the sampling needle and zoom it several times at a fixed ratio, scaling the images of adherent cells down to the size of a normal single cell so as to suit the convolution kernel and ensure that the kernel's convolution window can effectively cover the entire adherent-cell region; four images of different scales are obtained at this point;
  • Step 2: Build a data set from manually labelled cancer cell images. The data set label is "is it a cancer cell", with "yes" recorded as "True" and "no" as "False"; the images in the data set are unified to the size of the kernel's convolution window;
  • Step 3: Perform a sliding convolution over the obtained images of different scales using the trained convolutional neural network.
  • During training, the data set obtained in Step 2 is added to the training process and expanded by rotation, flipping, mirroring and similar operations; the expanded data set is divided in a fixed proportion into a "training set" and a "test set". The training-set data undergoes multiple training iterations so that the network parameters are continually updated; after every fixed number of training cycles, the network's judgment accuracy is checked on the test set until training is complete. After each check, the model parameters are saved in the ".ckpt" file format;
  • Step 4: When performing the sliding convolution, reload the model file saved at the specified path and carry out the convolution computation to obtain the two-dimensional probability matrix corresponding to each image scale;
  • Step 5: Using the information in the two-dimensional probability matrix, set a threshold and verify the coordinate points that exceed it, recovering the location of each region from the probability matrix. The window size and stride with which the convolution traversed the image define a mapping; with this mapping, the specific position of each region in the image is computed from the coordinates of the points of the two-dimensional matrix, achieving accurate localization of the cancer cells;
  • Step 6: Finally, the location of the cancer cells can be returned directly through the network and quickly marked.
  • The size of the convolution window is 40×40.
  • The division ratio of the data set is 0.2 or 0.3.
  • The threshold is set between 0.7 and 0.8. If the threshold is below 0.7, there are too many candidate regions and misjudgments occur, lowering detection efficiency; if it is above 0.8, not all target regions can be selected effectively and detections are missed.
  • During matrix generation, each convolution window corresponds to one point of the two-dimensional matrix.
  • The convolution result of each window represents the probability that the region inside the window is a cancer cell, expressed as the value of the corresponding point of the two-dimensional matrix.
  • At the same time, through the convolution mapping, the coordinates of each point of the matrix indicate the window's position in the image; the two-dimensional probability matrix therefore encodes both the probability that the corresponding region is a cancer cell and its location information.
  • The correspondence between the two-dimensional matrix and the original image is shown in Figure 3.
  • The sampling needle acquires a cancer cell image that meets the requirements, and the image is first zoomed several times at a fixed ratio.
  • The multi-scale zooming effect is shown in Figure 1. This step scales adherent cells down to the size of a normal single cell so as to suit the convolution kernel and ensure that the kernel can cover the entire adherent-cell region, which solves the problem of segmenting adherent cells, avoids missed detections caused by overly large adhesion areas, and improves the running efficiency of the algorithm.
  • Cancer cell image zooming: the original image is zoomed three times at a ratio of 0.707, giving four images in total including the original.
  • The number of zooms depends on the size of the original image and of the adherent cancer cells; it must ensure that the 40×40 convolution window can effectively cover the adhesion region on the zoomed images;
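The zoom schedule above (three zooms at a ratio of 0.707, so the image area roughly halves each time) can be sketched as follows. The 0.707 ratio and the pyramid of four images come from the description; the nearest-neighbour `resize_nearest` helper is only a stand-in for a real resize routine (e.g. a library call), and the image size is illustrative:

```python
import numpy as np

def pyramid_scales(num_zooms=3, ratio=0.707):
    """Scale factor of each image in the pyramid.

    Index 0 is the original image; each further image is shrunk by
    `ratio` relative to the previous one (0.707 ≈ 1/sqrt(2), so every
    zoom halves the image area).
    """
    return [ratio ** i for i in range(num_zooms + 1)]

def resize_nearest(img, scale):
    """Minimal nearest-neighbour resize (stand-in for a library call)."""
    h, w = img.shape[:2]
    nh, nw = max(1, round(h * scale)), max(1, round(w * scale))
    rows = (np.arange(nh) / scale).astype(int).clip(0, h - 1)
    cols = (np.arange(nw) / scale).astype(int).clip(0, w - 1)
    return img[rows][:, cols]

def build_pyramid(img, num_zooms=3, ratio=0.707):
    """The original image plus `num_zooms` progressively smaller copies."""
    return [resize_nearest(img, s) for s in pyramid_scales(num_zooms, ratio)]

# Illustrative 400x400 input; a real image would come from the sampling needle.
img = np.zeros((400, 400), dtype=np.uint8)
pyr = build_pyramid(img)  # 4 images in total
```

Since 0.707 ≈ 1/√2, three zooms shrink each side to about 0.35 of the original, so a fixed 40×40 window on the smallest image covers roughly a 113×113 region of the original, which is how an adherent clump larger than one cell can still fit inside the window.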
  • The data set label is whether the patch is a cancer cell: True if so, False if not.
  • The images in the data set are unified to the size of the convolution window (40×40).
  • The data set is expanded and divided into training and test sets at a ratio of 0.2, and 1000 training iterations are performed.
  • Network performance is tested on the test set every 50 iterations, and after each test the updated model parameters are saved as a .ckpt file under the specified path; once training is complete, a mature convolutional neural network for cancer cell judgment can be generated;
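The data-set handling just described can be sketched as follows. The augmentations (rotation, flipping, mirroring), the 0.2 split ratio, the 1000 iterations and the test-every-50 schedule are taken from the description; the patch data and helper names are illustrative, and the actual network and its `.ckpt` saving are not reproduced here:

```python
import numpy as np

def augment(images):
    """Expand the data set by rotation, flipping and mirroring
    (the three augmentations named in the description)."""
    out = []
    for img in images:
        out += [img, np.rot90(img), np.flipud(img), np.fliplr(img)]
    return out

def split(samples, test_ratio=0.2, seed=0):
    """Shuffle and divide into training and test sets; the 0.2 ratio
    follows the embodiment (0.3 is given as an alternative)."""
    rng = np.random.default_rng(seed)
    idx = rng.permutation(len(samples))
    n_test = int(len(samples) * test_ratio)
    test = [samples[i] for i in idx[:n_test]]
    train = [samples[i] for i in idx[n_test:]]
    return train, test

# Hypothetical 40x40 patches standing in for labelled cell images.
patches = [np.zeros((40, 40)) for _ in range(100)]
train, test = split(augment(patches), test_ratio=0.2)

# Training schedule: 1000 iterations, evaluating on the test set
# (and checkpointing to a .ckpt file) every 50 iterations.
eval_points = [i for i in range(1, 1001) if i % 50 == 0]
```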
  • The correspondence between each two-dimensional matrix and the coordinates in the image is: if a matrix point has coordinates (x, y), the top-left corner of the corresponding cancer cell region in the image is (2x, 2y) and the bottom-right corner is (2x+40, 2y+40);
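The mapping just stated corresponds to a 40×40 window slid with a stride of 2. As a minimal sketch (the function name is illustrative):

```python
def window_in_image(x, y, stride=2, window=40):
    """Map a point (x, y) of the 2-D probability matrix back to the
    image region it was computed from: top-left (stride*x, stride*y),
    bottom-right (stride*x + window, stride*y + window)."""
    top_left = (stride * x, stride * y)
    bottom_right = (stride * x + window, stride * y + window)
    return top_left, bottom_right

# Matrix point (10, 25) corresponds to the 40x40 image region
# spanning (20, 50) to (60, 90).
corners = window_in_image(10, 25)
```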


Abstract

A CNN-based multi-scale zoom positioning and detection method for cancer cells. Multi-scale zoom ratios are used, convolution is computed through a trained convolutional neural network, and the results are mapped into corresponding two-dimensional matrices. Based on the information in the two-dimensional matrices, detection is performed under a marked threshold, achieving accurate localization of cancer cells. Obtaining multiple images through multi-scale zooming avoids missed detections caused by overly large areas of adherent cancer cells and improves detection accuracy; the two-dimensional matrix generated after CNN processing both reflects the probability that each region contains cancer cells and allows the position of the cancer cells to be inferred directly through the network, making the method convenient to operate.

Description

CNN-based multi-scale zoom positioning and detection method for cancer cells — Technical Field
The present invention relates to the technical field of cell detection, and more particularly to a CNN-based multi-scale zoom positioning and detection method for cancer cells.
Background Art
As an important means of cancer prevention and control, cancer cell detection technology has many applications in both cancer prevention and cancer treatment. Current cancer cell image detection relies mainly on classical image processing methods and deep neural networks for judgment, and has achieved good results. A variety of detection methods have appeared, such as threshold segmentation, gray-level co-occurrence matrices, K-means clustering, and convolutional neural networks, but these methods all suffer from complex operation, low accuracy and frequent misjudgment, low efficiency, high cost, and an inability to accurately locate cancer cells.
Therefore, providing a cancer cell positioning and detection method that is convenient to operate and can accurately locate cancer cells is a problem that those skilled in the art urgently need to solve.
Summary of the Invention
In view of this, the present invention provides a CNN-based multi-scale zoom positioning and detection method for cancer cells that is convenient to operate and can accurately locate cancer cells.
To achieve the above objective, the present invention provides the following technical solution; the method includes the following steps:
Step 1: Acquire a cancer cell image that meets the requirements through the sampling needle and zoom it several times at a fixed ratio, scaling the images of adherent cells down to the size of a normal single cell so as to suit the convolution kernel and ensure that the kernel's convolution window can effectively cover the entire adherent-cell region; four images of different scales are obtained at this point;
Step 2: Build a data set from manually labelled cancer cell images. The data set label is "is it a cancer cell", with "yes" recorded as "True" and "no" as "False"; the images in the data set are unified to the size of the kernel's convolution window;
Step 3: Perform a sliding convolution over the obtained images of different scales using the trained convolutional neural network. During training, the data set obtained in Step 2 is added to the training process and expanded by rotation, flipping, mirroring and similar operations; the expanded data set is divided in a fixed proportion into a "training set" and a "test set". The training-set data undergoes multiple training iterations so that the network parameters are continually updated; after every fixed number of training cycles, the network's judgment accuracy is checked on the test set until training is complete. After each check, the model parameters are saved in the ".ckpt" file format;
Step 4: When performing the sliding convolution, reload the model file saved at the specified path and carry out the convolution computation to obtain the two-dimensional probability matrix corresponding to each image scale;
Step 5: Using the information in the two-dimensional probability matrix, set a threshold and verify the coordinate points that exceed it, recovering the location of each region from the probability matrix. The window size and stride with which the convolution traversed the image define a mapping; with this mapping, the specific position of each region in the image is computed from the coordinates of the points of the two-dimensional matrix, achieving accurate localization of the cancer cells;
Step 6: Finally, the location of the cancer cells can be returned directly through the network and quickly marked.
Preferably, in the above CNN-based multi-scale zoom positioning and detection method for cancer cells, the size of the convolution window is 40×40.
Preferably, in the above method, the division ratio of the data set is 0.2 or 0.3.
Preferably, in the above method, the threshold is set between 0.7 and 0.8.
It can be seen from the above technical solution that, compared with the prior art, the present disclosure provides a CNN-based multi-scale zoom positioning and detection method for cancer cells. Multiple images are obtained through multi-scale zooming, which avoids missed detections caused by overly large areas of adherent cancer cells when judging cancer cells and improves detection accuracy. The two-dimensional matrix generated after CNN processing both reflects the probability that each region contains cancer cells and allows the location of the cancer cells to be inferred directly through the network, so the present invention is convenient to operate, accurate in localization, and efficient to run.
Brief Description of the Drawings
To explain the embodiments of the present invention or the technical solutions of the prior art more clearly, the drawings needed in their description are briefly introduced below. Obviously, the drawings described below are merely embodiments of the present invention, and those of ordinary skill in the art can obtain other drawings from them without creative effort.
Figure 1 is a schematic diagram of the multi-scale zooming of the present invention.
[Corrected under Rule 91, 03.02.2021]
Figure 2 is a schematic diagram of the correspondence between the original image and the two-dimensional matrix coordinates of the present invention.
[Corrected under Rule 91, 03.02.2021]
Figure 3 is a schematic diagram of the overall design flow of the present invention.
Detailed Description
The technical solutions in the embodiments of the present invention will be described clearly and completely below with reference to the drawings. Obviously, the described embodiments are only some, not all, of the embodiments of the present invention. All other embodiments obtained by those of ordinary skill in the art from the embodiments of the present invention without creative effort fall within the scope of protection of the present invention.
Referring to Figures 1-4, the present invention discloses a CNN-based multi-scale zoom positioning and detection method for cancer cells; the method includes the following steps:
Step 1: Acquire a cancer cell image that meets the requirements through the sampling needle and zoom it several times at a fixed ratio, scaling the images of adherent cells down to the size of a normal single cell so as to suit the convolution kernel and ensure that the kernel's convolution window can effectively cover the entire adherent-cell region; four images of different scales are obtained at this point;
Step 2: Build a data set from manually labelled cancer cell images. The data set label is "is it a cancer cell", with "yes" recorded as "True" and "no" as "False"; the images in the data set are unified to the size of the kernel's convolution window;
Step 3: Perform a sliding convolution over the obtained images of different scales using the trained convolutional neural network. During training, the data set obtained in Step 2 is added to the training process and expanded by rotation, flipping, mirroring and similar operations; the expanded data set is divided in a fixed proportion into a "training set" and a "test set". The training-set data undergoes multiple training iterations so that the network parameters are continually updated; after every fixed number of training cycles, the network's judgment accuracy is checked on the test set until training is complete. After each check, the model parameters are saved in the ".ckpt" file format;
Step 4: When performing the sliding convolution, reload the model file saved at the specified path and carry out the convolution computation to obtain the two-dimensional probability matrix corresponding to each image scale;
Step 5: Using the information in the two-dimensional probability matrix, set a threshold and verify the coordinate points that exceed it, recovering the location of each region from the probability matrix. The window size and stride with which the convolution traversed the image define a mapping; with this mapping, the specific position of each region in the image is computed from the coordinates of the points of the two-dimensional matrix, achieving accurate localization of the cancer cells;
Step 6: Finally, the location of the cancer cells can be returned directly through the network and quickly marked.
To further optimize the above technical solution, the size of the convolution window is 40×40.
To further optimize the above technical solution, the division ratio of the data set is 0.2 or 0.3.
To further optimize the above technical solution, the threshold is set between 0.7 and 0.8. If the threshold is below 0.7, there are too many candidate regions and misjudgments occur, lowering detection efficiency; if it is above 0.8, not all target regions can be selected effectively and detections are missed.
To further optimize the above technical solution, when performing the sliding convolution, the model file saved under the specified path is reloaded for the convolution computation, yielding the two-dimensional probability matrix of each image scale. During matrix generation, each convolution window corresponds to one point of the two-dimensional matrix, and the convolution result of each window represents the probability that the region inside the window is a cancer cell, expressed as the value of that point. At the same time, through the convolution mapping, the coordinates of each point of the matrix indicate the window's position in the image; the two-dimensional probability matrix therefore encodes both the probability that the corresponding region is a cancer cell and its location information. The correspondence between the two-dimensional matrix and the original image is shown in Figure 3.
To further optimize the above technical solution, the sampling needle acquires a cancer cell image that meets the requirements, and the image is first zoomed several times at a fixed ratio. The multi-scale zooming effect is shown in Figure 1. This step scales adherent cells down to the size of a normal single cell so as to suit the convolution kernel and ensure that the kernel can cover the entire adherent-cell region, which solves the problem of segmenting adherent cells, avoids missed detections caused by overly large adhesion areas, and improves the running efficiency of the algorithm.
A specific embodiment is as follows:
1. First, obtain the cancer cell image collected by the provided sampling needle as the required test sample;
2. Cancer cell image zooming: zoom the original image three times at a ratio of 0.707, obtaining four images in total including the original. The number of zooms depends on the size of the original image and of the adherent cancer cells, and must ensure that the 40×40 convolution window can effectively cover the adhesion region on the zoomed images;
3. Build a data set from known manually labelled cancer cell images. The label is whether the patch is a cancer cell: True if so, False if not; the images are unified to the convolution window size (40×40). Expand the data set, divide it into training and test sets at a ratio of 0.2, and perform 1000 training iterations, testing network performance on the test set every 50 iterations. After each test, the updated model parameters are saved as a .ckpt file under the specified path; once training is complete, a mature convolutional neural network for cancer cell judgment can be generated;
4. Load the model parameters from the .ckpt file and slide the trained 40×40 convolution window over each of the four images with a stride of 2. Each slide turns the window into one point of a two-dimensional matrix, until the whole image has been traversed; one two-dimensional probability matrix is obtained for each image;
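The sliding-window pass of step 4 can be sketched as follows, with a placeholder scoring function standing in for the trained 40×40 CNN (loading the real .ckpt model is not reproduced, and `probability_matrix` is an illustrative name):

```python
import numpy as np

def probability_matrix(img, classify, window=40, stride=2):
    """Slide a window x window frame over `img` in steps of `stride`.

    `classify` maps each patch to a cancer-cell probability; every
    window position becomes one point of the resulting 2-D matrix.
    """
    h, w = img.shape
    rows = (h - window) // stride + 1
    cols = (w - window) // stride + 1
    out = np.empty((rows, cols))
    for i in range(rows):
        for j in range(cols):
            patch = img[i * stride:i * stride + window,
                        j * stride:j * stride + window]
            out[i, j] = classify(patch)
    return out

# Placeholder scorer (mean intensity) in place of the trained CNN.
img = np.zeros((100, 100))
probs = probability_matrix(img, classify=lambda p: p.mean())
# (100 - 40) // 2 + 1 = 31 window positions per axis -> a 31x31 matrix
```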
5. The correspondence between each two-dimensional matrix and the coordinates in the image is: if a matrix point has coordinates (x, y), the top-left corner of the corresponding cancer cell region in the image is (2x, 2y) and the bottom-right corner is (2x+40, 2y+40);
6. Since the other three images were obtained by zooming, the coordinates obtained on them must additionally be divided by the zoom ratio to recover the coordinates in the original image;
7. After the convolutional network processing, set the probability threshold to 0.7; for every point of the two-dimensional matrix whose probability exceeds 0.7, use the mapping above to recover the position of the corresponding region in the original image;
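Steps 6 and 7, keeping the matrix points above the 0.7 threshold and undoing the zoom, can be combined into one sketch. The box arithmetic follows the stride-2/40×40 mapping of step 5; the function name and example values are illustrative:

```python
import numpy as np

def candidate_regions(prob_matrix, scale, threshold=0.7, stride=2, window=40):
    """Keep matrix points whose probability exceeds `threshold`
    (0.7-0.8 in the description) and map each back to original-image
    coordinates; for a zoomed image the window coordinates are divided
    by its scale factor to undo the zoom."""
    regions = []
    ys, xs = np.nonzero(prob_matrix > threshold)
    for y, x in zip(ys, xs):
        left, top = stride * x / scale, stride * y / scale
        right, bottom = (stride * x + window) / scale, (stride * y + window) / scale
        regions.append((left, top, right, bottom))
    return regions

m = np.zeros((5, 5))
m[2, 3] = 0.9  # one confident detection at matrix point (x=3, y=2)
boxes = candidate_regions(m, scale=0.5)
# (x=3, y=2) -> window (6, 4)-(46, 44) on the zoomed image,
# divided by scale 0.5 -> (12, 8)-(92, 88) on the original
```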
8. Aggregate all the position information; the network can return it directly, and bounding boxes are finally used to achieve accurate localization of the cancer cells in the image.
The embodiments in this specification are described in a progressive manner; each embodiment focuses on its differences from the others, and the same or similar parts of the embodiments can be referred to one another. Since the apparatus disclosed in an embodiment corresponds to the method disclosed in that embodiment, its description is relatively brief; refer to the method description for the relevant parts.
The above description of the disclosed embodiments enables those skilled in the art to implement or use the present invention. Various modifications to these embodiments will be obvious to those skilled in the art, and the general principles defined herein may be implemented in other embodiments without departing from the spirit or scope of the present invention. Therefore, the present invention is not limited to the embodiments shown herein but is to be accorded the widest scope consistent with the principles and novel features disclosed herein.

Claims (4)

  1. A CNN-based multi-scale zoom positioning and detection method for cancer cells, characterized in that the method comprises the following steps:
    Step 1: Acquire a cancer cell image that meets the requirements through the sampling needle and zoom it several times at a fixed ratio, scaling the images of adherent cells down to the size of a normal single cell so as to suit the convolution kernel and ensure that the kernel's convolution window can effectively cover the entire adherent-cell region; four images of different scales are obtained at this point;
    Step 2: Build a data set from manually labelled cancer cell images. The data set label is "is it a cancer cell", with "yes" recorded as "True" and "no" as "False"; the images in the data set are unified to the size of the kernel's convolution window;
    Step 3: Perform a sliding convolution over the obtained images of different scales using the trained convolutional neural network. During training, the data set obtained in Step 2 is added to the training process and expanded by rotation, flipping, mirroring and similar operations; the expanded data set is divided in a fixed proportion into a "training set" and a "test set". The training-set data undergoes multiple training iterations so that the network parameters are continually updated; after every fixed number of training cycles, the network's judgment accuracy is checked on the test set until training is complete. After each check, the model parameters are saved in the ".ckpt" file format;
    Step 4: When performing the sliding convolution, reload the model file saved at the specified path and carry out the convolution computation to obtain the two-dimensional probability matrix corresponding to each image scale;
    Step 5: Using the information in the two-dimensional probability matrix, set a threshold and verify the coordinate points that exceed it, recovering the location of each region from the probability matrix. The window size and stride with which the convolution traversed the image define a mapping; with this mapping, the specific position of each region in the image is computed from the coordinates of the points of the two-dimensional matrix, achieving accurate localization of the cancer cells;
    Step 6: Finally, the location of the cancer cells can be returned directly through the network and quickly marked.
  2. The CNN-based multi-scale zoom positioning and detection method for cancer cells according to claim 1, characterized in that the size of the convolution window is 40×40.
  3. The CNN-based multi-scale zoom positioning and detection method for cancer cells according to claim 1, characterized in that the division ratio of the data set is 0.2 or 0.3.
  4. The CNN-based multi-scale zoom positioning and detection method for cancer cells according to claim 1, characterized in that the threshold is set between 0.7 and 0.8.
PCT/CN2020/110812 2020-05-11 2020-08-24 CNN-based multi-scale zoom positioning and detection method for cancer cells WO2021227295A1 (zh)

Applications Claiming Priority (2)

Application Number Priority Date Filing Date Title
CN202010390335.4A CN111652927B (zh) 2020-05-11 2020-05-11 CNN-based multi-scale zoom positioning and detection method for cancer cells
CN202010390335.4 2020-05-11

Publications (1)

Publication Number Publication Date
WO2021227295A1 true WO2021227295A1 (zh) 2021-11-18

Family

ID=72347839

Family Applications (1)

Application Number Title Priority Date Filing Date
PCT/CN2020/110812 WO2021227295A1 (zh) 2020-05-11 2020-08-24 CNN-based multi-scale zoom positioning and detection method for cancer cells

Country Status (2)

Country Link
CN (1) CN111652927B (zh)
WO (1) WO2021227295A1 (zh)

Families Citing this family (2)

* Cited by examiner, † Cited by third party
Publication number Priority date Publication date Assignee Title
CN113985156A * 2021-09-07 2022-01-28 绍兴电力局柯桥供电分局 Intelligent fault identification method based on transformer voiceprint big data
CN115424093A * 2022-09-01 2022-12-02 南京博视医疗科技有限公司 Method and device for identifying cells in fundus images

Citations (6)

* Cited by examiner, † Cited by third party
Publication number Priority date Publication date Assignee Title
CN108364288A * 2018-03-01 2018-08-03 北京航空航天大学 Segmentation method and device for breast cancer pathology images
CN109145941A * 2018-07-03 2019-01-04 怀光智能科技(武汉)有限公司 Method and system for classifying images of irregular cervical cell clusters
CN110276745A * 2019-05-22 2019-09-24 南京航空航天大学 Pathology image detection algorithm based on a generative adversarial network
US10504005B1 * 2019-05-10 2019-12-10 Capital One Services, Llc Techniques to embed a data object into a multidimensional frame
CN110580699A * 2019-05-15 2019-12-17 徐州医科大学 Pathology image nucleus detection method based on an improved Faster RCNN algorithm
CN110781953A * 2019-10-24 2020-02-11 广州乐智医疗科技有限公司 Lung cancer pathology slice classification method based on a multi-scale pyramid convolutional neural network

Family Cites Families (7)

* Cited by examiner, † Cited by third party
Publication number Priority date Publication date Assignee Title
CN105512683B * 2015-12-08 2019-03-08 浙江宇视科技有限公司 Target localization method and device based on a convolutional neural network
CN105931226A * 2016-04-14 2016-09-07 南京信息工程大学 Deep-learning-based automatic cell detection and segmentation method using adaptive ellipse fitting
CN108537775A * 2018-03-02 2018-09-14 浙江工业大学 Cancer cell tracking method based on deep-learning detection
CN108550133B * 2018-03-02 2021-05-18 浙江工业大学 Faster R-CNN-based cancer cell detection method
US10354122B1 * 2018-03-02 2019-07-16 Hong Kong Applied Science and Technology Research Institute Company Limited Using masks to improve classification performance of convolutional neural networks with applications to cancer-cell screening
CN108446617B * 2018-03-09 2022-04-22 华南理工大学 Fast face detection method resistant to side-face interference
CN109242844B * 2018-09-04 2021-08-06 青岛大学附属医院 Deep-learning-based automatic pancreatic cancer tumor recognition system, computer device, and storage medium


Also Published As

Publication number Publication date
CN111652927A (zh) 2020-09-11
CN111652927B (zh) 2023-12-19


Legal Events

Date Code Title Description
121 Ep: the epo has been informed by wipo that ep was designated in this application

Ref document number: 20935743

Country of ref document: EP

Kind code of ref document: A1

NENP Non-entry into the national phase

Ref country code: DE

122 Ep: pct application non-entry in european phase

Ref document number: 20935743

Country of ref document: EP

Kind code of ref document: A1