WO2023019559A1 - Automated stem cell detection method and system, and terminal and storage medium - Google Patents

Automated stem cell detection method and system, and terminal and storage medium Download PDF

Info

Publication number
WO2023019559A1
WO2023019559A1 PCT/CN2021/113808 CN2021113808W
Authority
WO
WIPO (PCT)
Prior art keywords
cell
image
tracking
training
initial
Prior art date
Application number
PCT/CN2021/113808
Other languages
French (fr)
Chinese (zh)
Inventor
吴昊
魏彦杰
潘毅
Original Assignee
深圳先进技术研究院
中国科学院深圳理工大学(筹)
Priority date
Filing date
Publication date
Application filed by 深圳先进技术研究院 and 中国科学院深圳理工大学(筹)
Priority to PCT/CN2021/113808
Publication of WO2023019559A1

Links

Images

Classifications

    • GPHYSICS
    • G06COMPUTING; CALCULATING OR COUNTING
    • G06NCOMPUTING ARRANGEMENTS BASED ON SPECIFIC COMPUTATIONAL MODELS
    • G06N20/00Machine learning
    • GPHYSICS
    • G06COMPUTING; CALCULATING OR COUNTING
    • G06TIMAGE DATA PROCESSING OR GENERATION, IN GENERAL
    • G06T7/00Image analysis
    • GPHYSICS
    • G06COMPUTING; CALCULATING OR COUNTING
    • G06TIMAGE DATA PROCESSING OR GENERATION, IN GENERAL
    • G06T7/00Image analysis
    • G06T7/60Analysis of geometric attributes
    • G06T7/62Analysis of geometric attributes of area, perimeter, diameter or volume

Definitions

  • The present application belongs to the technical field of biomedical image processing, and in particular relates to an automated stem cell detection method, system, terminal, and storage medium.
  • iPSCs: induced pluripotent stem cells
  • However, this technology still suffers from inefficiency: in most reprogramming schemes the proportion of cells that are successfully reprogrammed is very low, which greatly limits the research and application of induced pluripotent stem cells in scientific and clinical fields.
  • At present, the detection and tracking of stem cells rely mainly on manual annotation, or on deep models trained from manual annotations, and the training process requires large data sets, which greatly increases the difficulty and cost of training.
  • The present application provides an automated stem cell detection method, system, terminal, and storage medium, aiming to solve at least one of the above technical problems in the prior art to a certain extent.
  • An automated stem cell detection method comprising:
  • The cell image training set is input into a deep learning model for a first round of model training, and the deep learning model outputs the first-round cell prediction results of the training set;
  • The acquiring of the cell images further comprises:
  • The initial cell markers of each cell image are scaled, rotated, cropped, and mirror-padded in sequence to generate the cell markers of each enhanced image.
  • The technical solution adopted in the embodiments of the present application further includes: the using of the initial cell markers of the cell images as the initial training labels of the training set further comprises:
  • The technical solution adopted in the embodiments of the present application further includes: the deep learning model is a U-Net model, and the U-Net model uses binary cross-entropy as its loss function.
  • the technical solution adopted in the embodiment of the present application further includes: the updating of the initial cell marker of the cell image according to the cell prediction result is specifically:
  • The weighted-summation result of each cell image is added to the initial cell marker of that image to form its new cell marker.
  • the technical solution adopted in the embodiment of the present application further includes: performing cell tracking on the cell image according to the updated cell marker is specifically:
  • the technical solution adopted in the embodiment of the present application further includes: updating the initial training labels of the cell image training set according to the cell tracking results further includes:
  • The detection of erroneous tracking objects in the cell tracking results is specifically: judge whether the number of consecutive frames of a tracking object in the cell tracking results is greater than a set frame count α; if it is greater than α, the tracking object is determined to be a cell. Otherwise, the object is re-tracked, and it is judged whether an associated object exists in each of the next β consecutive frames: if so, the tracking object is determined to be a cell; if not, it is judged to be an erroneous tracking object, and its cell marker is removed from the cell image.
  • an automated stem cell detection system comprising:
  • Data acquisition module: used to acquire cell images, generate a cell image training set, and use the initial cell markers of the cell images as the initial training labels of the training set;
  • Model training module: used to input the cell image training set into the deep learning model for the first round of model training, the deep learning model outputting the first-round cell prediction results of the training set;
  • Cell tracking module: used to update the initial cell markers of the cell images according to the cell prediction results, and to perform cell tracking on the cell images according to the updated cell markers to obtain the cell tracking results;
  • Data update module: used to update the initial training labels of the cell image training set according to the cell tracking results, and to input the updated training set into the deep learning model for iterative training to obtain a trained cell detection model, which then performs cell detection and tracking on images of cells to be detected.
  • a terminal includes a processor and a memory coupled to the processor, wherein,
  • the memory stores program instructions for realizing the automated stem cell detection method
  • the processor is configured to execute the program instructions stored in the memory to control automated stem cell detection.
  • Another technical solution adopted in the embodiments of the present application is a storage medium storing program instructions executable by a processor, the program instructions being used to execute the automated stem cell detection method.
  • The beneficial effects of the embodiments of the present application are as follows: the automated stem cell detection method, system, terminal, and storage medium improve marker reliability by performing a weighted summation of the cell prediction results of the n enhanced images corresponding to each cell image; adding the weighted-summation result of each cell image to its initial cell markers prevents model performance degradation; cell tracking is performed on the combined result, the training labels are updated according to the tracking results, and iterative training is repeated to obtain the final cell detection model.
  • The embodiments of the present application require no manual labeling, and the training process is simple, reducing labor costs while achieving good performance, greatly lowering training costs, and improving training efficiency.
  • Fig. 1 is the flowchart of the automated stem cell detection method of the embodiment of the present application
  • Fig. 2 is the schematic diagram of the overlapping area calculation of the embodiment of the present application.
  • Figure 3 is a schematic diagram of the cell tracking results of the embodiment of the present application.
  • FIG. 4 is a schematic structural diagram of an automated stem cell detection system according to an embodiment of the present application.
  • FIG. 5 is a schematic structural diagram of a terminal according to an embodiment of the present application.
  • FIG. 6 is a schematic structural diagram of a storage medium according to an embodiment of the present application.
  • FIG. 1 is a flow chart of the automated stem cell detection method of the embodiment of the present application.
  • the automatic stem cell detection method of the embodiment of the present application comprises the following steps:
  • In this step, the initial cell markers of a cell image are obtained either by processing the fluorescence image corresponding to the cell image, or by performing cell detection on the cell image with an unsupervised cell detector.
  • S2: Perform data enhancement on the cell images to obtain an enhanced cell image training set, and use the initial cell markers as the initial training labels of the training set;
  • The data enhancement of the cell images is as follows: brightness, contrast, scaling, rotation, cropping, and mirror-padding operations are applied in sequence to each cell image to obtain n enhanced images per image, and the initial cell markers of each image are scaled, rotated, cropped, and mirror-padded in the same order to generate the cell markers of each enhanced image.
  • The parameters of each data enhancement operation are chosen according to a non-degeneration principle: the cell prediction results obtained after the first round of model training on the enhanced images must not contain fewer predicted cell markers than the initial cell markers of the corresponding original image.
  • Likewise, apart from the brightness and contrast adjustments, the same operations are applied to the initial labels of each cell image.
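The augmentation scheme above (S2) can be sketched as follows. This is an illustrative numpy-only sketch: the parameter ranges, the 90° rotation, and the 90% crop are assumptions for demonstration, not values from the patent. Photometric operations (brightness, contrast) touch only the image, while geometric operations (rotation, crop, mirror padding) are applied identically to image and marker.

```python
import numpy as np

def augment_pair(image, marker, seed=0):
    """Apply photometric ops to the image and the matching geometric ops
    to both image and marker, as described for step S2.
    Parameter ranges here are illustrative, not taken from the patent."""
    rng = np.random.default_rng(seed)

    # Photometric ops: image only (markers are unaffected by intensity changes).
    img = np.clip(image * rng.uniform(0.8, 1.2)          # contrast
                  + rng.uniform(-0.1, 0.1), 0.0, 1.0)    # brightness

    # Geometric ops: applied identically to image and marker.
    k = rng.integers(0, 4)                    # rotation by a multiple of 90 degrees
    img, mrk = np.rot90(img, k), np.rot90(marker, k)

    h, w = img.shape
    ch, cw = int(h * 0.9), int(w * 0.9)       # crop to 90% of each side
    y, x = rng.integers(0, h - ch + 1), rng.integers(0, w - cw + 1)
    img, mrk = img[y:y+ch, x:x+cw], mrk[y:y+ch, x:x+cw]

    # Mirror (reflect) padding restores the original size.
    py, px = h - ch, w - cw
    pad = ((py // 2, py - py // 2), (px // 2, px - px // 2))
    return np.pad(img, pad, mode="reflect"), np.pad(mrk, pad, mode="reflect")

def make_training_set(image, marker, n=8):
    """Generate n augmented (image, marker) pairs for one cell image."""
    return [augment_pair(image, marker, seed=i) for i in range(n)]
```

Because the marker goes through only the geometric operations, a binary marker stays binary after augmentation, which is what the non-degeneration check below compares against.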
  • S3: Input the cell image training set into the deep learning model for the first round of model training, and output the first-round cell prediction results of the training set through the deep learning model;
  • In this embodiment, the deep learning model is a U-Net model, and binary cross-entropy is used as the loss function during model training.
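The loss named here, binary cross-entropy, compares the predicted per-pixel cell-probability map with the marker map. A minimal numpy sketch of the standard formula (a generic illustration, not code from the patent; the U-Net itself would come from a deep learning framework):

```python
import numpy as np

def binary_cross_entropy(pred, target, eps=1e-7):
    """Per-pixel binary cross-entropy, averaged over the image.
    pred: predicted cell-probability map in (0, 1); target: 0/1 marker map."""
    p = np.clip(pred, eps, 1.0 - eps)         # avoid log(0)
    return float(-np.mean(target * np.log(p) + (1 - target) * np.log(1 - p)))
```

A confident, correct prediction yields a loss near 0, while a uniform 0.5 prediction yields ln 2 ≈ 0.693.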
  • the embodiment of the present application uses weighted summation to improve the reliability of cell markers.
  • The weight of the pixel values of each enhanced image is 1/n. Owing to the uncertainty of model training, the direction of a parameter update may deviate from the expected direction; the purpose of adding the weighted-summation result to the cell markers during model training is to compensate when the current round's prediction results are worse than the existing cell markers, preventing a gradual regression of model performance in subsequent training.
  • Because some cells have complex characteristics, it is difficult for the model to learn them fully within a limited number of parameter-update rounds, so such complex cells are predicted poorly in the cell prediction results. Retaining their markers ensures that complex cells are fully learned in the next round of model training, preventing model performance degradation.
  • S5: Perform cell tracking on the cell images according to the overlapping areas of the new cell markers in adjacent frames to obtain the cell tracking results;
  • Cell tracking is performed by calculating the overlapping area (overlap) of cell markers in adjacent frames.
  • FIG. 2 is a schematic diagram of the overlapping-area calculation in the embodiment of the present application. First compute the areas A_t and A_{t+1} of a given cell marker in frame t and frame t+1 (i.e., the next frame); then compute the overlapping area of the marker between the two frames, A_t ∩ A_{t+1}; and judge whether the ratio of the overlapping area to the marker's area in frame t is greater than a set threshold A, i.e., whether (A_t ∩ A_{t+1}) / A_t > A holds. If it is greater than the set threshold, the cell markers in frame t and frame t+1 are judged to be the same cell; proceeding in this way yields the cell tracking results.
  • In this embodiment, the threshold is set to 0.1.
  • S6: Perform erroneous-tracking-object detection on the cell tracking results, and after removing the cell markers of the detected erroneous tracking objects, generate the training labels for the next round of model training;
  • FIG. 3 is a schematic diagram of cell tracking results, where (a) shows ideal tracking results with a high degree of continuity, while (b), (c), and (d) show actual results in which cells are tracked for only a few consecutive frames and continuity is weak. The embodiments of the present application therefore remove such erroneous tracking objects by analyzing the cell tracking results.
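The α/β filtering rule for erroneous tracks can be sketched as follows; the track representation, the association lookup, and the default α and β values are illustrative assumptions, since the text does not fix them:

```python
def is_cell(track_frames, associated_frames, alpha=5, beta=3):
    """track_frames: consecutive frame indices in which the object was tracked.
    associated_frames: frames in which re-tracking found an associated object
    (an assumed representation of the re-tracking check)."""
    if len(track_frames) > alpha:          # tracked for more than alpha frames
        return True
    last = track_frames[-1]
    # keep only if each of the next beta frames has an associated object
    return all(last + i in associated_frames for i in range(1, beta + 1))

def filter_tracks(tracks, associations, alpha=5, beta=3):
    """Remove erroneous tracks; surviving tracks keep their cell markers."""
    return {tid: f for tid, f in tracks.items()
            if is_cell(f, associations.get(tid, set()), alpha, beta)}
```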
  • S8: Iteratively execute S4–S7 until the set number of model training rounds is reached, obtaining the trained cell prediction model;
  • In this embodiment, the number of training rounds is set to 5; that is, after 5 rounds of training a cell prediction model with good performance is obtained. The training labels of the training-set cell images are updated according to the results of each round without human involvement, which greatly reduces training cost and improves training efficiency.
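The overall iterative scheme (S3–S8) can be sketched as a driver loop. The five callables stand in for the components described above (training, prediction, marker update, overlap tracking, false-track filtering); their signatures are assumptions about the interface, not the patent's implementation:

```python
def self_training_loop(train_fn, predict_fn, update_fn, track_fn, filter_fn,
                       images, labels, rounds=5):
    """Each round trains on the current labels, updates markers from the
    predictions, tracks cells, removes false tracks, and uses the result
    as the next round's labels. rounds=5 follows this embodiment."""
    model = None
    for _ in range(rounds):
        model = train_fn(images, labels, model)  # S3 / S7: (re)train
        preds = predict_fn(model, images)
        markers = update_fn(labels, preds)       # S4: weighted sum + add
        tracks = track_fn(markers)               # S5: overlap tracking
        labels = filter_fn(markers, tracks)      # S6: drop false tracks
    return model
```

No manual labels enter the loop after the initial markers, which is the self-training property the text emphasizes.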
  • S9: Perform cell detection and tracking on images of cells to be detected according to the trained cell prediction model.
  • The automated stem cell detection method of the embodiment of the present application improves marker reliability by performing a weighted summation of the cell prediction results of the n enhanced images corresponding to each cell image; adding the weighted-summation result of each cell image to its initial cell markers prevents model performance degradation; cell tracking is performed on the combined result, the training labels are updated according to the tracking results, and iterative training is repeated to obtain the final cell detection model.
  • The embodiment of the present application requires no manual labeling, and the training process is simple, reducing labor costs while achieving good performance, greatly lowering training costs, and improving training efficiency.
  • FIG. 4 is a schematic structural diagram of the automated stem cell detection system of the embodiment of the present application.
  • the automated stem cell detection system 40 of the embodiment of the present application includes:
  • Data acquisition module 41: used to acquire cell images, generate a cell image training set, and use the initial cell markers of the cell images as the initial training labels of the training set;
  • Model training module 42: used to input the cell image training set into the deep learning model for the first round of model training, the deep learning model outputting the first-round cell prediction results of the training set;
  • Cell tracking module 43: used to update the initial cell markers of the cell images according to the cell prediction results, and to perform cell tracking on the cell images according to the updated cell markers to obtain the cell tracking results;
  • Data update module 44: used to update the initial training labels of the cell image training set according to the cell tracking results, and to input the updated training set into the deep learning model for iterative training to obtain a trained cell detection model, which then performs cell detection and tracking on images of cells to be detected.
  • FIG. 5 is a schematic diagram of a terminal structure in an embodiment of the present application.
  • the terminal 50 includes a processor 51 and a memory 52 coupled to the processor 51 .
  • the memory 52 stores program instructions for realizing the above automatic stem cell detection method.
  • the processor 51 is used to execute the program instructions stored in the memory 52 to control the automated stem cell detection.
  • The processor 51 may also be referred to as a CPU (central processing unit).
  • the processor 51 may be an integrated circuit chip with signal processing capabilities.
  • The processor 51 may also be a general-purpose processor, a digital signal processor (DSP), an application-specific integrated circuit (ASIC), a field-programmable gate array (FPGA) or other programmable logic device, a discrete gate or transistor logic device, or discrete hardware components.
  • a general-purpose processor may be a microprocessor, or the processor may be any conventional processor, or the like.
  • FIG. 6 is a schematic structural diagram of a storage medium according to an embodiment of the present application.
  • The storage medium of the embodiment of the present application stores a program file 61 capable of implementing all of the above methods. The program file 61 may be stored in the storage medium as a software product and includes instructions that cause a computer device (which may be a personal computer, a server, a network device, etc.) or a processor to execute all or part of the steps of the methods of the various embodiments of the present invention.
  • The aforementioned storage media include media that can store program code, such as USB flash drives, removable hard disks, read-only memory (ROM), random access memory (RAM), magnetic disks, and optical discs, as well as terminal devices such as computers, servers, mobile phones, and tablets.

Abstract

The present application relates to an automated stem cell detection method and system, and a terminal and a storage medium. The method comprises: acquiring cell images, and generating a cell image training set; inputting the cell image training set into a deep learning model to perform first-round model training, and outputting a first-round cell prediction result of the cell image training set by means of the deep learning model; updating initial cell markers of the cell images according to the cell prediction result, and performing cell tracking on the cell images according to the updated cell markers, so as to obtain cell tracking results; updating an initial training label of the cell image training set according to the cell tracking results, and inputting the updated cell image training set into the deep learning model to perform iterative training, so as to obtain a trained cell detection model. By means of the embodiments of the present application, manual labeling is not required, and the training process is simple, thereby achieving relatively good performance, significantly reducing the training cost, and improving the training efficiency.

Description

An automated stem cell detection method, system, terminal and storage medium

Technical Field

The present application belongs to the technical field of biomedical image processing, and in particular relates to an automated stem cell detection method, system, terminal, and storage medium.

Background

Observing the behavior of cells helps to better understand biological mechanisms such as tissue formation and repair, wound healing, and tumor development. Tracking the movement of cells is useful when studying their behavior, especially for stem cells. Induced pluripotent stem cell (iPSC) technology, for example, has been applied to treat diseases such as platelet deficiency, spinal cord injury, macular degeneration, Parkinson's disease, and Alzheimer's disease. However, this technology still suffers from inefficiency: in most reprogramming schemes the proportion of cells that are successfully reprogrammed is very low, which greatly limits the research and application of induced pluripotent stem cells in scientific and clinical fields.

At present, the detection and tracking of stem cells rely mainly on manual annotation, or on deep models trained from manual annotations, and the training process requires large data sets, which greatly increases the difficulty and cost of training.
Summary

The present application provides an automated stem cell detection method, system, terminal, and storage medium, aiming to solve at least one of the above technical problems in the prior art to a certain extent.

To solve the above problems, the present application provides the following technical solutions:

An automated stem cell detection method, comprising:

acquiring cell images, generating a cell image training set, and using the initial cell markers of the cell images as the initial training labels of the training set;

inputting the cell image training set into a deep learning model for a first round of model training, the deep learning model outputting the first-round cell prediction results of the training set;

updating the initial cell markers of the cell images according to the cell prediction results, and performing cell tracking on the cell images according to the updated cell markers to obtain cell tracking results;

updating the initial training labels of the cell image training set according to the cell tracking results, and inputting the updated training set into the deep learning model for iterative training to obtain a trained cell detection model;

performing cell detection and tracking on images of cells to be detected according to the trained cell prediction model.
The technical solution adopted in the embodiments of the present application further includes: the acquiring of cell images further comprises:

applying brightness, contrast, scaling, rotation, cropping, and mirror-padding operations in sequence to each cell image to obtain n enhanced images of each cell image;

applying scaling, rotation, cropping, and mirror-padding operations in sequence to the initial cell markers of each cell image to generate the cell markers of each enhanced image.

The technical solution adopted in the embodiments of the present application further includes: the using of the initial cell markers of the cell images as the initial training labels of the training set further comprises:

processing the fluorescence images corresponding to the cell images to obtain their initial cell markers;

or performing cell detection on the cell images with a cell detector to obtain their initial cell markers.
The technical solution adopted in the embodiments of the present application further includes: the deep learning model is a U-Net model, and the U-Net model uses binary cross-entropy as its loss function.

The technical solution adopted in the embodiments of the present application further includes: the updating of the initial cell markers of the cell images according to the cell prediction results is specifically:

performing a weighted summation of the cell prediction results of the n enhanced images of each cell image;

adding the weighted-summation result of each cell image to the initial cell marker of that image to form its new cell marker.
The technical solution adopted in the embodiments of the present application further includes: the performing of cell tracking on the cell images according to the updated cell markers is specifically:

computing the areas of a given cell marker in frame t and in frame t+1;

computing the overlapping area of the cell marker between frame t and frame t+1;

judging whether the ratio of the overlapping area to the marker's area in frame t exceeds a set threshold, and if so, determining that the cell markers in frames t and t+1 belong to the same cell, thereby obtaining the cell tracking results.
The technical solution adopted in the embodiments of the present application further includes: the updating of the initial training labels of the training set according to the cell tracking results further comprises:

performing erroneous-tracking-object detection on the cell tracking results, removing the cell markers of the detected erroneous tracking objects, and generating the training labels for the next round of model training;

the erroneous-tracking-object detection being specifically: judging whether the number of consecutive frames of a tracking object in the cell tracking results is greater than a set frame count α; if so, determining that the tracking object is a cell; otherwise, re-tracking the object and judging whether an associated object exists in each of the next β consecutive frames; if so, determining that the tracking object is a cell; if not, judging it to be an erroneous tracking object and removing its cell marker from the cell image.
Another technical solution adopted in the embodiments of the present application is an automated stem cell detection system, comprising:

a data acquisition module, used to acquire cell images, generate a cell image training set, and use the initial cell markers of the cell images as the initial training labels of the training set;

a model training module, used to input the cell image training set into a deep learning model for the first round of model training, the deep learning model outputting the first-round cell prediction results of the training set;

a cell tracking module, used to update the initial cell markers of the cell images according to the cell prediction results and perform cell tracking on the cell images according to the updated cell markers to obtain the cell tracking results;

a data update module, used to update the initial training labels of the training set according to the cell tracking results and input the updated training set into the deep learning model for iterative training to obtain a trained cell detection model, which then performs cell detection and tracking on images of cells to be detected.
A further technical solution adopted in the embodiments of the present application is a terminal, comprising a processor and a memory coupled to the processor, wherein:

the memory stores program instructions for implementing the automated stem cell detection method;

the processor is configured to execute the program instructions stored in the memory to control automated stem cell detection.

A further technical solution adopted in the embodiments of the present application is a storage medium storing program instructions executable by a processor, the program instructions being used to execute the automated stem cell detection method.
Compared with the prior art, the beneficial effects of the embodiments of the present application are as follows. The automated stem cell detection method, system, terminal, and storage medium improve marker reliability by performing a weighted summation of the cell prediction results of the n enhanced images corresponding to each cell image; adding the weighted-summation result of each cell image to its initial cell markers prevents model performance degradation; cell tracking is performed on the combined result, the training labels are updated according to the tracking results, and iterative training is repeated to obtain the final cell detection model. The embodiments require no manual labeling, and the training process is simple, reducing labor costs while achieving good performance, greatly lowering training costs, and improving training efficiency.
Brief Description of the Drawings

FIG. 1 is a flow chart of the automated stem cell detection method of an embodiment of the present application;

FIG. 2 is a schematic diagram of the overlapping-area calculation of an embodiment of the present application;

FIG. 3 is a schematic diagram of the cell tracking results of an embodiment of the present application;

FIG. 4 is a schematic structural diagram of the automated stem cell detection system of an embodiment of the present application;

FIG. 5 is a schematic structural diagram of a terminal according to an embodiment of the present application;

FIG. 6 is a schematic structural diagram of a storage medium according to an embodiment of the present application.
具体实施方式Detailed ways
为了使本申请的目的、技术方案及优点更加清楚明白,以下结合附图及实施例,对本申请进行进一步详细说明。应当理解,此处所描述的具体实施例仅用以解释本申请,并不用于限定本申请。In order to make the purpose, technical solution and advantages of the present application clearer, the present application will be further described in detail below in conjunction with the accompanying drawings and embodiments. It should be understood that the specific embodiments described here are only used to explain the present application, not to limit the present application.
Please refer to Fig. 1, which is a flowchart of the automated stem cell detection method of an embodiment of the present application. The automated stem cell detection method of the embodiment of the present application comprises the following steps:
S1: Acquire a certain number of cell images, and obtain the initial cell labels of each cell image;
In this step, the initial cell labels of a cell image are obtained in one of two ways: by processing the fluorescence image corresponding to the cell image, or by performing cell detection on the cell image with an unsupervised cell detector.
S2: Perform data augmentation on the cell images to obtain an augmented cell-image training set, and use the initial cell labels as the initial training labels of the cell-image training set;
In this step, the purpose of data augmentation is to increase the robustness of the model. The cell images are augmented as follows: brightness, contrast, scaling, rotation, cropping, and mirror-padding operations are applied in turn to each cell image to obtain n augmented images per cell image, and scaling, rotation, cropping, and mirror-padding operations are applied in turn to the initial cell labels of each cell image to generate the cell labels of each augmented image. The parameters of each augmentation operation are chosen according to a non-degradation principle: the cell prediction result obtained after the first round of model training on the augmented cell images must not contain fewer predicted cell labels than the initial cell labels of the corresponding cell image. Likewise, except for the brightness and contrast operations, the initial labels of each cell image undergo the same operations as the image itself.
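As an illustration of the augmentation step above, the following sketch applies the photometric operations to the image only and the same geometric operations to the image and its label mask together, so the labels stay aligned with the pixels. All function names and parameter ranges are assumptions for illustration, and the geometric operations are simplified to 90-degree rotations and mirror flips rather than the arbitrary rotation, cropping, and mirror padding of the embodiment:

```python
import numpy as np

def augment_pair(image, label, rng):
    """Produce one augmented (image, label) pair."""
    # Photometric operations: applied to the image only.
    img = image * rng.uniform(0.8, 1.2)      # contrast jitter
    img = img + rng.uniform(-0.1, 0.1)       # brightness jitter
    img = np.clip(img, 0.0, 1.0)

    # Geometric operations: applied identically to image and label.
    k = int(rng.integers(0, 4))              # rotation by k * 90 degrees
    img, lab = np.rot90(img, k), np.rot90(label, k)
    if rng.random() < 0.5:                   # mirror flip
        img, lab = np.fliplr(img), np.fliplr(lab)
    return img, lab

def make_augmented_set(image, label, n, seed=0):
    """Generate the n augmented images (and labels) of one cell image."""
    rng = np.random.default_rng(seed)
    return [augment_pair(image, label, rng) for _ in range(n)]
```

Because the geometric operations only permute pixels, the number of labeled pixels in each augmented label mask equals that of the original labels.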
S3: Input the cell-image training set into a deep learning model for a first round of model training, and output the first-round cell prediction results of the cell-image training set through the deep learning model;
In this step, the deep learning model is a U-Net model, and binary cross-entropy is used as the loss function during model training.
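For reference, the binary cross-entropy loss named in this step has the following standard per-pixel form; this is a generic NumPy sketch rather than code from the embodiment:

```python
import numpy as np

def binary_cross_entropy(pred, target, eps=1e-7):
    """Mean binary cross-entropy between predicted probabilities in (0, 1)
    and binary ground-truth labels; eps guards against log(0)."""
    p = np.clip(pred, eps, 1.0 - eps)
    return float(np.mean(-(target * np.log(p) + (1.0 - target) * np.log(1.0 - p))))
```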
S4: Compute a weighted sum of the cell prediction results of the n augmented images corresponding to each cell image, and add the weighted sum of each cell image to the initial cell labels of that cell image to form the new cell labels of each cell image;
In this step, the embodiment of the present application uses a weighted sum to improve the reliability of the cell labels. In the weighted sum, the pixel values at corresponding positions of each augmented image are weighted by 1/n. Because of the uncertainty of model training, the direction of a parameter update may deviate from the expected direction; the purpose of adding the weighted sum to the cell labels used in training is to prevent a gradual regression of subsequent training performance when the current round's cell predictions are worse than the cell labels. At the same time, because some cells have relatively complex features, the model cannot fully learn them within one limited round of parameter updates, so such complex cells are predicted poorly in the cell prediction results; updating the cell labels allows complex cells to be learned thoroughly in the next round of training and prevents degradation of model performance.
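The label update of step S4 can be sketched as follows. The averaging implements the weighted sum with weight 1/n per augmented prediction; the final clip to [0, 1] after adding the initial labels is an added assumption to keep the result a valid probability map and is not stated in the embodiment:

```python
import numpy as np

def update_labels(pred_maps, initial_label):
    """Combine the n augmented predictions of one cell image with its
    initial cell labels to form the new labels for the next round."""
    avg = np.mean(np.stack(pred_maps), axis=0)   # weighted sum, weight 1/n each
    return np.clip(avg + initial_label, 0.0, 1.0)
```

Because the initial labels are added back in, a pixel the initial labels marked as a cell can never fall to zero in the new labels, which is what guards against the regression described above.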
S5: Perform cell tracking on the cell images according to the overlapping areas of the new cell labels in adjacent frames, to obtain cell tracking results;
In this step, cell tracking is performed by computing the overlapping area (overlap) of cell labels in adjacent frames. Specifically, as shown in Fig. 2, a schematic diagram of the overlapping-area calculation of an embodiment of the present application: first, the areas A_t and A_{t+1} of a given cell label in frame t and frame t+1 (i.e., the next frame) are computed; then the overlapping area of that cell label between frame t and frame t+1, i.e., A_t ∩ A_{t+1}, is computed, and it is judged whether the ratio of this overlapping area to the area of the cell label in frame t exceeds a set threshold A, i.e., whether

(A_t ∩ A_{t+1}) / A_t > A

holds. If the ratio exceeds the set threshold, the cell labels in frame t and frame t+1 are judged to be the same cell, and so on, yielding the cell tracking results. Preferably, the embodiment of the present application sets this threshold to 0.1.
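A minimal sketch of the overlap criterion (A_t ∩ A_{t+1}) / A_t > A on binary masks follows; the mask representation and function name are illustrative assumptions:

```python
import numpy as np

def same_cell(mask_t, mask_t1, threshold=0.1):
    """Return True when the labeled region in frame t and the one in
    frame t+1 overlap enough to be judged the same cell."""
    area_t = float(mask_t.sum())                            # A_t
    if area_t == 0.0:
        return False
    overlap = float(np.logical_and(mask_t, mask_t1).sum())  # A_t intersect A_{t+1}
    return overlap / area_t > threshold
```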
S6: Perform false-track detection on the cell tracking results, and after removing the cell labels of the detected falsely tracked objects, generate the training labels for the next round of model training;
In this step, bubbles and impurities produced by the activity of the stem cells are easily misidentified by the model as cells, i.e., falsely tracked objects. Unlike the high continuity of true cell tracks, the tracks of such falsely tracked objects persist for only a small number of frames. As shown in Fig. 3, a schematic diagram of cell tracking results, (a) shows an ideal cell track with a high degree of continuity, while (b), (c), and (d) show actual tracking results in which the track spans only a few consecutive frames and is not persistent. The embodiment of the present application therefore removes such falsely tracked objects by analyzing the cell tracking results. Specifically: judge whether the number of consecutive frames of a tracked object in the cell tracking results is greater than a set frame count α; if it is greater than α, the tracked object is judged to be a cell. Otherwise, if it is less than α, the object is tracked again, and it is judged whether an object associated with it exists in each of the next β consecutive frames; if so, the tracked object is judged to be a cell; if not, it is judged to be a falsely tracked object, and its cell labels are removed from the cell images. Preferably, the embodiment of the present application sets α=3 and β=5, which may be adjusted according to actual practice.
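The α/β rule of this step can be expressed as a small decision function; representing the re-tracking result as a list of per-frame association flags is an assumption made for illustration:

```python
def is_true_cell(consecutive_frames, associations, alpha=3, beta=5):
    """Classify one tracked object.

    consecutive_frames -- length of the object's track, in frames.
    associations -- for a short track, one boolean per subsequent frame
    saying whether re-tracking found an associated object in that frame.
    """
    if consecutive_frames > alpha:
        return True                      # long track: judged to be a cell
    # Short track: require an association in each of the next beta frames.
    return len(associations) >= beta and all(associations[:beta])
```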
S7: Input the training-set cell images with the updated training labels into the deep learning model for the next round of training, and output new cell prediction results through the deep learning model;
S8: Iterate S4-S7 until the set number of training rounds is reached, to obtain the trained cell prediction model;
In the embodiment of the present application, the number of training rounds is set to 5; that is, after 5 rounds of training a cell prediction model with good performance can be obtained. The training labels of the training-set cell images are updated according to the result of each round of training without any human involvement, which greatly reduces the cost of training and improves training efficiency.
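Steps S3-S8 form the following self-labeling loop. This skeleton takes the training, prediction, label-update, and track-and-filter routines as caller-supplied functions, since the embodiment does not fix their implementations:

```python
def self_training_loop(train_fn, predict_fn, update_labels_fn,
                       track_and_filter_fn, images, labels, rounds=5):
    """Iterate: train, predict, update labels from the predictions, then
    refine the labels by tracking; after `rounds` rounds (5 in the
    embodiment) the trained model and final labels are returned."""
    model = None
    for _ in range(rounds):
        model = train_fn(model, images, labels)          # S3 / S7
        predictions = predict_fn(model, images)
        labels = update_labels_fn(predictions, labels)   # S4
        labels = track_and_filter_fn(labels)             # S5 / S6
    return model, labels
```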
S9: Perform cell detection and tracking on the cell images to be detected according to the trained cell prediction model.
Based on the above, the automated stem cell detection method of the embodiment of the present application improves the reliability of the labels by computing a weighted sum of the cell prediction results of the n augmented images corresponding to each cell image; prevents degradation of model performance by adding the weighted sum of each cell image to the initial cell labels of that image; and performs cell tracking on the summed result, updates the training labels according to the tracking result, and retrains iteratively to obtain the final cell detection model. The embodiment of the present application requires no manual labeling and uses a simple training process, which reduces labor costs while achieving good performance, greatly reduces the cost of training, and improves training efficiency.
Please refer to Fig. 4, a schematic structural diagram of the automated stem cell detection system of an embodiment of the present application. The automated stem cell detection system 40 of the embodiment of the present application comprises:
Data acquisition module 41: used to acquire cell images, generate a cell-image training set, and use the initial cell labels of the cell images as the initial training labels of the cell-image training set;
Model training module 42: used to input the cell-image training set into a deep learning model for a first round of model training, and output the first-round cell prediction results of the cell-image training set through the deep learning model;
Cell tracking module 43: used to update the initial cell labels of the cell images according to the cell prediction results, and perform cell tracking on the cell images according to the updated cell labels, to obtain cell tracking results;
Data update module 44: used to update the initial training labels of the cell-image training set according to the cell tracking results, input the updated cell-image training set into the deep learning model for iterative training to obtain a trained cell detection model, and perform cell detection and tracking on the cell images to be detected according to the trained cell prediction model.
Please refer to Fig. 5, a schematic structural diagram of a terminal according to an embodiment of the present application. The terminal 50 includes a processor 51 and a memory 52 coupled to the processor 51.
The memory 52 stores program instructions for implementing the automated stem cell detection method described above.
The processor 51 is configured to execute the program instructions stored in the memory 52 to control automated stem cell detection.
The processor 51 may also be referred to as a CPU (Central Processing Unit). The processor 51 may be an integrated circuit chip with signal-processing capability. The processor 51 may also be a general-purpose processor, a digital signal processor (DSP), an application-specific integrated circuit (ASIC), a field-programmable gate array (FPGA) or other programmable logic device, a discrete gate or transistor logic device, or discrete hardware components. A general-purpose processor may be a microprocessor, or the processor may be any conventional processor or the like.
Please refer to Fig. 6, a schematic structural diagram of a storage medium according to an embodiment of the present application. The storage medium of the embodiment of the present application stores a program file 61 capable of implementing all of the methods described above. The program file 61 may be stored in the storage medium in the form of a software product and includes a number of instructions for causing a computer device (which may be a personal computer, a server, a network device, or the like) or a processor to execute all or some of the steps of the methods of the various embodiments of the present invention. The aforementioned storage medium includes various media capable of storing program code, such as a USB flash drive, a removable hard disk, a read-only memory (ROM), a random access memory (RAM), a magnetic disk, or an optical disc, or a terminal device such as a computer, server, mobile phone, or tablet.
The above description of the disclosed embodiments enables those skilled in the art to implement or use the present invention. Various modifications to these embodiments will be readily apparent to those skilled in the art, and the general principles defined herein may be implemented in other embodiments without departing from the spirit or scope of the present invention. Therefore, the present invention is not limited to the embodiments shown herein, but is to be accorded the widest scope consistent with the principles and novel features disclosed herein.

Claims (10)

  1. An automated stem cell detection method, comprising:
    acquiring cell images, generating a cell-image training set, and using the initial cell labels of the cell images as the initial training labels of the cell-image training set;
    inputting the cell-image training set into a deep learning model for a first round of model training, and outputting first-round cell prediction results of the cell-image training set through the deep learning model;
    updating the initial cell labels of the cell images according to the cell prediction results, and performing cell tracking on the cell images according to the updated cell labels, to obtain cell tracking results;
    updating the initial training labels of the cell-image training set according to the cell tracking results, and inputting the updated cell-image training set into the deep learning model for iterative training, to obtain a trained cell detection model; and
    performing cell detection and tracking on cell images to be detected according to the trained cell prediction model.
  2. The automated stem cell detection method according to claim 1, wherein acquiring the cell images further comprises:
    applying brightness, contrast, scaling, rotation, cropping, and mirror-padding operations in turn to each cell image to obtain n augmented images of each cell image; and
    applying scaling, rotation, cropping, and mirror-padding operations in turn to the initial cell labels of each cell image to generate the cell labels of each augmented image.
  3. The automated stem cell detection method according to claim 1 or 2, wherein using the initial cell labels of the cell images as the initial training labels of the cell-image training set further comprises:
    processing fluorescence images corresponding to the cell images to obtain the initial cell labels of the cell images;
    or performing cell detection on the cell images with a cell detector to obtain the initial cell labels of the cell images.
  4. The automated stem cell detection method according to claim 1, wherein the deep learning model is a U-Net model, and the U-Net model uses binary cross-entropy as its loss function.
  5. The automated stem cell detection method according to claim 2, wherein updating the initial cell labels of the cell images according to the cell prediction results specifically comprises:
    computing a weighted sum of the cell prediction results of the n augmented images of each cell image; and
    adding the weighted sum of each cell image to the initial cell labels of that cell image to obtain new cell labels of each cell image.
  6. The automated stem cell detection method according to claim 5, wherein performing cell tracking on the cell images according to the updated cell labels specifically comprises:
    computing the areas of a given cell label in frame t and in frame t+1, respectively;
    computing the overlapping area of the cell label between frame t and frame t+1; and
    judging whether the ratio of the overlapping area to the area of the cell label in frame t is greater than a set threshold, and if so, judging the cell labels in frame t and frame t+1 to be the same cell, to obtain the cell tracking results.
  7. The automated stem cell detection method according to claim 6, wherein updating the initial training labels of the cell-image training set according to the cell tracking results further comprises:
    performing false-track detection on the cell tracking results, removing the cell labels of the detected falsely tracked objects, and generating the training labels for the next round of model training;
    wherein performing false-track detection on the cell tracking results specifically comprises: judging whether the number of consecutive frames of a tracked object in the cell tracking results is greater than a set frame count α; if it is greater than α, judging the tracked object to be a cell; otherwise, tracking the object again and judging whether an object associated with it exists in each of the next β consecutive frames; if so, judging the tracked object to be a cell; and if not, judging the tracked object to be a falsely tracked object and removing its cell labels from the cell images.
  8. An automated stem cell detection system, comprising:
    a data acquisition module, configured to acquire cell images, generate a cell-image training set, and use the initial cell labels of the cell images as the initial training labels of the cell-image training set;
    a model training module, configured to input the cell-image training set into a deep learning model for a first round of model training, and output first-round cell prediction results of the cell-image training set through the deep learning model;
    a cell tracking module, configured to update the initial cell labels of the cell images according to the cell prediction results, and perform cell tracking on the cell images according to the updated cell labels, to obtain cell tracking results; and
    a data update module, configured to update the initial training labels of the cell-image training set according to the cell tracking results, input the updated cell-image training set into the deep learning model for iterative training to obtain a trained cell detection model, and perform cell detection and tracking on cell images to be detected according to the trained cell prediction model.
  9. A terminal, comprising a processor and a memory coupled to the processor, wherein
    the memory stores program instructions for implementing the automated stem cell detection method according to any one of claims 1-7; and
    the processor is configured to execute the program instructions stored in the memory to control automated stem cell detection.
  10. A storage medium, storing program instructions executable by a processor, the program instructions being used to execute the automated stem cell detection method according to any one of claims 1 to 7.
PCT/CN2021/113808 2021-08-20 2021-08-20 Automated stem cell detection method and system, and terminal and storage medium WO2023019559A1 (en)

Priority Applications (1)

Application Number Priority Date Filing Date Title
PCT/CN2021/113808 WO2023019559A1 (en) 2021-08-20 2021-08-20 Automated stem cell detection method and system, and terminal and storage medium


Publications (1)

Publication Number Publication Date
WO2023019559A1

Family

ID=85239391

Family Applications (1)

Application Number Title Priority Date Filing Date
PCT/CN2021/113808 WO2023019559A1 (en) 2021-08-20 2021-08-20 Automated stem cell detection method and system, and terminal and storage medium

Country Status (1)

Country Link
WO (1) WO2023019559A1 (en)

Citations (5)

* Cited by examiner, † Cited by third party
Publication number Priority date Publication date Assignee Title
CN106127809A (en) * 2016-06-22 2016-11-16 浙江工业大学 Cancerous cell trajectory track and correlating method in a kind of micro-image sequence
CN107944360A (en) * 2017-11-13 2018-04-20 中国科学院深圳先进技术研究院 A kind of induced multi-potent stem cell recognition methods, system and electronic equipment
CN108256408A (en) * 2017-10-25 2018-07-06 四川大学 A kind of stem cell method for tracing based on deep learning
US20210019499A1 (en) * 2018-03-20 2021-01-21 Shimadzu Corporation Cell Image Analysis Apparatus, Cell Image Analysis System, Method of Generating Training Data, Method of Generating Trained Model, Training Data Generation Program, and Method of Producing Training Data
CN113192107A (en) * 2021-05-06 2021-07-30 上海锵玫人工智能科技有限公司 Target identification tracking method and robot


Non-Patent Citations (1)

* Cited by examiner, † Cited by third party
Title
Ju Mengxi, Li Xinwei, Li Zhangyong: "Detection of white blood cells in microscopic leucorrhea images based on deep active learning", Journal of Biomedical Engineering, vol. 37, no. 3, 25 June 2020, pp. 519-526, ISSN 1001-5515, DOI: 10.7507/1001-5515.201909040 *


Legal Events

Date Code Title Description
121 Ep: the epo has been informed by wipo that ep was designated in this application (Ref document number: 21953800; Country of ref document: EP; Kind code of ref document: A1)
NENP Non-entry into the national phase (Ref country code: DE)