WO2023019559A1 - Method and system for automated stem cell detection, terminal, and storage medium - Google Patents

Method and system for automated stem cell detection, terminal, and storage medium

Info

Publication number
WO2023019559A1
WO2023019559A1 · PCT/CN2021/113808 · CN2021113808W
Authority
WO
WIPO (PCT)
Prior art keywords
cell
image
tracking
training
initial
Prior art date
Application number
PCT/CN2021/113808
Other languages
English (en)
Chinese (zh)
Inventor
吴昊
魏彦杰
潘毅
Original Assignee
深圳先进技术研究院
中国科学院深圳理工大学(筹)
Priority date (The priority date is an assumption and is not a legal conclusion. Google has not performed a legal analysis and makes no representation as to the accuracy of the date listed.)
Filing date
Publication date
Application filed by 深圳先进技术研究院, 中国科学院深圳理工大学(筹) filed Critical 深圳先进技术研究院
Priority to PCT/CN2021/113808 priority Critical patent/WO2023019559A1/fr
Publication of WO2023019559A1 publication Critical patent/WO2023019559A1/fr

Links

Images

Classifications

    • GPHYSICS
    • G06COMPUTING; CALCULATING OR COUNTING
    • G06NCOMPUTING ARRANGEMENTS BASED ON SPECIFIC COMPUTATIONAL MODELS
    • G06N20/00Machine learning
    • GPHYSICS
    • G06COMPUTING; CALCULATING OR COUNTING
    • G06TIMAGE DATA PROCESSING OR GENERATION, IN GENERAL
    • G06T7/00Image analysis
    • GPHYSICS
    • G06COMPUTING; CALCULATING OR COUNTING
    • G06TIMAGE DATA PROCESSING OR GENERATION, IN GENERAL
    • G06T7/00Image analysis
    • G06T7/60Analysis of geometric attributes
    • G06T7/62Analysis of geometric attributes of area, perimeter, diameter or volume

Definitions

  • The application belongs to the technical field of biomedical image processing, and in particular relates to an automated stem cell detection method, system, terminal, and storage medium.
  • iPSCs (induced pluripotent stem cells)
  • However, this technology still suffers from inefficiency: in most reprogramming schemes, the rate at which cells are successfully reprogrammed is very low, which greatly limits the research and application of induced pluripotent stem cells in scientific and clinical fields.
  • At present, the detection and tracking of stem cells mainly rely on manual annotation, or on training a deep model from manual annotations; the training process requires large data sets, which greatly increases the difficulty and cost of training.
  • To address at least one of the above technical problems in the prior art, at least to a certain extent, the present application provides an automated stem cell detection method, system, terminal, and storage medium.
  • An automated stem cell detection method comprising:
  • The cell image training set is input into the deep learning model for the first round of model training, and the deep learning model outputs the first-round cell prediction results of the cell image training set;
  • the acquisition of the cell image also includes:
  • The initial cell markers of each cell image are scaled, rotated, cropped, and mirror-filled in order, to generate the cell markers of each enhanced image.
  • the technical solution adopted in the embodiment of the present application further includes: said using the initial cell label of the cell image as the initial training label of the cell image training set further includes:
  • the technical solution adopted in the embodiment of the present application further includes: the deep learning model is a U-Net model, and the U-Net model uses binary cross entropy as a loss function.
  • the technical solution adopted in the embodiment of the present application further includes: the updating of the initial cell marker of the cell image according to the cell prediction result is specifically:
  • The weighted summation result of each cell image is added to the initial cell marker of that cell image to form the new cell marker of each cell image.
  • the technical solution adopted in the embodiment of the present application further includes: performing cell tracking on the cell image according to the updated cell marker is specifically:
  • the technical solution adopted in the embodiment of the present application further includes: updating the initial training labels of the cell image training set according to the cell tracking results further includes:
  • The detection of erroneous tracking objects in the cell tracking result is specifically: judging whether the number of consecutive frames of a tracking object in the cell tracking result is greater than a set frame number τ. If it is greater than τ, the tracking object is determined to be a cell; otherwise, the tracking object is re-tracked, and it is judged whether an object associated with the tracking object exists in the next τ consecutive frames. If such an object exists, the tracking object is determined to be a cell; if it does not exist, the tracking object is judged to be an erroneous tracking object, and its cell marker is removed from the cell image.
  • an automated stem cell detection system comprising:
  • Data acquisition module used to acquire cell images, generate a cell image training set, and use the initial cell label of the cell image as the initial training label of the cell image training set;
  • Model training module used to input the cell image training set into the deep learning model for the first round of model training, and output the first round of cell prediction results of the cell image training set through the deep learning model;
  • Cell tracking module for updating the initial cell marker of the cell image according to the cell prediction result, and performing cell tracking on the cell image according to the updated cell marker, to obtain a cell tracking result;
  • Data update module used to update the initial training labels of the cell image training set according to the cell tracking results, and to input the updated cell image training set into the deep learning model for iterative training to obtain a trained cell detection model, performing cell detection and tracking on the image of the cells to be detected according to the trained cell prediction model.
  • a terminal includes a processor and a memory coupled to the processor, wherein,
  • the memory stores program instructions for realizing the automated stem cell detection method
  • the processor is configured to execute the program instructions stored in the memory to control automated stem cell detection.
  • Another technical solution adopted in the embodiment of the present application is: a storage medium storing program instructions executable by a processor, and the program instructions are used to execute the automatic stem cell detection method.
  • The beneficial effect of the embodiments of the present application lies in that the automated stem cell detection method, system, terminal, and storage medium perform a weighted summation of the cell prediction results of the n enhanced images corresponding to each cell image to improve the reliability of the labels; by adding the weighted summation result of each cell image to the initial cell marker of that image, degradation of model performance is prevented; cell tracking is performed on the combined result, the training labels are updated according to the tracking result, and iterative training is performed again to obtain the final cell detection model.
  • The embodiments of the present application do not require manual labeling, and the training process is simple, which reduces labor costs while obtaining better performance, greatly reduces training costs, and improves training efficiency.
  • Fig. 1 is a flow chart of the automated stem cell detection method of the embodiment of the present application.
  • Fig. 2 is a schematic diagram of the overlapping-area calculation of the embodiment of the present application.
  • Figure 3 is a schematic diagram of the cell tracking results of the embodiment of the present application.
  • FIG. 4 is a schematic structural diagram of an automated stem cell detection system according to an embodiment of the present application.
  • FIG. 5 is a schematic structural diagram of a terminal according to an embodiment of the present application.
  • FIG. 6 is a schematic structural diagram of a storage medium according to an embodiment of the present application.
  • FIG. 1 is a flow chart of the automated stem cell detection method of the embodiment of the present application.
  • the automatic stem cell detection method of the embodiment of the present application comprises the following steps:
  • the initial cell marker acquisition method of the cell image is specifically: obtaining the corresponding initial cell marker by processing the fluorescence image corresponding to the cell image, or performing cell detection on the cell image based on an unsupervised cell detector to obtain the initial cell marker.
  • S2 Perform data enhancement on the cell image to obtain an enhanced cell image training set, and use the initial cell label as the initial training label of the cell image training set;
  • The data enhancement method for cell images is as follows: perform brightness, contrast, scaling, rotation, cropping, and mirror-filling operations on each cell image in turn to obtain the n enhanced images corresponding to each cell image, and apply the same scaling, rotation, cropping, and mirror-filling operations, in the same order, to the initial cell markers of each cell image to generate the cell markers of each enhanced image.
  • The parameter selection for each data enhancement operation follows the principle of non-degeneration; that is, when the cell prediction result obtained after the first round of model training on the enhanced cell images is compared with the initial cell markers of the corresponding cell image, the number of predicted cell markers must not decrease.
  • The same operations are likewise applied to the initial markers of each cell image.
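As a minimal sketch of this augmentation step (assuming NumPy and single-channel images; the helper names, 90-degree rotations, and parameter ranges are illustrative choices, not taken from the patent), the photometric operations touch only the image, while the geometric operations are repeated on the marker mask so the labels stay aligned:

```python
import numpy as np

def augment_pair(image, marker, rng):
    """One augmentation pass: photometric ops (brightness, contrast) apply
    to the image only; geometric ops (rotation, crop, mirror fill) apply
    to both image and marker so the labels stay aligned."""
    img = image.astype(np.float32)
    img = img * rng.uniform(0.9, 1.1) + rng.uniform(-10.0, 10.0)  # contrast, brightness
    img = np.clip(img, 0.0, 255.0)

    k = int(rng.integers(0, 4))            # rotation by a multiple of 90 degrees
    img, mrk = np.rot90(img, k), np.rot90(marker, k)

    h, w = img.shape
    dy = int(rng.integers(0, h // 8 + 1))  # random crop margins
    dx = int(rng.integers(0, w // 8 + 1))
    img = img[dy:h - dy, dx:w - dx]
    mrk = mrk[dy:h - dy, dx:w - dx]
    # mirror-fill back to the original size so all n images share one shape
    img = np.pad(img, ((dy, dy), (dx, dx)), mode="reflect")
    mrk = np.pad(mrk, ((dy, dy), (dx, dx)), mode="reflect")
    return img, mrk

def make_augmented_set(image, marker, n, seed=0):
    """Produce the n enhanced image/marker pairs for one cell image."""
    rng = np.random.default_rng(seed)
    return [augment_pair(image, marker, rng) for _ in range(n)]
```

Restricting rotations to multiples of 90 degrees keeps the marker mask exactly binary; arbitrary angles would require interpolation and re-thresholding of the mask.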
  • S3 Input the cell image training set into the deep learning model for the first round of model training, and output the first round of cell prediction results of the cell image training set through the deep learning model;
  • the deep learning model is the U-Net model, and binary cross entropy is used as the loss function during model training.
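The patent specifies only that the model is a U-Net trained with binary cross-entropy; the per-pixel loss itself is simple enough to sketch directly (a NumPy version for illustration; a real pipeline would use a framework's fused implementation):

```python
import numpy as np

def binary_cross_entropy(pred, target, eps=1e-7):
    """Mean per-pixel binary cross-entropy between predicted foreground
    probabilities and the 0/1 cell-marker mask; eps guards log(0)."""
    p = np.clip(pred, eps, 1.0 - eps)
    return float(np.mean(-(target * np.log(p) + (1.0 - target) * np.log(1.0 - p))))
```

A uniformly uncertain prediction (p = 0.5 everywhere) scores ln 2 ≈ 0.693, which is a handy sanity check when wiring up training.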
  • the embodiment of the present application uses weighted summation to improve the reliability of cell markers.
  • The weight of the pixel value corresponding to each enhanced image is 1/n. Due to the uncertainty of model training, the direction of parameter updates may deviate from the expected direction. The purpose of adding the weighted summation result to the cell markers during model training is to prevent the performance of subsequent model training from gradually regressing when the current round of cell prediction results is worse than the existing cell markers.
  • Due to the complex characteristics of some cells, it is difficult for the model to fully learn their features within a limited number of parameter-update rounds; therefore, in the cell prediction results, the prediction effect for this type of complex cell is poor. In the next round of model training, such complex cells are learned more fully, preventing model performance degradation.
  • S5 Perform cell tracking on the cell image according to the overlapping area of the new cell marker in adjacent frames, and obtain the cell tracking result;
  • Cell tracking is performed by calculating the overlapping area (overlap) of cell markers in adjacent frames.
  • As shown in FIG. 2, a schematic diagram of the overlapping-area calculation of the embodiment of the present application: first calculate the areas A_t and A_{t+1} of a given cell marker in frame t and frame t+1 (i.e., the next frame), then calculate the overlapping area of the marker between the two frames, i.e., A_t ∩ A_{t+1}, and judge whether the ratio of the overlapping area to the marker's area in frame t, (A_t ∩ A_{t+1}) / A_t, is greater than the set threshold. If it is, the markers in frame t and frame t+1 are judged to be the same cell, and so on, to obtain the cell tracking result.
  • In the embodiment of the present application, the threshold is set to 0.1.
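The frame-to-frame linking test can be written directly from the ratio above (a sketch; `mask_t` and `mask_t1` are assumed to be boolean masks of one candidate marker in frames t and t+1):

```python
import numpy as np

def same_cell(mask_t, mask_t1, threshold=0.1):
    """Link a marker across adjacent frames when
    |A_t ∩ A_{t+1}| / |A_t| > threshold (0.1 in the embodiment)."""
    area_t = int(mask_t.sum())
    if area_t == 0:
        return False                     # nothing to match in frame t
    overlap = int(np.logical_and(mask_t, mask_t1).sum())
    return overlap / area_t > threshold
```

With the low 0.1 threshold, even a cell that drifts or grows between frames is still linked, at the cost of occasionally linking distinct but touching cells.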
  • S6 Perform false tracking object detection on the cell tracking result, and generate a training label for the next round of model training after eliminating the detected cell markers of the false tracking object;
  • FIG. 3 is a schematic diagram of cell tracking results, where (a) shows ideal cell tracking results with a high degree of continuity, while (b), (c), and (d) show actual cell tracking results in which the number of consecutively tracked frames is small and continuity is weak. The embodiment of the present application therefore removes such erroneous tracking objects by analyzing the cell tracking results.
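One way to encode this pruning rule (a hedged sketch: tracks are represented as sorted frame-index lists, and "re-tracking within the next τ frames" is approximated as tolerating gaps of at most τ between detections; the patent does not spell out the data structure):

```python
def max_consecutive_run(frames):
    """Length of the longest run of consecutive frame indices."""
    best = cur = 1
    for a, b in zip(frames, frames[1:]):
        cur = cur + 1 if b == a + 1 else 1
        best = max(best, cur)
    return best

def filter_tracks(tracks, tau):
    """Keep a track if it persists for more than tau consecutive frames,
    or if every gap between its detections can be bridged within tau
    frames; otherwise treat it as an erroneous tracking object."""
    kept = {}
    for tid, frames in tracks.items():
        if max_consecutive_run(frames) > tau:
            kept[tid] = frames                  # clearly a real cell
        elif len(frames) > 1 and all(b - a <= tau for a, b in zip(frames, frames[1:])):
            kept[tid] = frames                  # re-associated within tau frames
    return kept
```

Markers belonging to tracks that fail both tests would then be erased from the label images before the next training round.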
  • S8 Iteratively execute S4-S7 until the set number of model training times is reached, and a trained cell prediction model is obtained;
  • The number of model training rounds is set to 5; that is, after 5 rounds of training, a cell prediction model with good performance can be obtained. The training labels of the training-set cell images are updated according to the results of each round of training, without manual participation, which greatly reduces the cost of training and improves training efficiency.
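The overall S4-S7 loop, iterated for the 5 rounds the embodiment uses, can be sketched as a small driver; the component functions are passed in as parameters, since the patent describes their behavior but not their interfaces:

```python
def self_training_loop(images, markers, predict, update, track, prune, fit, rounds=5):
    """Hypothetical driver for one full training run: each round predicts,
    fuses predictions into the markers, tracks cells, prunes erroneous
    tracks, and retrains on the updated labels; no manual labeling."""
    labels = markers
    for _ in range(rounds):
        preds = predict(images)         # S4: current-round predictions
        labels = update(labels, preds)  # S4: weighted-sum marker update
        tracks = track(labels)          # S5: overlap-based tracking
        labels = prune(labels, tracks)  # S6: remove erroneous tracks
        fit(images, labels)             # S7: next round of model training
    return labels
```

Keeping the loop free of any manual-annotation step is the point: the tracking-based pruning is what replaces a human reviewer between rounds.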
  • S9 Perform cell detection and tracking on the image of the cell to be detected according to the trained cell prediction model.
  • The automated stem cell detection method of the embodiment of the present application improves the credibility of the markers by performing a weighted summation of the cell prediction results of the n enhanced images corresponding to each cell image; by adding the weighted summation result of each cell image to the initial cell marker of that image, model performance degradation is prevented; cell tracking is performed on the combined result, the training labels are updated according to the tracking result, and iterative training is performed again to obtain the final cell detection model.
  • the embodiment of the present application does not require manual labeling, and the training process is simple, which reduces labor costs and obtains better performance, greatly reduces training costs, and improves training efficiency.
  • FIG. 4 is a schematic structural diagram of the automated stem cell detection system of the embodiment of the present application.
  • the automated stem cell detection system 40 of the embodiment of the present application includes:
  • Data acquisition module 41 used to acquire cell images, generate a cell image training set, and use the initial cell label of the cell image as the initial training label of the cell image training set;
  • Model training module 42 used to input the cell image training set into the deep learning model for the first round of model training, and output the first round of cell prediction results of the cell image training set through the deep learning model;
  • Cell tracking module 43 used to update the initial cell marker of the cell image according to the cell prediction result, and perform cell tracking on the cell image according to the updated cell marker, to obtain the cell tracking result;
  • Data update module 44 used to update the initial training labels of the cell image training set according to the cell tracking results, and to input the updated cell image training set into the deep learning model for iterative training to obtain a trained cell detection model; the trained cell prediction model then performs cell detection and tracking on the image of the cells to be detected.
  • FIG. 5 is a schematic diagram of a terminal structure in an embodiment of the present application.
  • the terminal 50 includes a processor 51 and a memory 52 coupled to the processor 51 .
  • the memory 52 stores program instructions for realizing the above automatic stem cell detection method.
  • the processor 51 is used to execute the program instructions stored in the memory 52 to control the automated stem cell detection.
  • The processor 51 may also be referred to as a CPU (Central Processing Unit).
  • the processor 51 may be an integrated circuit chip with signal processing capabilities.
  • The processor 51 can also be a general-purpose processor, a digital signal processor (DSP), an application-specific integrated circuit (ASIC), a field-programmable gate array (FPGA) or other programmable logic device, a discrete gate or transistor logic device, or discrete hardware components.
  • a general-purpose processor may be a microprocessor, or the processor may be any conventional processor, or the like.
  • FIG. 6 is a schematic structural diagram of a storage medium according to an embodiment of the present application.
  • The storage medium of the embodiment of the present application stores a program file 61 capable of implementing all of the above methods. The program file 61 may be stored in the storage medium in the form of a software product and includes several instructions for causing a computer device (which may be a personal computer, a server, a network device, etc.) or a processor to execute all or part of the steps of the methods of the various embodiments of the present invention.
  • The aforementioned storage media include media that can store program code, such as USB flash drives, removable hard disks, read-only memory (ROM), random access memory (RAM), magnetic disks, and optical discs, as well as terminal devices such as computers, servers, mobile phones, and tablets.

Landscapes

  • Engineering & Computer Science (AREA)
  • Physics & Mathematics (AREA)
  • Theoretical Computer Science (AREA)
  • Computer Vision & Pattern Recognition (AREA)
  • General Physics & Mathematics (AREA)
  • Software Systems (AREA)
  • Artificial Intelligence (AREA)
  • Data Mining & Analysis (AREA)
  • Evolutionary Computation (AREA)
  • Medical Informatics (AREA)
  • Geometry (AREA)
  • Computing Systems (AREA)
  • General Engineering & Computer Science (AREA)
  • Mathematical Physics (AREA)
  • Image Analysis (AREA)

Abstract

The present application relates to an automated stem cell detection method and system, a terminal, and a storage medium. The method comprises the steps of: acquiring cell images and generating a cell image training set; inputting the cell image training set into a deep learning model for a first round of model training, and outputting, by means of the deep learning model, first-round cell prediction results for the cell image training set; updating initial cell markers of the cell images according to the cell prediction results, and performing cell tracking on the cell images according to the updated cell markers, so as to obtain cell tracking results; and updating the initial training labels of the cell image training set according to the cell tracking results, and inputting the updated cell image training set into the deep learning model for iterative training, so as to obtain a trained cell detection model. By means of the embodiments of the present application, manual labeling is not required and the training process is simple, which achieves relatively good performance, significantly reduces training cost, and improves training efficiency.
PCT/CN2021/113808 2021-08-20 2021-08-20 Procédé et système de détection automatisée de cellule souche, terminal, et support de stockage WO2023019559A1 (fr)

Priority Applications (1)

Application Number Priority Date Filing Date Title
PCT/CN2021/113808 WO2023019559A1 (fr) 2021-08-20 2021-08-20 Procédé et système de détection automatisée de cellule souche, terminal, et support de stockage

Applications Claiming Priority (1)

Application Number Priority Date Filing Date Title
PCT/CN2021/113808 WO2023019559A1 (fr) 2021-08-20 2021-08-20 Procédé et système de détection automatisée de cellule souche, terminal, et support de stockage

Publications (1)

Publication Number Publication Date
WO2023019559A1 true WO2023019559A1 (fr) 2023-02-23

Family

ID=85239391

Family Applications (1)

Application Number Title Priority Date Filing Date
PCT/CN2021/113808 WO2023019559A1 (fr) 2021-08-20 2021-08-20 Procédé et système de détection automatisée de cellule souche, terminal, et support de stockage

Country Status (1)

Country Link
WO (1) WO2023019559A1 (fr)

Citations (5)

* Cited by examiner, † Cited by third party
Publication number Priority date Publication date Assignee Title
CN106127809A (zh) * 2016-06-22 2016-11-16 浙江工业大学 一种显微图像序列中癌细胞轨迹追踪与关联方法
CN107944360A (zh) * 2017-11-13 2018-04-20 中国科学院深圳先进技术研究院 一种诱导多能干细胞识别方法、系统及电子设备
CN108256408A (zh) * 2017-10-25 2018-07-06 四川大学 一种基于深度学习的干细胞追踪方法
US20210019499A1 (en) * 2018-03-20 2021-01-21 Shimadzu Corporation Cell Image Analysis Apparatus, Cell Image Analysis System, Method of Generating Training Data, Method of Generating Trained Model, Training Data Generation Program, and Method of Producing Training Data
CN113192107A (zh) * 2021-05-06 2021-07-30 上海锵玫人工智能科技有限公司 一种目标识别追踪方法及机器人

Patent Citations (5)

* Cited by examiner, † Cited by third party
Publication number Priority date Publication date Assignee Title
CN106127809A (zh) * 2016-06-22 2016-11-16 浙江工业大学 一种显微图像序列中癌细胞轨迹追踪与关联方法
CN108256408A (zh) * 2017-10-25 2018-07-06 四川大学 一种基于深度学习的干细胞追踪方法
CN107944360A (zh) * 2017-11-13 2018-04-20 中国科学院深圳先进技术研究院 一种诱导多能干细胞识别方法、系统及电子设备
US20210019499A1 (en) * 2018-03-20 2021-01-21 Shimadzu Corporation Cell Image Analysis Apparatus, Cell Image Analysis System, Method of Generating Training Data, Method of Generating Trained Model, Training Data Generation Program, and Method of Producing Training Data
CN113192107A (zh) * 2021-05-06 2021-07-30 上海锵玫人工智能科技有限公司 一种目标识别追踪方法及机器人

Non-Patent Citations (1)

* Cited by examiner, † Cited by third party
Title
JU MENGXI, LI XINWEI, LI ZHANGYONG: "Detection of white blood cells in microscopic leucorrhea images based on deep active learning", SHENGWU YIXUE GONGCHENGXUE ZAZHI = JOURNAL OF BIOMEDICAL ENGINEERING, SICHUAN DAXUE HUAXI YIYUAN, CN, vol. 37, no. 3, 25 June 2020 (2020-06-25), CN , pages 519 - 526, XP093036924, ISSN: 1001-5515, DOI: 10.7507/1001-5515.201909040 *

Similar Documents

Publication Publication Date Title
US11663293B2 (en) Image processing method and device, and computer-readable storage medium
CN108256562B (zh) 基于弱监督时空级联神经网络的显著目标检测方法及系统
US10643130B2 (en) Systems and methods for polygon object annotation and a method of training and object annotation system
Zhang et al. Non-rigid object tracking via deep multi-scale spatial-temporal discriminative saliency maps
Song et al. Seednet: Automatic seed generation with deep reinforcement learning for robust interactive segmentation
CN107292887B (zh) 一种基于深度学习自适应权重的视网膜血管分割方法
Wang et al. Inverse sparse tracker with a locally weighted distance metric
Babenko et al. Robust object tracking with online multiple instance learning
TWI832966B (zh) 用於使用多模態成像和集成機器學習模型的自動靶標和組織分割的方法和設備
US11030750B2 (en) Multi-level convolutional LSTM model for the segmentation of MR images
KR102037303B1 (ko) 캡슐 내시경의 위치를 추정하는 방법 및 장치
Yang et al. An improving faster-RCNN with multi-attention ResNet for small target detection in intelligent autonomous transport with 6G
CN111310609B (zh) 基于时序信息和局部特征相似性的视频目标检测方法
CN111340820B (zh) 图像分割方法、装置、电子设备及存储介质
Kim et al. Stasy: Score-based tabular data synthesis
Xie et al. Online multiple instance gradient feature selection for robust visual tracking
CN112861718A (zh) 一种轻量级特征融合人群计数方法及系统
CN116670687A (zh) 用于调整训练后的物体检测模型以适应域偏移的方法和系统
WO2023123847A1 (fr) Procédé et appareil d'apprentissage de modèle, procédé et appareil de traitement d'image, et dispositif, support de stockage et produit-programme d'ordinateur
CN112927266A (zh) 基于不确定性引导训练的弱监督时域动作定位方法及系统
CN116740362B (zh) 一种基于注意力的轻量化非对称场景语义分割方法及系统
WO2023019559A1 (fr) Procédé et système de détection automatisée de cellule souche, terminal, et support de stockage
CN113689395A (zh) 一种自动化干细胞检测方法、系统、终端以及存储介质
CN111553250A (zh) 一种基于人脸特征点的精准面瘫程度评测方法及装置
CN115862119A (zh) 基于注意力机制的人脸年龄估计方法及装置

Legal Events

Date Code Title Description
121 Ep: the epo has been informed by wipo that ep was designated in this application

Ref document number: 21953800

Country of ref document: EP

Kind code of ref document: A1

NENP Non-entry into the national phase

Ref country code: DE