TWI761109B - Cell density grouping method, cell density grouping device, electronic device and storage media - Google Patents


Info

Publication number
TWI761109B
Authority
TW
Taiwan
Prior art keywords
cell
density
detected
error value
image
Prior art date
Application number
TW110107746A
Other languages
Chinese (zh)
Other versions
TW202236160A (en)
Inventor
李宛真
盧志德
郭錦斌
Original Assignee
鴻海精密工業股份有限公司
Priority date (The priority date is an assumption and is not a legal conclusion. Google has not performed a legal analysis and makes no representation as to the accuracy of the date listed.)
Filing date
Publication date
Application filed by 鴻海精密工業股份有限公司
Priority to TW110107746A
Application granted
Publication of TWI761109B
Publication of TW202236160A

Landscapes

  • Image Analysis (AREA)

Abstract

The present application provides a cell density grouping method, a cell density grouping device, an electronic device and a storage medium. The method includes: inputting an image to be detected into the autoencoders of a preset number of density grouping models to obtain a preset number of reconstructed images to be detected, each density grouping model corresponding to one density range; inputting the image to be detected and each reconstructed image into the Siamese network model of the corresponding density grouping model to calculate a first error value; and determining the minimum first error value and determining the density range of the image to be detected based on the minimum first error value. The present application trains the autoencoders and Siamese network models on cell images of different densities to distinguish cell density ranges, thereby accurately grouping cell images within the same density range and improving the efficiency of grouping cell images by density.

Description

Cell density grouping method, device, electronic device and computer storage medium

The present application relates to the field of image processing, and in particular to a cell density grouping method, device, electronic device and computer storage medium.

In image processing, grouping cell images by density requires calculating the number and volume of the cells in an image and then calculating the proportion of the image that the cells occupy. Because this calculation process is complicated, it usually takes a large amount of time.

In view of the above, it is necessary to provide a cell density grouping method, device, electronic device and computer storage medium that enable rapid classification of cell density ranges.

A first aspect of the present application provides a cell density grouping method, the method comprising: inputting an image to be detected into the autoencoders of a preset number of density grouping models to obtain a preset number of reconstructed images to be detected, wherein each density grouping model is composed of an autoencoder and a Siamese network model, each density grouping model corresponds to one density range, and each reconstructed image corresponds to one density grouping model; inputting the image to be detected and each reconstructed image into the Siamese network model of the density grouping model corresponding to that reconstructed image, and calculating a first error value between the image to be detected and each reconstructed image, all first error values forming a first error value set, each first error value corresponding to one density grouping model; and determining the minimum first error value in the first error value set, and taking the density range of the density grouping model corresponding to the minimum first error value as the density range of the image to be detected.
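The three steps of the first aspect can be sketched in plain Python as follows; the `(autoencoder, siamese, density_range)` triples and the callables are illustrative assumptions for this sketch, not structures defined by the patent:

```python
def classify_density(image, models):
    """Assign a density range to an image to be detected.

    `models` is a list of (autoencoder, siamese, density_range) triples,
    one per preset density grouping model. Each autoencoder returns a
    reconstructed image; each Siamese model returns the first error value
    between the original image and its reconstruction.
    """
    # One first error value per density grouping model (the first error value set).
    errors = [siamese(image, autoencoder(image))
              for autoencoder, siamese, _ in models]
    # The density range of the model with the minimum first error value.
    return models[errors.index(min(errors))][2]
```

The model whose autoencoder reconstructs the image most faithfully (smallest Siamese error) determines the predicted density range.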

Optionally, the method further includes training the density grouping models, which comprises: dividing a cell image set by density range to obtain a preset number of cell training image sets, each corresponding to one density range; and performing a training operation on each cell training image set to obtain the preset number of density grouping models, each corresponding to one density range. The training operation includes: converting the cell training images in the cell training image set into cell training image vectors, all of which form a cell training image vector set; training an autoencoder with the cell training image vector set to obtain a trained autoencoder; inputting the cell training image vector set into the trained autoencoder of the density grouping model to obtain a cell reconstruction image vector set comprising cell reconstruction image vectors; and training a Siamese network model with the cell training image vector set and the cell reconstruction image vector set to obtain a trained Siamese network model. The trained autoencoder and the trained Siamese network model form the density grouping model.
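The per-range training flow described above might be orchestrated as in the following sketch; `labelled_images` with known densities and the two trainer callables are assumptions made for illustration:

```python
def train_density_models(labelled_images, density_ranges,
                         train_autoencoder, train_siamese):
    """Build one density grouping model per density range.

    `labelled_images` is a list of (image, density) pairs; the trainer
    callables stand in for the autoencoder and Siamese network training
    steps described in the text.
    """
    models = []
    for low, high in density_ranges:
        # Divide the cell image set by density range.
        subset = [img for img, density in labelled_images
                  if low < density <= high]
        # Train the autoencoder on this range's training image set.
        autoencoder = train_autoencoder(subset)
        # Reconstruct the training images with the trained autoencoder.
        reconstructions = [autoencoder(img) for img in subset]
        # Train the Siamese network on (original, reconstruction) pairs.
        siamese = train_siamese(subset, reconstructions)
        models.append((autoencoder, siamese, (low, high)))
    return models
```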

Optionally, training the autoencoder with the cell training image vector set to obtain the trained autoencoder includes: inputting each cell training image vector in the cell training image vector set into the encoding layer of the autoencoder to obtain a latent vector of the cell training image vector; inputting the latent vector into the decoding layer of the autoencoder to obtain a reconstructed image vector of the cell training image vector; and calculating a second error value between the cell training image vector and the reconstructed image vector with a preset error function, adjusting the parameters of the encoding layer and the decoding layer to minimize the error value, and thereby obtaining the autoencoder of the density grouping model.
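As a deliberately tiny illustration of the encode/decode/minimize loop above, the sketch below uses a single linear encoding layer and a single linear decoding layer trained by gradient descent; the architecture, learning rate and initialization are assumptions for illustration, not taken from the patent:

```python
import numpy as np

class TinyAutoencoder:
    """Linear autoencoder: an encoding layer maps the input to a latent
    (hidden) vector, and a decoding layer maps it back to a reconstruction."""

    def __init__(self, n_in, n_latent, lr=0.05, seed=0):
        rng = np.random.default_rng(seed)
        self.W_enc = rng.normal(0.0, 0.1, (n_latent, n_in))  # encoding layer
        self.W_dec = rng.normal(0.0, 0.1, (n_in, n_latent))  # decoding layer
        self.lr = lr

    def __call__(self, x):
        return self.W_dec @ (self.W_enc @ x)  # reconstruction of x

    def train_step(self, x):
        z = self.W_enc @ x        # latent vector from the encoding layer
        x_hat = self.W_dec @ z    # reconstructed image vector
        err = x_hat - x           # gradient of 0.5 * squared error w.r.t. x_hat
        # Backpropagate through the two linear layers.
        grad_dec = np.outer(err, z)
        grad_enc = np.outer(self.W_dec.T @ err, x)
        self.W_dec -= self.lr * grad_dec  # adjust decoding-layer parameters
        self.W_enc -= self.lr * grad_enc  # adjust encoding-layer parameters
        return float(np.mean(np.abs(err)))  # the second error value (here, MAE)
```

Repeated calls to `train_step` drive the second error value down, which is the "adjust the parameters to minimize the error value" step of the text.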

Optionally, the Siamese network model includes a first neural network and a second neural network, and training the Siamese network model with the cell training image vector set and the cell reconstruction image vector set to obtain the trained Siamese network model includes: inputting the cell training image vectors in the cell training image vector set into the first neural network to obtain a first feature map; inputting the cell reconstruction image vectors in the cell reconstruction image vector set into the second neural network to obtain a second feature map; and calculating a third error value between the first feature map and the second feature map, and optimizing the first neural network and the second neural network according to the third error value to obtain the trained Siamese network model.
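A minimal sketch of the two-branch comparison: both branches share the same structure and weights (as the next paragraph requires), and the third error value is the mean absolute difference between the two feature maps. The single `tanh` layer standing in for each branch's neural network is an illustrative assumption:

```python
import numpy as np

def siamese_error(x, x_recon, W):
    """Run both vectors through branches with identical structure and
    identical weights W, then return the mean absolute difference
    between the two feature maps (the third error value)."""
    first_feature_map = np.tanh(W @ x)          # training-image branch
    second_feature_map = np.tanh(W @ x_recon)   # reconstruction branch
    return float(np.mean(np.abs(first_feature_map - second_feature_map)))
```

Because the weights are shared, an image and a faithful reconstruction of it yield nearly identical feature maps and hence a small third error value.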

Optionally, the weights of the first neural network and the second neural network are the same, and the structures of the first neural network and the second neural network are the same.

Optionally, inputting the cell training image vectors in the cell training image vector set into the first neural network to obtain the first feature map includes: inputting a cell training image vector into the first neural network; and computing the cell training image vector with the matrices in the first neural network to obtain the features of the cell training image vector, which form the first feature map.

Optionally, calculating the third error value between the first feature map and the second feature map includes: calculating the mean absolute error between the first feature map and the second feature map with a mean absolute error function, and taking the mean absolute error as the third error value between the first feature map and the second feature map.

A second aspect of the present application provides a cell density grouping device, the device comprising: an image reconstruction module, configured to input an image to be detected into the autoencoders of a preset number of density grouping models to obtain a preset number of reconstructed images to be detected, wherein each density grouping model is composed of an autoencoder and a Siamese network model, each density grouping model corresponds to one density range, and each reconstructed image corresponds to one density grouping model; an error value calculation module, configured to input the image to be detected and each reconstructed image into the Siamese network model of the density grouping model corresponding to that reconstructed image, and to calculate a first error value between the image to be detected and each reconstructed image, all first error values forming a first error value set, each first error value corresponding to one density grouping model; and a density range determination module, configured to determine the minimum first error value in the first error value set and to take the density range of the density grouping model corresponding to the minimum first error value as the density range of the image to be detected.

A third aspect of the present application provides an electronic device comprising: a memory storing at least one instruction; and a processor executing the instructions stored in the memory to implement the cell density grouping method.

A fourth aspect of the present application provides a computer storage medium on which a computer program is stored; when the computer program is executed by a processor, the cell density grouping method is implemented.

By training autoencoders to reconstruct images using cell images of different density ranges, and training Siamese network models to distinguish cell density ranges, the present application can accurately group cell images of the same density range and improve the efficiency of grouping cell images by density.

30: Cell density grouping device

301: Image reconstruction module

302: Error value calculation module

303: Density range determination module

6: Electronic device

61: Memory

62: Processor

63: Computer program

S11~S13: Steps

FIG. 1 is a flowchart of a cell density grouping method according to an embodiment of the present application.

FIG. 2 is a structural diagram of a cell density grouping device according to an embodiment of the present application.

FIG. 3 is a schematic diagram of an electronic device implementing the cell density grouping method according to an embodiment of the present application.

In order to more clearly understand the above objects, features and advantages of the present invention, the present invention is described in detail below with reference to the accompanying drawings and specific embodiments. It should be noted that, where there is no conflict, the embodiments of the present application and the features in the embodiments may be combined with one another.

Many specific details are set forth in the following description to facilitate a full understanding of the present invention; the described embodiments are only some, not all, of the embodiments of the present invention. Based on the embodiments of the present invention, all other embodiments obtained by those of ordinary skill in the art without creative effort fall within the protection scope of the present invention.

Unless otherwise defined, all technical and scientific terms used herein have the same meanings as commonly understood by those skilled in the art to which the present invention belongs. The terms used herein in the description of the present invention are for the purpose of describing specific embodiments only and are not intended to limit the present invention.

Preferably, the cell density grouping method of the present application is applied in one or more electronic devices. An electronic device is a device capable of automatically performing numerical calculation and/or information processing according to preset or stored instructions; its hardware includes, but is not limited to, a microprocessor, an application-specific integrated circuit (ASIC), a field-programmable gate array (FPGA), a digital signal processor (DSP), an embedded device, and the like.

The electronic device may be a computing device such as a desktop computer, a notebook computer, a tablet computer or a cloud server. The electronic device can interact with a user through a keyboard, a mouse, a remote control, a touch pad or a voice-control device.

Embodiment 1

FIG. 1 is a flowchart of the cell density grouping method in an embodiment of the present application. The cell density grouping method is applied in an electronic device. According to different requirements, the order of the steps in the flowchart can be changed, and some steps can be omitted.

Referring to FIG. 1, the cell density grouping method specifically includes the following steps:

Step S11: input the image to be detected into the autoencoders of a preset number of density grouping models to obtain a preset number of reconstructed images to be detected, wherein each density grouping model is composed of an autoencoder and a Siamese network model, each density grouping model corresponds to one density range, and each reconstructed image corresponds to one density grouping model.

For example, the preset number may be 4, and the 4 density grouping models correspond to the 4 density ranges (0, 40%), (40%, 60%), (60%, 80%) and (80%, 100%). The image to be detected is input into the autoencoders of the 4 density grouping models to obtain 4 reconstructed images to be detected: reconstructed image A, reconstructed image B, reconstructed image C and reconstructed image D.

Step S12: input the image to be detected and each reconstructed image into the Siamese network model of the density grouping model corresponding to that reconstructed image, and calculate the first error value between the image to be detected and each reconstructed image; all first error values form the first error value set, and each first error value corresponds to one density grouping model.

For example, the image to be detected and reconstructed image A are input into the Siamese network model of the density grouping model corresponding to the range (0, 40%); the image to be detected and reconstructed image B into the Siamese network model corresponding to (40%, 60%); the image to be detected and reconstructed image C into the Siamese network model corresponding to (60%, 80%); and the image to be detected and reconstructed image D into the Siamese network model corresponding to (80%, 100%). The first error value between the image to be detected and each reconstructed image is calculated; all first error values form the first error value set, and each first error value corresponds to one density grouping model.

Step S13: determine the minimum first error value in the first error value set, and take the density range of the density grouping model corresponding to the minimum first error value as the density range of the image to be detected.

For example, when the minimum first error value in the first error value set is 10%, the density range (60%, 80%) of the density grouping model corresponding to that minimum first error value is taken as the density range of the image to be detected.

In at least one embodiment of the present application, the method further includes training the density grouping models, which comprises: dividing a cell image set by density range to obtain a preset number of cell training image sets, each corresponding to one density range; and performing a training operation on each cell training image set to obtain the preset number of density grouping models, each corresponding to one density range. The training operation includes: converting the cell training images in the cell training image set into cell training image vectors, all of which form a cell training image vector set; training an autoencoder with the cell training image vector set to obtain a trained autoencoder; inputting the cell training image vector set into the trained autoencoder of the density grouping model to obtain a cell reconstruction image vector set comprising cell reconstruction image vectors; and training a Siamese network model with the cell training image vector set and the cell reconstruction image vector set to obtain a trained Siamese network model. The trained autoencoder and the trained Siamese network model form the density grouping model.

For example, the cell image set is divided by the 4 density ranges (0, 40%), (40%, 60%), (60%, 80%) and (80%, 100%) to obtain four cell training image sets; the training operation is performed on each cell training image set to obtain 4 density grouping models, each corresponding to one density range.

In at least one embodiment of the present application, training the autoencoder with the cell training image vector set to obtain the trained autoencoder includes: inputting each cell training image vector in the cell training image vector set into the encoding layer of the autoencoder to obtain a latent vector of the cell training image vector; inputting the latent vector into the decoding layer of the autoencoder to obtain a reconstructed image vector of the cell training image vector; and calculating the second error value between the cell training image vector and the reconstructed image vector with a preset error function, adjusting the parameters of the encoding layer and the decoding layer to minimize the error value, and thereby obtaining the autoencoder of the density grouping model.

In at least one embodiment of the present application, when the preset error function is the least absolute error function, the second error value between the cell training image vector and the reconstructed image vector is calculated as

$MAE = \frac{1}{n}\sum_{i=1}^{n}\left|y_i - \hat{y}_i\right|$

where $MAE$ is the least absolute error, $y_i$ is the $i$-th component of the cell training image vector, $\hat{y}_i$ is the $i$-th component of the reconstructed image vector, and $n$ is the dimension of the cell training image vector and the reconstructed image vector.

In at least one embodiment of the present application, when the preset error function is the mean squared error function, the second error value between the cell training image vector and the reconstructed image vector is calculated as

$RSE = \frac{1}{n}\sum_{i=1}^{n}\left(y_i - \hat{y}_i\right)^2$

where $RSE$ denotes the mean squared error, $y_i$ is the $i$-th component of the cell training image vector, $\hat{y}_i$ is the $i$-th component of the reconstructed image vector, and $n$ is the dimension of the cell training image vector and the reconstructed image vector.

In at least one embodiment of the present application, the Siamese network model includes a first neural network and a second neural network, and training the Siamese network model with the cell training image vector set and the cell reconstruction image vector set to obtain the trained Siamese network model includes: inputting the cell training image vectors in the cell training image vector set into the first neural network to obtain a first feature map; inputting the cell reconstruction image vectors in the cell reconstruction image vector set into the second neural network to obtain a second feature map; and calculating the third error value between the first feature map and the second feature map, and optimizing the first neural network and the second neural network according to the third error value to obtain the trained Siamese network model.

In at least one embodiment of the present application, the weights of the first neural network and the second neural network are the same, and the structures of the first neural network and the second neural network are the same.

In at least one embodiment of the present application, inputting the cell training image vectors in the cell training image vector set into the first neural network to obtain the first feature map includes: inputting a cell training image vector into the first neural network; and computing the cell training image vector with the matrices in the first neural network to obtain the features of the cell training image vector, which form the first feature map.

In at least one embodiment of the present application, calculating the third error value between the first feature map and the second feature map includes: calculating the mean absolute error between the first feature map and the second feature map with a mean absolute error function, and taking the mean absolute error as the third error value between the first feature map and the second feature map.

In other embodiments of the present application, calculating the third error value between the first feature map and the second feature map includes: calculating the mean squared error between the first feature map and the second feature map with a mean squared error function, and taking the mean squared error as the third error value between the first feature map and the second feature map.

By training autoencoders to reconstruct images using cell images of different density ranges, and training Siamese network models to distinguish cell density ranges, the present application can accurately group cell images of the same density range and improve the efficiency of grouping cell images by density.

Embodiment 2

FIG. 2 is a structural diagram of the cell density grouping device 30 in an embodiment of the present application.

In some embodiments, the cell density grouping device 30 runs in an electronic device. The cell density grouping device 30 may include a plurality of functional modules composed of program code segments. The program code of each program segment in the cell density grouping device 30 can be stored in a memory and executed by at least one processor.

In this embodiment, the cell density grouping device 30 can be divided into a plurality of functional modules according to the functions it performs. Referring to FIG. 2, the cell density grouping device 30 may include an image reconstruction module 301, an error value calculation module 302 and a density range determination module 303. A module referred to in this application is a series of computer program segments that can be executed by at least one processor and perform fixed functions, and that are stored in a memory. The functions of each module will be described in detail in the following paragraphs.

The image reconstruction module 301 inputs the image to be detected into the autoencoders of a preset number of density grouping models to obtain a preset number of reconstructed images to be detected; each density grouping model is composed of an autoencoder and a Siamese network model, each density grouping model corresponds to one density range, and each reconstructed image corresponds to one density grouping model.

The error value calculation module 302 inputs the image to be detected and each reconstructed image into the Siamese network model of the density grouping model corresponding to that reconstructed image, and calculates the first error value between the image to be detected and each reconstructed image; all first error values form the first error value set, and each first error value corresponds to one density grouping model.

The density range determination module 303 determines the minimum first error value in the first error value set, and takes the density range of the density grouping model corresponding to the minimum first error value as the density range of the image to be detected.

In at least one embodiment of the present application, the device further includes a density grouping model training module.

The density grouping model training module is used to train the density grouping models, which includes: dividing a cell image set by density range to obtain a preset number of cell training image sets, each cell training image set corresponding to one density range; and performing a training operation on each cell training image set to obtain a preset number of density grouping models, each density grouping model corresponding to one density range. The training operation includes: converting the cell training images in the cell training image set into cell training image vectors, all of which form a cell training image vector set; training an autoencoder with the cell training image vector set to obtain a trained autoencoder; inputting the cell training image vector set into the trained autoencoder of the density grouping model to obtain a cell reconstructed image vector set, which includes cell reconstructed image vectors; and training a twin network model with the cell training image vector set and the cell reconstructed image vector set to obtain a trained twin network model. The trained autoencoder and the trained twin network model form the density grouping model.

In at least one embodiment of the present application, training the autoencoder with the cell training image vector set by the density grouping model training module to obtain the trained autoencoder includes: inputting a cell training image vector from the cell training image vector set into the encoding layer of the autoencoder to obtain a latent vector of the cell training image vector; inputting the latent vector into the decoding layer of the autoencoder to obtain a reconstructed image vector of the cell training image vector; and calculating a second error value between the cell training image vector and the reconstructed image vector with a preset error function, adjusting the parameters of the encoding layer and the decoding layer to minimize the error value, thereby obtaining the autoencoder of the density grouping model.
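The encode–decode–minimize loop above can be illustrated with a deliberately tiny sketch: one scalar weight per layer and plain gradient descent on the squared second error value. This toy is an assumption for illustration and not the patent's actual network.

```python
def train_autoencoder(samples, lr=0.01, epochs=500):
    """Train a one-weight encoder and one-weight decoder on scalar samples."""
    w_e, w_d = 0.5, 0.5            # encoding / decoding layer parameters
    for _ in range(epochs):
        for x in samples:
            h = w_e * x            # latent vector (here a single scalar)
            y = w_d * h            # reconstructed image vector
            err = y - x            # gradient helper for squared error
            # Adjust both layers to minimize the second error value.
            w_e -= lr * 2 * err * w_d * x
            w_d -= lr * 2 * err * h
    return w_e, w_d

w_e, w_d = train_autoencoder([1.0, 2.0, 3.0])
# After training, the composed map approximates identity: w_e * w_d ≈ 1.
```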

In at least one embodiment of the present application, when the preset error function is the least absolute error function, the second error value between the cell training image vector and the reconstructed image vector is calculated as

$MAE = \frac{1}{n}\sum_{i=1}^{n}\left|y_i - \hat{y}_i\right|$

where MAE is the least absolute error function, $y_i$ is the i-th component of the cell training image vector, $\hat{y}_i$ is the i-th component of the reconstructed image vector, and n is the vector dimension of the cell training image vector and the reconstructed image vector.
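A small numeric sketch of the formula above, with assumed example vectors:

```python
def mae(y, y_hat):
    """Mean absolute error between two equal-length vectors."""
    assert len(y) == len(y_hat)
    return sum(abs(a - b) for a, b in zip(y, y_hat)) / len(y)

mae([1.0, 2.0, 3.0], [1.5, 2.0, 2.0])  # (0.5 + 0.0 + 1.0) / 3 = 0.5
```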

In at least one embodiment of the present application, when the preset error function is the mean square error function, the second error value between the cell training image vector and the reconstructed image vector is calculated as

$RSE = \frac{1}{n}\sum_{i=1}^{n}\left(y_i - \hat{y}_i\right)^2$

where RSE is the mean square error function, $y_i$ is the i-th component of the cell training image vector, $\hat{y}_i$ is the i-th component of the reconstructed image vector, and n is the vector dimension of the cell training image vector and the reconstructed image vector.
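The same example vectors, evaluated with the mean square error variant:

```python
def mse(y, y_hat):
    """Mean square error between two equal-length vectors."""
    assert len(y) == len(y_hat)
    return sum((a - b) ** 2 for a, b in zip(y, y_hat)) / len(y)

mse([1.0, 2.0, 3.0], [1.0, 2.0, 1.0])  # (0 + 0 + 4) / 3 ≈ 1.333
```

Relative to MAE, squaring penalizes large per-component reconstruction errors more heavily, which is the usual reason for choosing one preset error function over the other.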

In at least one embodiment of the present application, the twin network model includes a first neural network and a second neural network.

In at least one embodiment of the present application, training the twin network model with the cell training image vector set and the cell reconstructed image vector set by the density grouping model training module to obtain the trained twin network model includes: inputting the cell training image vectors in the cell training image vector set into the first neural network to obtain a first feature map; inputting the cell reconstructed image vectors in the cell reconstructed image vector set into the second neural network to obtain a second feature map; and calculating a third error value between the first feature map and the second feature map, and optimizing the first neural network and the second neural network according to the third error value to obtain the trained twin network model.

In at least one embodiment of the present application, the weights of the first neural network are the same as the weights of the second neural network, and the structure of the first neural network is the same as the structure of the second neural network.

In at least one embodiment of the present application, inputting the cell training image vectors in the cell training image vector set into the first neural network by the density grouping model training module to obtain the first feature map includes: inputting the cell training image vector into the first neural network; and computing the cell training image vector with the matrices in the first neural network to obtain the features of the cell training image vector, which form the first feature map.
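The weight-sharing idea of the two branches can be sketched as follows. A single assumed 2×2 weight matrix stands in for the whole network: because both branches apply the same matrix, identical inputs always produce a zero third error value, and any difference in the third error value reflects only the difference between the training vector and its reconstruction.

```python
def matvec(w, v):
    """Apply weight matrix w to vector v — the 'feature map' computation."""
    return [sum(w_ij * v_j for w_ij, v_j in zip(row, v)) for row in w]

def third_error(w, training_vec, reconstructed_vec):
    """Mean absolute error between the two branches' feature maps."""
    f1 = matvec(w, training_vec)       # first neural network branch
    f2 = matvec(w, reconstructed_vec)  # second branch, shared weights
    return sum(abs(a - b) for a, b in zip(f1, f2)) / len(f1)

w = [[1.0, 0.5], [0.25, 1.0]]          # assumed shared weights
third_error(w, [2.0, 2.0], [2.0, 2.0])  # identical inputs → 0.0
```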

In at least one embodiment of the present application, calculating the third error value between the first feature map and the second feature map by the density grouping model training module includes: calculating the mean absolute error between the first feature map and the second feature map with a mean absolute error function, and taking the mean absolute error as the third error value between the first feature map and the second feature map.

In other embodiments of the present application, calculating the third error value between the first feature map and the second feature map includes: calculating the mean square error between the first feature map and the second feature map with a mean square error function, and taking the mean square error as the third error value between the first feature map and the second feature map.

By training autoencoders to reconstruct images from cell images of different density ranges and training twin network models to distinguish the density ranges, the present application can accurately group cell images of the same density range, improving the efficiency of grouping cell images by density.

Embodiment 3

FIG. 3 is a schematic diagram of an electronic device 6 according to an embodiment of the present application.

The electronic device 6 includes a memory 61, a processor 62, and a computer program 63 that is stored in the memory 61 and executable on the processor 62. When the processor 62 executes the computer program 63, the steps in the above embodiment of the cell density grouping method are implemented, for example steps S11 to S13 shown in FIG. 1. Alternatively, when the processor 62 executes the computer program 63, the functions of the modules/units in the above embodiment of the cell density grouping device are implemented, for example modules 301 to 303 in FIG. 2.

Exemplarily, the computer program 63 may be divided into one or more modules/units, which are stored in the memory 61 and executed by the processor 62 to complete the present application. The one or more modules/units may be a series of computer program instruction segments capable of performing specific functions, and the instruction segments describe the execution process of the computer program 63 in the electronic device 6. For example, the computer program 63 may be divided into the image reconstruction module 301, the error value calculation module 302, and the density range determination module 303 in FIG. 3; see Embodiment 2 for the specific functions of each module.

In this embodiment, the electronic device 6 may be a computing device such as a desktop computer, a notebook computer, a palmtop computer, or a cloud terminal device. Those skilled in the art will understand that the schematic diagram is only an example of the electronic device 6 and does not constitute a limitation on it; the electronic device 6 may include more or fewer components than shown, combine certain components, or use different components. For example, the electronic device 6 may further include input/output devices, network access devices, buses, and the like.

The processor 62 may be a central processing unit (CPU), another general-purpose processor, a digital signal processor (DSP), an application-specific integrated circuit (ASIC), a field-programmable gate array (FPGA) or other programmable logic device, a discrete gate or transistor logic device, a discrete hardware component, or the like. A general-purpose processor may be a microprocessor, or the processor 62 may be any conventional processor. The processor 62 is the control center of the electronic device 6 and connects the various parts of the entire electronic device 6 via various interfaces and lines.

The memory 61 may be used to store the computer program 63 and/or the modules/units. The processor 62 implements the various functions of the electronic device 6 by running or executing the computer programs and/or modules/units stored in the memory 61 and by calling data stored in the memory 61. The memory 61 may mainly include a program storage area and a data storage area, where the program storage area may store an operating system and application programs required by at least one function (such as a sound playback function or an image playback function), and the data storage area may store data created according to the use of the electronic device 6 (such as audio data or a phone book). In addition, the memory 61 may include high-speed random access memory, and may also include non-volatile memory such as a hard disk, a memory, a plug-in hard disk, a smart media card (SMC), a secure digital (SD) card, a flash card, at least one magnetic disk storage device, a flash memory device, or another volatile solid-state storage device.

If the modules/units integrated in the electronic device 6 are implemented in the form of software function modules and sold or used as independent products, they may be stored in a computer-readable storage medium. Based on this understanding, all or part of the processes in the methods of the above embodiments of the present application may also be completed by instructing the relevant hardware through a computer program. The computer program may be stored in a computer-readable storage medium, and when executed by a processor, it may implement the steps of each of the above method embodiments. The computer program includes computer program code, which may be in source code form, object code form, an executable file, or some intermediate form. The computer-readable medium may include: any entity or device capable of carrying the computer program code, a recording medium, a USB flash drive, a removable hard disk, a magnetic disk, an optical disc, a computer memory, a read-only memory (ROM), a random access memory (RAM), an electrical carrier signal, a telecommunication signal, a software distribution medium, and the like.

In the several embodiments provided in this application, it should be understood that the disclosed device and method may be implemented in other manners. For example, the device embodiments described above are merely illustrative; the division of the modules is only a division by logical function, and other divisions may be used in actual implementation.

In addition, the functional modules in the embodiments of the present application may be integrated in the same processing module, or each module may exist physically on its own, or two or more modules may be integrated in the same module. The integrated modules may be implemented in the form of hardware, or in the form of hardware plus software function modules.

It will be apparent to those skilled in the art that the present invention is not limited to the details of the above exemplary embodiments, and that the present invention may be embodied in other specific forms without departing from its spirit or essential characteristics. Therefore, the embodiments are to be regarded in all respects as illustrative and not restrictive; the scope of the present invention is defined by the appended claims rather than by the foregoing description, and all changes that fall within the meaning and range of equivalents of the claims are intended to be embraced therein. Any reference sign in a claim should not be construed as limiting the claim concerned. Furthermore, the word "comprising" does not exclude other modules or steps, and the singular does not exclude the plural. Multiple modules or electronic devices recited in an electronic device claim may also be implemented by the same module or electronic device through software or hardware. The words "first", "second", and the like denote names and do not indicate any particular order.

In summary, the present invention meets the requirements for an invention patent, and a patent application is filed in accordance with the law. However, the above are only preferred embodiments of the present invention; equivalent modifications or variations made by those familiar with the art of this case in accordance with its creative spirit shall all fall within the scope of the following claims.

S11~S13: Steps

Claims (10)

1. A cell density grouping method, the method comprising: inputting an image to be detected into the autoencoders of a preset number of density grouping models to obtain a preset number of reconstructed images to be detected, wherein each density grouping model is composed of an autoencoder and a twin network model, each density grouping model corresponds to one density range, each reconstructed image to be detected corresponds to one density grouping model, the preset number is the number of density grouping models, and the density range is the proportion of the number and volume of cells in the image; inputting the image to be detected and each reconstructed image to be detected into the twin network model of the density grouping model corresponding to that reconstructed image, and calculating a first error value between the image to be detected and each reconstructed image to be detected, wherein all the first error values form a first error value set and each first error value corresponds to one density grouping model; and determining the minimum first error value in the first error value set, and taking the density range of the density grouping model corresponding to the minimum first error value as the density range of the image to be detected.
2. The cell density grouping method of claim 1, further comprising: training the density grouping models, comprising: dividing a cell image set by density range to obtain a preset number of cell training image sets, each cell training image set corresponding to one density range; and performing a training operation on each cell training image set to obtain a preset number of density grouping models, each density grouping model corresponding to one density range, the training operation comprising: converting the cell training images in the cell training image set into cell training image vectors, all the cell training image vectors forming a cell training image vector set; training an autoencoder with the cell training image vector set to obtain a trained autoencoder; inputting the cell training image vector set into the trained autoencoder of the density grouping model to obtain a cell reconstructed image vector set comprising cell reconstructed image vectors; and training a twin network model with the cell training image vector set and the cell reconstructed image vector set to obtain a trained twin network model, wherein the trained autoencoder and the trained twin network model form the density grouping model.
3. The cell density grouping method of claim 2, wherein training the autoencoder with the cell training image vector set to obtain the trained autoencoder comprises: inputting a cell training image vector from the cell training image vector set into the encoding layer of the autoencoder to obtain a latent vector of the cell training image vector; inputting the latent vector into the decoding layer of the autoencoder to obtain a reconstructed image vector of the cell training image vector; and calculating a second error value between the cell training image vector and the reconstructed image vector with a preset error function, and adjusting the parameters of the encoding layer and the decoding layer to minimize the error value, thereby obtaining the autoencoder of the density grouping model.
4. The cell density grouping method of claim 2, wherein the twin network model comprises a first neural network and a second neural network, and training the twin network model with the cell training image vector set and the cell reconstructed image vector set to obtain the trained twin network model comprises: inputting the cell training image vectors in the cell training image vector set into the first neural network to obtain a first feature map; inputting the cell reconstructed image vectors in the cell reconstructed image vector set into the second neural network to obtain a second feature map; and calculating a third error value between the first feature map and the second feature map, and optimizing the first neural network and the second neural network according to the third error value to obtain the trained twin network model.

5. The cell density grouping method of claim 4, wherein the weights of the first neural network are the same as the weights of the second neural network, and the structure of the first neural network is the same as the structure of the second neural network.
6. The cell density grouping method of claim 4, wherein inputting the cell training image vectors in the cell training image vector set into the first neural network to obtain the first feature map comprises: inputting the cell training image vector into the first neural network; and computing the cell training image vector with the matrices in the first neural network to obtain the features of the cell training image vector, which form the first feature map.

7. The cell density grouping method of claim 4, wherein calculating the third error value between the first feature map and the second feature map comprises: calculating the mean absolute error between the first feature map and the second feature map with a mean absolute error function, and taking the mean absolute error as the third error value between the first feature map and the second feature map.
8. A cell density grouping device, comprising: an image reconstruction module configured to input an image to be detected into the autoencoders of a preset number of density grouping models to obtain a preset number of reconstructed images to be detected, wherein each density grouping model is composed of an autoencoder and a twin network model, each density grouping model corresponds to one density range, each reconstructed image to be detected corresponds to one density grouping model, the preset number is the number of density grouping models, and the density range is the proportion of the number and volume of cells in the image; an error value calculation module configured to input the image to be detected and each reconstructed image to be detected into the twin network model of the density grouping model corresponding to that reconstructed image, and to calculate a first error value between the image to be detected and each reconstructed image to be detected, wherein all the first error values form a first error value set and each first error value corresponds to one density grouping model; and a density range determination module configured to determine the minimum first error value in the first error value set and to take the density range of the density grouping model corresponding to the minimum first error value as the density range of the image to be detected.
9. An electronic device, comprising: a memory storing at least one instruction; and a processor that executes the instructions stored in the memory to implement the cell density grouping method of any one of claims 1 to 7.

10. A computer storage medium storing a computer program, wherein the computer program, when executed by a processor, implements the cell density grouping method of any one of claims 1 to 7.
TW110107746A 2021-03-04 2021-03-04 Cell density grouping method, cell density grouping device, electronic device and storage media TWI761109B (en)
