WO2021151272A1 - Cell image segmentation method and apparatus, electronic device, and readable storage medium


Info

Publication number
WO2021151272A1
WO2021151272A1 (PCT/CN2020/098966)
Authority
WO
WIPO (PCT)
Prior art keywords
image
segmentation
sampled
network model
convolutional
Prior art date
Application number
PCT/CN2020/098966
Other languages
English (en)
Chinese (zh)
Inventor
谢春梅
侯晓帅
李风仪
王佳平
南洋
Original Assignee
平安科技(深圳)有限公司
Priority date (The priority date is an assumption and is not a legal conclusion. Google has not performed a legal analysis and makes no representation as to the accuracy of the date listed.)
Filing date
Publication date
Application filed by 平安科技(深圳)有限公司 (Ping An Technology (Shenzhen) Co., Ltd.)
Publication of WO2021151272A1


Classifications

    • GPHYSICS
    • G06COMPUTING; CALCULATING OR COUNTING
    • G06TIMAGE DATA PROCESSING OR GENERATION, IN GENERAL
    • G06T7/00Image analysis
    • G06T7/0002Inspection of images, e.g. flaw detection
    • G06T7/0012Biomedical image inspection
    • GPHYSICS
    • G06COMPUTING; CALCULATING OR COUNTING
    • G06NCOMPUTING ARRANGEMENTS BASED ON SPECIFIC COMPUTATIONAL MODELS
    • G06N3/00Computing arrangements based on biological models
    • G06N3/02Neural networks
    • G06N3/04Architecture, e.g. interconnection topology
    • G06N3/045Combinations of networks
    • GPHYSICS
    • G06COMPUTING; CALCULATING OR COUNTING
    • G06TIMAGE DATA PROCESSING OR GENERATION, IN GENERAL
    • G06T5/00Image enhancement or restoration
    • G06T5/20Image enhancement or restoration using local operators
    • G06T5/30Erosion or dilatation, e.g. thinning
    • GPHYSICS
    • G06COMPUTING; CALCULATING OR COUNTING
    • G06TIMAGE DATA PROCESSING OR GENERATION, IN GENERAL
    • G06T5/00Image enhancement or restoration
    • G06T5/70Denoising; Smoothing
    • GPHYSICS
    • G06COMPUTING; CALCULATING OR COUNTING
    • G06TIMAGE DATA PROCESSING OR GENERATION, IN GENERAL
    • G06T7/00Image analysis
    • G06T7/10Segmentation; Edge detection
    • G06T7/11Region-based segmentation
    • GPHYSICS
    • G06COMPUTING; CALCULATING OR COUNTING
    • G06TIMAGE DATA PROCESSING OR GENERATION, IN GENERAL
    • G06T7/00Image analysis
    • G06T7/10Segmentation; Edge detection
    • G06T7/136Segmentation; Edge detection involving thresholding
    • GPHYSICS
    • G06COMPUTING; CALCULATING OR COUNTING
    • G06TIMAGE DATA PROCESSING OR GENERATION, IN GENERAL
    • G06T7/00Image analysis
    • G06T7/10Segmentation; Edge detection
    • G06T7/155Segmentation; Edge detection involving morphological operators
    • GPHYSICS
    • G06COMPUTING; CALCULATING OR COUNTING
    • G06VIMAGE OR VIDEO RECOGNITION OR UNDERSTANDING
    • G06V10/00Arrangements for image or video recognition or understanding
    • G06V10/40Extraction of image or video features
    • G06V10/46Descriptors for shape, contour or point-related descriptors, e.g. scale invariant feature transform [SIFT] or bags of words [BoW]; Salient regional features
    • G06V10/462Salient features, e.g. scale invariant feature transforms [SIFT]
    • GPHYSICS
    • G06COMPUTING; CALCULATING OR COUNTING
    • G06VIMAGE OR VIDEO RECOGNITION OR UNDERSTANDING
    • G06V10/00Arrangements for image or video recognition or understanding
    • G06V10/70Arrangements for image or video recognition or understanding using pattern recognition or machine learning
    • G06V10/74Image or video pattern matching; Proximity measures in feature spaces
    • G06V10/75Organisation of the matching processes, e.g. simultaneous or sequential comparisons of image or video features; Coarse-fine approaches, e.g. multi-scale approaches; using context analysis; Selection of dictionaries
    • G06V10/757Matching configurations of points or features
    • GPHYSICS
    • G06COMPUTING; CALCULATING OR COUNTING
    • G06TIMAGE DATA PROCESSING OR GENERATION, IN GENERAL
    • G06T2207/00Indexing scheme for image analysis or image enhancement
    • G06T2207/10Image acquisition modality
    • G06T2207/10072Tomographic images
    • G06T2207/10081Computed x-ray tomography [CT]
    • GPHYSICS
    • G06COMPUTING; CALCULATING OR COUNTING
    • G06TIMAGE DATA PROCESSING OR GENERATION, IN GENERAL
    • G06T2207/00Indexing scheme for image analysis or image enhancement
    • G06T2207/30Subject of image; Context of image processing
    • G06T2207/30004Biomedical image processing
    • G06T2207/30024Cell structures in vitro; Tissue sections in vitro

Definitions

  • This application relates to the field of artificial intelligence technology, and in particular to a method, device, electronic device, and readable storage medium for cell image segmentation.
  • AI technology can help doctors locate lesion cells to analyze the condition, and assist doctors in making accurate and rapid diagnosis.
  • AI applications in the field of medical imaging mainly focus on lung nodules, fundus, and liver.
  • AI technology has also been applied in digital pathological diagnosis.
  • The inventor realizes that in clinical tumor cell detection, the patient first undergoes a CT scan, and the doctor judges the patient's condition by checking, based on personal experience, whether tumor cells appear in the CT images.
  • However, a CT scan produces a long series of image frames.
  • Tumor cells tend to occupy only a small part of the entire CT image and have low contrast, so doctors must spend a great deal of time observing and judging.
  • Moreover, existing deep learning algorithms must perform a large number of feature calculations, so they occupy considerable computing resources while still failing to achieve high segmentation accuracy.
  • the embodiments of the present application provide a cell image segmentation method, device, electronic equipment, and computer-readable storage medium.
  • a cell image segmentation method provided in this application includes:
  • performing a down-sampling operation on an original cell image to obtain a down-sampled image; inputting the down-sampled image into a pre-built convolutional segmentation network model for segmentation to obtain a first segmentation map; up-sampling the first segmentation map to the same resolution as the original cell image according to a pre-built pixel coordinate conversion model and a bilinear interpolation algorithm to obtain a second segmentation map; combining the second segmentation map and the original cell image on the corresponding color channels through a preset geometric constraint and image feature matching method to obtain an up-sampled image; and preprocessing the up-sampled image with a morphological algorithm and inputting it into the convolutional segmentation network model for segmentation to obtain a segmented image.
  • This application also provides an electronic device, which includes:
  • At least one processor and,
  • a memory communicatively connected with the at least one processor; wherein,
  • the memory stores instructions executable by the at least one processor, and the instructions are executed by the at least one processor, so that the at least one processor can execute the cell image segmentation method as described below:
  • performing a down-sampling operation on an original cell image to obtain a down-sampled image; inputting the down-sampled image into a pre-built convolutional segmentation network model for segmentation to obtain a first segmentation map; up-sampling the first segmentation map to the same resolution as the original cell image according to a pre-built pixel coordinate conversion model and a bilinear interpolation algorithm to obtain a second segmentation map; combining the second segmentation map and the original cell image on the corresponding color channels through a preset geometric constraint and image feature matching method to obtain an up-sampled image; and preprocessing the up-sampled image with a morphological algorithm and inputting it into the convolutional segmentation network model for segmentation to obtain a segmented image.
  • This application also provides a computer-readable storage medium, including a storage data area and a storage program area.
  • the storage data area stores data created according to the use of blockchain nodes
  • the storage program area stores a computer program, wherein when the computer program is executed by a processor, the cell image segmentation method as described below is realized:
  • performing a down-sampling operation on an original cell image to obtain a down-sampled image; inputting the down-sampled image into a pre-built convolutional segmentation network model for segmentation to obtain a first segmentation map; up-sampling the first segmentation map to the same resolution as the original cell image according to a pre-built pixel coordinate conversion model and a bilinear interpolation algorithm to obtain a second segmentation map; combining the second segmentation map and the original cell image on the corresponding color channels through a preset geometric constraint and image feature matching method to obtain an up-sampled image; and preprocessing the up-sampled image with a morphological algorithm and inputting it into the convolutional segmentation network model for segmentation to obtain a segmented image.
  • the present application also provides a cell image segmentation device, which includes:
  • the down-sampling module is used to down-sample the original cell image to obtain the down-sampled image
  • a low-resolution segmentation module configured to input the down-sampled image into a pre-built convolutional segmentation network model for segmentation to obtain a first segmentation map
  • the up-sampling segmentation module is used to up-sample the first segmentation image to the same resolution size as the original cell image according to a pre-built pixel coordinate conversion model and a bilinear interpolation algorithm to obtain a second segmentation image;
  • the cell image segmentation module is used to combine the second segmentation map and the original cell image on the corresponding color channels through a preset geometric constraint and image feature matching method to obtain an up-sampled image; the up-sampled image is preprocessed by a morphological algorithm and input into the convolutional segmentation network model for segmentation to obtain a segmented image.
  • FIG. 1 is a schematic flowchart of a cell image segmentation method provided by an embodiment of the application
  • FIG. 2 is a detailed flowchart of step S2 of the cell image segmentation method provided by an embodiment of the application;
  • FIG. 3 is a detailed flowchart of step S4 of the cell image segmentation method provided by an embodiment of the application;
  • FIG. 4 is a detailed flow diagram of the preprocessing of the morphological algorithm of the cell image segmentation method provided by an embodiment of the application
  • FIG. 5 is a schematic diagram of modules of a cell image segmentation device provided by an embodiment of the application.
  • FIG. 6 is a schematic diagram of the internal structure of an electronic device provided by an embodiment of the application.
  • This application provides a method for cell image segmentation.
  • Referring to FIG. 1, it is a schematic flowchart of a cell image segmentation method provided by an embodiment of this application.
  • The method can be executed by a device, and the device can be implemented by software and/or hardware.
  • the cell image segmentation method includes:
  • the original cell image may be a CT image of a pathological site obtained through a machine scan of the radiology department of a hospital, such as a CT image of a tumor site.
  • X-rays emitted by the machine of the hospital's radiology department are captured by an X-ray detector, and the CT image of the tumor site is obtained from the difference between the X-ray transmittance of the tumor and that of other organs.
  • The performing of the down-sampling operation on the original cell image to obtain the down-sampled image includes: down-sampling the original cell image of size M×N according to a set down-sampling ratio s to obtain a down-sampled image of size (M/s)×(N/s), where s is a common divisor of M and N.
  • For example, if the resolution of the original cell image is 1000×1000, then after a down-sampling operation with a ratio of 10, the resolution of the resulting down-sampled image becomes 100×100.
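The down-sampling step above can be sketched in a few lines. The following is a minimal illustration, not the patent's implementation: it assumes a single-channel numpy image and uses block averaging, and the helper name `downsample` is hypothetical.

```python
import numpy as np

def downsample(img: np.ndarray, s: int) -> np.ndarray:
    """Down-sample an M x N image by an integer ratio s (s must divide M and N).

    Each s x s block is averaged, so a 1000 x 1000 image with s = 10
    becomes a 100 x 100 image, as in the example above.
    """
    M, N = img.shape
    assert M % s == 0 and N % s == 0, "s must be a common divisor of M and N"
    # Reshape into (M/s, s, N/s, s) blocks and average each block.
    return img.reshape(M // s, s, N // s, s).mean(axis=(1, 3))

original = np.random.rand(1000, 1000)
small = downsample(original, 10)
print(small.shape)  # (100, 100)
```

Block averaging is only one choice; strided selection or Gaussian pre-filtering would also qualify as "down-sampling" here.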
  • The convolutional segmentation network model is a two-stage cascaded network model constructed based on an improved fully convolutional neural network (the U-net network, from "U-Net: Convolutional Networks for Biomedical Image Segmentation").
  • The improved U-net network mainly adds a low-resolution fully connected layer to the traditional U-net network in order to roughly segment the down-sampled image, and then cascades a standard convolutional neural network model to perform finer segmentation and obtain the first segmentation map.
  • The construction process of the pre-built convolutional segmentation network model includes: cascading fully connected layers into the fully convolutional neural network according to preset cascading rules, and cascading a multi-layer convolutional neural network onto the fully convolutional neural network with the added fully connected layers to obtain the convolutional segmentation network model.
  • The VGG (Very Deep Convolutional Networks) network is a standard convolutional neural network that is often used for feature extraction and image segmentation. The most widely used variants are VGG16 and VGG19, 16-layer and 19-layer convolutional networks respectively.
  • the input of the downsampled image into a pre-built convolutional segmentation network model for segmentation to obtain a first segmentation map includes:
  • S21 Perform a convolution operation on the down-sampled image through the convolutional segmentation network model to generate a down-sampled convolution feature map
  • The convolution operation is a^l = f(w^l * a^(l-1) + b^l), where a^l is the output value of the convolution operation, f(·) is the activation function of the convolution operation, w^l is the convolution kernel, * represents the convolution operation, b^l is the bias parameter, and a^(l-1) is the pixel value of the down-sampled image.
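As a rough illustration of a single convolution step of the form a^l = f(w^l * a^(l-1) + b^l), the sketch below implements one "valid"-padding pass in numpy. The ReLU activation, the averaging kernel, and the toy input are assumptions for the example, not values from the patent (note that CNN frameworks compute cross-correlation rather than flipped-kernel convolution, which this sketch follows).

```python
import numpy as np

def relu(x):
    return np.maximum(x, 0.0)

def conv2d(a_prev: np.ndarray, w: np.ndarray, b: float, f=relu) -> np.ndarray:
    """One convolution step a_l = f(w_l * a_{l-1} + b_l), 'valid' padding, stride 1."""
    H, W = a_prev.shape
    kh, kw = w.shape
    out = np.empty((H - kh + 1, W - kw + 1))
    for i in range(out.shape[0]):
        for j in range(out.shape[1]):
            # Element-wise product of kernel and patch, summed, plus bias.
            out[i, j] = np.sum(a_prev[i:i + kh, j:j + kw] * w) + b
    return f(out)

a = np.arange(16.0).reshape(4, 4)   # toy "down-sampled image"
w = np.ones((3, 3)) / 9.0           # 3x3 averaging kernel (assumed)
out = conv2d(a, w, b=0.0)
print(out.shape)  # (2, 2)
```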
  • Deconvolution is also called transposed convolution; its calculation process is the reverse of the convolution operation.
  • In the softmax classification function, m represents the number of pixels in the deconvolution feature map, w represents the preset weight value, x represents the deconvolution feature map, K represents the preset number of segmentation regions, I{·} is an indicator function, and Y^(i) represents the probability value of the i-th segmentation region.
  • the embodiment of the present application can segment the down-sampled image according to the probability value of each segmentation area.
  • For example, suppose the down-sampled image is to be divided into 10 segmentation regions. The above operation generates 100 candidate segmentation regions together with their probability values, and the 10 regions with the highest probability values are extracted to obtain the first segmentation map.
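The 100-regions-to-10 selection just described can be mimicked with a plain softmax over region scores. This is a toy sketch: the random scores stand in for the network's outputs, which the text does not specify.

```python
import numpy as np

def softmax(scores: np.ndarray) -> np.ndarray:
    e = np.exp(scores - scores.max())  # subtract max for numerical stability
    return e / e.sum()

# 100 candidate regions with raw scores; keep the 10 most probable,
# mirroring the 100 -> 10 example in the text.
rng = np.random.default_rng(0)
scores = rng.normal(size=100)
probs = softmax(scores)
top10 = np.argsort(probs)[::-1][:10]   # indices of the 10 highest-probability regions
print(len(top10))  # 10
```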
  • the first segmentation image is up-sampled to the same resolution size as the original cell image to obtain a second segmentation image.
  • the coordinate conversion of the original cell image is performed by using the pixel point coordinate conversion model.
  • The embodiment of the present application uses the publicly available bilinear interpolation algorithm to insert the pixel points, after their coordinate conversion is completed, into the first segmentation map to obtain the second segmentation map.
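The bilinear interpolation used for up-sampling can be sketched as follows. This is a generic textbook implementation in numpy, not the patent's pixel coordinate conversion model.

```python
import numpy as np

def bilinear_upsample(img: np.ndarray, out_h: int, out_w: int) -> np.ndarray:
    """Up-sample img to (out_h, out_w) by bilinear interpolation."""
    H, W = img.shape
    ys = np.linspace(0, H - 1, out_h)
    xs = np.linspace(0, W - 1, out_w)
    y0 = np.floor(ys).astype(int); y1 = np.minimum(y0 + 1, H - 1)
    x0 = np.floor(xs).astype(int); x1 = np.minimum(x0 + 1, W - 1)
    wy = (ys - y0)[:, None]; wx = (xs - x0)[None, :]
    # Each output pixel is a weighted average of its four surrounding pixels.
    return (img[np.ix_(y0, x0)] * (1 - wy) * (1 - wx)
            + img[np.ix_(y0, x1)] * (1 - wy) * wx
            + img[np.ix_(y1, x0)] * wy * (1 - wx)
            + img[np.ix_(y1, x1)] * wy * wx)

small = np.array([[0.0, 1.0], [2.0, 3.0]])
big = bilinear_upsample(small, 4, 4)
print(big.shape)  # (4, 4)
```

Corner pixels of the input are preserved exactly, which is the sanity check usually applied to this kind of resampler.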
  • The matching rule for SIFT (Scale-Invariant Feature Transform) key points can be configured in many ways. For example, take one SIFT key point A1 in the second segmentation map and find the two SIFT key points B1 and B2 in the original cell image that are closest to A1 in Euclidean distance, yielding two matching pairs A1-B1 and A1-B2.
  • The ratio of the shortest Euclidean distance (to B1) divided by the second-shortest distance (to B2) is then computed. If the ratio is less than the threshold T, the two matching pairs A1-B1 and A1-B2 are accepted; if the ratio is greater than the threshold T, the two matching pairs A1-B1 and A1-B2 are eliminated.
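The distance-ratio acceptance rule above (often known as Lowe's ratio test) can be illustrated on toy descriptors. The threshold value 0.8 and the descriptor arrays below are assumptions for the example, not values from the patent.

```python
import numpy as np

def ratio_test(desc_a: np.ndarray, descs_b: np.ndarray, T: float = 0.8):
    """Return the index of the best match for desc_a among descs_b,
    or None if the nearest/second-nearest distance ratio exceeds T."""
    d = np.linalg.norm(descs_b - desc_a, axis=1)   # Euclidean distances
    order = np.argsort(d)
    nearest, second = d[order[0]], d[order[1]]
    if nearest / second < T:   # distinctive match: accept
        return int(order[0])
    return None                # ambiguous: eliminate

descs_b = np.array([[0.0, 0.0], [5.0, 5.0], [5.1, 5.0]])
print(ratio_test(np.array([0.1, 0.0]), descs_b))   # clear winner -> index 0
print(ratio_test(np.array([5.05, 5.0]), descs_b))  # ambiguous -> None
```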
  • the fundamental matrix (Fundamental matrix) is generally a 3 ⁇ 3 matrix that represents the correspondence between pixels.
  • The fundamental matrix can be calculated using the publicly available random sample consensus (RANSAC) and least-squares methods.
  • The rank of the fundamental matrix is the number of vectors in a maximal linearly independent set of its columns. By comparing this rank with the rank threshold τ, the matching pairs whose fundamental matrix does not satisfy the threshold are eliminated, thereby obtaining the standard matching pair set.
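A quick way to illustrate the rank check: a valid fundamental matrix is a 3×3 matrix of rank 2 (it has a non-trivial null space, the epipole), which numpy can verify directly. The matrices below are toy examples, not matrices estimated from real matches.

```python
import numpy as np

F_good = np.array([[ 0.0, -1.0,  2.0],
                   [ 1.0,  0.0, -3.0],
                   [-2.0,  3.0,  0.0]])  # skew-symmetric 3x3 -> rank 2
F_bad = np.eye(3)                        # rank 3: fails the rank test

print(np.linalg.matrix_rank(F_good))  # 2
print(np.linalg.matrix_rank(F_bad))   # 3
```

Matching pairs whose estimated matrix fails such a rank comparison would be eliminated, per the text.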
  • If the above two matching pairs A1-B1 and A1-B2 are in the standard matching pair set, the pixels corresponding to the two SIFT key points B1 and B2 in the original cell image are extracted and inserted into the second segmentation map; when the insertion operation has been completed for all standard matching pairs in the set, the up-sampled image is obtained.
  • the above-mentioned up-sampled image may also be stored in a node of a blockchain.
  • S5. Perform preprocessing of the up-sampled image with a morphological algorithm, and input it into the convolutional segmentation network model for segmentation to obtain a segmented image.
  • The morphological preprocessing of the up-sampled image includes:
  • S52 Perform a binarization operation and an inversion operation on the first preprocessed image by using a preset Otsu threshold to generate a second preprocessed image;
  • The Otsu threshold, also known as the maximum between-class variance method, is an adaptive threshold segmentation method. It assumes that the image is divided into two classes and calculates an optimal threshold that divides the image into the two classes while maximizing the between-class variance. For example, this application uses the Otsu threshold to divide the image into black and white, i.e., the binarization operation.
  • The morphological dilation operation finds the local maximum of the second preprocessed image (for example, at the boundary in this application) and replaces the boundary pixels with that maximum; similarly, the morphological erosion operation finds the local minimum of the second preprocessed image and replaces the boundary pixels with that minimum.
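The Otsu binarization and the dilation operation just described can be sketched in numpy as follows. The 3×3 neighborhood, the toy 8-bit image, and the helper names are assumptions for illustration; erosion would use the local minimum in the same way.

```python
import numpy as np

def otsu_threshold(img: np.ndarray) -> int:
    """Otsu's method: choose the threshold maximizing between-class variance."""
    hist = np.bincount(img.ravel(), minlength=256).astype(float)
    total = hist.sum()
    sum_all = np.dot(np.arange(256), hist)
    best_t, best_var, w0, sum0 = 0, -1.0, 0.0, 0.0
    for t in range(256):
        w0 += hist[t]
        sum0 += t * hist[t]
        if w0 == 0 or w0 == total:
            continue
        w1 = total - w0
        mu0, mu1 = sum0 / w0, (sum_all - sum0) / w1
        var = w0 * w1 * (mu0 - mu1) ** 2   # between-class variance
        if var > best_var:
            best_var, best_t = var, t
    return best_t

def dilate(binary: np.ndarray) -> np.ndarray:
    """Morphological dilation: replace each pixel by its local 3x3 maximum."""
    padded = np.pad(binary, 1)
    return np.max([padded[i:i + binary.shape[0], j:j + binary.shape[1]]
                   for i in range(3) for j in range(3)], axis=0)

img = np.zeros((8, 8), dtype=np.uint8)
img[2:6, 2:6] = 200                  # bright square on dark background
t = otsu_threshold(img)
binary = (img > t).astype(np.uint8)  # binarization
inverted = 1 - binary                # the inversion operation
grown = dilate(binary)               # dilation expands the foreground
print(binary.sum(), grown.sum())
```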
  • the preprocessed up-sampled image is input into the convolutional segmentation network model for segmentation, and the segmentation method is the same as the above-mentioned S2 operation step, until the segmented image is obtained.
  • the embodiment of the application first performs a down-sampling operation on the original cell image to obtain a down-sampled image.
  • The down-sampling operation reduces the resolution of the original cell image and thus the subsequent computational load. At the same time, to prevent the reduced resolution from degrading the subsequent cell image segmentation accuracy, the pre-built convolutional segmentation network model first performs a first segmentation to obtain the first segmentation map, the first segmentation map is merged with the original cell image to obtain an up-sampled image, and the convolutional segmentation network model then performs a second segmentation to obtain the segmented image.
  • Because the first segmentation runs on the low-resolution down-sampled image, the convolutional segmentation network model does not require a large amount of computation.
  • Merging the original cell image with the first segmentation map provides a segmentation direction for the second segmentation, so the accuracy of cell image segmentation is improved. Therefore, the cell image segmentation method, device, and computer-readable storage medium proposed in this application can solve the problem that the high resolution of CT images and the low computational efficiency of existing algorithms affect calculation speed and segmentation accuracy.
  • Referring to FIG. 5, it is a functional block diagram of the cell image segmentation device of the present application.
  • the cell image segmentation device 100 described in this application can be installed in an electronic device.
  • the cell image segmentation device may include a down-sampling module 101, a low-resolution segmentation module 102, an up-sampling segmentation module 103, and a cell image segmentation module 104.
  • the module in this application can also be called a unit, which refers to a series of computer program segments that can be executed by the processor of an electronic device and can complete fixed functions, and are stored in the memory of the electronic device.
  • each module/unit is as follows:
  • the down-sampling module 101 is used to perform down-sampling operations on the original cell image to obtain a down-sampled image
  • the low-resolution segmentation module 102 is configured to input the down-sampled image into a pre-built convolutional segmentation network model for segmentation to obtain a first segmentation map;
  • the up-sampling segmentation module 103 is used to up-sample the first segmentation image to the same resolution size as the original cell image according to the pre-built pixel coordinate conversion model and the bilinear interpolation algorithm to obtain a second segmentation image ;
  • the cell image segmentation module 104 is configured to combine the second segmentation map and the original cell image on the corresponding color channels by using a preset geometric constraint and image feature matching method to obtain an up-sampled image; the up-sampled image is preprocessed by a morphological algorithm and input into the convolutional segmentation network model for segmentation to obtain a segmented image. It should be emphasized that, in order to further ensure the privacy and security of the above-mentioned up-sampled image, it may also be stored in a node of a blockchain.
  • each module of the cell image segmentation device can refer to the description of the relevant steps in the embodiment corresponding to FIG. 1, which will not be repeated here.
  • Referring to FIG. 6, it is a schematic diagram of the structure of the electronic device of the present application.
  • the electronic device 1 may include a processor 10, a memory 11, and a bus, and may also include a computer program stored in the memory 11 and running on the processor 10, such as a cell image segmentation program 12.
  • The memory 11 includes at least one type of readable storage medium, and the readable storage medium includes flash memory, mobile hard disks, multimedia cards, card-type memory (such as SD or DX memory), magnetic memory, magnetic disks, optical discs, etc.
  • the memory 11 may be an internal storage unit of the electronic device 1 in some embodiments, for example, a mobile hard disk of the electronic device 1.
  • In other embodiments, the memory 11 may also be an external storage device of the electronic device 1, such as a plug-in mobile hard disk, a smart media card (SMC), a secure digital (SD) card, or a flash card equipped on the electronic device 1.
  • the memory 11 may also include both an internal storage unit of the electronic device 1 and an external storage device.
  • the memory 11 can be used not only to store application software and various data installed in the electronic device 1, such as cell image segmentation codes, etc., but also to temporarily store data that has been output or will be output.
  • In some embodiments, the processor 10 may be composed of integrated circuits, for example, a single packaged integrated circuit, or multiple integrated circuits with the same or different functions, including combinations of one or more central processing units (CPU), microprocessors, digital signal processing chips, graphics processors, various control chips, and so on.
  • the processor 10 is the control unit of the electronic device, which uses various interfaces and lines to connect the various components of the entire electronic device, and runs or executes programs or modules stored in the memory 11 (such as executing Cell image segmentation, etc.), and call data stored in the memory 11 to execute various functions of the electronic device 1 and process data.
  • the bus may be a peripheral component interconnect standard (PCI) bus or an extended industry standard architecture (EISA) bus, etc.
  • the bus can be divided into address bus, data bus, control bus and so on.
  • the bus is configured to implement connection and communication between the memory 11 and at least one processor 10 and the like.
  • FIG. 6 only shows an electronic device with certain components. Those skilled in the art will understand that the structure shown in FIG. 6 does not limit the electronic device 1, which may include fewer or more components than shown, a combination of certain components, or a different arrangement of components.
  • the electronic device 1 may also include a power source (such as a battery) for supplying power to various components.
  • The power source may be logically connected to the at least one processor 10 through a power management device, so that the power management device implements functions such as charge management, discharge management, and power consumption management.
  • the power supply may also include any components such as one or more DC or AC power supplies, recharging devices, power failure detection circuits, power converters or inverters, and power status indicators.
  • the electronic device 1 may also include various sensors, Bluetooth modules, Wi-Fi modules, etc., which will not be repeated here.
  • the electronic device 1 may also include a network interface.
  • the network interface may include a wired interface and/or a wireless interface (such as a WI-FI interface, a Bluetooth interface, etc.), which is usually used in the electronic device 1 Establish a communication connection with other electronic devices.
  • the electronic device 1 may also include a user interface.
  • the user interface may be a display (Display) and an input unit (such as a keyboard (Keyboard)).
  • the user interface may also be a standard wired interface or a wireless interface.
  • the display may be an LED display, a liquid crystal display, a touch-sensitive liquid crystal display, an OLED (Organic Light-Emitting Diode, organic light-emitting diode) touch device, etc.
  • the display can also be appropriately called a display screen or a display unit, which is used to display the information processed in the electronic device 1 and to display a visualized user interface.
  • The cell image segmentation program 12 stored in the memory 11 of the electronic device 1 is a combination of multiple instructions; when run by the processor 10, it can realize:
  • performing a down-sampling operation on an original cell image to obtain a down-sampled image; inputting the down-sampled image into a pre-built convolutional segmentation network model for segmentation to obtain a first segmentation map; up-sampling the first segmentation map to the same resolution as the original cell image according to a pre-built pixel coordinate conversion model and a bilinear interpolation algorithm to obtain a second segmentation map; combining the second segmentation map and the original cell image on the corresponding color channels through a preset geometric constraint and image feature matching method to obtain an up-sampled image; and preprocessing the up-sampled image with a morphological algorithm and inputting it into the convolutional segmentation network model for segmentation to obtain a segmented image.
  • the specific method for the processor 10 to implement the foregoing instructions includes:
  • Step 1 Obtain an original cell image, and perform a down-sampling operation on the original cell image to obtain a down-sampled image.
  • the original cell image may be a CT image of a pathological site obtained through a machine scan of the radiology department of a hospital, such as a CT image of a tumor site.
  • X-rays emitted by the machine of the hospital's radiology department are captured by an X-ray detector, and the CT image of the tumor site is obtained from the difference between the X-ray transmittance of the tumor and that of other organs.
  • The performing of the down-sampling operation on the original cell image to obtain the down-sampled image includes: down-sampling the original cell image of size M×N according to a set down-sampling ratio s to obtain a down-sampled image of size (M/s)×(N/s), where s is a common divisor of M and N.
  • For example, if the resolution of the original cell image is 1000×1000, then after a down-sampling operation with a ratio of 10, the resolution of the resulting down-sampled image becomes 100×100.
  • Step 2 Input the down-sampled image into a pre-built convolutional segmentation network model for segmentation to obtain a first segmentation image.
  • The convolutional segmentation network model is a two-stage cascaded network model constructed based on an improved fully convolutional neural network (the U-net network, from "U-Net: Convolutional Networks for Biomedical Image Segmentation").
  • The improved U-net network mainly adds a low-resolution fully connected layer to the traditional U-net network in order to roughly segment the down-sampled image, and then cascades a standard convolutional neural network model to perform finer segmentation and obtain the first segmentation map.
  • The construction process of the pre-built convolutional segmentation network model includes: cascading fully connected layers into the fully convolutional neural network according to preset cascading rules, and cascading a multi-layer convolutional neural network onto the fully convolutional neural network with the added fully connected layers to obtain the convolutional segmentation network model.
  • The VGG (Very Deep Convolutional Networks) network is a standard convolutional neural network that is often used for feature extraction and image segmentation. The most widely used variants are VGG16 and VGG19, 16-layer and 19-layer convolutional networks respectively.
  • the inputting the down-sampled image into a pre-built convolutional segmentation network model for segmentation to obtain a first segmentation map includes:
  • Step A Perform a convolution operation on the down-sampled image through the convolutional segmentation network model to generate a down-sampled convolution feature map;
  • Step B Perform a deconvolution operation on the down-sampled convolution feature map to obtain a deconvolution feature map
  • Step C Input the deconvolution feature map into the softmax classification function in the convolution segmentation network model, and calculate the probability value of each segmentation area of the downsampled image;
  • Step D Segment the down-sampled image according to the probability value of each segmented region to generate the first segmentation map.
  • The convolution operation is a^l = f(w^l * a^(l-1) + b^l), where a^l is the output value of the convolution operation, f(·) is the activation function of the convolution operation, w^l is the convolution kernel, * represents the convolution operation, b^l is the bias parameter, and a^(l-1) is the pixel value of the down-sampled image.
  • Deconvolution is also called transposed convolution; its calculation process is the reverse of the convolution operation.
  • In the softmax classification function, m represents the number of pixels in the deconvolution feature map, w represents the preset weight value, x represents the deconvolution feature map, K represents the preset number of segmentation regions, I{·} is an indicator function, and Y^(i) represents the probability value of the i-th segmentation region.
  • the embodiment of the present application can segment the down-sampled image according to the probability value of each segmentation area.
  • For example, suppose the down-sampled image is to be divided into 10 segmentation regions. The above operation generates 100 candidate segmentation regions together with their probability values, and the 10 regions with the highest probability values are extracted to obtain the first segmentation map.
  • Step 3 According to the pre-built pixel coordinate conversion model and the bilinear interpolation algorithm, the first segmentation image is up-sampled to the same resolution size as the original cell image to obtain a second segmentation image.
  • the coordinate conversion of the original cell image is performed by using the pixel point coordinate conversion model.
  • The embodiment of the present application uses the publicly available bilinear interpolation algorithm to insert the pixel points, after their coordinate conversion is completed, into the first segmentation map to obtain the second segmentation map.
  • Step 4: Using a preset geometric constraint and image feature matching method, the second segmentation map and the original cell image are combined on the corresponding color channels to obtain an up-sampled image.
  • combining the second segmentation map and the original cell image on the corresponding color channels through the preset geometric constraint and image feature matching method to obtain an up-sampled image includes:
  • Step a: According to preset matching rules, SIFT (scale-invariant feature transform) feature points are selected from the second segmentation map and matched in turn against the SIFT feature points of the original cell image to obtain the original matching pair set.
  • Step b: Calculate the inlier rate of each matching pair in the original matching pair set, and eliminate the matching pairs whose inlier rate is less than the preset value to obtain the primary matching pair set.
  • specifically, the ratio of the shortest Euclidean distance B1 to the second-shortest distance B2 is computed. If the ratio is less than the threshold T, the two matching pairs A1-B1 and A1-B2 are accepted; if the ratio is greater than the threshold T, the two matching pairs A1-B1 and A1-B2 are eliminated.
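Step b's distance-ratio screening can be sketched as follows. The descriptors are toy values and the threshold T = 0.8 is an assumed figure, not one specified by the patent:

```python
import numpy as np

def ratio_test(desc_a, descs_b, T=0.8):
    # keep the nearest match only if the ratio of the nearest to the
    # second-nearest Euclidean distance is below the threshold T
    d = np.linalg.norm(descs_b - desc_a, axis=1)
    order = np.argsort(d)
    b1, b2 = d[order[0]], d[order[1]]
    return order[0] if b1 / b2 < T else None

descs_b = np.array([[0.0, 0.1], [5.0, 5.0], [9.0, 1.0]])
print(ratio_test(np.array([0.0, 0.0]), descs_b))  # 0 (unambiguous nearest match)
# ambiguous case: two candidates at equal distance, ratio = 1, so it is rejected
print(ratio_test(np.array([4.5, 4.5]), np.array([[4.0, 4.0], [5.0, 5.0]])))  # None
```

A small ratio means the best match is clearly better than the runner-up, which is what makes the pair trustworthy.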
  • Step c: Calculate the fundamental matrix from the primary matching pair set, calculate the corresponding rank from the fundamental matrix, and eliminate the matching pairs whose rank is greater than the preset rank threshold to obtain the standard matching pair set.
  • the fundamental matrix is generally a 3×3 matrix that represents the correspondence between pixels.
  • the fundamental matrix can be calculated using the publicly available random sample consensus (RANSAC) and least-squares methods.
  • the rank of the fundamental matrix is the number of vectors in a maximal linearly independent group of the matrix. By comparing the rank with the rank threshold, the matching pairs whose fundamental matrix does not satisfy the threshold are eliminated, thereby obtaining the standard matching pair set.
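Step c's rank screening can be illustrated as below. A geometrically valid 3×3 fundamental matrix is singular (rank 2), and the threshold value used here is a hypothetical stand-in for the patent's preset rank threshold:

```python
import numpy as np

RANK_THRESHOLD = 2  # hypothetical stand-in for the patent's preset rank threshold

def passes_rank_check(F, thresh=RANK_THRESHOLD):
    # keep a matching pair only if its fundamental matrix's rank does not
    # exceed the threshold; a valid fundamental matrix is rank-deficient
    return np.linalg.matrix_rank(F) <= thresh

valid = np.array([[ 0.0, -1.0,  2.0],
                  [ 1.0,  0.0, -3.0],
                  [-2.0,  3.0,  0.0]])  # skew-symmetric 3x3 => det 0, rank 2
invalid = np.eye(3)                     # full rank 3 => would be eliminated
print(passes_rank_check(valid), passes_rank_check(invalid))  # True False
```

In practice, a library routine such as OpenCV's `cv2.findFundamentalMat` estimates the matrix with RANSAC/least squares; the snippet only shows the rank comparison itself.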
  • Step d Insert the standard matching pair set into the second segmentation map according to a preset insertion rule to obtain the up-sampled image.
  • Step 5 The up-sampled image is preprocessed by a morphological algorithm, and input into the convolutional segmentation network model for segmentation to obtain a segmented image.
  • the preprocessing of the up-sampled image with a morphological algorithm includes:
  • the boundary of the second pre-processed image is smoothed through a morphological erosion operation, and the holes formed in the second pre-processed image during the smoothing process are filled in through a morphological dilation operation to obtain the pre-processed up-sampled image.
  • the Otsu threshold, also called the maximum between-class variance method, is an adaptive threshold segmentation method. It assumes that the image is divided into two classes and then calculates an optimal threshold such that dividing the image into the two classes maximizes the variance between them. For example, this application uses the Otsu threshold to divide the image into black and white, that is, a binarization operation.
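Otsu's method can be sketched in a few lines; this is the textbook exhaustive-search formulation over a toy bimodal image, not code from the application:

```python
import numpy as np

def otsu_threshold(gray):
    # try every threshold t and keep the one maximising between-class variance
    hist, _ = np.histogram(gray, bins=256, range=(0, 256))
    total = gray.size
    best_t, best_var = 0, 0.0
    for t in range(1, 256):
        w0 = hist[:t].sum() / total   # weight of the "black" class
        w1 = 1.0 - w0                 # weight of the "white" class
        if w0 == 0 or w1 == 0:
            continue
        mu0 = (np.arange(t) * hist[:t]).sum() / (w0 * total)        # class means
        mu1 = (np.arange(t, 256) * hist[t:]).sum() / (w1 * total)
        var = w0 * w1 * (mu0 - mu1) ** 2   # between-class variance
        if var > best_var:
            best_var, best_t = var, t
    return best_t

gray = np.concatenate([np.full(100, 40), np.full(100, 200)]).astype(np.uint8)
t = otsu_threshold(gray)
binary = (gray >= t).astype(np.uint8)  # binarization: black/white
print(40 < t <= 200)  # True
```

For this bimodal toy image the chosen threshold falls between the two intensity clusters, so the binarization separates them exactly.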
  • the morphological dilation operation obtains the local maximum of the second pre-processed image (for example, at the boundary in this application) and replaces the boundary pixels with that maximum; likewise, the morphological erosion operation obtains the local minimum of the second pre-processed image (for example, at the boundary) and replaces the boundary pixels with that minimum.
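The erosion/dilation pair can be sketched with a 3×3 structuring element. Applying dilation and then erosion (a morphological closing) fills a small hole while restoring the region boundary, which matches the hole-filling effect described above (a toy illustration, not the patent's implementation):

```python
import numpy as np

def dilate(img):
    # 3x3 dilation: every pixel becomes the local maximum of its neighbourhood
    p = np.pad(img, 1)
    return np.max([p[i:i + img.shape[0], j:j + img.shape[1]]
                   for i in range(3) for j in range(3)], axis=0)

def erode(img):
    # 3x3 erosion: every pixel becomes the local minimum of its neighbourhood
    p = np.pad(img, 1)
    return np.min([p[i:i + img.shape[0], j:j + img.shape[1]]
                   for i in range(3) for j in range(3)], axis=0)

mask = np.zeros((7, 7), dtype=int)
mask[1:6, 1:6] = 1
mask[3, 3] = 0                # a one-pixel hole in the segmented region
filled = erode(dilate(mask))  # closing: dilation fills the hole, erosion restores the boundary
print(filled[3, 3])  # 1
```

Libraries such as `scipy.ndimage` or OpenCV provide these operations directly; the sliced min/max above is just the definition made explicit.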
  • the pre-processed up-sampled image is input into the convolutional segmentation network model for segmentation; the segmentation method is the same as step 2 above, and proceeds until the segmented image is obtained.
  • when the integrated module/unit of the electronic device 1 is implemented in the form of a software functional unit and sold or used as an independent product, it can be stored in a computer-readable storage medium.
  • the computer-readable storage medium may include: any entity or device capable of carrying the computer program code, a recording medium, a USB flash drive, a removable hard disk, a magnetic disk, an optical disk, a computer memory, or a read-only memory (ROM).
  • the computer-readable storage medium may be non-volatile or volatile.
  • modules described as separate components may or may not be physically separated, and the components displayed as modules may or may not be physical units, that is, they may be located in one place, or they may be distributed on multiple network units. Some or all of the modules can be selected according to actual needs to achieve the objectives of the solutions of the embodiments.
  • the functional modules in the various embodiments of the present application may be integrated into one processing unit, or each unit may exist alone physically, or two or more units may be integrated into one unit.
  • the above-mentioned integrated unit may be implemented in the form of hardware, or may be implemented in the form of hardware plus software functional modules.
  • the blockchain referred to in this application is a new application mode of computer technology such as distributed data storage, point-to-point transmission, consensus mechanism, and encryption algorithm.
  • blockchain is essentially a decentralized database: a chain of data blocks associated with one another using cryptographic methods. Each data block contains a batch of network transaction information, which is used to verify the validity of the information (anti-counterfeiting) and to generate the next block.
  • the blockchain can include the underlying platform of the blockchain, the platform product service layer, and the application service layer.

Abstract

Disclosed is a cell image segmentation method, comprising: performing a down-sampling operation on an original cell image to obtain a down-sampled image (S1); inputting the down-sampled image into a pre-built convolutional segmentation network model for segmentation to obtain a first segmentation image (S2); on the basis of a pre-built pixel coordinate conversion model and a bilinear interpolation algorithm, up-sampling the first segmentation image to the same size/resolution as the original cell image to obtain a second segmentation image (S3); by means of a preset geometric constraint and image feature matching method, merging the second segmentation image with the original cell image on the corresponding color channels to acquire an up-sampled image (S4); and performing morphological-algorithm preprocessing on the up-sampled image and inputting the result into the convolutional segmentation network model for segmentation to obtain a segmented image (S5). The method also relates to the field of blockchain technology, and the up-sampled image is stored in a blockchain. The invention solves the problems of excessively high cell image resolution and low algorithmic computational efficiency, which affect computation speed and segmentation accuracy.
PCT/CN2020/098966 2020-05-20 2020-06-29 Cell image segmentation method and apparatus, electronic device and readable storage medium WO2021151272A1 (fr)

Applications Claiming Priority (2)

Application Number Priority Date Filing Date Title
CN202010435101.7A CN111696084B (zh) Cell image segmentation method and apparatus, electronic device and readable storage medium
CN202010435101.7 2020-05-20

Publications (1)

Publication Number Publication Date
WO2021151272A1 true WO2021151272A1 (fr) 2021-08-05

Family

ID=72478060

Family Applications (1)

Application Number Title Priority Date Filing Date
PCT/CN2020/098966 WO2021151272A1 (fr) Cell image segmentation method and apparatus, electronic device and readable storage medium

Country Status (2)

Country Link
CN (1) CN111696084B (fr)
WO (1) WO2021151272A1 (fr)

Cited By (5)

* Cited by examiner, † Cited by third party
Publication number Priority date Publication date Assignee Title
CN114240978A (zh) Cell edge segmentation method and apparatus based on adaptive morphology
CN114757847A (zh) Extended U-Net with multi-information extraction and its application to low-dose X-ray imaging
CN115235991A (zh) Intelligent wear-resistance detection method and apparatus based on fiber sleeving
CN115375626A (zh) Medical image segmentation method, system, medium and device based on physical resolution
CN117455935A (zh) Abdominal CT medical image fusion and organ segmentation method and system

Families Citing this family (1)

* Cited by examiner, † Cited by third party
Publication number Priority date Publication date Assignee Title
CN112597852B (zh) Cell classification method and apparatus, electronic device and storage medium

Citations (5)

* Cited by examiner, † Cited by third party
Publication number Priority date Publication date Assignee Title
CN108090904A (zh) Medical image instance segmentation method and apparatus
CN110120047A (zh) Image segmentation model training method, image segmentation method, apparatus, device and medium
CN110363780A (zh) Image segmentation method and apparatus, computer-readable storage medium and computer device
US20190371018A1 (en) * 2018-05-29 2019-12-05 Korea Advanced Institute Of Science And Technology Method for processing sparse-view computed tomography image using neural network and apparatus therefor
CN111145209A (zh) Medical image segmentation method, apparatus, device and storage medium

Family Cites Families (13)

* Cited by examiner, † Cited by third party
Publication number Priority date Publication date Assignee Title
CN105976399A (zh) Moving target detection method based on SIFT feature matching
CN107085707A (zh) License plate positioning method based on traffic surveillance video
WO2018222755A1 (fr) Automated lesion detection, segmentation and longitudinal identification
US10282589B2 (en) * 2017-08-29 2019-05-07 Konica Minolta Laboratory U.S.A., Inc. Method and system for detection and classification of cells using convolutional neural networks
CN108961162A (zh) Unmanned aerial vehicle forest-area aerial image stitching method and system
CN108806793A (zh) Lesion monitoring method and apparatus, computer device and storage medium
CN108765369B (zh) Pulmonary nodule detection method and apparatus, computer device and storage medium
CN109087327B (zh) Thyroid nodule ultrasound image segmentation method based on a cascaded fully convolutional neural network
CN109598728B (zh) Image segmentation method and apparatus, diagnosis system and storage medium
CN109961049B (zh) Cigarette brand identification method in complex scenes
CN109993735A (zh) Image segmentation method based on cascaded convolution
CN110929789A (zh) Automatic liver tumor classification method and apparatus based on multi-phase CT image analysis
CN111028242A (zh) Automatic tumor segmentation system and method, and electronic device

Cited By (8)

* Cited by examiner, † Cited by third party
Publication number Priority date Publication date Assignee Title
CN114240978A (zh) Cell edge segmentation method and apparatus based on adaptive morphology
CN114757847A (zh) Extended U-Net with multi-information extraction and its application to low-dose X-ray imaging
CN115375626A (zh) Medical image segmentation method, system, medium and device based on physical resolution
CN115375626B (zh) Medical image segmentation method, system, medium and device based on physical resolution
CN115235991A (zh) Intelligent wear-resistance detection method and apparatus based on fiber sleeving
CN115235991B (zh) Intelligent wear-resistance detection method and apparatus based on fiber sleeving
CN117455935A (zh) Abdominal CT medical image fusion and organ segmentation method and system
CN117455935B (zh) Abdominal CT medical image fusion and organ segmentation method and system

Also Published As

Publication number Publication date
CN111696084B (zh) 2024-05-31
CN111696084A (zh) 2020-09-22

Similar Documents

Publication Publication Date Title
WO2021151272A1 (fr) Cell image segmentation method and apparatus, electronic device and readable storage medium
WO2021217851A1 (fr) Method and apparatus for automatically labelling abnormal cells, electronic device and storage medium
CN110321920B (zh) Image classification method and apparatus, computer-readable storage medium and computer device
WO2020108525A1 (fr) Image segmentation method and apparatus, diagnosis system, storage medium, and computer device
CN107665491B (zh) Pathological image recognition method and system
WO2021189912A1 (fr) Method and apparatus for detecting a target object in an image, electronic device, and storage medium
WO2022121156A1 (fr) Method and apparatus for detecting a target object in an image, electronic device, and readable storage medium
US11929174B2 (en) Machine learning method and apparatus, program, learned model, and discrimination apparatus using multilayer neural network
WO2021189901A1 (fr) Image segmentation method and apparatus, electronic device, and computer-readable storage medium
WO2021189910A1 (fr) Image recognition method and apparatus, electronic device, and computer-readable storage medium
WO2021189909A1 (fr) Lesion detection and analysis method and apparatus, electronic device, and computer storage medium
CN110276408B (zh) 3D image classification method, apparatus, device and storage medium
WO2021189913A1 (fr) Method and apparatus for segmenting a target object in an image, electronic device, and storage medium
Tan et al. Automated vessel segmentation in lung CT and CTA images via deep neural networks
Ghayvat et al. AI-enabled radiologist in the loop: novel AI-based framework to augment radiologist performance for COVID-19 chest CT medical image annotation and classification from pneumonia
WO2021151338A1 (fr) Medical image analysis method and apparatus, electronic device, and readable storage medium
WO2022032824A1 (fr) Image segmentation method and apparatus, device, and storage medium
CN111862096A (zh) Image segmentation method and apparatus, electronic device and storage medium
WO2023039163A1 (fr) Labeling, visualization and volumetric quantification of high-grade brain glioma from MRI images
WO2020110774A1 (fr) Image processing device, image processing method, and program
WO2021189914A1 (fr) Electronic device, medical image index generation method and apparatus, and storage medium
WO2021189856A1 (fr) Certificate verification method and apparatus, electronic device, and medium
CN111932563B (zh) Image region segmentation method and apparatus, electronic device and storage medium
Yang et al. RAU-Net: U-Net network based on residual multi-scale fusion and attention skip layer for overall spine segmentation
Shi et al. Dual dense context-aware network for hippocampal segmentation

Legal Events

Date Code Title Description
121 Ep: the epo has been informed by wipo that ep was designated in this application

Ref document number: 20916612

Country of ref document: EP

Kind code of ref document: A1

NENP Non-entry into the national phase

Ref country code: DE

122 Ep: pct application non-entry in european phase

Ref document number: 20916612

Country of ref document: EP

Kind code of ref document: A1