CN117372274A - Scanned image refocusing method, apparatus, electronic device and storage medium - Google Patents


Info

Publication number
CN117372274A
Authority
CN
China
Prior art keywords
images
refocusing
image
scanning
model
Prior art date
Legal status: Pending
Application number
CN202311438903.3A
Other languages
Chinese (zh)
Inventor
吕行
邝英兰
Current Assignee
Zhuhai Hengqin Shengao Yunzhi Technology Co ltd
Original Assignee
Zhuhai Hengqin Shengao Yunzhi Technology Co ltd
Priority date
Filing date
Publication date
Application filed by Zhuhai Hengqin Shengao Yunzhi Technology Co ltd
Priority claimed from CN202311438903.3A
Published as CN117372274A
Legal status: Pending

Classifications

    • G06T5/50 Image enhancement or restoration by the use of more than one image, e.g. averaging, subtraction
    • G06T7/90 Image analysis: determination of colour characteristics
    • G06V10/40 Extraction of image or video features
    • G06V10/806 Fusion of extracted features at the sensor, preprocessing, feature extraction or classification level
    • G06T2207/10024 Image acquisition modality: color image
    • G06T2207/10061 Image acquisition modality: microscopic image from scanning electron microscope

Abstract

The invention provides a scanned image refocusing method, apparatus, electronic device and storage medium. In one scheme, a plurality of scanned images are fused into a scan-fused image, and the scan-fused image is refocused by a first refocusing model to obtain the refocused image corresponding to the plurality of scanned images. In an alternative scheme, a second refocusing model extracts features from each of the scanned images, fuses the resulting feature matrices into a fused feature matrix, and obtains the refocused image from the fused feature matrix. Image refocusing can thus be performed from only a small number of scanned images, either by fusing them first and refocusing with a deep learning network, or by letting the network extract and fuse their features before refocusing, which improves both the refocusing quality and efficiency.

Description

Scanned image refocusing method, apparatus, electronic device and storage medium
Technical Field
The present invention relates to the field of image analysis technologies, and in particular, to a scanned image refocusing method, apparatus, electronic device, and storage medium.
Background
Circulating tumor cell (Circulating Tumor Cells, CTCs) detection based on fluorescence in situ hybridization (Fluorescence In Situ Hybridization, FISH) is used for early lung cancer screening. FISH rapidly detects cellular components by labeling specific DNA sequences and is an important tool for cytogenetic analysis. However, FISH requires accurate localization of the position and contour of each cell, and its complicated workflow is prone to positioning errors. When collecting cell images with a microscope, 15 layers of images are typically shot in succession after pre-positioning; only 3-4 of these layers are in focus, and the rest are defocused. Because it cannot be determined during acquisition which layers are in focus, the conventional approach is to synthesize a new image from the 15 layers for subsequent image analysis. The synthesized image, however, suffers from local blurring and uneven contrast, which degrades subsequent cell analysis. There is therefore a need to generate refocused images from a small number of images, or even a single image, so as to obtain cell images with better sharpness and contrast.
Disclosure of Invention
The invention provides a scanned image refocusing method, a scanned image refocusing device, electronic equipment and a storage medium, which are used for solving the defects of local blurring and uneven contrast of a synthesized image in the prior art.
The invention provides a scanned image refocusing method, which comprises the following steps:
acquiring a plurality of scanning images of different Z-axis positions under the same visual field acquired by a microscope;
fusing the plurality of scanning images to obtain a scanning fused image, and refocusing the scanning fused image based on a first refocusing model to obtain refocusing images corresponding to the plurality of scanning images;
or respectively extracting the characteristics of the plurality of scanning images based on a second refocusing model, fusing the characteristic matrixes corresponding to the plurality of scanning images after obtaining the characteristic matrixes corresponding to the plurality of scanning images, obtaining a fused characteristic matrix, and obtaining refocusing images corresponding to the plurality of scanning images based on the fused characteristic matrix;
the first refocusing model is obtained by training based on sample scanning fusion images of a plurality of sample scanning images and corresponding sample refocusing images thereof, and the second refocusing model is obtained by training based on a plurality of sample scanning images and corresponding sample refocusing images thereof.
According to the scanned image refocusing method provided by the invention, the first refocusing model or the second refocusing model is trained based on the following steps:
inputting a model input image corresponding to the current round of iteration into the first refocusing model or the second refocusing model to obtain a test refocusing image output by the first refocusing model or the second refocusing model, calculating model loss based on the test refocusing image and a corresponding sample refocusing image, and adjusting model parameters of the first refocusing model or the second refocusing model based on the model loss;
the model input images corresponding to the first refocusing model are sample scan-fused images of a plurality of sample scanned images, and the model input images corresponding to the second refocusing model are the plurality of sample scanned images themselves; before a preset training node is reached, the model loss is calculated based on a first loss function, and after the preset training node the model loss is calculated based on both the first loss function and a second loss function, wherein the first loss function measures the difference between the test refocused image and the corresponding sample refocused image, and the second loss function measures the difference between the feature maps of the test refocused image and of the sample refocused image output by a pre-trained feature extraction network.
According to the scanned image refocusing method provided by the invention, the first refocusing model comprises a downsampling network and an upsampling network; the second refocusing model includes a plurality of downsampling networks and upsampling networks corresponding to the number of scanned images; wherein the downsampling network is constructed based on a pre-trained image feature extraction model.
According to the scanned image refocusing method provided by the invention, any downsampling layer in each downsampling network of the second refocusing model is connected to the upsampling layer of the upsampling network whose output feature map has the same size as the feature map output by that downsampling layer.
According to the scan image refocusing method provided by the invention, the method for fusing the plurality of scan images to obtain a scan fused image specifically comprises the following steps:
converting the plurality of scanning images into gray level images to obtain a plurality of scanning gray level images;
calculating an average pixel value of each pixel point based on the pixel values of the same pixel points in the plurality of scanning gray images, and generating a fusion gray image based on the average pixel value of each pixel point;
and converting the fusion gray level image into a color image to obtain the scanning fusion image.
According to the scanned image refocusing method provided by the invention, the method further comprises the following steps:
performing target segmentation on the plurality of scanning images and refocusing images corresponding to the plurality of scanning images respectively based on a target segmentation model to obtain respective target segmentation results of the plurality of scanning images and target segmentation results of refocusing images corresponding to the plurality of scanning images;
determining a segmentation effect evaluation value of the plurality of scanning images and refocusing images corresponding to the plurality of scanning images respectively based on respective target segmentation results of the plurality of scanning images and target segmentation results of refocusing images corresponding to the plurality of scanning images;
respectively determining the sharpness of the plurality of scanned images and of the refocused images corresponding to the plurality of scanned images;
and determining focus effect evaluation values of the refocused images corresponding to the plurality of scanned images based on the segmentation effect evaluation values and the sharpness of the plurality of scanned images and of their corresponding refocused images.
According to the scanned image refocusing method provided by the invention, the determining of the focus effect evaluation values of the refocused images corresponding to the plurality of scanned images based on the segmentation effect evaluation values and the sharpness of the plurality of scanned images and of their corresponding refocused images specifically comprises:
calculating image effect evaluation values of the plurality of scanned images based on their respective segmentation effect evaluation values and sharpness;
calculating image effect evaluation values of the refocused images corresponding to the plurality of scanned images based on the segmentation effect evaluation values and the sharpness of those refocused images;
and determining the focus effect evaluation values of the refocused images based on the differences between the image effect evaluation values of the refocused images and the image effect evaluation values of the plurality of scanned images.
The invention also provides a scanned image refocusing device, which comprises:
the scanning image acquisition unit is used for acquiring a plurality of scanning images of different Z-axis positions under the same visual field acquired by the microscope;
the first refocusing unit is used for fusing the plurality of scanning images to obtain a scanning fused image, and refocusing the scanning fused image based on a first refocusing model to obtain refocusing images corresponding to the plurality of scanning images;
or the second refocusing unit is used for respectively extracting the characteristics of the plurality of scanning images based on a second refocusing model, fusing the characteristic matrixes corresponding to the plurality of scanning images after obtaining the characteristic matrixes corresponding to the plurality of scanning images respectively, obtaining a fused characteristic matrix, and obtaining refocusing images corresponding to the plurality of scanning images based on the fused characteristic matrix;
The first refocusing model is obtained by training based on sample scanning fusion images of a plurality of sample scanning images and corresponding sample refocusing images thereof, and the second refocusing model is obtained by training based on a plurality of sample scanning images and corresponding sample refocusing images thereof.
The invention also provides an electronic device comprising a memory, a processor and a computer program stored on the memory and executable on the processor, the processor implementing the scanned image refocusing method as described above when executing the program.
The present invention also provides a non-transitory computer readable storage medium having stored thereon a computer program which, when executed by a processor, implements a scanned image refocusing method as described in any one of the above.
The invention also provides a computer program product comprising a computer program which, when executed by a processor, implements a scanned image refocusing method as described in any one of the above.
According to the scanned image refocusing method, apparatus, electronic device and storage medium, a plurality of scanned images are fused into a scan-fused image, which is refocused by the first refocusing model to obtain the refocused image corresponding to the plurality of scanned images; alternatively, the second refocusing model extracts features from each scanned image, fuses the resulting feature matrices into a fused feature matrix, and obtains the refocused image from it. Image refocusing can therefore be performed from only a small number of scanned images, so that fewer layers need to be scanned over the same scanning range and scanning cost is reduced. Refocusing a small number of scanned images either by fusing them and then applying a deep learning network, or by letting the network extract and fuse their features first, improves both the refocusing quality and efficiency.
Drawings
In order to more clearly illustrate the technical solutions of the invention or of the prior art, the drawings used in the description of the embodiments or of the prior art are briefly introduced below. The drawings described below are obviously only some embodiments of the invention; other drawings can be derived from them by a person skilled in the art without inventive effort.
FIG. 1 is a flow chart of a scanned image refocusing method provided by the invention;
FIG. 2 is a schematic flow chart of an image focusing effect evaluation method provided by the invention;
FIG. 3 is a schematic diagram of a scanned image refocusing device according to the present invention;
fig. 4 is a schematic structural diagram of an electronic device provided by the present invention.
Detailed Description
For the purpose of making the objects, technical solutions and advantages of the present invention more apparent, the technical solutions of the present invention will be clearly and completely described below with reference to the accompanying drawings, and it is apparent that the described embodiments are some embodiments of the present invention, not all embodiments. All other embodiments, which can be made by those skilled in the art based on the embodiments of the invention without making any inventive effort, are intended to be within the scope of the invention.
Fig. 1 is a schematic flow chart of a scanned image refocusing method provided by the invention, as shown in fig. 1, the method includes:
step 110, acquiring a plurality of scanning images of different Z-axis positions under the same visual field acquired by a microscope;
step 120, fusing the plurality of scanning images to obtain a scanning fused image, and refocusing the scanning fused image based on a first refocusing model to obtain refocusing images corresponding to the plurality of scanning images;
or, step 130, performing feature extraction on the plurality of scanned images based on a second refocusing model, obtaining feature matrixes corresponding to the plurality of scanned images, fusing the feature matrixes corresponding to the plurality of scanned images to obtain fused feature matrixes, and obtaining refocusing images corresponding to the plurality of scanned images based on the fused feature matrixes;
the first refocusing model is obtained by training based on sample scanning fusion images of a plurality of sample scanning images and corresponding sample refocusing images thereof, and the second refocusing model is obtained by training based on a plurality of sample scanning images and corresponding sample refocusing images thereof.
Specifically, multiple layers of scanned images of different Z-axis positions under the same field of view continuously photographed by a microscope can be acquired. In some embodiments, the scan image may be a DAPI image, and the number of scan images may be set according to an actual application scenario, for example, may be 3 layers, which is not limited in particular by the embodiment of the present invention.
In one mode, a plurality of scanned images may be fused to obtain a scan-fused image, which is then input to the first refocusing model for refocusing, yielding the refocused image corresponding to the plurality of scanned images output by the model. In some embodiments, to fuse the plurality of scanned images, they may first be converted into grayscale images, obtaining a plurality of scanned grayscale images. Then, based on the pixel values of corresponding pixels (i.e., pixels at the same position) across the scanned grayscale images, the average pixel value of each pixel is computed, and a fused grayscale image is generated from these averages. The fused grayscale image is then converted into a color image using an image processing library (e.g., OpenCV), giving the scan-fused image corresponding to the plurality of scanned images. In addition, the scan-fused image can be normalized before being input to the first refocusing model, and the refocused image output by the model can then be denormalized.
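For illustration, the grayscale-averaging fusion described above can be sketched in a few lines of NumPy (a minimal sketch; the BT.601 luma weights and the channel replication used to return to color are assumptions, since the embodiment only names an image processing library such as OpenCV for the conversions):

```python
import numpy as np

def fuse_scans(scans):
    """Fuse multiple Z-layer scans into one image by grayscale averaging.

    scans: list of HxWx3 uint8 RGB arrays (same field of view, different Z).
    Returns an HxWx3 uint8 scan-fused image. The 3-channel output is produced
    here by replicating the fused grayscale plane; the embodiment instead
    converts back to color with an image processing library such as OpenCV.
    """
    grays = [
        # ITU-R BT.601 luma weights, as used by common grayscale conversions
        img[..., 0] * 0.299 + img[..., 1] * 0.587 + img[..., 2] * 0.114
        for img in scans
    ]
    fused = np.mean(grays, axis=0)                           # per-pixel average
    fused = np.clip(np.rint(fused), 0, 255).astype(np.uint8)
    return np.repeat(fused[..., None], 3, axis=2)            # back to 3 channels

# Example: three synthetic 4x4 scans at different brightness levels
layers = [np.full((4, 4, 3), v, dtype=np.uint8) for v in (60, 120, 180)]
fused = fuse_scans(layers)
print(fused.shape, fused[0, 0, 0])  # (4, 4, 3) 120
```

In practice the fused image would also be normalized before entering the first refocusing model, as noted above.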
The first refocusing model can be built from a downsampling network and an upsampling network. Its model parameters are adjusted by back-propagating a model loss computed from sample scan-fused images of multiple sample scanned images and the corresponding sample refocused images, so that the model learns to extract and restore the details of the scan-fused image and thereby refocus it. In some embodiments, the first refocusing model may be built on the U-Net model; to further improve training efficiency and quality, the downsampling network of U-Net may be replaced with a pre-trained image feature extraction model such as ResNet or DenseNet, whose existing image feature extraction capability speeds up convergence of the first refocusing model.
In another mode, the plurality of scanned images may be input to the second refocusing model, which extracts features from each of them; after the per-image feature matrices are obtained, they are fused into one fused feature matrix, from which the refocused image corresponding to the plurality of scanned images is obtained. The second refocusing model can be built from several downsampling networks and one upsampling network: each downsampling network corresponds one-to-one to a scanned image and extracts its feature matrix, the feature matrices of the scanned images are fused (e.g., concatenated along the channel dimension) into a fused feature matrix, and the upsampling network processes the fused feature matrix to produce the refocused image. Compared with the first mode, this fully extracts and fuses the features of the original scanned images, retains more image information, and can achieve a better refocusing effect. As before, each scanned image may be normalized before being input to the second refocusing model, and the output refocused image denormalized.
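The channel-wise fusion of per-image feature matrices can be illustrated with plain NumPy (a sketch under the assumption that each downsampling network emits a CxHxW feature matrix; in the actual model these would come from learned convolutional encoders):

```python
import numpy as np

def fuse_feature_matrices(feature_maps):
    """Fuse per-scan feature matrices by concatenating along the channel axis.

    feature_maps: list of arrays shaped (C, H, W), one per scanned Z-layer,
    as produced by the per-image downsampling networks. The fused matrix has
    shape (len(feature_maps) * C, H, W) and would be handed to the shared
    upsampling network.
    """
    return np.concatenate(feature_maps, axis=0)

# Three scans, each encoded into a 16-channel 8x8 feature matrix
rng = np.random.default_rng(0)
features = [rng.standard_normal((16, 8, 8)) for _ in range(3)]
fused = fuse_feature_matrices(features)
print(fused.shape)  # (48, 8, 8)
```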
Here, similar to the first refocusing model, the model parameters of the second refocusing model can be adjusted by back-propagating a model loss computed from the plurality of sample scanned images and the corresponding sample refocused images, so that the second refocusing model learns to extract and restore image detail and realize refocusing. In some embodiments, the second refocusing model may also be built on the U-Net model, with several structurally identical downsampling networks, one per scanned image, for the feature extraction processing. The downsampling networks may likewise be replaced with a pre-trained image feature extraction model, whose feature extraction capability improves the convergence speed and training quality of the second refocusing model. In other embodiments, a skip connection is used between any downsampling layer of each downsampling network and the upsampling layer whose output feature map has the same size as that downsampling layer's output, i.e., layers of equal resolution are connected, so that the second refocusing model can fuse the feature information extracted during downsampling into the upsampling process and optimize the refocusing effect.
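The size-matching rule of these skip connections can be illustrated with a small shape-pairing sketch (purely illustrative; the three-level sizes and channel counts are hypothetical, not values from the embodiment):

```python
import numpy as np

# Spatial sizes (H = W) along a toy 3-level encoder/decoder
down_sizes = [64, 32, 16]   # outputs of successive downsampling layers
up_sizes = [16, 32, 64]     # outputs of successive upsampling layers

# Each downsampling layer is skip-connected to the upsampling layer whose
# output feature map has the same spatial size
skips = {d: up_sizes.index(d) for d in down_sizes}
print(skips)  # {64: 2, 32: 1, 16: 0}

# At fusion time, the skip simply concatenates equal-sized maps channel-wise
enc = np.zeros((8, 32, 32))   # encoder features at spatial size 32
dec = np.zeros((8, 32, 32))   # decoder features at the matching layer
skip_fused = np.concatenate([enc, dec], axis=0)
print(skip_fused.shape)  # (16, 32, 32)
```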
In some embodiments, to promote refocusing performance of the first and second refocusing models for a small number of scanned images, the first and second refocusing models may be trained as follows:
A model input image for the current iteration is input to the first or second refocusing model to obtain the test refocused image output by that model; a model loss is calculated from the test refocused image and the corresponding sample refocused image, and the model parameters are adjusted based on this loss. The model input images for the first refocusing model are sample scan-fused images of multiple sample scanned images, and those for the second refocusing model are the sample scanned images themselves. Before a preset training node is reached, the model loss is calculated from the first loss function alone; afterwards, from both the first and the second loss function. The first loss function measures the difference between the test refocused image and the corresponding sample refocused image; the second measures the difference between the feature maps of the test refocused image and of the sample refocused image produced by a pre-trained feature extraction network.
Specifically, a two-stage training scheme may be used for both the first and the second refocusing model. Taking the first refocusing model as an example: in the early stage of training, the model input image for the current iteration (i.e., the sample scan-fused image of a group of sample scanned images, fused in the same way as described in the above embodiment) is input to the first refocusing model to obtain its test refocused image. The model loss is then computed with the first loss function from the test refocused image and the corresponding sample refocused image, and the model parameters are adjusted accordingly. The first loss function measures the difference between the test refocused image and the corresponding sample refocused image; for example, an L1 loss may be used to compute their absolute difference.
In the later stage of training, after the model input image for the current iteration is input to the first refocusing model to obtain its test refocused image, the first and second loss functions are used to compute a first and a second model loss from the test refocused image and the corresponding sample refocused image, the fusion of the two losses serves as the model loss, and the model parameters are adjusted based on it. The second loss function compares feature maps output by a pre-trained feature extraction network (e.g., a VGG network). In some embodiments, the test and sample refocused images are each passed through the pre-trained feature extraction network, and the feature maps output by the first few (e.g., first 3) network layers are collected. The difference between the feature maps of the test and sample refocused images output by the same layer is then computed, for example as an L1 absolute difference, and the per-layer losses are summed as the second model loss. Further, when fusing the first and second model losses into the model loss, a weighted sum may be used in which the first model loss is weighted less than the second; for example, the weight of the first model loss may be set to 0.1 and that of the second to 1.
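The staged loss schedule can be sketched numerically (a NumPy sketch; the perceptual term is stubbed with stand-in feature layers rather than a real VGG network, and the 0.1/1.0 weights follow the example values above):

```python
import numpy as np

def l1_loss(a, b):
    """First loss: mean absolute difference between two images."""
    return float(np.mean(np.abs(a - b)))

def perceptual_loss(a, b, feature_layers):
    """Second loss: summed L1 differences between the feature maps of the
    first few layers of a (pre-trained) feature extractor. Here each 'layer'
    is a stand-in function mapping an image to a feature map."""
    return sum(l1_loss(f(a), f(b)) for f in feature_layers)

def model_loss(pred, target, past_training_node, feature_layers,
               w1=0.1, w2=1.0):
    """Before the preset training node: L1 only. After it: weighted sum,
    with the L1 term down-weighted (0.1) relative to the perceptual term (1.0)."""
    if not past_training_node:
        return l1_loss(pred, target)
    return w1 * l1_loss(pred, target) + w2 * perceptual_loss(pred, target, feature_layers)

# Toy stand-in feature layers: identity and a strided subsampling
layers = [lambda x: x, lambda x: x[::2, ::2]]
pred = np.ones((4, 4)) * 2.0
target = np.zeros((4, 4))
print(model_loss(pred, target, False, layers))  # early stage: 2.0
print(model_loss(pred, target, True, layers))   # late stage: 0.1*2 + 1*(2+2) = 4.2
```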
It should be noted that the preset training node dividing the early and late training stages may be determined from the model loss computed with the first loss function: if, over several consecutive iterations, the difference between the model loss and a preset threshold stays within a preset range, the model has reached a steady state under the first loss function alone, and the preset training node is considered reached. From the next iteration on, the model loss is computed with both loss functions to adjust the model parameters, so that the model can learn further and improve its image refocusing capability. The training process of the second refocusing model is similar, except that its model input is a group of sample scanned images, and is therefore not repeated here.
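One way to realize such a training-node check is a simple plateau test over the recent loss history (a sketch; the window size and tolerance are hypothetical hyperparameters, not values from the embodiment):

```python
def reached_training_node(losses, threshold, tol=0.05, window=5):
    """Return True when, over the last `window` iterations, every
    first-stage model loss stays within `tol` of the preset threshold,
    i.e. training under the first loss function alone has reached a
    steady state and the second loss function should be switched in."""
    if len(losses) < window:
        return False
    return all(abs(l - threshold) <= tol for l in losses[-window:])

history = [0.9, 0.5, 0.32, 0.31, 0.30, 0.31, 0.30]
print(reached_training_node(history, threshold=0.30))      # True
print(reached_training_node(history[:4], threshold=0.30))  # False: still falling
```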
In summary, the method provided by the embodiment of the invention fuses a plurality of scanned images into a scan-fused image and refocuses it with the first refocusing model to obtain the corresponding refocused image; alternatively, the second refocusing model extracts features from each scanned image, fuses the resulting feature matrices into a fused feature matrix, and obtains the refocused image from it. Image refocusing can thus be performed from a small number of scanned images, so that fewer layers need to be scanned over the same scanning range and scanning cost is reduced; refocusing the small number of scanned images either by fusing them and then applying a deep learning network, or by extracting and fusing their features with the network first, improves both the refocusing quality and efficiency.
Based on the above-described embodiments, conventional indices for evaluating generated-image quality, such as PSNR, SSIM, and FID, are limited to measuring the difference from an ideal image, and their evaluation performance in image refocusing scenarios is not optimal. In this regard, in order to evaluate the refocusing effect of the refocused image more accurately and thereby optimize the image refocusing method described above, the focusing effect may be evaluated in combination with AI segmentation results. Specifically, as shown in fig. 2, the method further includes:
step 210, performing target segmentation on the plurality of scan images and on the refocused images corresponding to the plurality of scan images based on a target segmentation model, so as to obtain target segmentation results of the plurality of scan images and of the refocused images corresponding to the plurality of scan images;
step 220, determining segmentation effect evaluation values of the plurality of scan images and of the refocused images corresponding to the plurality of scan images based on the respective target segmentation results of the plurality of scan images and of the refocused images;
step 230, determining the sharpness of the plurality of scan images and of the refocused images corresponding to the plurality of scan images, respectively;
step 240, determining focus effect evaluation values of the refocused images corresponding to the plurality of scan images based on the segmentation effect evaluation values and the sharpness of the plurality of scan images and of the refocused images corresponding to the plurality of scan images.
Specifically, the plurality of scan images and the refocused images corresponding to the plurality of scan images may each be input into a pre-trained target segmentation model for target segmentation. Any segmentation model, such as Mask R-CNN, may be used as the target segmentation model, which is not specifically limited in the embodiment of the present invention. When the target segmentation model is trained, the target to be segmented may be determined based on the application scenario of the refocused image, and the corresponding sample segmentation images and their sample segmentation results collected accordingly. For example, when the downstream task of the refocused image is a cell-related image processing task, such as cell classification or cell segmentation, the target to be segmented may be determined to be a cell; cell images captured by a microscope can then be collected as sample segmentation images and annotated to obtain the corresponding sample segmentation results. Here, 15 layers of cell images continuously photographed by a microscope may be fused into one fused image to serve as a sample segmentation image. The targets to be segmented in the plurality of scan images and in the refocused images corresponding to the plurality of scan images are then segmented by the pre-trained target segmentation model, yielding the target segmentation results of the plurality of scan images and of the refocused images corresponding to the plurality of scan images.
Then, based on the target segmentation results of the plurality of scan images and of the refocused images corresponding to the plurality of scan images, the segmentation effect evaluation values of the respective scan images and refocused images are determined. A segmentation effect evaluation value represents the segmentation accuracy of the corresponding target segmentation result and may be calculated using any image segmentation evaluation index, for example mAP (mean Average Precision), which is not particularly limited in the embodiment of the present invention. Further, the sharpness of the plurality of scan images and of the refocused images corresponding to the plurality of scan images is determined, respectively. The sharpness may be calculated based on an image sharpness evaluation index; for example, the sharpness of the plurality of scan images and of the refocused images may be calculated using the Tenengrad function.
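For illustration only, a minimal NumPy version of the Tenengrad measure mentioned above might look as follows (assuming a 2-D grayscale array; the 3x3 Sobel kernels are the conventional choice, and the explicit loops favor clarity over speed):

```python
import numpy as np

def tenengrad(gray):
    """Tenengrad sharpness: mean squared gradient magnitude computed
    from horizontal and vertical Sobel responses; larger is sharper."""
    kx = np.array([[-1, 0, 1], [-2, 0, 2], [-1, 0, 1]], dtype=float)
    ky = kx.T
    h, w = gray.shape
    gx = np.zeros((h - 2, w - 2))
    gy = np.zeros((h - 2, w - 2))
    for i in range(h - 2):          # valid 3x3 windows only
        for j in range(w - 2):
            patch = gray[i:i + 3, j:j + 3]
            gx[i, j] = (patch * kx).sum()
            gy[i, j] = (patch * ky).sum()
    return float((gx ** 2 + gy ** 2).mean())
```

A perfectly flat image scores zero, while images with strong edges score higher, which is why the measure tracks focus quality.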
Based on the segmentation effect evaluation values and the sharpness of the plurality of scan images and of the refocused images corresponding to the plurality of scan images, the focus effect evaluation values of the refocused images are comprehensively determined, improving the accuracy of the refocusing effect evaluation. The image effect evaluation values of the plurality of scan images may be calculated from their respective segmentation effect evaluation values and sharpness. For example, for any scan image, its segmentation effect evaluation value and sharpness may be weighted and summed to obtain the image effect evaluation value of that scan image, where the weight of the segmentation effect evaluation value is larger than the weight of the sharpness; for example, the weight of the segmentation effect evaluation value may be set to 0.9 and the weight of the sharpness to 0.1. Similarly, the image effect evaluation value of a refocused image may be calculated from its segmentation effect evaluation value and sharpness. Then, the focus effect evaluation value of the refocused image is determined based on the difference between the image effect evaluation value of the refocused image and the image effect evaluation values of the plurality of scan images: the larger this difference, the better the refocusing effect of the refocused image, and thus the larger its focus effect evaluation value may be.
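The weighted combination and difference-based scoring just described can be sketched as follows (the 0.9/0.1 weights come from the example above; averaging the scan images' effect values before taking the difference is an assumption, since the patent only states that the evaluation value grows with the difference):

```python
def image_effect(seg_score, sharpness, w_seg=0.9, w_sharp=0.1):
    # Weighted sum, with segmentation accuracy weighted above sharpness.
    return w_seg * seg_score + w_sharp * sharpness

def focus_effect(refocus_effect_value, scan_effect_values):
    # The larger the gap between the refocused image's effect value and
    # the (here: averaged) effect values of the original scans, the
    # better the refocusing is judged to be.
    return refocus_effect_value - sum(scan_effect_values) / len(scan_effect_values)
```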
The scanned image refocusing device provided by the present invention is described below; the scanned image refocusing device described below and the scanned image refocusing method described above may be referred to in correspondence with each other.
Based on any of the above embodiments, fig. 3 is a schematic structural diagram of a scanned image refocusing device provided by the present invention, as shown in fig. 3, the device includes:
a scan image acquiring unit 310, configured to acquire a plurality of scan images of different Z-axis positions under the same field of view acquired by the microscope;
the first refocus unit 320 is configured to fuse the plurality of scan images to obtain a scan fused image, and refocus the scan fused image based on a first refocus model to obtain refocus images corresponding to the plurality of scan images;
or, the second refocusing unit 330 is configured to perform feature extraction on the multiple scan images based on a second refocusing model, obtain feature matrices corresponding to the multiple scan images, fuse the feature matrices corresponding to the multiple scan images to obtain a fused feature matrix, and obtain refocused images corresponding to the multiple scan images based on the fused feature matrix;
The first refocusing model is obtained by training based on sample scanning fusion images of a plurality of sample scanning images and corresponding sample refocusing images thereof, and the second refocusing model is obtained by training based on a plurality of sample scanning images and corresponding sample refocusing images thereof.
According to the device provided by the embodiment of the invention, a scan fused image is obtained by fusing the plurality of scan images, and the scan fused image is refocused based on the first refocusing model to obtain the refocused images corresponding to the plurality of scan images; alternatively, feature extraction is performed on the plurality of scan images based on the second refocusing model, the resulting feature matrices corresponding to the plurality of scan images are fused into a fused feature matrix, and the refocused images corresponding to the plurality of scan images are obtained based on the fused feature matrix. Image refocusing can thus be performed from only a small number of scan images, so that fewer layers need to be scanned within the same scanning interval and the scanning cost is reduced. Refocusing a small number of scan images, either by fusion followed by a deep learning network or by feature extraction and fusion within a deep learning network, improves both the refocusing performance and the refocusing efficiency.
Based on any of the above embodiments, the first refocusing model or the second refocusing model is trained based on the following steps:
inputting a model input image corresponding to the current round of iteration into the first refocusing model or the second refocusing model to obtain a test refocusing image output by the first refocusing model or the second refocusing model, calculating model loss based on the test refocusing image and a corresponding sample refocusing image, and adjusting model parameters of the first refocusing model or the second refocusing model based on the model loss;
the model input images corresponding to the first refocusing model are sample scan fused images of a plurality of sample scan images, and the model input images corresponding to the second refocusing model are a plurality of sample scan images; the model loss is calculated based on a first loss function before the preset training node is reached, and based on both the first loss function and a second loss function after the preset training node is reached, wherein the first loss function is used to determine the difference between the test refocused image and the corresponding sample refocused image, and the second loss function is used to calculate the difference between the feature map of the test refocused image and the feature map of the sample refocused image, both output by a pre-trained feature extraction network.
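As an illustrative sketch of the two losses (the patent does not fix the distance measure; a mean absolute difference is assumed here, and the feature maps would in practice come from the pre-trained feature extraction network):

```python
import numpy as np

def first_loss(test_refocus, sample_refocus):
    # Pixel-space difference between the test refocused image and the
    # corresponding sample refocused image (L1 distance, by assumption).
    return float(np.abs(test_refocus - sample_refocus).mean())

def second_loss(test_features, sample_features):
    # Feature-space difference between the feature maps of the two images,
    # as output by a pre-trained feature extraction network
    # (a perceptual-style loss).
    return float(np.abs(test_features - sample_features).mean())
```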
Based on any of the above embodiments, the first refocusing model includes a downsampling network and an upsampling network; the second refocusing model includes a plurality of downsampling networks and upsampling networks corresponding to the number of scanned images; wherein the downsampling network is constructed based on a pre-trained image feature extraction model.
Based on any of the above embodiments, any downsampling layer in each downsampling network of the second refocusing model is connected to the upsampling layer in the upsampling network whose output feature map size is the same as the feature map size output by that downsampling layer.
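The pairing rule above (each downsampling layer is skip-connected to the upsampling layer producing a feature map of the same size, as in a U-Net-style architecture) can be illustrated with a small helper; the function and its inputs are hypothetical:

```python
def skip_connection_pairs(down_sizes, up_sizes):
    """Pair each downsampling layer with every upsampling layer whose
    output feature map has the same spatial size."""
    return [(d, u)
            for d, d_size in enumerate(down_sizes)
            for u, u_size in enumerate(up_sizes)
            if d_size == u_size]
```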
Based on any one of the above embodiments, the fusing the plurality of scan images to obtain a scan fused image specifically includes:
converting the plurality of scanning images into gray level images to obtain a plurality of scanning gray level images;
calculating an average pixel value of each pixel point based on the pixel values of the same pixel points in the plurality of scanning gray images, and generating a fusion gray image based on the average pixel value of each pixel point;
and converting the fusion gray level image into a color image to obtain the scanning fusion image.
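A minimal NumPy sketch of the fusion steps listed above (the grayscale conversion uses a plain channel average and the gray-to-color step replicates the fused channel; both are assumptions, as the patent does not specify the conversions):

```python
import numpy as np

def fuse_scans(scans):
    """Fuse several RGB scan images: convert each to grayscale, average
    the grayscale images per pixel, then expand back to three channels."""
    grays = [img.mean(axis=2) for img in scans]   # per-image grayscale
    fused_gray = np.mean(grays, axis=0)           # per-pixel average across scans
    return np.stack([fused_gray] * 3, axis=2)     # fused "color" image
```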
Based on any one of the above embodiments, the apparatus further includes a refocus effect evaluation unit configured to:
Performing target segmentation on the plurality of scanning images and refocusing images corresponding to the plurality of scanning images respectively based on a target segmentation model to obtain respective target segmentation results of the plurality of scanning images and target segmentation results of refocusing images corresponding to the plurality of scanning images;
determining a segmentation effect evaluation value of the plurality of scanning images and refocusing images corresponding to the plurality of scanning images respectively based on respective target segmentation results of the plurality of scanning images and target segmentation results of refocusing images corresponding to the plurality of scanning images;
determining the sharpness of the refocused images corresponding to the plurality of scan images, respectively;
and determining the focus effect evaluation values of the refocused images corresponding to the plurality of scan images based on the segmentation effect evaluation values and the sharpness of the plurality of scan images and of the refocused images corresponding to the plurality of scan images.
Based on any of the above embodiments, the determining the focus effect evaluation values of the refocused images corresponding to the plurality of scan images based on the segmentation effect evaluation values and the sharpness of the plurality of scan images and of the refocused images specifically includes:
calculating image effect evaluation values of the plurality of scan images based on the respective segmentation effect evaluation values and sharpness of the plurality of scan images;
calculating image effect evaluation values of the refocused images corresponding to the plurality of scan images based on the segmentation effect evaluation values and the sharpness of the refocused images;
and determining the focus effect evaluation values of the refocused images corresponding to the plurality of scan images based on the differences between the image effect evaluation values of the refocused images and the image effect evaluation values of the plurality of scan images.
Fig. 4 is a schematic structural diagram of an electronic device according to the present invention. As shown in fig. 4, the electronic device may include: a processor 410, a memory 420, a communication interface (Communications Interface) 430, and a communication bus 440, wherein the processor 410, the memory 420, and the communication interface 430 communicate with each other via the communication bus 440. The processor 410 may invoke logic instructions in the memory 420 to perform a scanned image refocusing method comprising: acquiring a plurality of scan images of different Z-axis positions under the same field of view acquired by a microscope; fusing the plurality of scan images to obtain a scan fused image, and refocusing the scan fused image based on a first refocusing model to obtain refocused images corresponding to the plurality of scan images; or performing feature extraction on the plurality of scan images based on a second refocusing model, fusing the resulting feature matrices corresponding to the plurality of scan images to obtain a fused feature matrix, and obtaining the refocused images corresponding to the plurality of scan images based on the fused feature matrix; wherein the first refocusing model is trained based on sample scan fused images of a plurality of sample scan images and their corresponding sample refocused images, and the second refocusing model is trained based on a plurality of sample scan images and their corresponding sample refocused images.
Further, the logic instructions in the memory 420 described above may be implemented in the form of software functional units and may be stored in a computer-readable storage medium when sold or used as a stand-alone product. Based on this understanding, the technical solution of the present invention, in essence or in the part contributing to the prior art, may be embodied in the form of a software product stored in a storage medium, including several instructions for causing a computer device (which may be a personal computer, a server, a network device, etc.) to perform all or part of the steps of the methods according to the embodiments of the present invention. The aforementioned storage medium includes various media capable of storing program code, such as a USB flash drive, a removable hard disk, a read-only memory (ROM), a random access memory (RAM), a magnetic disk, or an optical disk.
In another aspect, the present invention also provides a computer program product, the computer program product comprising a computer program stored on a non-transitory computer-readable storage medium, the computer program comprising program instructions which, when executed by a computer, enable the computer to perform the scanned image refocusing method provided above, the method comprising: acquiring a plurality of scan images of different Z-axis positions under the same field of view acquired by a microscope; fusing the plurality of scan images to obtain a scan fused image, and refocusing the scan fused image based on a first refocusing model to obtain refocused images corresponding to the plurality of scan images; or performing feature extraction on the plurality of scan images based on a second refocusing model, fusing the resulting feature matrices corresponding to the plurality of scan images to obtain a fused feature matrix, and obtaining the refocused images corresponding to the plurality of scan images based on the fused feature matrix; wherein the first refocusing model is trained based on sample scan fused images of a plurality of sample scan images and their corresponding sample refocused images, and the second refocusing model is trained based on a plurality of sample scan images and their corresponding sample refocused images.
In yet another aspect, the present invention also provides a non-transitory computer-readable storage medium having stored thereon a computer program which, when executed by a processor, implements the scanned image refocusing method provided above, the method comprising: acquiring a plurality of scan images of different Z-axis positions under the same field of view acquired by a microscope; fusing the plurality of scan images to obtain a scan fused image, and refocusing the scan fused image based on a first refocusing model to obtain refocused images corresponding to the plurality of scan images; or performing feature extraction on the plurality of scan images based on a second refocusing model, fusing the resulting feature matrices corresponding to the plurality of scan images to obtain a fused feature matrix, and obtaining the refocused images corresponding to the plurality of scan images based on the fused feature matrix; wherein the first refocusing model is trained based on sample scan fused images of a plurality of sample scan images and their corresponding sample refocused images, and the second refocusing model is trained based on a plurality of sample scan images and their corresponding sample refocused images.
The apparatus embodiments described above are merely illustrative; the units described as separate components may or may not be physically separated, and the components shown as units may or may not be physical units, i.e., they may be located in one place or distributed over a plurality of network units. Some or all of the modules may be selected according to actual needs to achieve the purpose of the solution of this embodiment. Those of ordinary skill in the art can understand and implement the invention without creative effort.
From the above description of the embodiments, it will be apparent to those skilled in the art that the embodiments may be implemented by means of software plus necessary general hardware platforms, or of course may be implemented by means of hardware. Based on this understanding, the foregoing technical solution may be embodied essentially or in a part contributing to the prior art in the form of a software product, which may be stored in a computer readable storage medium, such as ROM/RAM, a magnetic disk, an optical disk, etc., including several instructions for causing a computer device (which may be a personal computer, a server, or a network device, etc.) to execute the method described in the respective embodiments or some parts of the embodiments.
Finally, it should be noted that: the above embodiments are only for illustrating the technical solution of the present invention, and are not limiting; although the invention has been described in detail with reference to the foregoing embodiments, it will be understood by those of ordinary skill in the art that: the technical scheme described in the foregoing embodiments can be modified or some technical features thereof can be replaced by equivalents; such modifications and substitutions do not depart from the spirit and scope of the technical solutions of the embodiments of the present invention.

Claims (10)

1. A scanned image refocusing method, comprising:
acquiring a plurality of scanning images of different Z-axis positions under the same visual field acquired by a microscope;
fusing the plurality of scanning images to obtain a scanning fused image, and refocusing the scanning fused image based on a first refocusing model to obtain refocusing images corresponding to the plurality of scanning images;
or respectively extracting the characteristics of the plurality of scanning images based on a second refocusing model, fusing the characteristic matrixes corresponding to the plurality of scanning images after obtaining the characteristic matrixes corresponding to the plurality of scanning images, obtaining a fused characteristic matrix, and obtaining refocusing images corresponding to the plurality of scanning images based on the fused characteristic matrix;
The first refocusing model is obtained by training based on sample scanning fusion images of a plurality of sample scanning images and corresponding sample refocusing images thereof, and the second refocusing model is obtained by training based on a plurality of sample scanning images and corresponding sample refocusing images thereof.
2. The scanned image refocusing method of claim 1, wherein the first refocusing model or the second refocusing model is trained based on the steps of:
inputting a model input image corresponding to the current round of iteration into the first refocusing model or the second refocusing model to obtain a test refocusing image output by the first refocusing model or the second refocusing model, calculating model loss based on the test refocusing image and a corresponding sample refocusing image, and adjusting model parameters of the first refocusing model or the second refocusing model based on the model loss;
the model input images corresponding to the first refocusing model are sample scan fused images of a plurality of sample scan images, and the model input images corresponding to the second refocusing model are a plurality of sample scan images; the model loss is calculated based on a first loss function before the preset training node is reached, and based on both the first loss function and a second loss function after the preset training node is reached, wherein the first loss function is used to determine the difference between the test refocused image and the corresponding sample refocused image, and the second loss function is used to calculate the difference between the feature map of the test refocused image and the feature map of the sample refocused image, both output by a pre-trained feature extraction network.
3. The scanned image refocusing method of claim 1, wherein the first refocusing model comprises a downsampling network and an upsampling network; the second refocusing model includes a plurality of downsampling networks and upsampling networks corresponding to the number of scanned images; wherein the downsampling network is constructed based on a pre-trained image feature extraction model.
4. The scanned image refocusing method according to claim 3, wherein any downsampling layer in each downsampling network of the second refocusing model is connected to the upsampling layer in the upsampling network whose output feature map size is the same as the feature map size output by that downsampling layer.
5. The method for refocusing a scanned image according to claim 1, wherein the fusing the plurality of scanned images to obtain a scanned fused image specifically comprises:
converting the plurality of scanning images into gray level images to obtain a plurality of scanning gray level images;
calculating an average pixel value of each pixel point based on the pixel values of the same pixel points in the plurality of scanning gray images, and generating a fusion gray image based on the average pixel value of each pixel point;
And converting the fusion gray level image into a color image to obtain the scanning fusion image.
6. The scanned image refocusing method of any one of claims 1 to 5, wherein the method further comprises:
performing target segmentation on the plurality of scanning images and refocusing images corresponding to the plurality of scanning images respectively based on a target segmentation model to obtain respective target segmentation results of the plurality of scanning images and target segmentation results of refocusing images corresponding to the plurality of scanning images;
determining a segmentation effect evaluation value of the plurality of scanning images and refocusing images corresponding to the plurality of scanning images respectively based on respective target segmentation results of the plurality of scanning images and target segmentation results of refocusing images corresponding to the plurality of scanning images;
respectively determining the sharpness of the refocused images corresponding to the plurality of scan images;
and determining the focus effect evaluation values of the refocused images corresponding to the plurality of scan images based on the segmentation effect evaluation values and the sharpness of the plurality of scan images and of the refocused images corresponding to the plurality of scan images.
7. The method according to claim 6, wherein the determining the focus effect evaluation values of the refocused images corresponding to the plurality of scan images based on the segmentation effect evaluation values and the sharpness of the plurality of scan images and of the refocused images specifically comprises:
calculating image effect evaluation values of the plurality of scan images based on the respective segmentation effect evaluation values and sharpness of the plurality of scan images;
calculating image effect evaluation values of the refocused images corresponding to the plurality of scan images based on the segmentation effect evaluation values and the sharpness of the refocused images;
and determining the focus effect evaluation values of the refocused images corresponding to the plurality of scan images based on the differences between the image effect evaluation values of the refocused images and the image effect evaluation values of the plurality of scan images.
8. A scanned image refocusing device, comprising:
the scanning image acquisition unit is used for acquiring a plurality of scanning images of different Z-axis positions under the same visual field acquired by the microscope;
The first refocusing unit is used for fusing the plurality of scanning images to obtain a scanning fused image, and refocusing the scanning fused image based on a first refocusing model to obtain refocusing images corresponding to the plurality of scanning images;
or the second refocusing unit is used for respectively extracting the characteristics of the plurality of scanning images based on a second refocusing model, fusing the characteristic matrixes corresponding to the plurality of scanning images after obtaining the characteristic matrixes corresponding to the plurality of scanning images respectively, obtaining a fused characteristic matrix, and obtaining refocusing images corresponding to the plurality of scanning images based on the fused characteristic matrix;
the first refocusing model is obtained by training based on sample scanning fusion images of a plurality of sample scanning images and corresponding sample refocusing images thereof, and the second refocusing model is obtained by training based on a plurality of sample scanning images and corresponding sample refocusing images thereof.
9. An electronic device comprising a memory, a processor and a computer program stored on the memory and executable on the processor, wherein the processor implements the scanned image refocusing method according to any one of claims 1 to 7 when the program is executed by the processor.
10. A non-transitory computer readable storage medium having stored thereon a computer program, wherein the computer program, when executed by a processor, implements the scanned image refocusing method according to any one of claims 1 to 7.
CN202311438903.3A 2023-10-31 2023-10-31 Scanned image refocusing method, apparatus, electronic device and storage medium Pending CN117372274A (en)

Priority Applications (1)

Application Number Priority Date Filing Date Title
CN202311438903.3A CN117372274A (en) 2023-10-31 2023-10-31 Scanned image refocusing method, apparatus, electronic device and storage medium

Publications (1)

Publication Number Publication Date
CN117372274A true CN117372274A (en) 2024-01-09

Family

ID=89403878


Citations (11)

* Cited by examiner, † Cited by third party
Publication number Priority date Publication date Assignee Title
CN110490968A (en) * 2019-07-18 2019-11-22 西安理工大学 Based on the light field axial direction refocusing image super-resolution method for generating confrontation network
CN111723875A (en) * 2020-07-16 2020-09-29 哈尔滨工业大学 SAR three-dimensional rotating ship target refocusing method based on CV-RefocusNet
CN112069735A (en) * 2020-09-08 2020-12-11 哈尔滨工业大学 Full-slice digital imaging high-precision automatic focusing method based on asymmetric aberration
WO2021121108A1 (en) * 2019-12-20 2021-06-24 北京金山云网络技术有限公司 Image super-resolution and model training method and apparatus, electronic device, and medium
CN113837947A (en) * 2021-11-29 2021-12-24 南开大学 Processing method for obtaining optical coherence tomography large focal depth image
CN114240978A (en) * 2022-03-01 2022-03-25 珠海横琴圣澳云智科技有限公司 Cell edge segmentation method and device based on adaptive morphology
WO2022183078A1 (en) * 2021-02-25 2022-09-01 California Institute Of Technology Computational refocusing-assisted deep learning
CN115165710A (en) * 2022-09-08 2022-10-11 珠海圣美生物诊断技术有限公司 Rapid scanning method and device for cells
US20220368877A1 (en) * 2021-05-13 2022-11-17 Canon Kabushiki Kaisha Image processing method, image processing apparatus, storage medium, manufacturing method of learned model, and image processing system
CN116823611A (en) * 2023-06-29 2023-09-29 上海航天电子通讯设备研究所 Multi-focus image-based referenced super-resolution method
CN116847209A (en) * 2023-08-29 2023-10-03 中国测绘科学研究院 Log-Gabor and wavelet-based light field full-focusing image generation method and system

Non-Patent Citations (3)

* Cited by examiner, † Cited by third party
Title
WANXING XU et al.: "Modality-Collaborative AI Model Ensemble for Lung Cancer Early Diagnosis", CMMCA, 31 December 2022 (2022-12-31), pages 91-99 *
XINGYU HU et al.: "ZMFF: Zero-shot multi-focus image fusion", Information Fusion, 23 November 2022 (2022-11-23), pages 127-138 *
ZHU, Hong: "Research on Refocusing Algorithms for Light Field Cameras", Wanfang Database, 19 September 2022 (2022-09-19), pages 1-62 *

Similar Documents

Publication Publication Date Title
CN111007661B (en) Microscopic image automatic focusing method and device based on deep learning
CN107301383B (en) Road traffic sign identification method based on Fast R-CNN
CN108961180B (en) Infrared image enhancement method and system
CN110070517B Blurred image synthesis method based on a degraded-imaging mechanism and a generative adversarial mechanism
CN111462076A (en) Method and system for detecting fuzzy area of full-slice digital pathological image
CN111462075A (en) Rapid refocusing method and system for full-slice digital pathological image fuzzy area
CN111161272B Embryo tissue segmentation method based on a generative adversarial network
CN111798469A (en) Digital image small data set semantic segmentation method based on deep convolutional neural network
CN109671031B (en) Multispectral image inversion method based on residual learning convolutional neural network
CN112200887B (en) Multi-focus image fusion method based on gradient sensing
CN111931857A (en) MSCFF-based low-illumination target detection method
KR20210127069A (en) Method of controlling performance of fusion model neural network
CN114972759A (en) Remote sensing image semantic segmentation method based on hierarchical contour cost function
CN110060208B (en) Method for improving reconstruction performance of super-resolution algorithm
CN112489103B (en) High-resolution depth map acquisition method and system
Geng et al. Cervical cytopathology image refocusing via multi-scale attention features and domain normalization
CN117372274A (en) Scanned image refocusing method, apparatus, electronic device and storage medium
CN111612803A (en) Vehicle image semantic segmentation method based on image definition
CN110443755B Image super-resolution method based on high- and low-frequency signal quantities
WO2021067507A1 (en) Building computational transfer functions on 3d light microscopy images using deep learning
CN116993644B (en) Multi-focus image fusion method and device based on image segmentation
Liu et al. Bokeh rendering based on adaptive depth calibration network
CN116797613B (en) Multi-modal cell segmentation and model training method, device, equipment and storage medium
CN116228797B (en) Shale scanning electron microscope image segmentation method based on attention and U-Net
CN117689892B (en) Remote sensing image focal plane discriminating method

Legal Events

Date Code Title Description
PB01 Publication
SE01 Entry into force of request for substantive examination