CN114004854A - System and method for processing and displaying slice image under microscope in real time - Google Patents


Info

Publication number
CN114004854A
Authority
CN
China
Prior art keywords
image
focus
visual field
current
microscope
Prior art date
Legal status (The legal status is an assumption and is not a legal conclusion. Google has not performed a legal analysis and makes no representation as to the accuracy of the status listed.)
Granted
Application number
CN202111096321.2A
Other languages
Chinese (zh)
Other versions
CN114004854B (en)
Inventor
师丽
朱承泽
王松伟
张小安
曾宪旭
Current Assignee (The listed assignees may be inaccurate. Google has not performed a legal analysis and makes no representation or warranty as to the accuracy of the list.)
Tsinghua University
Original Assignee
Tsinghua University
Priority date (The priority date is an assumption and is not a legal conclusion. Google has not performed a legal analysis and makes no representation as to the accuracy of the date listed.)
Filing date
Publication date
Application filed by Tsinghua University filed Critical Tsinghua University
Priority to CN202111096321.2A priority Critical patent/CN114004854B/en
Publication of CN114004854A publication Critical patent/CN114004854A/en
Application granted granted Critical
Publication of CN114004854B publication Critical patent/CN114004854B/en
Active legal-status Critical Current
Anticipated expiration legal-status Critical


Classifications

    • GPHYSICS
    • G06COMPUTING; CALCULATING OR COUNTING
    • G06TIMAGE DATA PROCESSING OR GENERATION, IN GENERAL
    • G06T7/00Image analysis
    • G06T7/10Segmentation; Edge detection
    • G06T7/13Edge detection
    • GPHYSICS
    • G02OPTICS
    • G02BOPTICAL ELEMENTS, SYSTEMS OR APPARATUS
    • G02B21/00Microscopes
    • G02B21/0004Microscopes specially adapted for specific applications
    • G02B21/0012Surgical microscopes
    • GPHYSICS
    • G06COMPUTING; CALCULATING OR COUNTING
    • G06TIMAGE DATA PROCESSING OR GENERATION, IN GENERAL
    • G06T7/00Image analysis
    • G06T7/10Segmentation; Edge detection
    • G06T7/181Segmentation; Edge detection involving edge growing; involving edge linking
    • GPHYSICS
    • G06COMPUTING; CALCULATING OR COUNTING
    • G06TIMAGE DATA PROCESSING OR GENERATION, IN GENERAL
    • G06T7/00Image analysis
    • G06T7/10Segmentation; Edge detection
    • G06T7/187Segmentation; Edge detection involving region growing; involving region merging; involving connected component labelling
    • GPHYSICS
    • G06COMPUTING; CALCULATING OR COUNTING
    • G06TIMAGE DATA PROCESSING OR GENERATION, IN GENERAL
    • G06T2207/00Indexing scheme for image analysis or image enhancement
    • G06T2207/10Image acquisition modality
    • G06T2207/10056Microscopic image
    • YGENERAL TAGGING OF NEW TECHNOLOGICAL DEVELOPMENTS; GENERAL TAGGING OF CROSS-SECTIONAL TECHNOLOGIES SPANNING OVER SEVERAL SECTIONS OF THE IPC; TECHNICAL SUBJECTS COVERED BY FORMER USPC CROSS-REFERENCE ART COLLECTIONS [XRACs] AND DIGESTS
    • Y02TECHNOLOGIES OR APPLICATIONS FOR MITIGATION OR ADAPTATION AGAINST CLIMATE CHANGE
    • Y02TCLIMATE CHANGE MITIGATION TECHNOLOGIES RELATED TO TRANSPORTATION
    • Y02T10/00Road transport of goods or passengers
    • Y02T10/10Internal combustion engine [ICE] based vehicles
    • Y02T10/40Engine management systems

Landscapes

  • Engineering & Computer Science (AREA)
  • Physics & Mathematics (AREA)
  • General Physics & Mathematics (AREA)
  • Computer Vision & Pattern Recognition (AREA)
  • Theoretical Computer Science (AREA)
  • Health & Medical Sciences (AREA)
  • General Health & Medical Sciences (AREA)
  • Surgery (AREA)
  • Chemical & Material Sciences (AREA)
  • Analytical Chemistry (AREA)
  • Optics & Photonics (AREA)
  • Image Analysis (AREA)

Abstract

The invention discloses a system and a method for processing and displaying slice images under a microscope in real time. The system comprises a microscope system, an image processing system and an auxiliary display system. The microscope system acquires slice visual field images under the microscope. The image processing system splices multiple slice visual field images to obtain a current historical visual field spliced image; it also extracts the edges of the focus connected domains in the identified focus identification images and superposes the extracted edges on the corresponding slice visual field images to obtain microscope visual field images with focus labels, which are in turn spliced to obtain a current historical visual field focus image. The auxiliary display system displays the microscope visual field image with the focus label, the current historical visual field spliced image and the current historical visual field focus image in real time. The invention provides image data support for pathological diagnosis and improves diagnosis efficiency and the accuracy of diagnosis results.

Description

System and method for processing and displaying slice image under microscope in real time
Technical Field
The invention relates to the field of medical image processing, in particular to a system and a method for processing and displaying a slice image under a microscope in real time.
Background
A pathological report is a diagnosis report formed by a pathologist after a patient's biopsy tissue has been processed and observed under a microscope. It is the most reliable and accurate means of clinical diagnosis and is known as the gold standard of clinical diagnosis.
In practice, pathologists spend a great deal of time each day diagnosing disease slices under the microscope. According to statistics, there are only about 15,000 pathologists in China, with a talent gap of roughly 10,000, which directly causes the current situation of a large shortage of pathologists and low pathological detection efficiency in China. Small hospitals in particular face great pressure in pathological diagnosis, because they have less advanced medical equipment and lack experienced pathologists.
With the application of deep learning, and especially of semantic segmentation networks, to medical image focus identification tasks, such networks have gradually been shown to effectively improve the diagnosis accuracy and efficiency of pathologists, and are increasingly widely applied to focus identification in polyps, esophageal cancer, CT images, nuclear magnetic resonance images, slice images and the like. However, most current deep learning-based projects are still at the research stage; there are few devices and methods that can be used directly for clinical diagnosis, and a real-time processing system for slice images under the microscope that effectively combines a network model with the actual clinical situation is also lacking.
Disclosure of Invention
The invention aims to provide a system and a method for processing and displaying a microscopic section image in real time, which can provide a microscopic field image with a focus label, a current historical field spliced image and a current historical field focus image in real time based on a section field image acquired by a microscope, provide image data support for pathological diagnosis of a pathologist and improve the diagnosis efficiency and the accuracy of a diagnosis result.
The invention adopts the following technical scheme:
a microscopic section image real-time processing display system comprises a microscope system, an image processing system and an auxiliary display system;
the microscope system is used for collecting a section visual field image under a microscope;
the image processing system is used for sequentially splicing the plurality of slice visual field images in time order to obtain a current historical visual field spliced image and sending it to the auxiliary display system; it is also used for carrying out focus identification on each slice visual field image, extracting the edge of the focus connected domain in the identified focus identification image, superposing the extracted edge on the corresponding slice visual field image to obtain a microscope visual field image with a focus label and sending it to the auxiliary display system; it is further used for sequentially splicing a plurality of microscope visual field images with focus labels in time order to obtain a current historical visual field focus image and sending it to the auxiliary display system;
the auxiliary display system is used for displaying the microscope visual field image with the focus label, the current historical visual field splicing image and the current historical visual field focus image in real time.
The image processing system comprises an image splicing module, a focus identification module and an image superposition module;
the image splicing module is used for sequentially acquiring the section visual field images acquired by the microscope system and sequentially transmitting the acquired section visual field images to the lesion identification module; the image splicing module is further used for sequentially splicing the plurality of slice view images according to a time sequence to obtain a current historical view spliced image and sending the current historical view spliced image to the auxiliary display system; the image splicing module is also used for sequentially acquiring the microscope visual field images with the focus labels transmitted by the image superposition module, sequentially splicing the plurality of microscope visual field images with the focus labels according to a time sequence, and finally obtaining the current historical visual field focus image and transmitting the current historical visual field focus image to the auxiliary display system;
the focus identification module is used for respectively identifying focuses of each slice visual field image transmitted by the image splicing module through a focus identification network to obtain a focus identification image containing a focus communication domain, and then sending the obtained focus identification image to the image superposition module;
and the image superposition module is used for sequentially acquiring the section visual field image acquired by the microscope system and the focus identification image transmitted by the focus identification module, extracting the edge of a focus connected domain in the focus identification image, superposing the extracted edge of the focus connected domain in the focus identification image on the corresponding section visual field image, acquiring the microscope visual field image with a focus label, and transmitting the microscope visual field image to the image splicing module and the auxiliary display system.
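The edge extraction and superposition performed by the image superposition module can be sketched in NumPy. The 4-neighborhood morphological boundary below is a minimal stand-in for whatever edge extractor an implementation uses, and the overlay color is an illustrative choice, not something specified by the source:

```python
import numpy as np

def lesion_edges(mask):
    """Boundary pixels of focus connected domains in a binary mask: a
    foreground pixel whose 4-neighborhood contains at least one
    background pixel (a minimal morphological edge)."""
    m = np.pad(mask.astype(bool), 1)
    interior = (m[1:-1, 1:-1] & m[:-2, 1:-1] & m[2:, 1:-1]
                & m[1:-1, :-2] & m[1:-1, 2:])
    return mask.astype(bool) & ~interior

def overlay_edges(field_img, mask, color=(255, 0, 0)):
    # Superpose the extracted edges onto the RGB slice visual field image.
    labeled = field_img.copy()
    labeled[lesion_edges(mask)] = color
    return labeled
```

The result is the "microscope visual field image with a focus label" that the module forwards to the image splicing module and the auxiliary display system.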
When the image processing system splices two images, the method comprises the following steps:
a: setting the current image acquired by the image processing system as f1(x, y) the previous image is f2(x, y), then f2(x,y)=f1(x-dx, y-dy), i.e. f1(x, y) is represented by f2(x, y) translating (dx, dy) to obtain; fourier transformation is respectively carried out on the current image and the previous image to obtain a frequency domain image F1(u, v) and frequency domain image F2(u,v),F2(u,v)=F1(u,v)e-i·2π(u·dx+v·dy)
Wherein the translation comprises a parallel movement in a horizontal direction and/or a vertical direction; f. of1(x, y) represents the gray value of the coordinate pixel point of the current image (x, y), wherein (x, y) is the coordinate position of the image pixel point; f. of2(x, y) is the gray value of the pixel point of the coordinate of the previous image (x, y); f1(u, v) is the value of the frequency domain image of the current image in the (u, v) frequency domain coordinates, where (u, v) is the frequency domain coordinates of the frequency domain image, F2(u, v) is a value of the frequency domain image of the previous image on the (u, v) frequency domain coordinate, i represents a complex number symbol, dx is a moving distance between the two images in the x-axis direction, and dy is a moving distance between the two images in the y-axis direction;
b: for frequency domain image F2Conjugation is carried out to obtain conjugationThe latter frequency domain image
Figure BDA0003265514350000031
Then the conjugated frequency domain image is processed
Figure BDA0003265514350000032
And frequency domain image F1After multiplication, normalization processing is carried out to obtain cross-power spectrums H (u, v),
Figure BDA0003265514350000033
c: carrying out Fourier inversion on the cross-power spectrum H (u, v) to obtain a real-domain graph Fe(x,y),Fe(x, y) is a pulse function image; finding Fe(x, y) the coordinates of the peak position in the (x, y) are used as the displacement amount (dx, dy) of the front and rear images, and then the front image is respectively placed at four positions of the upper left, lower left, upper right and lower right of the current image, and the moving distance between the two images in the x-axis direction is dx and the moving distance in the y-axis direction is dy at the four positions; then, respectively calculating the mean absolute value of the gray value difference values of the overlapping areas of the front image and the rear image at the four positions, wherein the smallest mean absolute value is the correct splicing position relation of the front image and the rear image, and then splicing the images according to the displacement (dx, dy) and the position relation to obtain a current historical visual field spliced image or a current historical visual field focus image;
d: and c, according to the method of the steps a to c, splicing each current image acquired by the image processing system with the previous current image, and completing the splicing of all the images by the image processing system by combining the spliced current history view spliced image or the current history view focus image to obtain the current history view spliced image or the current history view focus image when the last image is cut off.
In the process of training the focus identification module, a staged (step-by-step) network training method is adopted;
first, the network model is trained with a first loss function, and during training only the network model with the best pixel-level IoU index on the validation set Valid is saved as the intermediate network model;
then, starting from the intermediate network model, the model is trained further with a second loss function, and during training only the network model with the best pixel-level IoU index on the validation set Valid is saved as the final network model; the first loss function and the second loss function are different loss functions.
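The staged procedure can be sketched as follows; `train_step` and `evaluate_iou` are hypothetical caller-supplied interfaces standing in for the actual network training and validation-set IoU evaluation, and the model state is represented as a plain dict for illustration:

```python
def staged_training(model, train_step, evaluate_iou, loss_fns, epochs_per_stage):
    """Two-stage training: each stage uses its own loss function and keeps
    only the checkpoint with the best validation IoU, as described above."""
    best_state = dict(model)
    for loss_fn in loss_fns:          # first loss function, then second
        model = dict(best_state)      # resume from the best checkpoint so far
        best_iou = evaluate_iou(model)
        for _ in range(epochs_per_stage):
            model = train_step(model, loss_fn)
            iou = evaluate_iou(model)
            if iou > best_iou:        # save only the best-IoU checkpoint
                best_iou, best_state = iou, dict(model)
    return best_state
```

The key property is that the second stage starts from the intermediate model, not from the last epoch of the first stage.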
In the process of training the focus identification module, the second loss function adopts a composite loss function aimed at multiple evaluation indexes, with the following formulas:

CompoundLoss = F_focal(α × LesionLoss + (1 − α) × PixelLoss)

F_focal(x) = −k × (1 − x)^γ × log(x)

PixelLoss = β × Pre_PixelLoss + (1 − β) × Rec_PixelLoss, where
Pre_PixelLoss = 1 − (|T1 ∩ P1| + smooth) / (|P1| + smooth),
Rec_PixelLoss = 1 − (|T1 ∩ P1| + smooth) / (|T1| + smooth)

LesionLoss = β × Pre_LesionLoss + (1 − β) × Rec_LesionLoss, where
Pre_LesionLoss = 1 − (|N(P2, T2)| + smooth) / (|P2| + smooth),
Rec_LesionLoss = 1 − (|N(P2, T2)| + smooth) / (|T2| + smooth)

Here CompoundLoss is the composite loss, LesionLoss is the lesion-level loss, PixelLoss is the pixel-level loss, and α is a weighting coefficient (so 1 − α weights the pixel-level loss); F_focal(x) is the focal function, k and γ are fixed parameters, and x is the variable of the focal function; β is a weighting coefficient; Pre_PixelLoss is the pixel-level precision loss and Rec_PixelLoss the pixel-level recall loss; T1 is the focus region of the real label map, P1 the focus region of the predicted label map, and T1 ∩ P1 their intersection; smooth is a very small quantity that prevents the denominator from being 0; Pre_LesionLoss is the lesion-level precision loss and Rec_LesionLoss the lesion-level recall loss; T2 is the set of focus connected domains of the real label map, P2 the set of focus connected domains of the predicted label map, and |N(P2, T2)| the number of accurately predicted focuses.
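A minimal NumPy sketch of the pixel-level loss and the focal wrapper, written directly from the formulas above; the lesion-level loss takes the same weighted precision/recall form with connected-domain counts in place of pixel counts, and the default values of k, γ, α and β here are illustrative, not values given by the source:

```python
import numpy as np

def focal(x, k=1.0, gamma=2.0):
    # F_focal(x) = -k * (1 - x)^gamma * log(x); k and gamma are fixed parameters
    return -k * (1.0 - x) ** gamma * np.log(x)

def pixel_loss(t1, p1, beta=0.5, smooth=1e-6):
    """PixelLoss = beta * Pre_PixelLoss + (1 - beta) * Rec_PixelLoss, for
    binary masks t1 (real label map) and p1 (predicted label map)."""
    inter = np.logical_and(t1, p1).sum()
    pre = 1.0 - (inter + smooth) / (p1.sum() + smooth)  # pixel-level precision loss
    rec = 1.0 - (inter + smooth) / (t1.sum() + smooth)  # pixel-level recall loss
    return beta * pre + (1.0 - beta) * rec

def compound_loss(lesion_loss, pix_loss, alpha=0.5, k=1.0, gamma=2.0):
    # CompoundLoss = F_focal(alpha * LesionLoss + (1 - alpha) * PixelLoss)
    return focal(alpha * lesion_loss + (1.0 - alpha) * pix_loss, k, gamma)
```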
The image processing system further comprises an edge extension module, which is used for acquiring the current historical visual field spliced image from the image splicing module, filling the extension area of the current slice visual field image using content from the current historical visual field spliced image, and sending the edge-extended current slice visual field image to the focus identification module;
when image filling is carried out, if real image content exists in the current historical visual field spliced image for the extension area of the current slice visual field image, the extension area is filled with that real image content; if no real image content exists in the current historical visual field spliced image for the extension area, the real image content on the side of the current slice visual field image adjacent to the extension area is mirror-copied, and the extension area is filled with the mirrored content; after filling, the edge-extended current slice visual field image is obtained.
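The fill rule can be sketched as follows, under two stated assumptions: the historical mosaic marks unfilled pixels with NaN, and the current image's top-left position (y0, x0) inside the mosaic is known from the splicing step (both are illustrative interface choices):

```python
import numpy as np

def extend_edges(current, mosaic, y0, x0, pad):
    """Pad the current visual field image by `pad` pixels per side: copy
    real content from the history mosaic where it exists, otherwise
    mirror the adjacent edge of the current image."""
    h, w = current.shape
    # Start with pure mirror padding of the current image.
    out = np.pad(current, pad, mode="reflect")
    # Overwrite with real mosaic content wherever it is available.
    region = mosaic[y0 - pad : y0 + h + pad, x0 - pad : x0 + w + pad]
    mask = ~np.isnan(region)
    out[mask] = region[mask]
    # Keep the original (unpadded) area exactly as captured.
    out[pad : pad + h, pad : pad + w] = current
    return out
```

A production version would also clamp the region slice at the mosaic borders; that bookkeeping is omitted here.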
The image processing system carries out focus identification using the edge-extended current slice visual field image, cuts the obtained focus identification image to the size of the current slice visual field image before edge extension (i.e. deletes the extension area), then extracts the edge of the focus connected domain in the cut focus identification image and superposes it on the corresponding slice visual field image to obtain a microscope visual field image with a focus label.
The image processing system sends the extracted edge of the focus connected domain to an augmented reality module, and the augmented reality module superposes the edge of the focus connected domain directly onto the corresponding focus in the visual field of the microscope eyepiece through optical path conduction.
A real-time processing and displaying method of microscopic section images by using the real-time processing and displaying system of any one of claims 1 to 8, comprising the following steps in sequence:
a: a microscope system collects a section visual field image under a microscope;
b: the image splicing module acquires a section visual field image acquired by the microscope system and transmits the acquired section visual field image to the lesion identification module;
c: the focus identification module carries out focus identification on the slice visual field image transmitted by the image splicing module through a focus identification network to obtain a focus identification image containing a focus connected domain, and then sends the obtained focus identification image to the image superposition module;
d: the image superposition module extracts the edge of a focus connected domain in the focus identification image, superposes the extracted edge of the focus connected domain in the focus identification image on the corresponding section view image to obtain a microscope view image with a focus label and sends the microscope view image to the image splicing module and the auxiliary display system; the auxiliary display system displays a microscope visual field image with a focus label in real time;
e: the image splicing module judges whether a previous slice visual field image and/or a previous microscope visual field image with a focus label exists; if so, it splices the preceding and current slice visual field images and/or the preceding and current microscope visual field images with focus labels, sends the resulting current historical visual field spliced image and/or current historical visual field focus image to the auxiliary display system, and then returns to step A; if not, it returns directly to step A;
and then displaying the current historical visual field spliced image and/or the current historical visual field lesion image in real time by an auxiliary display system.
In the step B, the image splicing module sends the acquired slice field image acquired by the microscope system to the edge extension module, the edge extension module fills an extension area of the current slice field image in the current historical field spliced image, and sends the current slice field image after edge extension to the focus identification module;
in the step C, after receiving the current slice view image after edge expansion sent by the edge expansion module, the lesion recognition module performs lesion recognition by using a lesion recognition network to obtain a lesion recognition image including a lesion connected domain, then cuts the obtained lesion recognition image by the size of the current slice view image before edge expansion, that is, deletes the expansion region of the current slice view image, and then sends the cut lesion recognition image to the image superposition module.
The invention effectively combines the focus recognition network model with slice images under the microscope in the actual clinical situation. Based on the slice visual field images collected by the microscope, it provides, after processing, the microscope visual field image with focus labels, the current historical visual field spliced image and the current historical visual field focus image in real time, thereby providing image data support for pathological diagnosis and improving diagnosis efficiency and the accuracy of diagnosis results.
Drawings
FIG. 1 is a schematic block diagram of a microscopic section image real-time processing display system according to the present invention;
FIG. 2 is a schematic structural diagram of a method for processing and displaying a microscopic section image in real time according to the present invention;
FIG. 3 is a current slice view image at edge extension;
FIG. 4 is a slice view image after it has been populated with real image content from a current history view stitched image;
FIG. 5 is a slice view image of the portion of FIG. 4 without real image content filled with mirror image content;
fig. 6 is a current historical visual field lesion image.
Detailed Description
The invention is described in detail below with reference to the following figures and examples:
as shown in fig. 1 to 6, the system for processing and displaying a slice image under a microscope in real time according to the present invention includes a microscope system, an image processing system and an auxiliary display system;
the microscope system is used for collecting a section visual field image under a microscope;
the image processing system is used for sequentially splicing the plurality of slice visual field images in time order to obtain a current historical visual field spliced image and sending it to the auxiliary display system; it is also used for carrying out focus identification on each slice visual field image, extracting the edge of the focus connected domain in the identified focus identification image, superposing the extracted edge on the corresponding slice visual field image to obtain a microscope visual field image with a focus label and sending it to the auxiliary display system; it is further used for sequentially splicing a plurality of microscope visual field images with focus labels in time order to obtain a current historical visual field focus image and sending it to the auxiliary display system;
the auxiliary display system is used for displaying the microscope visual field image with the focus label, the current historical visual field splicing image and the current historical visual field focus image in real time.
In the invention, a microscope system collects slice visual field images under a microscope in real time and sequentially transmits the slice visual field images to an image processing system according to a time sequence;
in this embodiment, the microscope system can adopt the binocular microscope of the CX31 model of Olympus company, from taking the microscopic imaging system interface, can attach the microscope camera on the eyepiece of binocular observation section of thick bamboo through the adapter, realize the collection function of the section field of vision image under the microscope. The microscope camera model can adopt G1UD05C, and can provide a 620 multiplied by 460 RGB image sequence of 2 frames per second for an image processing system in an upper computer through a USB transmission line.
In the invention, the image processing system comprises an image splicing module, a focus identification module and an image superposition module;
the image splicing module is used for sequentially acquiring the slice visual field images acquired by the microscope system and sequentially transmitting them to the focus identification module; it is further used for sequentially splicing the plurality of slice visual field images in time order to obtain the current historical visual field spliced image and sending it to the auxiliary display system; it is also used for sequentially acquiring the microscope visual field images with focus labels transmitted by the image superposition module and splicing them in time order, finally obtaining the current historical visual field focus image and transmitting it to the auxiliary display system;
the focus identification module is used for carrying out focus identification on each slice visual field image transmitted by the image splicing module through a focus identification network to obtain a focus identification image containing a focus connected domain, and then sending the obtained focus identification image to the image superposition module;
the image superposition module is used for sequentially acquiring the slice visual field image acquired by the microscope system and the focus identification image transmitted by the focus identification module, extracting the edge of the focus connected domain in the focus identification image, superposing the extracted edge on the corresponding slice visual field image to obtain a microscope visual field image with a focus label, and transmitting it to the image splicing module and the auxiliary display system;
when an image splicing module in an image processing system acquires a first section view image, the first section view image is sent to a focus identification module, the focus identification module utilizes a focus identification network to identify a focus, then the obtained focus identification image containing a focus communication domain is sent to an image superposition module, the image superposition module extracts the edge of the focus communication domain in the focus identification image, then the extracted edge of the focus communication domain in the focus identification image is superposed on the first section view image, and then the obtained microscope view image with a focus label is sent to the image splicing module and an auxiliary display system; and displaying the microscope visual field image with the lesion label by an auxiliary display system. The image splicing module can directly send the first slice view images which are not spliced to the auxiliary display system, and the first slice view images are displayed by the auxiliary display system.
After an image splicing module in the image processing system acquires a second slice view image, splicing the first slice view image and the second slice view image according to a time sequence to obtain a current historical view spliced image (namely the spliced image of the first slice view image and the second slice view image) and sending the current historical view spliced image to an auxiliary display system; meanwhile, the image splicing module also sends the second section view image to a focus identification module, the focus identification module utilizes a focus identification network to identify the focus, the obtained focus identification image containing a focus communication domain is sent to the image superposition module, the image superposition module extracts the edge of the focus communication domain in the focus identification image, the extracted edge of the focus communication domain in the focus identification image is superposed on the second section view image, and the obtained microscope view image with the focus label is sent to the image splicing module and the auxiliary display system; after the image splicing module acquires a second microscope view image with a focus label, splicing the first microscope view image with the focus label and the second microscope view image with the focus label according to a time sequence to finally obtain a current historical view focus image (the spliced image of the first microscope view image with the focus label and the spliced image of the second microscope view image with the focus label) and sending the current historical view focus image to the auxiliary display system; and the auxiliary display system displays the microscope visual field image with the focus label, the current historical visual field spliced image and the current historical visual field focus image in real time.
Similarly, after the image splicing module in the image processing system sequentially acquires the subsequent slice view images, the image splicing module sequentially splices the multiple slice view images according to the method and the time sequence to obtain a current historical view spliced image and sends the current historical view spliced image to the auxiliary display system; the image superposition module also sequentially sends the microscope visual field image with the focus label to the image splicing module and the auxiliary display system. Meanwhile, the image splicing module sequentially splices a plurality of microscope visual field images with focus labels according to a time sequence to finally obtain a current historical visual field focus image and send the current historical visual field focus image to an auxiliary display system; and finally, displaying the microscope field image with the focus label, the current historical field spliced image and the current historical field focus image in real time by an auxiliary display system. The invention can provide image data support for pathological diagnosis of doctors in the pathology department, and display the number of focuses and the specific positions of the focuses in the historical path of the microscope lens so as to improve the diagnosis efficiency and the accuracy of diagnosis results.
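The per-image flow described above can be condensed into one loop; `recognize`, `stitch` and `display` are hypothetical stand-ins for the focus identification plus superposition step, the splicing step and the auxiliary display system:

```python
def process_stream(frames, recognize, stitch, display):
    """For each incoming visual field image: produce the focus-labeled
    image, and incrementally splice both the raw history mosaic and the
    labeled history mosaic, displaying all three after every frame."""
    mosaic = labeled_mosaic = None
    for frame in frames:
        labeled = recognize(frame)  # visual field image with focus labels
        mosaic = frame if mosaic is None else stitch(mosaic, frame)
        labeled_mosaic = (labeled if labeled_mosaic is None
                          else stitch(labeled_mosaic, labeled))
        display(labeled, mosaic, labeled_mosaic)
    return mosaic, labeled_mosaic
```

The first frame is passed through unspliced, matching the special case described above for the first slice visual field image.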
In this embodiment, when the image stitching module in the image processing system stitches two images (taking a slice view image as an example), the method is performed as follows:
a: setting a current slice view image acquired by an image splicing module in an image processing system as f1(x, y) the previous slice view image is f2(x, y), then f2(x,y)=f1(x-dx, y-dy), i.e. f1(x, y) is represented by f2(x, y) translating (dx, dy) to obtain; respectively carrying out Fourier transform on the current slice visual field image and the previous slice visual field image to obtain a frequency domain image F1(u, v) and frequency domain image F2(u,v),F2(u,v)=F1(u,v)e-i·2π(u·dx+v·dy)
Wherein the translation comprises a parallel movement in a horizontal direction and/or a vertical direction; f. of1(x, y) represents the gray value of a coordinate pixel point of the current slice view image (x, y), wherein (x, y) is the coordinate position of the image pixel point; f. of2(x, y) is the gray value of the coordinate pixel point of the previous slice visual field image (x, y); f1(u, v) is the value of the frequency domain image of the current slice view image in the (u, v) frequency domain coordinates, where (u, v) is the frequency domain coordinates of the frequency domain image, F2(u, v) is a value of the frequency domain image of the previous slice-view image in the (u, v) frequency domain coordinate, i represents a complex number symbol, dx is a moving distance between the two slice-view images in the x-axis direction, and dy is a moving distance between the two slice-view images in the y-axis direction.
b: for frequency domain image F2Conjugation is carried out to obtain a conjugated frequency domain image
Figure BDA0003265514350000101
Then the conjugated frequency domain image is processed
Figure BDA0003265514350000102
And frequency domain image F1After multiplication, normalization processing is carried out to obtain cross-power spectrums H (u, v),
Figure BDA0003265514350000103
c: carrying out Fourier inversion on the cross-power spectrum H (u, v) to obtain a real-domain graph Fe(x,y),Fe(x, y) is a pulse function image; finding Fe(x, y) coordinates of the peak position in the image are used as displacement amounts (dx, dy) of the two previous and next slice view images, and then the previous slice view image is respectively placed at four positions of the top left, bottom left, top right and bottom right of the current slice view image, and the moving distance between the two slice view images in the x-axis direction is dx and the moving distance in the y-axis direction is dy at the four positions; then, respectively calculating the mean absolute value of the gray value difference values of the overlapping areas of the front and rear two slice view images at the four positions, wherein the smallest mean absolute value is the correct splicing position relation of the front and rear images, and then splicing the images according to the displacement (dx, dy) and the position relation to obtain the current historical view spliced image;
d: and c, according to the method of the steps a to c, splicing each current slice view image acquired by the image splicing module with the previous current slice view image, and completing the splicing of all the slice view images by the image splicing module by combining the spliced current history view spliced images to obtain the current history view spliced image when the last slice view image is cut off.
Similarly, the image splicing module sequentially splices the microscope visual field images with focus labels according to the above method, obtaining the current historical visual field focus image up to the most recent microscope visual field image with a focus label.
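The displacement estimation in steps a to c is standard phase correlation. The NumPy sketch below, assuming equally sized frames and circular shifts, illustrates the cross-power-spectrum computation; it is not the patent's exact implementation and omits the four-position disambiguation of step c.

```python
import numpy as np

def phase_correlation_shift(cur, prev):
    """Estimate (dy, dx) such that cur == np.roll(prev, (dy, dx), axis=(0, 1)).

    F1 and F2 are the Fourier transforms of the current and previous frames;
    H is the normalized cross-power spectrum, whose inverse transform is a
    pulse image peaking at the displacement."""
    F1 = np.fft.fft2(cur)
    F2 = np.fft.fft2(prev)
    cross = F1 * np.conj(F2)
    H = cross / (np.abs(cross) + 1e-12)      # cross-power spectrum H(u, v)
    pulse = np.abs(np.fft.ifft2(H))          # real-domain pulse image Fe(x, y)
    dy, dx = np.unravel_index(np.argmax(pulse), pulse.shape)
    # Peaks beyond the midpoint wrap around to negative displacements.
    if dy > cur.shape[0] // 2:
        dy -= cur.shape[0]
    if dx > cur.shape[1] // 2:
        dx -= cur.shape[1]
    return int(dy), int(dx)
```

Because the spectrum is normalized to unit magnitude, the method depends only on phase and is therefore robust to global brightness changes between frames.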
The image splicing method adopted by the invention is highly robust and effectively avoids the severe mis-stitching that can occur with existing splicing algorithms (such as affine-transformation-based and feature-matching-based image splicing algorithms).
In the invention, the focus identification module can adopt various focus identification networks constructed on existing neural networks, such as fully-connected neural networks or the villus networks and proliferation networks disclosed in ZL202010825928.9, ZL202010828700.5, ZL202010828696.2 and ZL202010826873.3, which are then trained with training data of different focus images to improve identification accuracy. In this embodiment, the focus identification module adopts an image segmentation network based on DeepLabV3+.
DeepLab introduced the concept of atrous (dilated) convolution: a 3×3 dilated kernel acts on 9 feature points spaced apart by the dilation rate, enabling convolution over features at different scales. DeepLabV3+ builds on DeepLab by incorporating the structural characteristics of UNet, adding an encoder-decoder structure. The atrous convolution enlarges the receptive field and captures multi-scale focus information, while the encoder-decoder structure retains both shallow and deep feature map information; this richer, multi-scale feature information underlies the excellent semantic segmentation capability of DeepLabV3+.
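To make the receptive-field claim concrete, here is a minimal NumPy sketch of a 3×3 dilated (atrous) convolution: the 9 kernel taps are spaced `rate` pixels apart, so the effective receptive field grows to (2·rate + 1)×(2·rate + 1) with no extra parameters. This illustrates the operation only; it is not DeepLab's implementation (zero padding is assumed at the borders).

```python
import numpy as np

def dilated_conv3x3(image, kernel, rate):
    """3x3 atrous convolution: each output pixel combines 9 input samples
    spaced `rate` pixels apart (zero padding at the borders)."""
    H, W = image.shape
    pad = rate
    padded = np.pad(image, pad, mode="constant")   # zero-pad by the rate
    out = np.zeros_like(image, dtype=float)
    for ki in range(3):
        for kj in range(3):
            # Tap offsets are multiples of the dilation rate.
            di, dj = (ki - 1) * rate, (kj - 1) * rate
            out += kernel[ki, kj] * padded[pad + di: pad + di + H,
                                           pad + dj: pad + dj + W]
    return out
```

With rate = 1 this reduces to an ordinary 3×3 convolution; larger rates widen the sampling grid without adding weights.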
For network training, the invention combines the specific scenario of medical image focus identification to propose a new composite loss function targeting multiple evaluation indexes; its formula is as follows:
CompoundLoss = F_focal(α×LesionLoss + (1−α)×PixelLoss);
F_focal(x) = −k×(1−x)^γ×log(x);
PixelLoss = β×Pre_PixelLoss + (1−β)×Rec_PixelLoss, with Pre_PixelLoss = 1 − (|T1∩P1| + smooth)/(|P1| + smooth) and Rec_PixelLoss = 1 − (|T1∩P1| + smooth)/(|T1| + smooth);
LesionLoss = β×Pre_LesionLoss + (1−β)×Rec_LesionLoss, with Pre_LesionLoss = 1 − (|N(P2,T2)| + smooth)/(|P2| + smooth) and Rec_LesionLoss = 1 − (|N(P2,T2)| + smooth)/(|T2| + smooth);
wherein CompoundLoss is the composite loss function, LesionLoss is the focus-level loss, PixelLoss is the pixel-level loss, and α is a weighting coefficient; F_focal(x) represents the focal function, k and γ are fixed parameters, and x is the variable of the focal function; β is a weighting coefficient; Pre_PixelLoss represents the pixel-level precision loss and Rec_PixelLoss the pixel-level recall loss; T1 represents the focus region of the real label map, P1 the focus region of the predicted label map, and T1∩P1 the intersection of T1 and P1; smooth is a very small quantity that prevents the denominator from being 0; Pre_LesionLoss represents the focus-level precision loss and Rec_LesionLoss the focus-level recall loss; T2 represents the set of focus connected domains of the real label map, P2 the set of focus connected domains of the predicted label map, and |N(P2, T2)| the number of accurately predicted focuses.
The composite loss function adopted in the invention combines pixel-level loss and focus-level loss. The pixel-level loss ensures improvement of the pixel-level evaluation indexes, while the focus-level loss ensures the focus-level evaluation indexes; moreover, because the focus-level loss is computed per focus connected domain, a focus with a small area carries the same weight as one with a large area, which effectively strengthens the identification of small focuses. Through the two weighting coefficients α and β, the composite loss function makes it possible to obtain network models meeting different requirements: a model trained with α < 0.5 achieves a higher recall rate and can present richer candidate focus areas to pathologists, while a model trained with α ≥ 0.5 balances recall and precision for more comprehensive performance. In practical application, the intersection of the focus areas identified by the two network models indicates areas most likely to be focuses, while the difference set indicates areas that are focuses with some probability.
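Under the assumption that the precision and recall losses take the usual one-minus-ratio form implied by the definitions above, a minimal NumPy sketch of the composite loss might look as follows. The focus-level term requires connected-domain labelling, so it is passed in precomputed here; the names and default parameter values are illustrative, not the patent's.

```python
import numpy as np

def f_focal(x, k=1.0, gamma=2.0):
    """F_focal(x) = -k * (1 - x)^gamma * log(x)."""
    return -k * (1.0 - x) ** gamma * np.log(x)

def pixel_loss(pred, true, beta=0.5, smooth=1e-6):
    """beta-weighted combination of pixel-level precision and recall losses."""
    inter = np.logical_and(pred, true).sum()
    pre = 1.0 - (inter + smooth) / (pred.sum() + smooth)  # precision loss
    rec = 1.0 - (inter + smooth) / (true.sum() + smooth)  # recall loss
    return beta * pre + (1.0 - beta) * rec

def compound_loss(pred, true, lesion_loss, alpha=0.5, beta=0.5):
    """CompoundLoss = F_focal(alpha*LesionLoss + (1 - alpha)*PixelLoss)."""
    return f_focal(alpha * lesion_loss
                   + (1.0 - alpha) * pixel_loss(pred, true, beta))
```

Raising β shifts weight toward the precision term; lowering it emphasizes recall, matching the trade-off discussed above.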
For network model training, a step-by-step training method based on multiple loss functions is designed in combination with the composite loss function proposed above:
first, a network model is trained by using a first loss function, wherein the first loss function can adopt an IoULoss loss function or a BCEWithLogitsLoss loss function. Only the network model with the best pixel level IoU index on the verification set Valid is saved during training and is used as an intermediate network model;
and then, based on the intermediate network model, using a second loss function, wherein the second loss function can adopt the composite loss function to train the intermediate network model, and only the network model with the best pixel-level IoU index on the validation set Valid is saved as a final network model during training so as to maintain the pixel-level evaluation index and simultaneously improve the focus-level evaluation index.
This multi-step network training method based on multiple loss functions combines the advantages of the different loss functions on different evaluation indexes, so that a better network model can be obtained through training.
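Model selection in both training stages keys on the pixel-level IoU index over the validation set Valid. A minimal sketch of that metric and the keep-the-best rule (function names are illustrative):

```python
import numpy as np

def pixel_iou(pred, true, smooth=1e-6):
    """Pixel-level IoU between a predicted and a ground-truth binary mask."""
    inter = np.logical_and(pred, true).sum()
    union = np.logical_or(pred, true).sum()
    return (inter + smooth) / (union + smooth)

def mean_valid_iou(preds, trues):
    """Average pixel-level IoU over the validation set Valid."""
    return float(np.mean([pixel_iou(p, t) for p, t in zip(preds, trues)]))

# Keep-the-best rule used in both training stages (sketch):
# if epoch_iou > best_iou:
#     best_iou, best_state = epoch_iou, snapshot(model)
```

The same rule is applied twice: once with the first loss function to produce the intermediate model, then again with the composite loss to produce the final model.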
Because focus identification is less accurate near the edges of the slice visual field image, focuses at the edges cannot be accurately observed in the slice visual field image. Therefore, the image processing system of the invention is further provided with a dedicated edge extension module.
The edge extension module is used for acquiring a current historical visual field spliced image from the image splicing module, filling an extension area of the current slice visual field image in the current historical visual field spliced image, and sending the current slice visual field image after edge extension to the focus identification module;
when image filling is carried out, if the expansion area of the current slice view image has real image content in the current historical view spliced image, filling the expansion area of the current slice view image by using the real image content; if the expansion area of the current slice view image does not have real image content in the current historical view spliced image, mirror image copying is carried out on the real image content on one side of the current slice view image adjacent to the expansion area to obtain mirror image content, and the mirror image content is used for filling the expansion area of the current slice view image; finally obtaining a current slice view image after edge expansion after filling;
as shown in fig. 3 to 5, fig. 3 is a current slice view image, fig. 4 is a slice view image filled with real image content in a current history view stitched image, wherein image portions except for the existing image in fig. 3 in fig. 4 are all real image content in the current history view stitched image, a black portion is a portion where an extended region of the current slice view image has no real image content in the current history view stitched image, and fig. 5 is a slice view image filled with mirror image content in a portion where no real image content in fig. 4 is present, and finally, the current slice view image with an expanded edge is obtained. Fig. 6 is an example of the current historical visual field lesion image, and the area enclosed by black lines in the image is the identified lesion.
The edge extension module effectively enriches the focus information at the edges of the current slice visual field image, so that the focus identification module can identify edge focuses more accurately, improving its identification accuracy. During edge extension, filling with real image content improves the identification accuracy most effectively. Black (empty) regions would markedly reduce the identification accuracy of the focus identification module; the mirror image content used in the invention replaces such black regions where the extension area has no real image content, thereby avoiding their negative influence on identification accuracy.
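The fill rule can be sketched as follows, assuming the stitched history is kept as a canvas in which pixels never covered by any visual field image are marked −1, and that the frame lies at least `margin` pixels inside the canvas. These conventions are illustrative; the patent does not specify the data layout.

```python
import numpy as np

def extend_edges(frame, canvas, top_left, margin):
    """Pad `frame` by `margin` pixels per side: copy real content from the
    stitched-history canvas where it exists, otherwise mirror-copy the
    frame's own border (np.pad 'reflect' mode)."""
    h, w = frame.shape
    r0, c0 = top_left                             # frame position on the canvas
    out = np.pad(frame, margin, mode="reflect")   # default fill: mirror content
    region = canvas[r0 - margin: r0 + h + margin,
                    c0 - margin: c0 + w + margin]
    real = region >= 0                            # -1 marks "no real content"
    out[real] = region[real]                      # overwrite with real content
    return out
```

Real history content thus always takes priority, and mirror content only stands in where the microscope has never passed.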
After receiving the current slice visual field image after edge expansion sent by the edge expansion module, the focus identification module identifies the focus by using a focus identification network to obtain a focus identification image containing a focus communication domain, then cuts the obtained focus identification image according to the size of the current slice visual field image before edge expansion, namely deletes the expansion area of the current slice visual field image, and then sends the cut focus identification image to the image superposition module;
in the invention, after receiving a focus identification image transmitted by a focus identification module, an image superposition module extracts the edge of a focus connected domain in the focus identification image and superposes the extracted edge of the focus connected domain in the focus identification image on a corresponding slice visual field image. The extraction of the edge of the focus connected domain and the superposition of the edge of the focus connected domain on the corresponding slice view image are conventional techniques in the field of image processing, and are not described herein again.
In the invention, an augmented reality module can additionally be provided; such modules and augmented reality technology are mature prior art. The image superposition module sends the extracted edges of the focus connected domains to the augmented reality module, which superposes them directly, through optical path conduction, onto the corresponding focuses in the microscope eyepiece visual field. A pathologist can thus observe the microscope visual field with focus labels (namely the displayed edges of the focus connected domains) directly and in real time through the eyepiece.
As shown in fig. 2, the method for processing and displaying the microscopic section image in real time implemented by using the system for processing and displaying the microscopic section image in real time sequentially comprises the following steps:
a: collecting a section visual field image under a microscope by a microscope system;
b: the image splicing module acquires a section visual field image acquired by the microscope system and transmits the acquired section visual field image to the lesion identification module;
c: the focus identification module carries out focus identification on the slice visual field image transmitted by the image splicing module through a focus identification network to obtain a focus identification image containing a focus communication domain, and then sends the obtained focus identification image to the image superposition module;
d: the image superposition module extracts the edge of a focus connected domain in the focus identification image, superposes the extracted edge of the focus connected domain in the focus identification image on the corresponding section view image to obtain a microscope view image with a focus label and sends the microscope view image to the image splicing module and the auxiliary display system; the auxiliary display system displays a microscope visual field image with a focus label in real time;
e: the image splicing module judges whether a previous section view image and/or a previous microscope view image with a focus label exists. If so, it splices the successive section view images into the current historical visual field spliced image and/or the successive microscope view images with focus labels into the current historical visual field focus image, sends the spliced result(s) to the auxiliary display system, and returns to step A to continue acquiring the next section view image until all section view images are acquired. If not, it returns directly to step A and continues acquiring the next section view image until all section view images are acquired;
and then displaying the current historical visual field spliced image and/or the current historical visual field lesion image in real time by an auxiliary display system.
In order to more accurately identify the edge focus and improve the identification accuracy of the focus identification module, the invention also uses the edge extension module to carry out edge extension.
In the step B, the image splicing module sends the acquired slice visual field image acquired by the microscope system to the edge extension module, the edge extension module carries out image filling on an extension area of the current slice visual field image in the current historical visual field spliced image, and the current slice visual field image after edge extension is sent to the focus identification module;
when image filling is carried out, if the expansion area of the current slice view image has real image content in the current historical view spliced image, filling the expansion area of the current slice view image by using the real image content; if the expansion area of the current slice view image does not have real image content in the current historical view spliced image, mirror image copying is carried out on the real image content on one side of the current slice view image adjacent to the expansion area to obtain mirror image content, and the mirror image content is used for filling the expansion area of the current slice view image; finally obtaining a current slice view image after edge expansion after filling;
in the step C, after receiving the current slice view image after edge expansion sent by the edge expansion module, the lesion recognition module performs lesion recognition by using a lesion recognition network to obtain a lesion recognition image including a lesion connected domain, then cuts the obtained lesion recognition image by the size of the current slice view image before edge expansion, that is, deletes the expansion region of the current slice view image, and then sends the cut lesion recognition image to the image superposition module.

Claims (10)

1. A microscopic section image real-time processing display system is characterized in that: comprises a microscope system, an image processing system and an auxiliary display system;
the microscope system is used for collecting a section visual field image under a microscope;
the image processing system is used for sequentially splicing the plurality of slice view images according to a time sequence to obtain a current historical view spliced image and sending the current historical view spliced image to the auxiliary display system; the microscope visual field image processing system is also used for carrying out focus identification on each section visual field image, extracting the edge of a focus connected domain in the identified focus identification image, superposing the extracted edge on the corresponding section visual field image to obtain a microscope visual field image with a focus label and sending the microscope visual field image with the focus label to the auxiliary display system; the microscope visual field image processing system is also used for sequentially splicing a plurality of microscope visual field images with focus labels according to a time sequence to obtain a current historical visual field focus image and sending the current historical visual field focus image to the auxiliary display system;
the auxiliary display system is used for displaying the microscope visual field image with the focus label, the current historical visual field splicing image and the current historical visual field focus image in real time.
2. The system for real-time processing and displaying of the microscopic section image according to claim 1, wherein: the image processing system comprises an image splicing module, a focus identification module and an image superposition module;
the image splicing module is used for sequentially acquiring the section visual field images acquired by the microscope system and sequentially transmitting the acquired section visual field images to the lesion identification module; the image splicing module is further used for sequentially splicing the plurality of slice view images according to a time sequence to obtain a current historical view spliced image and sending the current historical view spliced image to the auxiliary display system; the image splicing module is also used for sequentially acquiring the microscope visual field images with the focus labels transmitted by the image superposition module, sequentially splicing the plurality of microscope visual field images with the focus labels according to a time sequence, and finally obtaining the current historical visual field focus image and transmitting the current historical visual field focus image to the auxiliary display system;
the focus identification module is used for respectively identifying focuses of each slice visual field image transmitted by the image splicing module through a focus identification network to obtain a focus identification image containing a focus communication domain, and then sending the obtained focus identification image to the image superposition module;
and the image superposition module is used for sequentially acquiring the section visual field image acquired by the microscope system and the focus identification image transmitted by the focus identification module, extracting the edge of a focus connected domain in the focus identification image, superposing the extracted edge of the focus connected domain in the focus identification image on the corresponding section visual field image, acquiring the microscope visual field image with a focus label, and transmitting the microscope visual field image to the image splicing module and the auxiliary display system.
3. The system for real-time processing and displaying of the microscopic section image according to claim 1, wherein the image processing system performs the following steps when stitching the two images:
a: setting the current image acquired by the image processing system as f1(x, y) the previous image is f2(x, y), then f2(x,y)=f1(x-dx, y-dy), i.e. f1(x, y) is represented by f2(x, y) translating (dx, dy) to obtain; fourier transformation is respectively carried out on the current image and the previous image to obtain a frequency domain image F1(u, v) and frequency domain image F2(u,v),F2(u,v)=F1(u,v)e-i·2π(u·dx+v·dy)
Wherein the translation comprises a parallel movement in a horizontal direction and/or a vertical direction; f. of1(x, y) represents the gray value of the coordinate pixel point of the current image (x, y), wherein (x, y) is the coordinate position of the image pixel point; f. of2(x, y) is the gray value of the pixel point of the coordinate of the previous image (x, y); f1(u, v) is the value of the frequency domain image of the current image in the (u, v) frequency domain coordinates, where (u, v) is the frequency domain coordinates of the frequency domain image, F2(u, v) is the value of the frequency domain image of the previous image in (u, v) frequency domain coordinates, i represents a complex symbol, and dx is the twoThe moving distance between the images in the x-axis direction, and dy is the moving distance between the two images in the y-axis direction;
b: for frequency domain image F2Conjugation is carried out to obtain a conjugated frequency domain image
Figure FDA0003265514340000021
Then the conjugated frequency domain image is processed
Figure FDA0003265514340000022
And frequency domain image F1After multiplication, normalization processing is carried out to obtain cross-power spectrums H (u, v),
Figure FDA0003265514340000023
c: carrying out Fourier inversion on the cross-power spectrum H (u, v) to obtain a real-domain graph Fe(x,y),Fe(x, y) is a pulse function image; finding Fe(x, y) the coordinates of the peak position in the (x, y) are used as the displacement amount (dx, dy) of the front and rear images, and then the front image is respectively placed at four positions of the upper left, lower left, upper right and lower right of the current image, and the moving distance between the two images in the x-axis direction is dx and the moving distance in the y-axis direction is dy at the four positions; then, respectively calculating the mean absolute value of the gray value difference values of the overlapping areas of the front image and the rear image at the four positions, wherein the smallest mean absolute value is the correct splicing position relation of the front image and the rear image, and then splicing the images according to the displacement (dx, dy) and the position relation to obtain a current historical visual field spliced image or a current historical visual field focus image;
d: and c, according to the method of the steps a to c, splicing each current image acquired by the image processing system with the previous current image, and completing the splicing of all the images by the image processing system by combining the spliced current history view spliced image or the current history view focus image to obtain the current history view spliced image or the current history view focus image when the last image is cut off.
4. The system for real-time processing and displaying of the microscopic section image according to claim 2, wherein: in the process of training the focus identification module, a step-by-step network training method is adopted;
firstly, training a network model by using a first loss function, wherein only the network model with the best pixel level IoU index on a verification set Valid is saved as an intermediate network model during training;
then, based on the intermediate network model, training the intermediate network model by using a second loss function, wherein only the network model with the best pixel level IoU index on the validation set Valid is saved as a final network model during training; wherein the first loss function and the second loss function are different loss functions.
5. The system for real-time processing and displaying of the microscopic section image according to claim 4, wherein: in the process of training the lesion recognition module, the second loss function adopts a composite loss function aiming at multiple evaluation indexes, and the concrete formula of the composite loss function is as follows:
CompoundLoss = F_focal(α×LesionLoss + (1−α)×PixelLoss);
F_focal(x) = −k×(1−x)^γ×log(x);
PixelLoss = β×Pre_PixelLoss + (1−β)×Rec_PixelLoss, with Pre_PixelLoss = 1 − (|T1∩P1| + smooth)/(|P1| + smooth) and Rec_PixelLoss = 1 − (|T1∩P1| + smooth)/(|T1| + smooth);
LesionLoss = β×Pre_LesionLoss + (1−β)×Rec_LesionLoss, with Pre_LesionLoss = 1 − (|N(P2,T2)| + smooth)/(|P2| + smooth) and Rec_LesionLoss = 1 − (|N(P2,T2)| + smooth)/(|T2| + smooth);
wherein CompoundLoss is the composite loss function, LesionLoss is the focus-level loss, PixelLoss is the pixel-level loss, and α is a weighting coefficient; F_focal(x) represents the focal function, k and γ are fixed parameters, and x is the variable of the focal function; β is a weighting coefficient; Pre_PixelLoss represents the pixel-level precision loss and Rec_PixelLoss the pixel-level recall loss; T1 represents the focus region of the real label map, P1 the focus region of the predicted label map, and T1∩P1 the intersection of T1 and P1; smooth is a very small quantity that prevents the denominator from being 0; Pre_LesionLoss represents the focus-level precision loss and Rec_LesionLoss the focus-level recall loss; T2 represents the set of focus connected domains of the real label map, P2 the set of focus connected domains of the predicted label map, and |N(P2, T2)| the number of accurately predicted focuses.
6. The system for real-time processing and displaying of the microscopic section image according to claim 2, wherein: the image processing system also comprises an edge extension module, a focus identification module and a video processing module, wherein the edge extension module is used for acquiring a current historical view spliced image from the image splicing module, filling an extension area of the current slice view image in the current historical view spliced image, and sending the current slice view image after edge extension to the focus identification module;
when image filling is carried out, if the expansion area of the current slice view image has real image content in the current historical view spliced image, filling the expansion area of the current slice view image by using the real image content; if the expansion area of the current slice view image does not have real image content in the current historical view spliced image, mirror image copying is carried out on the real image content on one side of the current slice view image adjacent to the expansion area to obtain mirror image content, and the mirror image content is used for filling the expansion area of the current slice view image; and finally obtaining the current slice view image after edge expansion after filling.
7. The system for real-time processing and displaying of the microscopic section image according to claim 6, wherein: the image processing system utilizes the current section view image after the edge expansion to carry out focus identification, cuts the obtained focus identification image according to the size of the current section view image before the edge expansion, namely deletes the expansion area of the current section view image, then extracts the edge of a focus communication area in the cut focus identification image and superposes the edge on the corresponding section view image to obtain a microscope view image with a focus label.
8. The system for real-time processing and displaying of the microscopic section image according to claim 1, wherein: the image processing system sends the extracted edge of the focus connected domain to the augmented reality module, and the augmented reality module directly superposes the edge of the focus connected domain on a corresponding focus in the visual field of the microscope eyepiece through light path conduction.
9. A real-time processing and displaying method for a microscopic section image by using the real-time processing and displaying system of any one of claims 1 to 8, characterized by comprising the following steps in sequence:
a: a microscope system collects a section visual field image under a microscope;
b: the image splicing module acquires a section visual field image acquired by the microscope system and transmits the acquired section visual field image to the lesion identification module;
c: the focus identification module carries out focus identification on the slice visual field image transmitted by the image splicing module through a focus identification network to obtain a focus identification image containing a focus communication domain, and then sends the obtained focus identification image to the image superposition module;
d: the image superposition module extracts the edge of a focus connected domain in the focus identification image, superposes the extracted edge of the focus connected domain in the focus identification image on the corresponding section view image to obtain a microscope view image with a focus label and sends the microscope view image to the image splicing module and the auxiliary display system; the auxiliary display system displays a microscope visual field image with a focus label in real time;
E: the image stitching module judges whether a previous slice field-of-view image and/or a previous microscope field-of-view image with lesion labels exists; if so, it stitches the current and previous slice field-of-view images and/or the current and previous microscope field-of-view images with lesion labels, sends the resulting current historical-field stitched image and/or current historical-field lesion image to the auxiliary display system, and returns to step A; if not, it returns directly to step A; the auxiliary display system then displays the current historical-field stitched image and/or the current historical-field lesion image in real time.
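The edge extraction and superposition of step D can be sketched as follows. This is an illustrative stand-in, not the patent's exact algorithm: a pixel is treated as belonging to a lesion connected-domain edge if it lies inside the lesion mask and has at least one 4-neighbour outside it, and those edge pixels are written into a copy of the field-of-view image as the lesion label.

```python
def overlay_lesion_edges(image, mask, label=9):
    """Superimpose the edges of lesion connected domains in `mask` onto
    `image` (both 2-D lists of equal size). An edge pixel is a lesion pixel
    with at least one 4-neighbour outside the lesion; the image border also
    counts as outside."""
    h, w = len(mask), len(mask[0])
    out = [row[:] for row in image]          # copy the field-of-view image
    for r in range(h):
        for c in range(w):
            if not mask[r][c]:
                continue
            nbrs = [(r - 1, c), (r + 1, c), (r, c - 1), (r, c + 1)]
            if any(not (0 <= rr < h and 0 <= cc < w) or not mask[rr][cc]
                   for rr, cc in nbrs):
                out[r][c] = label            # superimpose the edge label
    return out
```

Because only the one-pixel-wide boundary is drawn, the lesion interior stays unmodified, so the pathologist still sees the original tissue texture inside the labelled contour.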
10. The method for real-time processing and display of slice images under a microscope according to claim 9, wherein:
in step B, the image stitching module sends the slice field-of-view image collected by the microscope system to the edge expansion module; the edge expansion module fills the expansion region of the current slice field-of-view image from the current historical-field stitched image and sends the edge-expanded current slice field-of-view image to the lesion recognition module;
in step C, after receiving the edge-expanded current slice field-of-view image from the edge expansion module, the lesion recognition module performs lesion recognition through the lesion recognition network to obtain a lesion recognition image containing lesion connected domains, crops the lesion recognition image to the size of the current slice field-of-view image before edge expansion, that is, deletes the expansion region of the current slice field-of-view image, and then sends the cropped lesion recognition image to the image superposition module.
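The control flow of steps A–E can be sketched as a single acquisition loop. This is purely illustrative of the branching in step E (the names are hypothetical): frames here are 1-D lists, "stitching" is concatenation with a fixed one-pixel overlap, and recognition is an arbitrary callable, none of which reflects the patent's actual stitching or network.

```python
def run_pipeline(frames, recognize):
    """Minimal sketch of steps A-E: acquire each field image (A), run lesion
    recognition (C), attach labels (D), then stitch with the history only if
    a previous frame exists (E), otherwise start the history from the
    current frame; either way the loop returns to acquisition (A)."""
    previous, stitched = None, []
    for frame in frames:                      # step A: acquire field image
        mask = recognize(frame)               # step C: lesion recognition
        labelled = list(zip(frame, mask))     # step D: pair pixels with labels
        if previous is not None:              # step E: history exists -> stitch
            overlap = 1                       # assumed fixed overlap width
            stitched = stitched + frame[overlap:]
        else:                                 # no history -> start the canvas
            stitched = list(frame)
        previous = frame                      # current frame becomes history
    return stitched
```

The essential point is the guard in step E: the very first field of a slide has no predecessor, so it seeds the historical stitched image directly instead of being merged.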
CN202111096321.2A 2021-09-16 2021-09-16 Real-time processing display system and method for slice image under microscope Active CN114004854B (en)

Priority Applications (1)

Application Number Priority Date Filing Date Title
CN202111096321.2A CN114004854B (en) 2021-09-16 2021-09-16 Real-time processing display system and method for slice image under microscope

Publications (2)

Publication Number Publication Date
CN114004854A true CN114004854A (en) 2022-02-01
CN114004854B CN114004854B (en) 2024-06-07

Family

ID=79921803

Family Applications (1)

Application Number Title Priority Date Filing Date
CN202111096321.2A Active CN114004854B (en) 2021-09-16 2021-09-16 Real-time processing display system and method for slice image under microscope

Country Status (1)

Country Link
CN (1) CN114004854B (en)

Cited By (2)

* Cited by examiner, † Cited by third party
Publication number Priority date Publication date Assignee Title
CN114764796A (en) * 2022-04-25 2022-07-19 杭州迪英加科技有限公司 Method for displaying film viewing track of microscope
CN115620852A (en) * 2022-12-06 2023-01-17 深圳市宝安区石岩人民医院 Tumor section template information intelligent management system based on big data

Citations (8)

* Cited by examiner, † Cited by third party
Publication number Priority date Publication date Assignee Title
CN108510497A (en) * 2018-04-10 2018-09-07 四川和生视界医药技术开发有限公司 The display methods and display device of retinal images lesion information
CN109785300A (en) * 2018-12-27 2019-05-21 华南理工大学 A kind of cancer medical image processing method, system, device and storage medium
WO2019127451A1 (en) * 2017-12-29 2019-07-04 深圳前海达闼云端智能科技有限公司 Image recognition method and cloud system
CN110458249A (en) * 2019-10-10 2019-11-15 点内(上海)生物科技有限公司 A kind of lesion categorizing system based on deep learning Yu probability image group
CN110619318A (en) * 2019-09-27 2019-12-27 腾讯科技(深圳)有限公司 Image processing method, microscope, system and medium based on artificial intelligence
CN111784711A (en) * 2020-07-08 2020-10-16 麦克奥迪(厦门)医疗诊断系统有限公司 Lung pathology image classification and segmentation method based on deep learning
WO2021093109A1 (en) * 2019-11-14 2021-05-20 武汉兰丁智能医学股份有限公司 Mobile phone-based miniature microscopic image acquisition device, image splicing method, and image recognition method
US20210241109A1 (en) * 2019-03-26 2021-08-05 Tencent Technology (Shenzhen) Company Limited Method for training image classification model, image processing method, and apparatuses

Non-Patent Citations (1)

* Cited by examiner, † Cited by third party
Title
ZHANG QI; ZHANG RONGMEI; CHEN BIN: "A Review of Deep-Learning-Based Medical Image Recognition Technology", Journal of the Hebei Academy of Sciences, No. 03, 15 September 2020 (2020-09-15) *

Similar Documents

Publication Publication Date Title
CN108388882B (en) Gesture recognition method based on global-local RGB-D multi-mode
WO2020182078A1 (en) Image analysis method, microscope video stream processing method, and related apparatus
CN111227864B (en) Device for detecting focus by using ultrasonic image and computer vision
CN110689025B (en) Image recognition method, device and system and endoscope image recognition method and device
CN111214255B (en) Medical ultrasonic image computer-aided method
CN114004854B (en) Real-time processing display system and method for slice image under microscope
CN111292324B (en) Multi-target identification method and system for brachial plexus ultrasonic image
CN111916206B (en) CT image auxiliary diagnosis system based on cascade connection
CN114170537A (en) Multi-mode three-dimensional visual attention prediction method and application thereof
CN116703837B (en) MRI image-based rotator cuff injury intelligent identification method and device
CN116993699A (en) Medical image segmentation method and system under eye movement auxiliary training
CN112862752A (en) Image processing display method, system electronic equipment and storage medium
CN114360695B (en) Auxiliary system, medium and equipment for breast ultrasonic scanning and analyzing
CN114332858A (en) Focus detection method and device and focus detection model acquisition method
CN114283178A (en) Image registration method and device, computer equipment and storage medium
CN113469962A (en) Feature extraction and image-text fusion method and system for cancer lesion detection
CN117726822B (en) Three-dimensional medical image classification segmentation system and method based on double-branch feature fusion
Zhang et al. Semantic feature attention network for liver tumor segmentation in large-scale CT database
CN117528131B (en) AI integrated display system and method for medical image
CN116524546B (en) Low-resolution human body posture estimation method based on heterogeneous image cooperative enhancement
CN116580446B (en) Iris characteristic recognition method and system for vascular diseases
Zhu et al. A real-time computer-aided diagnosis method for hydatidiform mole recognition using deep neural network
CN117710868B (en) Optimized extraction system and method for real-time video target
Liu et al. A semantic segmentation algorithm supported by image processing and neural network
Yasrab et al. Automating the Human Action of First-Trimester Biometry Measurement from Real-World Freehand Ultrasound

Legal Events

Date Code Title Description
PB01 Publication
SE01 Entry into force of request for substantive examination
GR01 Patent grant