CN117036878A - Method and system for fusing artificial intelligent prediction image and digital pathological image - Google Patents


Info

Publication number
CN117036878A
CN117036878A (application CN202310889735.3A)
Authority
CN
China
Prior art keywords: image, pixel, fusion, matching, value
Prior art date
Legal status (The legal status is an assumption and is not a legal conclusion. Google has not performed a legal analysis and makes no representation as to the accuracy of the status listed.)
Granted
Application number
CN202310889735.3A
Other languages
Chinese (zh)
Other versions
CN117036878B (en)
Inventor
王书浩 (Wang Shuhao)
牛鹏 (Niu Peng)
Current Assignee
Beijing Thorough Future Technology Co ltd
Original Assignee
Beijing Thorough Future Technology Co ltd
Priority date
Filing date
Publication date
Application filed by Beijing Thorough Future Technology Co ltd filed Critical Beijing Thorough Future Technology Co ltd
Priority to CN202310889735.3A
Publication of CN117036878A
Application granted
Publication of CN117036878B
Active legal-status Critical Current
Anticipated expiration legal-status Critical


Classifications

    • G - PHYSICS
    • G06 - COMPUTING; CALCULATING OR COUNTING
    • G06V - IMAGE OR VIDEO RECOGNITION OR UNDERSTANDING
    • G06V10/00 - Arrangements for image or video recognition or understanding
    • G06V10/70 - Arrangements for image or video recognition or understanding using pattern recognition or machine learning
    • G06V10/77 - Processing image or video features in feature spaces; using data integration or data reduction, e.g. principal component analysis [PCA] or independent component analysis [ICA] or self-organising maps [SOM]; Blind source separation
    • G06V10/80 - Fusion, i.e. combining data from various sources at the sensor level, preprocessing level, feature extraction level or classification level
    • G06V10/806 - Fusion, i.e. combining data from various sources at the sensor level, preprocessing level, feature extraction level or classification level of extracted features
    • G - PHYSICS
    • G06 - COMPUTING; CALCULATING OR COUNTING
    • G06V - IMAGE OR VIDEO RECOGNITION OR UNDERSTANDING
    • G06V10/00 - Arrangements for image or video recognition or understanding
    • G06V10/20 - Image preprocessing
    • G06V10/30 - Noise filtering
    • G - PHYSICS
    • G06 - COMPUTING; CALCULATING OR COUNTING
    • G06V - IMAGE OR VIDEO RECOGNITION OR UNDERSTANDING
    • G06V10/00 - Arrangements for image or video recognition or understanding
    • G06V10/40 - Extraction of image or video features
    • G - PHYSICS
    • G06 - COMPUTING; CALCULATING OR COUNTING
    • G06V - IMAGE OR VIDEO RECOGNITION OR UNDERSTANDING
    • G06V10/00 - Arrangements for image or video recognition or understanding
    • G06V10/40 - Extraction of image or video features
    • G06V10/52 - Scale-space analysis, e.g. wavelet analysis
    • G - PHYSICS
    • G06 - COMPUTING; CALCULATING OR COUNTING
    • G06V - IMAGE OR VIDEO RECOGNITION OR UNDERSTANDING
    • G06V10/00 - Arrangements for image or video recognition or understanding
    • G06V10/40 - Extraction of image or video features
    • G06V10/56 - Extraction of image or video features relating to colour
    • G - PHYSICS
    • G06 - COMPUTING; CALCULATING OR COUNTING
    • G06V - IMAGE OR VIDEO RECOGNITION OR UNDERSTANDING
    • G06V10/00 - Arrangements for image or video recognition or understanding
    • G06V10/70 - Arrangements for image or video recognition or understanding using pattern recognition or machine learning
    • G06V10/74 - Image or video pattern matching; Proximity measures in feature spaces
    • G06V10/75 - Organisation of the matching processes, e.g. simultaneous or sequential comparisons of image or video features; Coarse-fine approaches, e.g. multi-scale approaches; using context analysis; Selection of dictionaries
    • G - PHYSICS
    • G06 - COMPUTING; CALCULATING OR COUNTING
    • G06V - IMAGE OR VIDEO RECOGNITION OR UNDERSTANDING
    • G06V10/00 - Arrangements for image or video recognition or understanding
    • G06V10/70 - Arrangements for image or video recognition or understanding using pattern recognition or machine learning
    • G06V10/74 - Image or video pattern matching; Proximity measures in feature spaces
    • G06V10/761 - Proximity, similarity or dissimilarity measures
    • Y - GENERAL TAGGING OF NEW TECHNOLOGICAL DEVELOPMENTS; GENERAL TAGGING OF CROSS-SECTIONAL TECHNOLOGIES SPANNING OVER SEVERAL SECTIONS OF THE IPC; TECHNICAL SUBJECTS COVERED BY FORMER USPC CROSS-REFERENCE ART COLLECTIONS [XRACs] AND DIGESTS
    • Y02 - TECHNOLOGIES OR APPLICATIONS FOR MITIGATION OR ADAPTATION AGAINST CLIMATE CHANGE
    • Y02A - TECHNOLOGIES FOR ADAPTATION TO CLIMATE CHANGE
    • Y02A90/00 - Technologies having an indirect contribution to adaptation to climate change
    • Y02A90/10 - Information and communication technologies [ICT] supporting adaptation to climate change, e.g. for weather forecasting or climate simulation

Abstract

The invention provides a method and a system for fusing an artificial-intelligence (AI) prediction image with a digital pathological image. The method comprises: preprocessing a first image produced by AI prediction; extracting image features from a second image to be fused and from the preprocessed first image, matching the extracted features to determine similar regions or objects in the first and second images, and fixing fusion points; fusing the two images at the pixel level according to the feature-matching result to obtain a third image, where the pixel-level fusion methods include maximum-value fusion; and post-processing the third image to output a fourth, fused image. The system comprises an image preprocessing module, a feature extraction module, a feature matching module, an image fusion module, a post-processing module and a result output module. Through pixel-by-pixel fusion, the invention improves the efficiency with which pathologists read images.

Description

Method and system for fusing artificial intelligent prediction image and digital pathological image
Technical Field
The invention relates to the technical field of image fusion processing, in particular to a method and a system for fusing an artificial intelligent prediction image and a digital pathological image.
Background
In recent years, pathology has entered the digital age. Whole-slide images (WSI) are the foundation of digital pathology, and high-quality pathological images facilitate artificial-intelligence (AI) diagnosis, remote consultation, pathology data sharing and teaching. As AI penetrates deeper into the medical field, AI predictive diagnosis of pathological sections has also become a research hotspot, which places higher demands on the joint display of the predicted AI diagnostic data and the data image of the original digital pathological section. A good fused display of the digital pathological image and the AI-predicted diagnostic data lets the user grasp the AI prediction result and the change of the patient's condition intuitively; medical image fusion is the combination of various medical imaging devices, processing devices and fusion software. Because different medical imaging devices have different imaging mechanisms, their image quality and their spatial and temporal characteristics differ greatly. To achieve fusion of medical images, image data conversion, image data correlation, image databases and data understanding are therefore the key technologies to be solved. Image data conversion covers format conversion, three-dimensional orientation adjustment, scale conversion and the like for different images, so that the pixels/voxels of the multi-source images express an actual spatial region of the same size and the multi-source images describe the organs consistently in space. Image data correlation mainly completes the alignment of the correlated images.
Ideally, image fusion should achieve accurate point-to-point correspondence between the images under study; in practice, however, the higher the image resolution and the richer the image detail, the harder point-to-point correspondence becomes. Moreover, owing to various objective and human factors, medical images can never capture 100% of the real organ information; medical imaging devices are constantly being perfected so that the acquired images approximate the real condition of the organs more closely. Implementing image registration is the chief difficulty in medical image fusion. The image database completes the profiling, management and information extraction of typical cases and typical image data, and provides the data support for image fusion. Data understanding is the ultimate goal of medical image fusion: the potential of image fusion lies in comprehensively processing the information obtained from multiple imaging devices to derive new information that contributes to clinical diagnosis. However, when AI prediction is performed on a digitized pathological image, the generated data and the original slice data suffer from imprecise positioning and poor edge handling when they are overlaid and re-rendered. The main defects are as follows:
(1) Color distortion: in dual-image fusion, if the color distributions of the two images are inconsistent, the fused image is prone to color distortion.
(2) Information loss: in dual-image fusion, if the fusion algorithm is chosen poorly, important information is easily lost, degrading the quality and content of the image.
(3) High algorithmic complexity: some dual-image fusion schemes demand substantial computational resources and time, making them hard to apply widely in practice.
(4) Alignment problems: because the two images differ in shooting angle, illumination and other conditions, they must be aligned so that they correspond at the pixel level; poor alignment during fusion causes blurring, distortion and similar defects in the fused image.
(5) Non-uniform transparency: dual-image fusion usually requires transparency parameters, and different parameter settings yield different transparency, making consistency hard to maintain and degrading the fusion result.
The first prior art, application No. CN201810677958.2, discloses an image fusion method and system, a medical device and an image fusion terminal. The method comprises: acquiring a first stereoscopic image and a second stereoscopic image; obtaining, from the integration result of each ray of the first stereoscopic image, the position of its maximum gray value and a first sampling color; sampling the second stereoscopic image at the position corresponding to that maximum gray value to obtain a second sampling color; fusing the first and second sampling colors to obtain a third sampling color; taking the corresponding position as the starting point, sampling the second stereoscopic image along the ray of that position to the ray end point to obtain a sampling result; and fusing the sampling result with the third sampling color to obtain a fourth sampling color for the fused image. Although this can improve the accuracy of the fused image, the stereoscopic images are not preprocessed, information in the images is easily lost, and alignment of the two images is not facilitated.
The second prior art, application No. CN201710466329.0, discloses an intelligent medical system based on image fusion, comprising a magnetic resonance tomography scanner, an X-ray computed tomography scanner, a medical image processor and a medical diagnosis terminal. The magnetic resonance tomography scanner scans the affected part of the patient to obtain an original MRI image of that part; the X-ray computed tomography scanner scans the affected part to obtain an original CT image; and the medical image processor performs noise reduction on the original MRI and CT images and fuses the noise-reduced images into a fused medical image. Although it fuses the MRI and CT images by software means for medical diagnosis, lowering medical costs while allowing comprehensive examination of the patient and improving diagnostic accuracy, its algorithmic complexity is high, and the dual-image fusion scheme demands substantial computational resources and time.
The third prior art, application No. CN202011025198.0, discloses a multi-modal image fusion system and image fusion method. The system comprises a video controller, medical image equipment, a binocular camera, a body-surface reference-plane positioning module, a handheld-device positioning module, a display, a touch screen, and a keyboard and mouse. The video controller obtains an image fusion reference plane in advance from the placement positions of first positioning beads in the body-surface reference-plane positioning module, obtains the corresponding medical images to be fused according to a received image-selection instruction, and fuses them on the reference plane based on the relative positions of the first positioning beads to obtain a multi-modal fusion image. Although fusing the single-modality medical images lets medical staff diagnose diseases from multi-modal fusion images, improving the efficiency of delimiting the operative range and shortening diagnosis, color distortion and information loss occur easily during dual-image fusion, affecting the quality and content of the images.
The image fusion results of the first, second and third prior arts are poor, are prone to color distortion and information loss, and suffer from complex fusion algorithms. The invention therefore provides a method and a system for fusing an artificial-intelligence prediction image with a digital pathological image: based on image processing and computer-vision algorithms, and without losing any image information, a novel dual-image fusion method fuses two different images into a clearer, more meaningful image and generates a high-quality, high-resolution fused image.
Disclosure of Invention
In order to solve the technical problems, the invention provides a method for fusing an artificial intelligent prediction image and a digital pathological image, which comprises the following steps:
preprocessing a first image predicted by artificial intelligence;
extracting image features from a second image to be fused and the preprocessed first image, matching the extracted image features, determining similar areas or objects in the first image and the second image, and fixing fusion points;
according to the matching result of the image characteristics of the first image and the second image, fusing the pixel levels to obtain a third image;
and performing post-processing on the third image and outputting a processed fourth image, the fourth image being the fused image; and performing quality evaluation on the fused fourth image to determine whether the fusion effect meets the requirement.
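The four steps above can be sketched end-to-end. The following minimal illustration is an assumption-laden stand-in, not the patented implementation: the preprocessing, fusion-point selection and post-processing are simple placeholders, and both inputs are assumed to be already-aligned single-channel arrays.

```python
import numpy as np

def fuse_pipeline(first, second):
    """Sketch of the four claimed steps on aligned gray-level arrays."""
    # step 1: preprocessing stand-in (contrast-stretch the AI prediction to 0..255)
    f = first.astype(float)
    f = (f - f.min()) / max(np.ptp(f), 1e-9) * 255.0
    # step 2: fusion-point stand-in - fuse wherever the prediction is non-background
    mask = f > 0
    # step 3: pixel-level maximum-value fusion at the fusion points
    fused = np.where(mask, np.maximum(f, second), second)
    # step 4: post-processing stand-in (clip back to the valid gray range)
    return np.clip(fused, 0, 255)
```

The stand-ins mark where the disclosure's real preprocessing, feature matching and post-processing would plug in.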
Optionally, the image features include colors, textures, and shapes of the image after artificial intelligence prediction.
Optionally, the preprocessing process comprises the following steps:
receiving a first image with artificial-intelligence-predicted pixel values, obtaining the horizontal resolution and vertical resolution of the pathological-section area in the first image, and calculating the difference between those resolutions and a preset pixel value; resizing the horizontal and vertical resolution of the pathological-section area according to a pixel-value resizing factor to obtain the adjusted first image;
converting the adjusted first image into a gray-level image by separating its color channels and gray-scaling each channel independently, then converting it into the HSV color mode to obtain the color-space-converted first image;
selecting a wavelet basis and a number of decomposition levels, and performing wavelet decomposition on the noisy, color-space-converted first image to obtain the corresponding wavelet decomposition coefficients; for each decomposition scale, selecting a threshold and threshold-quantizing the high-frequency coefficients to obtain the estimated wavelet coefficients; and reconstructing from each level's low-frequency coefficients and threshold-quantized high-frequency coefficients to obtain the denoised first image.
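The wavelet-threshold denoising step can be sketched as follows. This is a one-level 2-D Haar decomposition with soft thresholding of the detail subbands in plain NumPy; the Haar basis, the single level and the soft-threshold rule are all assumptions, since the disclosure leaves the basis, level count and threshold choice open.

```python
import numpy as np

def haar_denoise(img, threshold):
    """One-level Haar decomposition, soft-threshold details, reconstruct."""
    img = np.asarray(img, dtype=float)
    h, w = img.shape
    assert h % 2 == 0 and w % 2 == 0, "even dimensions required"
    a = img[0::2, 0::2]; b = img[0::2, 1::2]
    c = img[1::2, 0::2]; d = img[1::2, 1::2]
    LL = (a + b + c + d) / 4      # low-frequency approximation (kept)
    LH = (a - b + c - d) / 4      # detail subbands (thresholded)
    HL = (a + b - c - d) / 4
    HH = (a - b - c + d) / 4
    soft = lambda x: np.sign(x) * np.maximum(np.abs(x) - threshold, 0.0)
    LH, HL, HH = soft(LH), soft(HL), soft(HH)
    out = np.empty_like(img)      # inverse Haar transform
    out[0::2, 0::2] = LL + LH + HL + HH
    out[0::2, 1::2] = LL - LH + HL - HH
    out[1::2, 0::2] = LL + LH - HL - HH
    out[1::2, 1::2] = LL - LH - HL + HH
    return out
```

With `threshold=0` the reconstruction is exact, which is a convenient sanity check on the transform pair; a multi-level version would recurse on the LL subband.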
Optionally, the pixel-value resizing factor is configured so that, when the difference exceeds the preset pixel value, the horizontal and vertical resolutions are enlarged according to a curve function of their original sizes, so that during enlargement the horizontal and vertical resolutions grow with the pixel-value resizing factor.
Optionally, the process of matching the extracted image features includes the following steps:
collecting image features of the second image to be fused, including the colors, textures and shapes of the second image; forming a graph from the boundaries of the pixel sets of the connected regions of those features to obtain the region or object corresponding to each connected region; and judging whether the region corresponding to each connected region matches a region or object in the first image;
matching the image features in the first image with the image features of the second image;
taking, as starting positions, the matching positions of the regions or objects corresponding to the connected regions in the second image, and obtaining the image features at those matching positions, where the matching degree of each matching position is measured against the corresponding region or object in the first image;
and recording each matching position whose matching degree reaches the threshold as corresponding to an image feature in the first image, and taking that matching position as a fusion point.
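A minimal sketch of this matching step, assuming the similarity is measured by brute-force normalized cross-correlation (NCC) of a feature patch from the first image against the second image; the disclosure names no particular similarity measure, so NCC is substituted here. The position whose score reaches the threshold becomes a fusion point.

```python
import numpy as np

def match_patch(image, patch, threshold=0.9):
    """Slide `patch` over `image`, score each position with NCC, and
    return (best_position, best_score) if the score reaches `threshold`,
    else (None, best_score)."""
    H, W = image.shape
    h, w = patch.shape
    p = patch - patch.mean()
    pn = np.sqrt((p * p).sum())
    best, best_score = None, -1.0
    for y in range(H - h + 1):
        for x in range(W - w + 1):
            win = image[y:y + h, x:x + w]
            q = win - win.mean()
            qn = np.sqrt((q * q).sum())
            if pn == 0 or qn == 0:
                continue                       # flat region: NCC undefined
            score = float((p * q).sum() / (pn * qn))
            if score > best_score:
                best, best_score = (y, x), score
    return (best, best_score) if best_score >= threshold else (None, best_score)
```

NCC is invariant to affine gray-level changes, which roughly matches the alignment problem described in the background (differing illumination between the two images).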
Optionally, the process of fusing at the pixel level includes the steps of:
obtaining an image feature map according to the matching result of the image features of the first image and the second image;
calculating pixel gray values of corresponding image features at fusion points according to the image feature map;
arranging the pixel gray values, and selecting the pixel with the largest gray value as the pixel fused at the fusion point via the density-peak-fused Gaussian mixture model;
and connecting the large-gray-value pixels of the fusion points to obtain the third image.
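The maximum-value rule above can be sketched as a masked elementwise maximum; `mask` is an assumed way of passing in the fusion points fixed by feature matching, and pixels outside the mask keep the base image's value.

```python
import numpy as np

def fuse_max(base, overlay, mask=None):
    """Maximum-value pixel fusion: at every fusion point keep the larger
    of the two gray values; elsewhere keep the base image."""
    base = np.asarray(base)
    overlay = np.asarray(overlay)
    out = base.copy()
    if mask is None:
        mask = np.ones(base.shape, dtype=bool)   # default: fuse everywhere
    out[mask] = np.maximum(base, overlay)[mask]
    return out
```

Other pixel-level rules (minimum, mean, weighted sum) drop in by replacing `np.maximum`.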
Optionally, the process of selecting pixels at the fusion point includes the steps of:
calculating the average of the arranged pixel gray values, obtaining the covariance matrix of the corresponding class, and constructing the initial parameters of the density-peak-fused Gaussian mixture model;
estimating via the expectation-maximization algorithm to construct the density-peak-fused Gaussian mixture model;
and checking the probabilities of the arranged pixel gray values under Bayes' rule, and inputting them into the density-peak-fused Gaussian mixture model to obtain the pixel gray value at the density peak.
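The mixture-model step can be sketched with plain 1-D expectation-maximization; the disclosure does not specify how density peaks are fused into the model, so this sketch simply fits a Gaussian mixture to the arranged gray values and reads the mean of the heaviest component as the density peak.

```python
import numpy as np

def gmm_peak(values, k=2, iters=60):
    """Fit a 1-D Gaussian mixture by EM; return the mean of the heaviest
    (highest-weight) component as the density-peak gray value."""
    v = np.asarray(values, dtype=float)
    mu = np.linspace(v.min(), v.max(), k)        # deterministic spread init
    var = np.full(k, v.var() + 1e-6)
    pi = np.full(k, 1.0 / k)
    for _ in range(iters):
        # E-step: responsibility r[i, j] of component j for sample i
        d = (v[:, None] - mu[None, :]) ** 2
        p = pi * np.exp(-d / (2 * var)) / np.sqrt(2 * np.pi * var)
        r = p / (p.sum(axis=1, keepdims=True) + 1e-300)
        # M-step: re-estimate mixture weights, means and variances
        n = r.sum(axis=0) + 1e-12
        pi = n / len(v)
        mu = (r * v[:, None]).sum(axis=0) / n
        var = (r * (v[:, None] - mu[None, :]) ** 2).sum(axis=0) / n + 1e-6
    return float(mu[np.argmax(pi)])
```

The E-step responsibilities are exactly the Bayes-rule posterior check mentioned in the step above: each gray value's probability under every component, normalized over components.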
Optionally, the process of connecting the maximum-gray-value pixels of the fusion points comprises the following steps:
selecting the pathological-section edge identification area and the pathological-section periphery identification area according to the maximum-gray-value fusion-point pixels to obtain an initial third image;
taking any point on the edge line between the pathological-section edge identification area and the pathological-section periphery identification area of the initial third image as the origin;
scanning pixel lines from the origin in mutually opposite first and second directions parallel to the edge line; taking the first pixel in the first direction whose gray value meets the preset condition and differs from that of the edge line as the first-direction connection point, and the first pixel in the second direction whose gray value meets the preset condition and differs from that of the edge line as the second-direction connection point;
and connecting the first-direction connection point and the second-direction connection point with a connecting line whose pixel gray value is 0 or 255, completing extraction of the third-image contour to obtain the third image.
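The contour-closing step above can be sketched as a row scan from an origin point; the row-wise scan direction and the "first pixel already carrying the edge value" condition are assumptions standing in for the disclosure's preset condition.

```python
import numpy as np

def close_edge_gap(img, origin, edge_value=255):
    """From `origin`, scan the pixel row in two opposite directions
    parallel to the edge line, take the first pixel on each side whose
    gray value equals `edge_value` as that direction's connection point,
    and join the two points with an edge-valued segment."""
    y, x = origin
    row = img[y]
    left = next((i for i in range(x, -1, -1) if row[i] == edge_value), None)
    right = next((i for i in range(x, len(row)) if row[i] == edge_value), None)
    if left is not None and right is not None:
        img[y, left:right + 1] = edge_value   # connecting line (gray 0 or 255)
    return img
```

Scanning a column instead of a row would handle edge lines running vertically; the disclosure's choice of direction follows the local orientation of the edge line.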
The invention provides a system for fusing an artificial intelligent predictive image and a digital pathological image, which comprises the following components:
the image preprocessing module is responsible for preprocessing a first image predicted by the artificial intelligence AI;
The feature extraction module is responsible for extracting color, texture and shape image features from the second image to be fused and the preprocessed first image;
the feature matching module is responsible for matching the extracted image features, and determining similar areas or objects in the first image and the second image so as to fix fusion points;
the image fusion module is responsible for carrying out pixel level fusion on the first image and the second image according to the matching result of the first image and the second image to obtain a third image;
the post-processing module is responsible for carrying out color adjustment, sharpening and artifact removal post-processing on the fused third image to obtain a fourth image, wherein the fourth image is the fused image;
and the result output module is responsible for outputting the fused fourth image.
Optionally, the image fusion module includes:
the feature map acquisition sub-module is responsible for acquiring an image feature map according to the matching result of the image features of the first image and the second image;
the pixel gray value calculation submodule is responsible for calculating the pixel gray value of the corresponding image feature at the fusion point according to the image feature map;
the model construction submodule is responsible for arranging the gray values of pixels, and selecting the pixel with the largest gray value as the pixel at the fusion point through a Gaussian mixture model of the fusion density peak value;
and the pixel connection sub-module is responsible for connecting the large-gray-value pixels of each fusion point to obtain the third image.
Firstly, the first image predicted by artificial intelligence is preprocessed; the preprocessing includes resizing, color-space conversion and denoising. Secondly, image features are extracted from the second image to be fused and from the preprocessed first image; the extracted features, which include the colors, textures and shapes of the AI-predicted image, are matched to determine similar regions or objects in the first and second images and to fix the fusion points. Then, according to the feature-matching result of the first and second images, pixel-level fusion is performed to obtain a third image; the pixel-level fusion methods include maximum-value fusion. Finally, the third image is post-processed and the fourth, fused image is output; the post-processing includes color adjustment, sharpening and artifact removal. The scheme fuses two or more images pixel by pixel to generate a new image that carries the image characteristics and information of the originals: the value of each pixel is compared with the pixel values of the other images at the corresponding position, and the optimal pixel value is selected for output. Meanwhile, diagnostic slice data generated by AI depth prediction in the pathology field serve as the fusion data source, and data produced by deep learning give more reliable guarantees of data quality for image enhancement, denoising and super-resolution. The benefits are: improved diagnostic accuracy, since comprehensively integrating the AI-predicted slice images gives a fuller picture of the pathological condition and thus a more accurate diagnosis; improved working efficiency for pathologists, who can view and analyse pathological images rapidly through precise image fusion, saving time and energy; promotion of medical research, since fused pathological-section prediction images help doctors and scientists better understand how diseases develop and change; improved quality of patient care, since more accurate diagnosis and fuller knowledge of the pathology let doctors formulate better treatment plans; and shorter diagnosis time, since AI-predicted pathology-image fusion presents multidimensional information within the original digital slice image, reducing the time a doctor spends examining the slice.
In summary, the invention denoises the first image before fusion, making the image clearer. During matching of the extracted image features, an image registration algorithm aligns the first and second images, solving the problem that differences in shooting angle, illumination and other conditions would otherwise impair matching, and ensuring that the two images correspond at the pixel level. The invention then fuses the two aligned images so that the fused fourth image retains the information of both, and performs quality evaluation on the fused fourth image to determine whether the fusion effect meets the requirement.
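The quality-evaluation step is not specified further in the disclosure; peak signal-to-noise ratio (PSNR) is one common full-reference image-quality measure and serves here only as a hedged stand-in for whatever criterion the implementation actually uses.

```python
import numpy as np

def psnr(ref, img, peak=255.0):
    """Peak signal-to-noise ratio between a reference and a fused image,
    in decibels; higher is better, identical images give infinity."""
    mse = np.mean((np.asarray(ref, dtype=float) - np.asarray(img, dtype=float)) ** 2)
    return float('inf') if mse == 0 else float(10 * np.log10(peak ** 2 / mse))
```

In practice the fused image has no single ground-truth reference, so PSNR against each source image (or a no-reference metric) would be compared against an acceptance threshold.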
Additional features and advantages of the invention will be set forth in the description which follows, and in part will be obvious from the description, or may be learned by practice of the invention. The objectives and other advantages of the invention will be realized and attained by the structure particularly pointed out in the written description and claims thereof as well as the appended drawings.
The technical scheme of the invention is further described in detail through the drawings and the embodiments.
Drawings
The accompanying drawings are included to provide a further understanding of the invention and are incorporated in and constitute a part of this specification, illustrate the invention and together with the embodiments of the invention, serve to explain the invention. In the drawings:
FIG. 1 is a flowchart showing a method for fusing an artificial intelligent predictive image with a digital pathological image in embodiment 1 of the invention;
FIG. 2 is a diagram showing the pretreatment process in example 2 of the present invention;
FIG. 3 is a process diagram of matching extracted image features in embodiment 3 of the present invention;
FIG. 4 is a process diagram of performing pixel level fusion in embodiment 4 of the present invention;
FIG. 5 is a diagram showing a process of selecting pixels at a fusion point according to embodiment 5 of the present invention;
FIG. 6 is a process diagram of connecting pixels of a fusion point with the largest gray level in embodiment 6 of the present invention;
FIG. 7 is a block diagram showing a system for fusing an artificial intelligent predictive image with a digital pathology image according to embodiment 7 of the present invention;
fig. 8 is a block diagram of an image fusion module in embodiment 8 of the present invention.
Detailed Description
The preferred embodiments of the present invention will be described below with reference to the accompanying drawings, it being understood that the preferred embodiments described herein are for illustration and explanation of the present invention only, and are not intended to limit the present invention.
The terminology used in the embodiments of the application is for the purpose of describing particular embodiments only and is not intended to be limiting of embodiments of the application. As used in this application and the appended claims, the singular forms "a," "an," and "the" are intended to include the plural forms as well, unless the context clearly indicates otherwise. It should also be understood that the term "and/or" as used herein refers to and encompasses any or all possible combinations of one or more of the associated listed items.
When the following description refers to the accompanying drawings, the same numbers in different drawings refer to the same or similar elements, unless otherwise indicated. The implementations described in the following exemplary examples do not represent all implementations consistent with the application. Rather, they are merely examples of apparatus and methods consistent with certain aspects of the application as detailed in the accompanying claims. In the description of the present application, it should be understood that the terms "first," "second," "third," and the like are used merely to distinguish between similar objects and are not necessarily used to describe a particular order or sequence, nor should they be construed to indicate or imply relative importance. The specific meaning of the above terms in the present application can be understood by those of ordinary skill in the art according to the specific circumstances.
Example 1: as shown in fig. 1, an embodiment of the present invention provides a method for fusing an artificial intelligence prediction image and a digital pathology image, which includes the following steps:
S100: preprocessing a first image predicted by artificial intelligence; wherein the preprocessing includes operations such as resizing, color-space conversion and denoising;
S200: extracting image features from a second image to be fused and from the preprocessed first image, matching the extracted image features, determining similar areas or objects in the first and second images, and fixing fusion points; wherein the image features include the color, texture and shape of the artificial-intelligence-predicted image;
S300: performing pixel-level fusion according to the matching result of the image features of the first and second images to obtain a third image; wherein the pixel-level fusion methods include the maximum-value method and the like;
S400: performing post-processing on the third image and outputting the processed fourth image, the fourth image being the fused image, and performing quality evaluation on the fused fourth image to determine whether the fusion effect meets the requirement; the post-processing includes operations such as color adjustment, sharpening and artifact removal;
The working principle and beneficial effects of the technical scheme are as follows: first, the first image predicted by artificial intelligence is preprocessed, the preprocessing including resizing, color-space conversion and denoising; second, image features are extracted from the second image to be fused and from the preprocessed first image, the extracted features are matched, similar areas or objects in the first and second images are determined, and fusion points are fixed, the image features including the color, texture and shape of the AI-predicted image; then pixel-level fusion is performed according to the matching result of the image features of the two images to obtain a third image, the pixel-level fusion methods including the maximum-value method; finally, the third image is post-processed (color adjustment, sharpening, artifact removal and the like) and the processed fourth image, i.e. the fused image, is output. In this scheme two or more images are fused pixel by pixel to generate a new image carrying the image features and information of the originals: the value of each pixel is compared with the pixel values of the other images at the corresponding position, and the optimal pixel value is selected for output. Meanwhile, diagnostic slice data generated by AI depth prediction in the pathology field serve as the fusion data source, and a data pipeline produced by deep learning gives more reliable guarantees of data quality for image enhancement, denoising and super-resolution. The beneficial effects include: improved diagnostic accuracy, since comprehensively integrating the AI-predicted slice images gives a fuller picture of the pathological condition and thus a more accurate diagnosis; improved efficiency of image reading by pathologists, who can view and analyze pathological images rapidly through precise image fusion, saving time and effort; promotion of medical research, since fused pathological-section prediction images help doctors and scientists better understand how diseases develop and change; improved quality of patient care, since more accurate diagnosis and fuller knowledge of the pathology let doctors formulate better treatment plans; and shortened diagnosis time, since AI-predicted pathology image fusion presents multidimensional information within the original digital slice image, reducing the time a doctor spends examining the slide.
In summary, the present application denoises the first image before fusion so that the image is clearer; in the process of matching the extracted image features, an image registration algorithm is used to align the first and second images, overcoming the degradation of matching caused by differences in shooting angle, illumination and other conditions, so that the two images correspond at the pixel level; the aligned first and second images are fused together so that the fused fourth image retains the information of both; and quality evaluation is performed on the fused fourth image to determine whether the fusion effect meets the requirement.
Example 2: as shown in fig. 2, on the basis of embodiment 1, the pretreatment process provided in the embodiment of the present invention includes the following steps:
S101: receiving a first image carrying artificial-intelligence-predicted pixel values, acquiring the horizontal and vertical resolutions of the pathological section area in the first image, and calculating the difference between each resolution's pixel value and a preset pixel value; readjusting the horizontal and vertical resolutions of the pathological section area according to a pixel-value resizing factor to obtain the adjusted first image;
The pixel-value resizing factor is configured such that, when the difference exceeds the preset pixel value by different amounts, the horizontal and vertical resolutions are enlarged from their original sizes according to a curve function, the resolutions growing together with the pixel-value resizing factor as the enlargement proceeds;
S102: converting the adjusted first image into a gray-scale image by separating its color channels and performing gray-scale processing on each channel independently, then converting to the HSV (Hue, Saturation, Value) color mode to obtain the color-space-converted first image;
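The curve-function behaviour of the pixel-value resizing factor described above can be sketched as follows. The patent does not disclose the exact curve, so a square-root curve of the pixel-value difference, with assumed `preset` and `gain` parameters, is used purely for illustration:

```python
import numpy as np

def resize_by_pixel_value(width, height, pixel_value, preset=128, gain=0.05):
    """Sketch of the pixel-value resizing factor (S101).

    The patent leaves the curve function open; a square-root curve of the
    difference between the pixel value and the preset value is assumed here.
    """
    diff = pixel_value - preset
    if diff <= 0:
        return width, height                 # at or below the preset: keep size
    factor = 1.0 + gain * np.sqrt(diff)      # factor grows with the difference
    return int(round(width * factor)), int(round(height * factor))
```

The resolutions grow monotonically with the factor, matching the requirement that larger pixel-value differences yield larger horizontal and vertical resolutions.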
The adjusted first image is converted to the HSV color mode by the standard RGB-to-HSV expressions:
V = MAX
S = (MAX - MIN) / MAX (S = 0 when MAX = 0)
H = 60° × (G - B) / (MAX - MIN), when MAX = R
H = 60° × (2 + (B - R) / (MAX - MIN)), when MAX = G
H = 60° × (4 + (R - G) / (MAX - MIN)), when MAX = B
wherein R represents the red primary value of the color space of the adjusted first image, G the green primary value, and B the blue primary value; H represents the hue, S the saturation, and V the brightness (value) in the HSV color mode; MAX is the maximum of R, G and B, and MIN is the minimum of R, G and B;
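The RGB-to-HSV conversion of step S102 follows the standard formulas; a minimal pure-Python rendering (the function name and the normalization of inputs to [0, 1] are illustrative choices, not from the patent):

```python
def rgb_to_hsv(r, g, b):
    """Standard RGB -> HSV conversion (S102); r, g, b in [0, 1].

    Returns H in degrees [0, 360), and S and V in [0, 1].
    """
    mx, mn = max(r, g, b), min(r, g, b)   # MAX and MIN of the three primaries
    v = mx                                # brightness (value)
    s = 0.0 if mx == 0 else (mx - mn) / mx
    if mx == mn:                          # achromatic: hue undefined, use 0
        h = 0.0
    elif mx == r:
        h = (60.0 * (g - b) / (mx - mn)) % 360.0
    elif mx == g:
        h = 60.0 * (2.0 + (b - r) / (mx - mn))
    else:                                 # mx == b
        h = 60.0 * (4.0 + (r - g) / (mx - mn))
    return h, s, v
```

For example, pure red maps to hue 0°, pure green to 120° and pure blue to 240°, with full saturation and brightness.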
S103: selecting a wavelet base and a number of decomposition levels, and performing wavelet decomposition on the noisy, color-space-converted first image to obtain the corresponding wavelet decomposition coefficients; for each decomposition scale, selecting a threshold and applying threshold quantization to the high-frequency coefficients to obtain the estimated wavelet coefficients; reconstructing from the low-frequency coefficients of the wavelet decomposition and the threshold-quantized high-frequency coefficients of each level to obtain the denoised first image;
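The wavelet denoising of step S103 can be sketched with a single-level Haar decomposition and hard thresholding, shown in 1-D for brevity; the patent leaves the wavelet base, the number of levels and the threshold rule open, so all of those are assumptions here:

```python
import numpy as np

def haar_denoise_1d(signal, threshold):
    """One-level Haar wavelet denoise (S103), shown in 1-D for brevity.

    Decompose into low-frequency (approximation) and high-frequency (detail)
    coefficients, hard-threshold the details, then reconstruct.
    """
    x = np.asarray(signal, dtype=float)
    assert x.size % 2 == 0, "even length expected for one Haar level"
    a = (x[0::2] + x[1::2]) / np.sqrt(2)          # low-frequency coefficients
    d = (x[0::2] - x[1::2]) / np.sqrt(2)          # high-frequency coefficients
    d = np.where(np.abs(d) > threshold, d, 0.0)   # hard threshold quantization
    out = np.empty_like(x)
    out[0::2] = (a + d) / np.sqrt(2)              # inverse Haar transform
    out[1::2] = (a - d) / np.sqrt(2)
    return out
```

With threshold 0 the reconstruction is exact; with a larger threshold, small high-frequency fluctuations (noise) are removed while the low-frequency structure survives, which is why wavelet denoising preserves edges better than uniform smoothing.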
The working principle and beneficial effects of the technical scheme are as follows: first, a first image carrying artificial-intelligence-predicted pixel values is received, the horizontal and vertical resolutions of the pathological section area in the first image are acquired, and the difference between each resolution's pixel value and the preset pixel value is calculated; the horizontal and vertical resolutions of the pathological section area are readjusted according to the pixel-value resizing factor to obtain the adjusted first image, the factor being configured so that, when the difference exceeds the preset pixel value, the resolutions are enlarged from their original sizes along a curve function and grow together with the factor. Second, the adjusted first image is converted into a gray-scale image, its color channels are separated and processed independently, and the image is converted to the HSV (Hue, Saturation, Value) color mode to obtain the color-space-converted first image. Finally, a wavelet base and a number of decomposition levels are selected and the noisy, color-space-converted first image is wavelet-decomposed; for each decomposition scale a threshold is selected and the high-frequency coefficients are threshold-quantized to obtain the estimated wavelet coefficients; the image is reconstructed from the low-frequency coefficients and the threshold-quantized high-frequency coefficients of each level, yielding the denoised first image. In this scheme, readjusting the horizontal and vertical resolutions of the pathological section area according to the pixel value guarantees the resolution requirement and definition of the input first image, improves the recognizable precision of the pathological section area, and provides more reliable data for matching the image features; the color-space conversion removes the influence of illumination shadows and reduces the effect of lighting conditions on the processing of the first image; wavelet denoising removes noise and raises the peak signal-to-noise ratio without the smoothing blur of soft-threshold denoising. Preprocessing the first image thus further improves its definition and recognizable precision and provides a good input for image fusion.
Example 3: as shown in fig. 3, on the basis of embodiment 1, the process for matching the extracted image features provided in the embodiment of the present invention includes the following steps:
S201: collecting the image features of the second image to be fused, the image features including the color, texture and shape of the second image; forming a graph from the boundaries of the pixel sets of the connected regions of these features, obtaining the area or object corresponding to each connected region, and judging whether the area corresponding to each connected region matches an area or object in the first image;
S202: matching the image features in the first image with the image features of the second image;
S203: taking the matching positions of the areas or objects corresponding to the connected regions in the second image as starting positions, obtaining the image features at each matching position, and computing the matching degree of each matching position with respect to the corresponding area or object in the first image;
S204: recording each matching position whose matching degree reaches a threshold as corresponding to the image feature in the first image, and taking that matching position as a fusion point;
The working principle and beneficial effects of the technical scheme are as follows: first, the image features of the second image to be fused are collected, including its color, texture and shape; a graph is formed from the boundaries of the pixel sets of the connected regions of these features, the area or object corresponding to each connected region is obtained, and it is judged whether that area matches an area or object in the first image; second, the image features in the first image are matched with those of the second image; then, taking the matching positions of the areas or objects corresponding to the connected regions in the second image as starting positions, the image features at each matching position are obtained, each matching position having a matching degree with respect to the corresponding area or object in the first image; finally, each matching position whose matching degree reaches the threshold is recorded as corresponding to the image feature in the first image and taken as a fusion point. By extracting image features from the second image to be fused and the preprocessed first image, matching them, and determining the similar areas or objects in the two images, the scheme raises the success rate of matching between the first and second images through the determination of the matching positions of areas or objects, lays a foundation for the fusion of image features, fixes fusion points covering nuclear morphology, cell density and the like, further singles out the image features with the highest matching degree, and guarantees the precision of the image fusion.
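The matching-degree computation of steps S201–S204 is left unspecified; the sketch below assumes per-region feature vectors (e.g. mean color, texture and shape descriptors) and uses cosine similarity as a stand-in matching degree, recording positions that reach the threshold as fusion points:

```python
import numpy as np

def match_fusion_points(feat1, feat2, threshold=0.9):
    """Match per-region feature vectors between two images (S201-S204 sketch).

    feat1: (n, d) feature vectors of regions in the first image.
    feat2: (m, d) feature vectors of regions in the second image.
    Returns (i, j) pairs whose best cosine similarity reaches the threshold.
    """
    f1 = feat1 / np.linalg.norm(feat1, axis=1, keepdims=True)
    f2 = feat2 / np.linalg.norm(feat2, axis=1, keepdims=True)
    sim = f1 @ f2.T                       # pairwise matching degrees
    best = sim.argmax(axis=1)             # best match in the second image
    keep = sim.max(axis=1) >= threshold   # matching degree reaches threshold
    return [(i, int(best[i])) for i in range(len(f1)) if keep[i]]
```

In a real pipeline the feature vectors would come from the connected-region boundaries described above; here they are plain arrays so the matching logic stays visible.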
Example 4: as shown in fig. 4, on the basis of embodiment 1, the process for performing pixel level fusion provided in the embodiment of the present invention includes the following steps:
S301: obtaining an image feature map according to the matching result of the image features of the first and second images;
S302: calculating, from the image feature map, the pixel gray value of the corresponding image feature at each fusion point;
S303: sorting the pixel gray values, and selecting the pixel with the largest gray value as the pixel fused at the fusion point by means of a Gaussian mixture model fusing density peaks;
S304: connecting the fusion-point pixels with the largest gray values to obtain a third image;
The working principle and beneficial effects of the technical scheme are as follows: in this embodiment, an image feature map is first obtained from the matching result of the image features of the first and second images; second, the pixel gray value of the corresponding image feature at each fusion point is calculated from the feature map; the gray values are then sorted, and the pixel with the largest gray value is selected as the pixel fused at the fusion point by means of a Gaussian mixture model fusing density peaks; finally, the fusion-point pixels with the largest gray values are connected to obtain the third image. The pixel-level fusion adopts the maximum-value method and takes the larger pixel value at each fusion point as a connection point, effectively marking the pathological section area against its surroundings, which helps to show the before-and-after changes of the pathological section area more intuitively and improves the fusion effect of pathological images.
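For registered images, the maximum-value fusion named above reduces to a per-pixel maximum; a minimal sketch (the registration step itself is assumed already done):

```python
import numpy as np

def max_value_fuse(img1, img2):
    """Pixel-level fusion by the maximum-value method (S301-S304 sketch).

    At every aligned pixel the larger gray value of the two images is kept,
    so AI-predicted markings survive into the fused third image.
    """
    a = np.asarray(img1, dtype=np.uint8)
    b = np.asarray(img2, dtype=np.uint8)
    assert a.shape == b.shape, "images must be registered to the same shape"
    return np.maximum(a, b)               # per-pixel maximum gray value
```

Because bright AI annotations (high gray values) dominate the maximum, they are carried into the fused result wherever they exceed the underlying slice pixels.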
Example 5: as shown in fig. 5, on the basis of embodiment 4, the process for selecting a pixel at a fusion point according to the embodiment of the present invention includes the following steps:
S3031: computing the average of the sorted pixel gray values, obtaining the covariance matrix of the corresponding class, and constructing the initial parameters of the Gaussian mixture model fusing density peaks;
S3032: estimating the parameters by the expectation-maximization algorithm to construct the Gaussian mixture model fusing density peaks;
S3033: checking the probability of the sorted pixel gray values according to Bayes' rule, and inputting the sorted pixel gray values into the Gaussian mixture model fusing density peaks to obtain the pixel gray values at the density peaks;
The working principle and beneficial effects of the technical scheme are as follows: in this embodiment, the average of the sorted pixel gray values is first computed, the covariance matrix of the corresponding class is obtained, and the initial parameters of the Gaussian mixture model fusing density peaks are constructed; the parameters are then estimated by the expectation-maximization algorithm to build the model; finally, the probability of the sorted pixel gray values is checked according to Bayes' rule and the sorted values are input to the model to obtain the pixel gray values at the density peaks. The Gaussian mixture model fusing density peaks thereby selects the pixel gray value and determines the pixel fused at each pixel-level fusion point, providing accurate data for the contour selection of the pathological section, improving the precision of contour extraction, and effectively improving the image fusion effect.
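The EM fitting of the Gaussian mixture model in steps S3031–S3033 can be sketched for 1-D gray values; the density-peak initialization is simplified here to a min/max initialization, an assumption not taken from the patent:

```python
import numpy as np

def gmm_em_1d(values, n_iter=50):
    """Minimal 2-component 1-D Gaussian mixture fit by EM (S3031-S3033 sketch).

    Returns means, variances and weights; the component with the larger mean
    models the peak gray values at the fusion points.
    """
    x = np.asarray(values, dtype=float)
    mu = np.array([x.min(), x.max()])          # simplified initial parameters
    var = np.array([x.var() + 1e-6] * 2)
    w = np.array([0.5, 0.5])
    for _ in range(n_iter):
        # E-step: posterior responsibility of each component (Bayes' rule)
        pdf = np.exp(-(x[:, None] - mu) ** 2 / (2 * var)) / np.sqrt(2 * np.pi * var)
        resp = w * pdf
        resp /= resp.sum(axis=1, keepdims=True)
        # M-step: re-estimate parameters from the responsibilities
        nk = resp.sum(axis=0)
        mu = (resp * x[:, None]).sum(axis=0) / nk
        var = (resp * (x[:, None] - mu) ** 2).sum(axis=0) / nk + 1e-6
        w = nk / len(x)
    return mu, var, w
```

On well-separated gray-value clusters the two means converge to the cluster centers, so the high-mean component identifies the density peak from which the fused pixel is drawn.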
Example 6: as shown in fig. 6, on the basis of embodiment 4, the process of connecting the pixels of the fusion point with the largest gray value provided in the embodiment of the present invention includes the following steps:
S3041: selecting the pathological-section edge identification area and the pathological-section peripheral identification area according to the fusion-point pixels with the largest gray values to obtain an initial third image;
S3042: taking any point on the edge line corresponding to the pathological-section edge identification area and peripheral identification area of the initial third image as the origin;
S3043: scanning pixel lines in two opposite directions, a first and a second, parallel to the edge line; taking the first pixel point found in the first direction whose gray value meets a preset condition and differs from the gray value of the edge line as the first-direction connection point, and the first such pixel point found in the second direction as the second-direction connection point;
S3044: joining the first-direction and second-direction connection points with a connecting line of pixel gray value 0 or 255, completing the extraction of the third image contour and obtaining the third image;
The working principle and beneficial effects of the technical scheme are as follows: in this embodiment, the pathological-section edge identification area and peripheral identification area are first selected according to the fusion-point pixels with the largest gray values, giving an initial third image; second, any point on the edge line corresponding to the non-closed section of these identification areas is taken as the origin; pixel lines are then scanned in two opposite directions parallel to the edge line, and the first pixel point in each direction whose gray value meets the preset condition and differs from that of the edge line is taken as the connection point for that direction; finally, the two connection points are joined by a connecting line of pixel gray value 0 or 255, completing the extraction of the third image contour and obtaining the third image. Determining the origin of the contour from the edge line of the edge and peripheral identification areas achieves an accurate demarcation of the third image contour, and scanning pixel lines in the two opposite directions parallel to the edge line sets the remaining contour points, further improving both the efficiency and the precision of extracting the third image.
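The bidirectional scan of steps S3041–S3044 can be sketched on a single pixel row; the preset condition is assumed to be "gray value differs from the edge value", and the connecting line is drawn with gray value 255:

```python
import numpy as np

def connect_edge_points(img, row, col, edge_val=128, fill=255):
    """Bidirectional pixel-line scan and connection (S3041-S3044 sketch).

    From an origin (row, col) on an edge line, scan the row in the two
    opposite directions while pixels still match the edge value; the last
    matching pixel in each direction (adjacent to a differing one) is that
    direction's connection point. The span between the two points is then
    drawn with gray value `fill` (0 or 255 in the patent).
    """
    out = np.asarray(img, dtype=np.uint8).copy()
    line = out[row]
    left = col
    while left > 0 and line[left - 1] == edge_val:            # first direction
        left -= 1
    right = col
    while right < len(line) - 1 and line[right + 1] == edge_val:  # second direction
        right += 1
    out[row, left:right + 1] = fill       # connecting line between the points
    return out
```

A full implementation would repeat this scan along every line parallel to the edge, accumulating the contour of the third image.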
Example 7: as shown in fig. 7, on the basis of embodiment 1 to embodiment 6, the system for fusing an artificial intelligent prediction image and a digital pathology image provided in the embodiment of the present invention includes:
the image preprocessing module, which is responsible for preprocessing the first image predicted by artificial intelligence (AI);
the feature extraction module, which is responsible for extracting image features such as color, texture and shape from the second image to be fused and from the preprocessed first image;
the feature matching module, which is responsible for matching the extracted image features and determining similar areas or objects in the first and second images so as to fix the fusion points;
the image fusion module, which is responsible for performing pixel-level fusion of the first and second images according to their matching result to obtain a third image;
the post-processing module, which is responsible for post-processing such as color adjustment, sharpening and artifact removal on the fused third image to obtain a fourth image, the fourth image being the fused image;
the result output module, which is responsible for outputting the fused fourth image;
The working principle and beneficial effects of the technical scheme are as follows: in this embodiment the image preprocessing module preprocesses the first image predicted by the artificial intelligence (AI); the feature extraction module extracts image features such as color, texture and shape from the second image to be fused and from the preprocessed first image; the feature matching module matches the extracted image features and determines similar areas or objects in the first and second images so as to fix the fusion points; the image fusion module fuses the first and second images at the pixel level according to their matching result to obtain a third image; the post-processing module performs post-processing such as color adjustment, sharpening and artifact removal on the fused third image to obtain a fourth image, the fused image; and the result output module outputs the fused fourth image. In this scheme the image preprocessing module denoises, enhances and segments the original image, improving the quality and readability of the first image; feature extraction is performed on different types of pathological sections, extracting feature points such as nuclear morphology and cell density; image registration aligns the image whose pathological-section information was predicted by the AI with the original slice so that the images are spatially aligned; and the registered images are fused with a suitable algorithm and data source to obtain more comprehensive and accurate pathological information.
Example 8: as shown in fig. 8, on the basis of embodiment 7, an image fusion module provided in an embodiment of the present invention includes:
the feature map acquisition sub-module is responsible for acquiring an image feature map according to the matching result of the image features of the first image and the second image;
the pixel gray value calculation submodule is responsible for calculating the pixel gray value of the corresponding image feature at the fusion point according to the image feature map;
the model construction sub-module, which is responsible for sorting the pixel gray values and selecting the pixel with the largest gray value as the pixel fused at the fusion point by means of a Gaussian mixture model fusing density peaks;
the pixel connection sub-module, which is responsible for connecting the fusion-point pixels with the largest gray values to obtain a third image;
The working principle and beneficial effects of the technical scheme are as follows: the feature map acquisition sub-module of this embodiment obtains an image feature map according to the matching result of the image features of the first and second images; the pixel gray value calculation sub-module calculates, from the feature map, the pixel gray value of the corresponding image feature at each fusion point; the model construction sub-module sorts the pixel gray values and selects the pixel with the largest gray value as the pixel fused at the fusion point by means of a Gaussian mixture model fusing density peaks; the pixel connection sub-module connects the fusion-point pixels with the largest gray values to obtain the third image. The pixel-level fusion adopts the maximum-value method and takes the larger pixel value at each fusion point as a connection point, effectively marking the pathological section area against its surroundings, which helps to show the before-and-after changes of the pathological section area more intuitively and improves the fusion effect of pathological images.
It will be apparent to those skilled in the art that various modifications and variations can be made to the present invention without departing from the spirit or scope of the invention. Thus, it is intended that the present invention also include such modifications and alterations insofar as they come within the scope of the appended claims or the equivalents thereof.

Claims (10)

1. A method for fusing an artificial intelligence predictive image with a digital pathology image, comprising the steps of:
preprocessing a first image predicted by artificial intelligence;
extracting image features from a second image to be fused and the preprocessed first image, matching the extracted image features, determining similar areas or objects in the first image and the second image, and fixing fusion points;
according to the matching result of the image characteristics of the first image and the second image, fusing the pixel levels to obtain a third image;
and performing post-processing on the third image and outputting the processed fourth image, the fourth image being the fused image, and performing quality evaluation on the fused fourth image to determine whether the fusion effect meets the requirement.
2. The method of claim 1, wherein the image features comprise colors, textures and shapes of the image after artificial intelligence prediction.
3. The method of fusion of an artificial intelligence predictive image with a digital pathology image according to claim 1, wherein the preprocessing process comprises the steps of:
receiving a first image carrying artificial-intelligence-predicted pixel values, acquiring the horizontal and vertical resolutions of the pathological section area in the first image, and calculating the difference between each resolution's pixel value and a preset pixel value; readjusting the horizontal and vertical resolutions of the pathological section area according to a pixel-value resizing factor to obtain the adjusted first image;
converting the adjusted first image into a gray-scale image by separating its color channels and performing gray-scale processing on each channel independently, then converting to the HSV color mode to obtain the color-space-converted first image;
selecting a wavelet base and a number of decomposition levels, and performing wavelet decomposition on the noisy, color-space-converted first image to obtain the corresponding wavelet decomposition coefficients; for each decomposition scale, selecting a threshold and applying threshold quantization to the high-frequency coefficients to obtain the estimated wavelet coefficients; and reconstructing from the low-frequency coefficients of the wavelet decomposition and the threshold-quantized high-frequency coefficients of each level to obtain the denoised first image.
4. A method of fusing an artificial intelligence predictive image with a digital pathology image according to claim 3, wherein the pixel value resizing factor is configured such that, when the difference exceeds the preset pixel value by different amounts, the horizontal and vertical resolutions are enlarged from their original sizes according to a curve function, the resolutions growing together with the pixel value resizing factor as the enlargement proceeds.
5. The method of fusing an artificial intelligence predictive image with a digital pathology image according to claim 1, wherein the process of matching the extracted image features comprises the steps of:
collecting image features of a second image to be fused, wherein the image features comprise the color, texture and shape of the second image; forming a graph from the boundaries of the pixel sets of the connected regions of the image features, obtaining the area or object corresponding to each connected region, and judging whether the area corresponding to each connected region matches an area or object in the first image;
matching the image features in the first image with the image features of the second image;
taking the matching positions of the areas or objects corresponding to the connected regions in the second image as starting positions, obtaining the image features at each matching position, and computing the matching degree of each matching position with respect to the corresponding area or object in the first image;
and recording each matching position whose matching degree reaches a threshold as corresponding to the image feature in the first image, and taking that matching position as a fusion point.
6. The method of fusion of an artificial intelligence predictive image with a digital pathology image according to claim 1, wherein the process of performing the fusion at the pixel level comprises the steps of:
obtaining an image feature map according to the matching result of the image features of the first image and the second image;
calculating pixel gray values of corresponding image features at fusion points according to the image feature map;
arranging the pixel gray values, and selecting, by means of a density-peak-fused Gaussian mixture model, the pixel with the largest gray value as the pixel at the fusion point;
and connecting the fusion-point pixels with the largest gray values to obtain a third image.
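The basic pixel-level rule of claim 6 (before the Gaussian-mixture refinement of claim 7) can be illustrated as follows; this is a sketch under the assumption that "selecting the pixel with the largest gray value" means a per-point maximum, and the names `fuse_at_points`, `img1`, `img2` are hypothetical.

```python
import numpy as np

def fuse_at_points(img1, img2, fusion_points):
    """Pixel-level fusion sketch: at each fusion point (row, col),
    keep whichever source pixel has the larger gray value; all other
    pixels are taken from the first image unchanged."""
    fused = img1.copy()
    for (r, c) in fusion_points:
        fused[r, c] = max(img1[r, c], img2[r, c])
    return fused

img1 = np.array([[1, 5], [3, 4]])
img2 = np.array([[9, 2], [0, 7]])
print(fuse_at_points(img1, img2, [(0, 0), (1, 1)]))  # [[9 5] [3 7]]
```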
7. The method of claim 6, wherein the process of selecting pixels at the fusion point comprises the steps of:
calculating the average of the arranged pixel gray values, obtaining the covariance matrix of the corresponding class, and constructing the initial parameters of the density-peak-fused Gaussian mixture model;
estimating the parameters with the expectation-maximization (EM) algorithm to construct the density-peak-fused Gaussian mixture model;
and evaluating the probability of the arranged pixel gray values according to Bayes' rule, inputting them into the density-peak-fused Gaussian mixture model, and obtaining the pixel gray values at the density peak.
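The EM procedure of claim 7 can be sketched for one-dimensional gray values as follows. Illustrative only: the patent's exact initialization and the "density peak" criterion are not specified, so the highest-mean component is assumed to be the peak, and all names (`em_gmm_1d`, `density_peak_values`) are hypothetical.

```python
import numpy as np

def em_gmm_1d(x, k=2, iters=50):
    """Minimal 1-D EM for a Gaussian mixture.

    Initial means are spread over the data range and the initial
    variances come from the sample variance; the E-step applies Bayes'
    rule to obtain posterior responsibilities, and the M-step
    re-estimates means, variances and mixture weights.
    """
    mu = np.linspace(x.min(), x.max(), k)
    var = np.full(k, x.var() + 1e-6)
    w = np.full(k, 1.0 / k)
    for _ in range(iters):
        # E-step: posterior probability of each component (Bayes' rule).
        pdf = w * np.exp(-(x[:, None] - mu) ** 2 / (2 * var)) \
            / np.sqrt(2 * np.pi * var)
        resp = pdf / pdf.sum(axis=1, keepdims=True)
        # M-step: update parameters from the responsibilities.
        nk = resp.sum(axis=0)
        mu = (resp * x[:, None]).sum(axis=0) / nk
        var = (resp * (x[:, None] - mu) ** 2).sum(axis=0) / nk + 1e-6
        w = nk / len(x)
    return mu, var, w, resp

def density_peak_values(gray_values):
    """Return the gray values assigned to the highest-mean component."""
    x = np.sort(np.asarray(gray_values, dtype=float))  # "arranged" values
    mu, var, w, resp = em_gmm_1d(x)
    peak = int(np.argmax(mu))  # assumed density-peak component
    return x[resp[:, peak] > 0.5]
```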
8. The method of fusing an artificial intelligence predictive image with a digital pathology image according to claim 6, wherein the process of connecting the fusion-point pixels with the largest gray values comprises the steps of:
selecting a pathological-section edge identification area and a pathological-section periphery identification area according to the fusion-point pixel with the largest gray value, to obtain an initial third image;
taking any point on the edge line corresponding to the pathological-section edge identification area and the pathological-section periphery identification area of the initial third image as the origin;
scanning pixel lines in mutually opposite first and second directions parallel to the edge line, taking the first pixel in the first direction that meets a preset condition and whose gray value differs from that of the edge line as the first-direction connection point, and the first pixel in the second direction that meets the preset condition and whose gray value differs from that of the edge line as the second-direction connection point;
and connecting the first-direction connection point and the second-direction connection point with a connecting line whose pixel gray value is 0 or 255, thereby completing the extraction of the third image contour to obtain the third image.
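The bidirectional scan of claim 8 can be illustrated on a single line of pixels. This sketch assumes the "preset condition" is simply "gray value differs from the edge line's", which the patent leaves unspecified; the name `find_connection_points` and the choice of 0 for the connecting line are hypothetical.

```python
import numpy as np

def find_connection_points(row, edge_value, origin):
    """Scan a 1-D pixel line parallel to the edge line, outward from
    `origin` in two opposite directions, for the first pixel whose gray
    value differs from `edge_value`; the two hits become the connection
    points, and the span between them is filled with gray value 0."""
    left = right = None
    for i in range(origin, -1, -1):        # first direction
        if row[i] != edge_value:
            left = i
            break
    for i in range(origin, len(row)):      # second direction
        if row[i] != edge_value:
            right = i
            break
    if left is not None and right is not None:
        row[left:right + 1] = 0            # connecting line, gray value 0
    return left, right, row
```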
9. A system for fusing an artificial intelligence predictive image with a digital pathology image, comprising:
the image preprocessing module, responsible for preprocessing the first image predicted by artificial intelligence (AI);
the feature extraction module, responsible for extracting color, texture and shape image features from the second image to be fused and from the preprocessed first image;
the feature matching module, responsible for matching the extracted image features and determining similar regions or objects in the first image and the second image so as to determine the fusion points;
the image fusion module, responsible for carrying out pixel-level fusion of the first image and the second image according to their matching result to obtain a third image;
the post-processing module, responsible for carrying out color adjustment, sharpening and artifact-removal post-processing on the fused third image to obtain a fourth image, the fourth image being the fused image;
and the result output module, responsible for outputting the fused fourth image.
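The six modules of claim 9 form a simple pipeline, which can be wired up as below. Every function body here is a deliberately trivial stand-in (normalization, elementwise matching, per-point maximum, clipping); only the module order comes from the claim, and all function names are hypothetical.

```python
import numpy as np

def preprocess(img):
    return img.astype(float) / 255.0                       # preprocessing module

def extract_features(img):
    return img.ravel()                                     # feature extraction module

def match_features(f1, f2):
    # Stand-in matching: positions where the two feature vectors agree.
    return np.flatnonzero(np.isclose(f1, f2, atol=0.1))    # feature matching module

def pixel_fuse(a, b, pts):
    out = a.copy()
    out.flat[pts] = np.maximum(a.flat[pts], b.flat[pts])   # image fusion module
    return out

def postprocess(img):
    return np.clip(img, 0.0, 1.0)                          # post-processing module

def fuse_pipeline(first_image, second_image):
    pre = preprocess(first_image)
    sec = preprocess(second_image)
    pts = match_features(extract_features(pre), extract_features(sec))
    third = pixel_fuse(pre, sec, pts)
    return postprocess(third)   # fourth image, handed to the result output module
```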
10. The system for fusing an artificial intelligence predictive image with a digital pathology image according to claim 9, wherein the image fusion module comprises:
the feature map acquisition sub-module, responsible for acquiring an image feature map according to the matching result of the image features of the first image and the second image;
the pixel gray value calculation sub-module, responsible for calculating the pixel gray values of the corresponding image features at the fusion points according to the image feature map;
the model construction sub-module, responsible for arranging the pixel gray values and selecting, by means of the density-peak-fused Gaussian mixture model, the pixel with the largest gray value as the pixel at the fusion point;
and the pixel connection sub-module, responsible for connecting the fusion-point pixels with the largest gray values to obtain the third image.
CN202310889735.3A 2023-07-19 2023-07-19 Method and system for fusing artificial intelligent prediction image and digital pathological image Active CN117036878B (en)

Priority Applications (1)

Application Number Priority Date Filing Date Title
CN202310889735.3A CN117036878B (en) 2023-07-19 2023-07-19 Method and system for fusing artificial intelligent prediction image and digital pathological image


Publications (2)

Publication Number Publication Date
CN117036878A true CN117036878A (en) 2023-11-10
CN117036878B CN117036878B (en) 2024-03-26

Family

ID=88623533

Family Applications (1)

Application Number Title Priority Date Filing Date
CN202310889735.3A Active CN117036878B (en) 2023-07-19 2023-07-19 Method and system for fusing artificial intelligent prediction image and digital pathological image

Country Status (1)

Country Link
CN (1) CN117036878B (en)

Citations (8)

* Cited by examiner, † Cited by third party
Publication number Priority date Publication date Assignee Title
US20120263356A1 (en) * 2011-04-12 2012-10-18 Sony Corporation Method for efficient representation and processing of color pixel data in digital pathology images
KR20140131083A (en) * 2013-05-03 2014-11-12 삼성전자주식회사 Medical imaging apparatus and control method for the same
US20160155222A1 (en) * 2014-11-28 2016-06-02 Samsung Electronics Co., Ltd. Medical image processing apparatus and medical image registration method using the same
US20200134815A1 (en) * 2018-10-30 2020-04-30 Diagnocat, Inc. System and Method for an Automated Parsing Pipeline for Anatomical Localization and Condition Classification
US20200143526A1 (en) * 2017-08-23 2020-05-07 Boe Technology Group Co., Ltd. Image processing method and device
US20200305706A1 (en) * 2017-12-11 2020-10-01 Universitat Politecnica De Catalunya Image processing method for glaucoma detection and computer program products thereof
CN112150564A (en) * 2020-08-21 2020-12-29 哈尔滨理工大学 Medical image fusion algorithm based on deep convolutional neural network
CN114638292A (en) * 2022-03-10 2022-06-17 中国医学科学院北京协和医院 Artificial intelligence pathology auxiliary diagnosis system based on multi-scale analysis


Non-Patent Citations (2)

* Cited by examiner, † Cited by third party
Title
STEPHANIE ROBERTSON: "Digital image analysis in breast pathology—from image processing techniques to artificial intelligence", 《TRANSLATIONAL RESEARCH》, vol. 194 *
王传鹏: "Establishment of an artificial-intelligence-based prognostic model for diabetic nephropathy", 《CHINA DOCTORAL DISSERTATIONS FULL-TEXT DATABASE, MEDICINE AND HEALTH SCIENCES》 *


Similar Documents

Publication Publication Date Title
WO2018120942A1 (en) System and method for automatically detecting lesions in medical image by means of multi-model fusion
EP2790575B1 (en) Method and apparatus for the assessment of medical images
WO2023137914A1 (en) Image processing method and apparatus, electronic device, and storage medium
CN115496771A (en) Brain tumor segmentation method based on brain three-dimensional MRI image design
CN112071418B (en) Gastric cancer peritoneal metastasis prediction system and method based on enhanced CT image histology
CN112785632A (en) Cross-modal automatic registration method for DR (digital radiography) and DRR (digital radiography) images in image-guided radiotherapy based on EPID (extended medical imaging)
CN112686875A (en) Tumor prediction method of PET-CT image based on neural network and computer readable storage medium
CN115049666A (en) Endoscope virtual biopsy device based on color wavelet covariance depth map model
CN117036878B (en) Method and system for fusing artificial intelligent prediction image and digital pathological image
EP4167184A1 (en) Systems and methods for plaque identification, plaque composition analysis, and plaque stability detection
Iddrisu et al. 3D reconstructions of brain from MRI scans using neural radiance fields
CN116152235A (en) Cross-modal synthesis method for medical image from CT (computed tomography) to PET (positron emission tomography) of lung cancer
CN116402735A (en) Endoscope image reconstruction method based on multidirectional visual angle calibration
CN113989588A (en) Self-learning-based intelligent evaluation system and method for pentagonal drawing test
CN116721143B (en) Depth information processing device and method for 3D medical image
CN116580446B (en) Iris characteristic recognition method and system for vascular diseases
CN115147378B (en) CT image analysis and extraction method
CN117437514B (en) Colposcope image mode conversion method based on CycleGan
Sreelekshmi et al. A Review on Multimodal Medical Image Fusion
US20230316638A1 (en) Determination Of Illumination Parameters In Medical Image Rendering
CN116797828A (en) Method and device for processing oral full-view film and readable storage medium
Zhang et al. PET and MRI Medical Image Fusion Based on Densely Connected Convolutional Networks
CN117710233A (en) Depth of field extension method and device for endoscopic image
Jagadeeshwar et al. Medical Image Contrast Enhancement using Tuned Fuzzy Logic Intensification for COVID-19 Detection Applications
CN117173065A (en) Anatomical medical image fusion method based on texture perception and pixel intensity correlation

Legal Events

Date Code Title Description
PB01 Publication
SE01 Entry into force of request for substantive examination
GR01 Patent grant