WO2023142615A1 - Image processing method, apparatus, device, readable storage medium and program product - Google Patents

Image processing method, apparatus, device, readable storage medium and program product

Info

Publication number
WO2023142615A1
WO2023142615A1 (PCT/CN2022/132171)
Authority
WO
WIPO (PCT)
Prior art keywords
image
sample
preset
images
pseudo
Prior art date
Application number
PCT/CN2022/132171
Other languages
English (en)
French (fr)
Inventor
廖俊
姚建华
刘月平
张玲玲
Original Assignee
腾讯科技(深圳)有限公司
河北医科大学第四医院(河北省肿瘤医院)
Priority date
Filing date
Publication date
Application filed by 腾讯科技(深圳)有限公司 and 河北医科大学第四医院(河北省肿瘤医院)
Priority to US 18/224,201 (published as US20230368379A1)
Publication of WO2023142615A1

Classifications

    • G - PHYSICS
    • G06 - COMPUTING; CALCULATING OR COUNTING
    • G06T - IMAGE DATA PROCESSING OR GENERATION, IN GENERAL
    • G06V - IMAGE OR VIDEO RECOGNITION OR UNDERSTANDING
    • G06T 7/0012 - Image analysis; Inspection of images, e.g. flaw detection; Biomedical image inspection
    • G06T 7/11 - Segmentation; Edge detection; Region-based segmentation
    • G06T 7/90 - Determination of colour characteristics
    • G06V 10/25 - Image preprocessing; Determination of region of interest [ROI] or a volume of interest [VOI]
    • G06V 10/56 - Extraction of image or video features relating to colour
    • G06V 10/60 - Extraction of image or video features relating to illumination properties, e.g. using a reflectance or lighting model
    • G06V 10/764 - Recognition or understanding using pattern recognition or machine learning; classification, e.g. of video objects
    • G06V 20/70 - Scenes; Scene-specific elements; Labelling scene content, e.g. deriving syntactic or semantic representations
    • G06T 2207/10032 - Image acquisition modality: Satellite or aerial image; Remote sensing
    • G06T 2207/10036 - Image acquisition modality: Multispectral image; Hyperspectral image
    • G06T 2207/10056 - Image acquisition modality: Microscopic image
    • G06T 2207/20081 - Special algorithmic details: Training; Learning
    • G06T 2207/20084 - Special algorithmic details: Artificial neural networks [ANN]
    • G06T 2207/30024 - Biomedical image processing: Cell structures in vitro; Tissue sections in vitro
    • G06T 2207/30096 - Biomedical image processing: Tumor; Lesion
    • G06V 2201/031 - Recognition of patterns in medical or anatomical images of internal organs

Definitions

  • the embodiments of the present application relate to the field of medical data processing, and in particular to an image processing method, device, equipment, readable storage medium, and program product.
  • Pathological specimens obtained by sampling are usually studied further to find and identify the boundaries of tumor tissue in the pathological specimens, so that the tumor tissue can be analyzed more accurately and more valuable medical analysis results can be obtained.
  • In the related art, after a surgically resected pathological specimen is fixed with formalin, the pathologist usually determines the boundary of the tumor tissue by visual observation; alternatively, X-ray equipment is used to scan the pathological specimen, the doctor interprets the X-ray image to determine the boundaries of the tumor tissue, and the tumor tissue sampling work is then carried out.
  • Embodiments of the present application provide an image processing method, device, equipment, readable storage medium, and program product, which use the spectral characteristics of the sample to be analyzed at different wavelengths to analyze the sample to be analyzed and improve the accuracy of pathological sampling.
  • the technical scheme is as follows.
  • In one aspect, an image processing method is provided, comprising:
  • acquiring a sample image, the sample image including an image obtained by collecting a sample to be analyzed within a preset wavelength band;
  • acquiring a first image corresponding to at least one preset wavelength in the preset wavelength band from the sample image, to obtain a pseudo-color image;
  • performing region division on the sample image according to differences in sample element types in the sample image to obtain a region division result, the sample element types including an element type to be identified; and
  • determining, based on the pseudo-color image and the region division result, an image region in the sample image that includes the element type to be identified.
  • In another aspect, an image processing device is provided, comprising:
  • a sample acquisition module configured to acquire a sample image, the sample image including an image obtained by collecting a sample to be analyzed within a preset wavelength band;
  • an image acquisition module configured to acquire a first image corresponding to at least one preset wavelength in the preset wavelength band from the sample image, to obtain a pseudo-color image;
  • a region division module configured to perform region division on the sample image according to differences in sample element types in the sample image, to obtain a region division result, the sample element types including an element type to be identified; and
  • an area determining module configured to determine, based on the pseudo-color image and the region division result, an image area in the sample image that includes the element type to be identified.
  • In another aspect, a computer device is provided, including a processor and a memory, where at least one instruction, at least one program, a code set or an instruction set is stored in the memory; the at least one instruction, the at least one program, the code set or the instruction set is loaded and executed by the processor to implement the image processing method described in any one of the above embodiments of the present application.
  • In another aspect, a computer-readable storage medium is provided, in which at least one instruction, at least one program, a code set or an instruction set is stored; the at least one instruction, the at least one program, the code set or the instruction set is loaded and executed by a processor to implement the image processing method described in any one of the above embodiments of the present application.
  • In another aspect, a computer program product or computer program is provided, comprising computer instructions stored in a computer-readable storage medium.
  • The processor of the computer device reads the computer instructions from the computer-readable storage medium and executes them, so that the computer device performs the image processing method described in any one of the above embodiments.
  • The sample image is obtained by collecting the sample to be analyzed within the preset wavelength band, at least one preset wavelength with a better discrimination effect is selected from the preset band, the first image corresponding to the preset wavelength is determined from the sample image, and the first image is processed to obtain a pseudo-color image, which more accurately reflects the advantage of the preset wavelength.
  • In addition, the sample image is divided into regions according to the sample element types to obtain the region division result.
  • FIG. 1 is a schematic diagram of an implementation environment provided by an exemplary embodiment of the present application
  • Fig. 2 is a flowchart of an image processing method provided by an exemplary embodiment of the present application
  • Fig. 3 is a schematic diagram of acquiring a sample image provided by an exemplary embodiment of the present application.
  • Fig. 4 is a schematic diagram of sample images in a preset band provided by an exemplary embodiment of the present application.
  • FIG. 5 is a graph of spectral characteristics corresponding to a sample image provided by an exemplary embodiment of the present application.
  • Fig. 6 is a schematic diagram of image processing of a sample to be analyzed provided by an exemplary embodiment of the present application
  • FIG. 7 is a schematic diagram of a pseudo-color image obtained through synthesis provided by an exemplary embodiment of the present application.
  • Fig. 8 is a flowchart of an image processing method provided by another exemplary embodiment of the present application.
  • Fig. 9 is a schematic diagram of taking pathological samples according to another exemplary embodiment of the present application.
  • FIG. 10 is a flow chart of region division of a sample image provided by an exemplary embodiment of the present application.
  • Fig. 11 is a graph of spectral characteristics corresponding to hollow viscera provided by an exemplary embodiment of the present application.
  • Fig. 12 is a graph of spectral characteristics corresponding to the kidney provided by an exemplary embodiment of the present application.
  • Fig. 13 is a graph of spectral characteristics corresponding to mammary glands provided by an exemplary embodiment of the present application.
  • Fig. 14 is a graph of spectral characteristics corresponding to the lung provided by an exemplary embodiment of the present application.
  • Fig. 15 is a schematic diagram of a single-color fill prompt provided by an exemplary embodiment of the present application.
  • Fig. 16 is a flow chart of obtaining prediction results provided by an exemplary embodiment of the present application.
  • Fig. 17 is a schematic representation of four types of tissue classification images provided by an exemplary embodiment of the present application.
  • Fig. 18 is a schematic representation of different images of renal cancer provided by an exemplary embodiment of the present application.
  • Fig. 19 is a schematic representation of different images of the mammary gland provided by an exemplary embodiment of the present application.
  • Fig. 20 is a schematic representation of different images of the mammary gland provided by another exemplary embodiment of the present application.
  • Fig. 21 is a structural block diagram of an image processing device provided by an exemplary embodiment of the present application.
  • Fig. 22 is a structural block diagram of an image processing device provided by another exemplary embodiment of the present application.
  • Fig. 23 is a structural block diagram of a server provided by an exemplary embodiment of the present application.
  • Artificial Intelligence (AI) is a theory, method, technology, and application system that uses digital computers, or machines controlled by digital computers, to simulate, extend, and expand human intelligence, perceive the environment, acquire knowledge, and use knowledge to obtain the best results.
  • artificial intelligence is a comprehensive technique of computer science that attempts to understand the nature of intelligence and produce a new kind of intelligent machine that can respond in a similar way to human intelligence.
  • Artificial intelligence is to study the design principles and implementation methods of various intelligent machines, so that the machines have the functions of perception, reasoning and decision-making.
  • Artificial intelligence technology is a comprehensive subject that involves a wide range of fields, including both hardware-level technology and software-level technology.
  • Artificial intelligence basic technologies generally include technologies such as sensors, dedicated artificial intelligence chips, cloud computing, distributed storage, big data processing technology, operation/interaction systems, and mechatronics.
  • Artificial intelligence software technology mainly includes several major directions such as computer vision technology, speech processing technology, natural language processing technology, and machine learning/deep learning.
  • Machine Learning (ML) is a multi-field interdisciplinary subject involving probability theory, statistics, approximation theory, convex analysis, algorithm complexity theory, and other disciplines. It studies how computers simulate or implement human learning behaviors to acquire new knowledge or skills and reorganize existing knowledge structures to continuously improve their performance.
  • Machine learning is the core of artificial intelligence and the fundamental way to make computers intelligent, and its application pervades all fields of artificial intelligence.
  • Machine learning and deep learning usually include techniques such as artificial neural networks, belief networks, reinforcement learning, transfer learning, inductive learning, and teaching and learning.
  • In the related art, pathologists usually determine the boundaries of tumor tissue by visual observation; or X-ray equipment is used to scan the pathological specimen, the doctor interprets the X-ray image to determine the boundaries of the tumor tissue, and the tumor tissue sampling work is then carried out.
  • X-ray equipment is also difficult to achieve widespread popularization due to its high price.
  • an image processing method which utilizes the spectral characteristics of the sample to be analyzed at different wavelengths to analyze the sample to be analyzed, so as to improve the accuracy of pathological material collection.
  • The image processing method provided by this application can be applied in at least one of the following scenarios.
  • For example, the sample to be analyzed is tissue with lesions, such as the kidney, breast, etc.
  • A pseudo-color image is obtained after processing the first image corresponding to the preset wavelength, the sample image is divided into regions according to the sample element types in the sample image to obtain the region division result, and a comprehensive analysis of the region division result and the pseudo-color image finally determines, more accurately, the image area corresponding to the tumor tissue, thereby realizing the identification of the image area.
  • Food safety is related to life safety. Food often contains different components. Unhealthy components or incorrect component ratios may cause food safety accidents.
  • The food to be detected is used as the sample to be analyzed: the sample to be analyzed is collected within a preset wavelength band to obtain a sample image, the first image corresponding to the preset wavelength is selected from the sample image to obtain a pseudo-color image, and the sample image is divided into regions according to the sample element types in the sample image to obtain a region division result. A comprehensive analysis of the region division result and the pseudo-color image finally determines, more accurately, the regions corresponding to different components in the food to be detected, including the image area corresponding to an unhealthy component, thereby realizing the identification of the image area.
  • an application program having an image collection function is installed in the terminal 110 .
  • the terminal 110 is used to send the sample image to the server 120 .
  • the server 120 can determine, through the image processing model 121 , the image area in the sample image that includes the element type to be identified according to the spectral information corresponding to the sample image, and mark the image area in a special way and feed it back to the terminal 110 for display.
  • The application of the image processing model 121 is as follows: a preset wavelength is selected from the preset wavelength band; the first image corresponding to the preset wavelength is determined from the sample image according to the preset wavelength, and the first image is processed to obtain a pseudo-color image; in addition, the sample image is divided into regions according to the sample element types in the sample image to obtain the region division result corresponding to the sample image; the image region in the sample image is then determined by combining the region division result and the pseudo-color image, and this image region can be used to indicate the position information of the element type to be identified.
  • the sample to be analyzed is a pathological sample, and after the sample to be analyzed is analyzed, the determined image area is the area corresponding to the tumor tissue, thereby more accurately determining the area information corresponding to the tumor tissue.
  • The above process is only a non-exhaustive example of how the image processing model 121 is applied.
  • The above-mentioned terminal includes, but is not limited to, mobile terminals such as mobile phones, tablet computers, portable laptop computers, intelligent voice interaction devices, smart home appliances, and vehicle-mounted terminals, and can also be implemented as a desktop computer. The above-mentioned server can be an independent physical server, a server cluster or distributed system composed of multiple physical servers, or a cloud server providing basic cloud computing services such as cloud services, cloud databases, cloud computing, cloud functions, cloud storage, network services, cloud communications, middleware services, domain name services, security services, content delivery networks (CDN), big data, and artificial intelligence platforms.
  • Cloud technology refers to a hosting technology that unifies a series of resources such as hardware, applications, and networks in a wide area network or a local area network to realize data calculation, storage, processing, and sharing.
  • the above server can also be implemented as a node in the blockchain system.
  • The image processing method provided by this application is described below in combination with the above term definitions and application scenarios. Taking the method applied to a server as an example, as shown in FIG. 2, the method includes the following steps 210 to 240.
  • Step 210 acquire a sample image.
  • the sample image includes an image obtained by collecting a sample to be analyzed within a preset waveband.
  • A band is used to indicate a range of wavelengths; for example, the visible light band indicates the wavelength range between 380nm and 750nm, the near-infrared band indicates the wavelength range between 750nm and 2500nm, the mid-infrared band indicates the wavelength range between 2500nm and 25000nm, etc.
  • The illumination light source providing the wavelengths includes not only a halogen lamp, but also an incandescent lamp, a Light Emitting Diode (LED) light source, and the like.
  • The preset wavelength band is used to indicate a pre-specified band of wavelengths, and the illumination light source that provides the wavelengths covers the preset wavelength band.
  • For example, the preset wavelength band is from 400nm to 1700nm, and the selected illumination source covers the wavelength range from 400nm to 1700nm; or, the preset wavelength band is from 400nm to 1700nm, illumination source A provides part of that band, illumination source B provides a wavelength band from 1100nm to 1800nm, and illumination sources A and B together serve as the illumination sources providing the preset wavelength band, etc.
  • The sample to be analyzed is the sample on which analysis is to be performed.
  • For example, the sample to be analyzed is a surgically resected pathological sample, and after the surgically resected pathological sample is analyzed, the location information, property information, etc. of the pathological sample can be known; or, the sample to be analyzed is a chemical mixture, and after the chemical mixture is analyzed, the composition information, ratio information, etc. of the mixture can be known; or, the sample to be analyzed is a gemstone, and the structure information of the gemstone can be known after the gemstone is analyzed.
  • a push-broom collection operation is performed on the sample to be analyzed to obtain a sample image.
  • the push-broom acquisition is an acquisition method of scanning and imaging point by point along the scanning line, and the push-broom acquisition operation is performed based on the acquisition equipment.
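  • As a minimal illustration of the push-broom idea (not the patent's acquisition software), the following sketch stacks simulated line scans into a three-dimensional hyperspectral cube; the array shapes and the simulate_line_scan helper are assumptions made for illustration only.

```python
import numpy as np

def simulate_line_scan(row: int, width: int = 64, bands: int = 100) -> np.ndarray:
    """Hypothetical stand-in for one line exposure of the camera:
    returns a (width, bands) array of intensities for scan line `row`."""
    rng = np.random.default_rng(row)
    return rng.random((width, bands))

def push_broom_acquire(n_rows: int = 48, width: int = 64, bands: int = 100) -> np.ndarray:
    """Assemble a hyperspectral cube (rows, cols, bands) by scanning line by line,
    e.g. while the sample stage moves under the line light source."""
    cube = np.empty((n_rows, width, bands), dtype=np.float64)
    for row in range(n_rows):
        cube[row] = simulate_line_scan(row, width, bands)  # one spatial line, all wavelengths
    return cube

cube = push_broom_acquire()
print(cube.shape)  # (48, 64, 100): two spatial axes plus the wavelength axis
```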
  • As shown in FIG. 3, the push-broom short-wave infrared hyperspectral imaging system includes a sample stage 310, a short-wave infrared hyperspectral camera 320, a line light source 330, and a sample 340 to be analyzed.
  • the hyperspectral camera 320 is a combination of imaging technology and spectral detection technology, and is used to collect sample images with spectral information.
  • The sample images are hyperspectral images, and a hyperspectral image is three-dimensional: the x-axis and y-axis represent the coordinate values of the two-dimensional spatial information, and the z-axis represents the wavelength information.
  • Compared with an ordinary image, a hyperspectral image adds spectral information and has higher spectral resolution, and it can reflect the condition of the sample to be analyzed over a wider range of bands and more spectral dimensions, so that the sample image simultaneously reflects the spatial information and the spectral information of the sample to be analyzed.
  • a reflective hyperspectral camera with an effective photosensitive range of 900nm to 1700nm is used to capture hyperspectral images, the spectral resolution is about 5nm, and the image resolution is 0.3 million pixels.
  • The sample to be analyzed is irradiated with the illumination source and photographed multiple times under different wavelengths within the preset wavelength band, so as to obtain multiple sample images corresponding to the sample to be analyzed. Illustratively, the preset wavelength band is 900nm-1700nm (selected from the near-infrared band).
  • the sample to be analyzed is a pathological specimen after surgical resection
  • a halogen lamp is used as an illumination source
  • a hyperspectral camera is used as an image acquisition device.
  • the pathological specimens after surgical resection were collected under different wavelengths.
  • a hyperspectral camera is used to collect an image for each wavelength, so as to obtain multiple sample images corresponding to different wavelengths.
  • The sample 340 to be analyzed is irradiated with the line light source 330, and the sample 340 to be analyzed is photographed at different wavelengths by the short-wave infrared hyperspectral camera 320, so as to obtain multiple sample images at multiple different wavelengths.
  • As shown in FIG. 4, a plurality of sample images 410 are collected by the short-wave infrared hyperspectral camera 320, wherein the plurality of sample images are arranged from top to bottom according to the wavelength range of 900nm to 1700nm, and the plurality of sample images 410 are 3D hyperspectral images with spectral information.
  • Point M 420 and point N 430 in the sample image 410 are arbitrarily selected; based on the spectral information corresponding to the sample image 410, the spectral characteristic curve 510 corresponding to point M 420 and the spectral characteristic curve 520 corresponding to point N 430 are obtained.
  • The spectral characteristic graph is a plot of the relationship between light reflectance and wavelength: the abscissa is the wavelength and the ordinate is the reflectance, where the reflectance indicates the ratio of the luminous flux reflected by the sample to be analyzed to the luminous flux incident on it.
  • the spectral characteristic curve in FIG. 5 is a curve obtained after reflectance correction.
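  • The patent does not spell out the correction formula; a common reflectance calibration, shown here only as an assumed sketch, uses a white reference panel and a dark current frame captured with the same setup.

```python
import numpy as np

def reflectance_correction(raw: np.ndarray, white_ref: np.ndarray, dark_ref: np.ndarray) -> np.ndarray:
    """Convert raw intensities to reflectance using white and dark reference frames
    (an assumption about the correction step; the patent only states that the
    curves in FIG. 5 are reflectance-corrected)."""
    eps = 1e-8  # avoid division by zero where the references coincide
    reflectance = (raw - dark_ref) / (white_ref - dark_ref + eps)
    return np.clip(reflectance, 0.0, 1.0)

# Toy usage: a 4 x 4 image with 5 spectral bands.
raw = np.random.default_rng(0).uniform(100, 4000, size=(4, 4, 5))
white = np.full((4, 4, 5), 4095.0)   # fully reflective reference panel
dark = np.full((4, 4, 5), 90.0)      # sensor dark current
print(reflectance_correction(raw, white, dark).shape)  # (4, 4, 5)
```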
  • a tunable filter is used to determine at least one wavelength within a preset wavelength range; based on the collection device, a push-broom collection operation is performed on the sample to be analyzed to obtain a sample image corresponding to at least one wavelength.
  • Illustratively, the tunable filter is a Liquid Crystal Tunable Filter (LCTF). The liquid crystal tunable filter is used to select the wavelength from the illumination source covering the preset band, so that a wavelength in the visible light band or the near-infrared band can be selected quickly and without vibration.
  • For example, the wavelength band covered by the illumination source is 900nm~1700nm, and light with a wavelength of 1130nm is to be obtained; that is, the liquid crystal tunable filter filters out the light at all wavelengths in the preset band other than the 1130nm wavelength.
  • the acquisition device is a hyperspectral camera
  • the hyperspectral camera is a camera with a built-in grating push-broom structure.
  • a push-broom acquisition operation is performed on the sample to be analyzed to obtain a sample image.
  • the hyperspectral imaging method in which the hyperspectral camera is equipped with an external push-broom structure, as shown in FIG. 3 , moves the sample stage 310 to perform push-broom imaging. It should be noted that, the above is only an illustrative example, which is not limited in this embodiment of the present application.
  • the lens in the hyperspectral camera may be a zoom lens, that is, a lens that adjusts the field of view through optical zoom.
  • field of view matching is performed by physically lifting the optical bracket, and sample images are acquired; or, sample images are acquired by combining a variable magnification lens with a lifting bracket.
  • Step 220 acquiring a first image corresponding to at least one preset wavelength in the preset wavelength band among the sample images to obtain a pseudo-color image.
  • the sample image is a plurality of images obtained by collecting the sample to be analyzed, and the wavelength corresponding to the sample image is within a preset wavelength band.
  • Within the preset wavelength band, at least one preset wavelength is selected from the multiple wavelengths, and the sample image corresponding to the preset wavelength is used as the first image, from which a pseudo-color image is finally obtained.
  • For a preset wavelength, there is at least one sample image corresponding to that wavelength.
  • When a preset wavelength corresponds to multiple sample images, one sample image may be randomly selected from them as the first image corresponding to the preset wavelength, or the multiple sample images may be combined and analyzed to determine the first image corresponding to the preset wavelength.
  • When a preset wavelength corresponds to a single sample image, that sample image is used as the first image.
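  • A minimal sketch of selecting the first image: given the hyperspectral cube and the list of band-center wavelengths, the band closest to the preset wavelength (for example 1300nm) is taken as the first image; the variable names below are illustrative only.

```python
import numpy as np

def first_image_for_wavelength(cube: np.ndarray, wavelengths_nm: np.ndarray,
                               preset_nm: float) -> np.ndarray:
    """Return the 2-D band image whose center wavelength is closest to `preset_nm`."""
    band_index = int(np.argmin(np.abs(wavelengths_nm - preset_nm)))
    return cube[:, :, band_index]

cube = np.random.default_rng(1).random((48, 64, 160))   # x, y, wavelength
wavelengths_nm = np.linspace(900, 1700, cube.shape[2])  # 900nm-1700nm band centers
first_image = first_image_for_wavelength(cube, wavelengths_nm, preset_nm=1300)
print(first_image.shape)  # (48, 64)
```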
  • coloring processing is performed on the first image corresponding to a preset wavelength in the preset wavelength band to obtain a pseudo-color image.
  • the preset wavelength band is a band from 900nm to 1700nm selected from the near-infrared band. Since the preset wavelength band is an invisible band, the image corresponding to this band is a grayscale image. A preset wavelength is selected from the preset wavelength band, and the first image corresponding to the preset wavelength is a grayscale image, and coloring processing is performed on the grayscale image to obtain a pseudo-color image.
  • Pseudo-color image processing is used to indicate the technical process of converting a black-and-white grayscale image into a color image, thereby improving the legibility of the image content.
  • methods such as gray scale division method and gray scale transformation method are used to perform pseudo-color image processing.
  • The grayscale image is a single-channel image; that is, each pixel has only one value to represent its color, with a pixel value between 0 and 255, where 0 indicates black, 255 indicates white, and intermediate values are different levels of gray. Alternatively, the grayscale image is stored as a three-channel image in which the pixel values of the three channels are all the same.
  • In contrast to a single-channel image, a three-channel image represents each pixel with three values.
  • the RGB image is a three-channel image, and various colors are obtained by changing the three color channels of red (R), green (G), and blue (B) and superimposing them with each other .
  • the sample to be analyzed is a surgically resected pathological sample, as shown in FIG. 6 , which is a schematic diagram of different image processing for the sample to be analyzed.
  • Figure 610 is used to indicate the sample to be analyzed (for convenience of representation, it is obtained by using a conventional camera);
  • Figure 620 is used to indicate the hyperspectral image with a wavelength of 1300nm;
  • Figures 631 to 634 are used to indicate the tissue appearance observed using the hematoxylin and eosin (HE) staining method, wherein Figure 631 indicates cancer tissue (shown at point A in Figure 610 or Figure 620), Figure 632 indicates fat tissue (shown at point B in Figure 610 or Figure 620), Figure 633 indicates normal mucosal tissue (shown at point C in Figure 610 or Figure 620), and Figure 634 indicates muscle tissue (shown at point D in Figure 610 or Figure 620).
  • the wavelength 1300nm is used as the selected preset wavelength
  • the hyperspectral image corresponding to Figure 620 is used as the first image corresponding to the preset wavelength
  • the above-mentioned coloring processing is performed on the first image to obtain a pseudo-color image.
  • When at least two preset wavelengths are selected, at least two first images respectively corresponding to the at least two preset wavelengths are determined.
  • the i-th preset wavelength corresponds to the i-th first image, and i is a positive integer.
  • At least two first images corresponding to at least two preset wavelengths in the preset wavelength band are synthesized, and the synthesized images are color-imparted to obtain a pseudo-color image.
  • At least two preset wavelengths are selected from preset wavelength bands, and each preset wavelength corresponds to a first image.
  • at least two first images are synthesized to obtain candidate images.
  • the manner of synthesizing the multiple first images includes at least one of the following manners.
  • The first pixel values of the corresponding pixel points of at least two first images are averaged to obtain the second pixel values of the corresponding pixel points, and the candidate image is determined based on the second pixel values corresponding to each pixel point. That is, the first pixel values of corresponding pixel points of the at least two first images are summed and then divided by the number of images to obtain the second pixel value; in other words, the second pixel value is the average of the first pixel values of corresponding pixel points in the different first images.
  • a candidate image is obtained according to the position information of the pixel point, and the pixel value of each pixel point in the candidate image corresponds to the second pixel value.
  • In this way, when synthesizing the first images corresponding to different preset wavelengths, the second pixel value obtained by averaging the first pixel values at corresponding pixel points of the different first images is used as the pixel value of the corresponding point of the candidate image, so that multiple first images are comprehensively balanced pixel by pixel and the averaged level of the different first images is better reflected.
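  • The averaging described above can be written directly as a per-pixel mean over the selected first images; the sketch below assumes the first images are already aligned and stored as same-sized grayscale arrays.

```python
import numpy as np

def synthesize_candidate(first_images: list[np.ndarray]) -> np.ndarray:
    """Sum the first pixel values at corresponding pixel points of all first images
    and divide by the number of images, giving the second pixel values of the
    candidate image."""
    stack = np.stack(first_images, axis=0).astype(np.float64)
    candidate = stack.mean(axis=0)             # per-pixel average across wavelengths
    return candidate.round().astype(np.uint8)  # keep the 0-255 grayscale range

rng = np.random.default_rng(2)
imgs = [rng.integers(0, 256, size=(48, 64), dtype=np.uint8) for _ in range(3)]  # e.g. 1100/1300/1450nm
candidate_image = synthesize_candidate(imgs)
print(candidate_image.shape, candidate_image.dtype)  # (48, 64) uint8
```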
  • At least two first images are synthesized by software to obtain a candidate image.
  • At least two first images are input into Photoshop, and operations such as alignment, splicing, color correction, seam erasing, and exporting are performed to obtain a candidate image after at least two first images are synthesized.
  • When multiple preset wavelengths are selected from the preset band, a comprehensive analysis is performed on the first images respectively corresponding to the different preset wavelengths; that is, synthesizing the first images corresponding to the different wavelengths through the above synthesis method enables a more comprehensive analysis of the sample to be analyzed across multiple wavelength dimensions. Because the candidate image is synthesized from the first images corresponding to multiple wavelengths, it contains sample information shared by the multiple wavelengths, which smooths the image differences caused by wavelength differences and avoids the limitations of single-wavelength analysis.
  • the first image is a grayscale image
  • the candidate image synthesized based on the first image is a grayscale image
  • the second pixel value corresponding to each pixel in the grayscale image is used to indicate the brightness of the candidate image.
  • For example, the second pixel value is between 0 and 255, where 0 indicates black (minimum brightness) and 255 indicates white (maximum brightness); that is, the smaller the second pixel value, the lower the brightness, and the larger the second pixel value, the higher the brightness.
  • FIG. 7 it is a schematic diagram of the process of obtaining a pseudo-color image after combining and assigning first images corresponding to three preset wavelengths (wavelength 1100nm, wavelength 1300nm, and wavelength 1450nm).
  • graph 710 is used to indicate a hyperspectral image with a wavelength of 1100nm; graph 720 is used to indicate a hyperspectral image with a wavelength of 1300nm; graph 730 is used to indicate a hyperspectral image with a wavelength of 1450nm.
  • the pseudo-color image shown in Figure 740 is obtained.
  • The pixels therein are graded by brightness to determine at least two brightness levels, and colors are assigned to the different brightness levels to obtain a pseudo-color image.
  • Color is assigned to pixels corresponding to different brightness levels, so that the pseudo-color image processed by color assignment is more in line with the human eye's observation habits, and professionals can conveniently distinguish different image regions by the different colors in the pseudo-color image.
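  • A minimal sketch of the grading-and-coloring step: pixels of the grayscale candidate image are grouped into brightness levels and each level is filled with a fixed color; the number of levels and the palette are illustrative assumptions, not values from the patent.

```python
import numpy as np

def pseudo_color_by_brightness(gray: np.ndarray) -> np.ndarray:
    """Grade pixels into brightness levels and assign one color per level."""
    # Illustrative 4-level palette: dark blue -> green -> orange -> yellow.
    palette = np.array([[0, 0, 128], [0, 160, 0], [255, 140, 0], [255, 255, 0]], dtype=np.uint8)
    bins = np.linspace(0, 256, len(palette) + 1)   # brightness level boundaries
    level = np.digitize(gray, bins[1:-1])          # level index 0..3 for each pixel
    return palette[level]                          # (H, W, 3) pseudo-color image

gray = np.random.default_rng(3).integers(0, 256, size=(48, 64), dtype=np.uint8)
pseudo = pseudo_color_by_brightness(gray)
print(pseudo.shape)  # (48, 64, 3)
```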
  • Step 230 according to the difference of sample element types in the sample image, perform region division on the sample image to obtain a region division result.
  • The sample element types include the element type to be identified.
  • the sample element type is used to indicate the difference in sample properties corresponding to different sample regions in the sample image.
  • For example, when the sample image is an image obtained by photographing a pathological sample, the sample element types include tumor tissue in the pathological sample, adipose tissue in the pathological sample, mucosal tissue in the pathological sample, muscle tissue, etc.; when the sample image is an image obtained by photographing a chemical mixture (including compound A, compound B, and impurities), the sample element types include compound A, compound B, and impurities.
  • For example, the element type to be identified is predetermined tumor tissue to be identified (one of the sample element types corresponding to the pathological image); or, the element type to be identified is predetermined adipose tissue to be identified, and the like.
  • Alternatively, the element type to be identified is a predetermined compound B to be identified (one of the sample element types corresponding to the chemical image), and the like.
  • The sample image is an image with spectral information. Because substances with different properties have different spectral information, the sample image is divided into regions according to the spectral information corresponding to the sample image, to obtain the region division result.
  • Different spectral information is displayed differently in the sample image. For example, when the sample image is a grayscale image, the region corresponding to sample element A is the darkest and the region corresponding to the sample element to be analyzed is the lightest, and region division is performed according to such differences to obtain the region division result.
  • Illustratively, different regions can be filled with different colors to obtain a colored region division result; or, a darker contour line is used to separate different regions to obtain a region division result with more obvious separation, etc.
  • Step 240 based on the pseudo-color image and the result of area division, determine the image area including the element type to be identified in the sample image.
  • the pseudo-color image is the image obtained after processing the first image corresponding to the preset wavelength; the region division result is the result of region division according to the sample element type in the sample image.
  • Different regions in the pseudo-color image are distinguished by different colors. For example, when the sample image is an image obtained by photographing a pathological sample, the tumor tissue appears orange, the fat tissue appears bright yellow, the mucosal tissue appears as a light orange lighter than the tumor tissue, and the muscle tissue appears as a dark orange darker than the tumor tissue.
  • the overlapping area between the pseudo-color image and the area division result is determined; in the sample image, the overlapping area is taken as an image area including the element type to be identified.
  • the sample to be analyzed is a pathological sample.
  • a sample image with spectral information is obtained.
  • To observe the tumor tissue in the sample image, the above processing is performed on the sample image to obtain the pseudo-color image corresponding to the selected preset wavelength and the region division result of the sample image. The overlapping area between the two is used as the image area including the element type to be identified (the tumor tissue), and the tumor is thereby identified.
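  • As a sketch of the overlap step, assume that both the pseudo-color image and the region division result have been reduced to binary masks flagging pixels suspected to belong to the element type to be identified (for example, orange pixels in the pseudo-color image and the tumor-labeled region in the division result); the overlap is then a logical AND. The masks below are synthetic placeholders.

```python
import numpy as np

def overlap_region(pseudo_mask: np.ndarray, division_mask: np.ndarray) -> np.ndarray:
    """Return the image area present in both masks (the element type to be identified)."""
    return np.logical_and(pseudo_mask, division_mask)

rng = np.random.default_rng(4)
pseudo_mask = rng.random((48, 64)) > 0.5    # e.g. pixels rendered orange in the pseudo-color image
division_mask = rng.random((48, 64)) > 0.5  # e.g. pixels the segmentation model labels as tumor
tumor_area = overlap_region(pseudo_mask, division_mask)
print(int(tumor_area.sum()), "overlapping pixels")
```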
  • the first image corresponding to the preset wavelength is obtained from the sample image, and a pseudo-color image is obtained, the sample image is divided into regions to obtain the region division result, and the image region is determined by combining the pseudo-color image and the region division result.
  • In the embodiments of the present application, the sample to be analyzed is collected within the predetermined preset wavelength band to obtain a sample image, at least one preset wavelength with a better discrimination effect is selected from the preset band, and, according to the at least one preset wavelength, the first image corresponding to the preset wavelength is determined from the sample image; after the first image is processed, a pseudo-color image is obtained, and the pseudo-color image more accurately reflects the advantage of the preset wavelength.
  • In addition, the sample image is divided into regions to obtain a region division result.
  • By combining the pseudo-color image and the region division result, the image area including the element type to be identified is determined, so that the position information of the area to be identified (for example, tumor tissue) can be determined, the accuracy of pathological sampling is improved, and the difficulty of pathological material extraction is reduced. Analyzing the sample to be analyzed using its spectral characteristics at different wavelengths is not only simpler to operate but also relatively low in cost, so the method is easier to apply and popularize widely.
  • the process of dividing the sample image into regions is determined by different spectral information corresponding to different sample element types.
  • step 230 in the above embodiment shown in FIG. 2 may also be implemented as steps 810 to 850 as follows.
  • Step 810 acquire a second image.
  • the second image is a pre-marked image with spectral information collected for the sample to be analyzed.
  • the sample image and the second image are images obtained by collecting the sample to be analyzed, and when the sample to be analyzed is collected, the gold standard of the sample image is determined, that is, the second image is determined.
  • the gold standard is used to indicate a reliable method for diagnosing a disease that is currently recognized by the clinical medical community.
  • the sample to be analyzed is a surgically resected pathological sample.
  • FIG. 9 is a schematic flow chart of collecting pathological samples.
  • The doctor removes the patient's pathological sample through a tumor resection operation 910 (the pathological sample includes tumor tissue), then performs cutting 920 to obtain a tissue block of appropriate volume, and then fixes the tissue block 930 by soaking it in formalin or by other methods. For example, within 30 minutes after surgical resection of the pathological sample, the tissue block is put into a sufficient amount of 3.7% neutral formalin solution for fixation, with a fixation time of 12h-48h.
  • Tissue slices with a thickness of 5 mm ± 1 mm (about 5 mm on average) are cut from the fixed tissue block, including the tumor tissue and 1-2 cm of surrounding normal tissue.
  • the tissue block after fixing the pathological sample may be used as the sample to be analyzed, and the tissue slice after the tissue block is sliced may be used as the sample to be analyzed.
  • The tissue slice is generally sampled, conventionally dehydrated, embedded, and processed by HE staining to obtain a stained image 940; after the stained image 940 is scanned by a digital scanner 950, a plurality of whole slide image (WSI, Whole Slide Image) images 960 are obtained.
  • WSI is the image of the pathological sample collected under the digital scanner 950 (a motorized microscope structure).
  • The WSI images of multiple pathological slices can be stitched into a virtual large slice 970 of the sample to be analyzed.
  • The virtual large slice 970 is used for labeling as the gold standard.
  • Illustratively, the labeling is carried out in the ARP (Advanced Systems Analysis Program).
  • The labeling can be carried out by marking areas, and the marked areas include not only the area where one or more kinds of lesions are located, but also special areas with a prompting function.
  • For example, the tumor tissue is marked in red, the normal mucosa in green, the adipose tissue in yellow, and the muscle tissue in blue; or, the tumor tissue is marked in red, normal tissue in green, adipose tissue in yellow, etc.
  • the above-mentioned color marking is only a schematic example, and different colors can also be used to mark the selected tissue, for example: when marking the breast tissue in the solid organ, mark the tumor tissue in the breast tissue red, adipose tissue in yellow, fibrous connective tissue in green, etc.
  • A tissue type may also not be marked; for example, when the above color marking method is used to mark a solid organ, if there is no fat tissue in the observed organ, then nothing is marked in yellow.
  • the marked WSI image is used as the second image to realize the acquisition process of the second image.
  • Step 820 train the candidate segmentation model with the second image.
  • the candidate division model is an untrained model with a certain area division function.
  • The candidate segmentation model is trained with a large number of second images; under this training, the candidate segmentation model learns, gradually becomes able to automatically identify special areas such as lesion areas, and gradually acquires the region segmentation function.
  • Step 830 in response to the training of the candidate segmentation model achieving the training effect, an image segmentation model is obtained.
  • the image segmentation model is used to perform region segmentation on the first image.
  • The image segmentation model is obtained when the training of the candidate segmentation model reaches the training target.
  • The loss value is used to judge the training effect of the candidate segmentation model, and the training target includes at least one of the following situations.
  • the candidate partition model obtained by the latest iterative training is used as the image partition model.
  • the loss value reaching the convergence state is used to indicate that the value of the loss value obtained through the loss function no longer changes or the range of change is smaller than a preset threshold.
  • For example, the loss value corresponding to the n-th second image is 0.1, and the loss value corresponding to the (n+1)-th second image is also 0.1; the loss value can then be regarded as having reached the convergence state, and the candidate segmentation model adjusted by the loss value corresponding to the (n+1)-th second image is used as the image segmentation model, thereby implementing the training process of the candidate segmentation model.
  • the candidate segmentation model obtained in the latest iterative training is used as the image segmentation model.
  • Each acquisition obtains one loss value, and the number of loss-value acquisitions used to train the image segmentation model is preset. When one second image corresponds to one loss value, the number of loss-value acquisitions is the number of acquired second images; or, when one second image corresponds to multiple loss values, the number of loss-value acquisitions is the number of loss values.
  • For example, the threshold for the number of loss-value acquisitions is 10; that is, when the acquisition-count threshold is reached, the candidate segmentation model adjusted by the latest loss value is used as the image segmentation model, or the candidate segmentation model adjusted by the minimum loss value among the 10 adjustments is used as the image segmentation model, thereby implementing the training process of the candidate segmentation model.
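  • The two stopping conditions described above can be expressed compactly; the sketch below assumes a generic train_one_step callback returning a loss value and is not the patent's training code.

```python
from typing import Callable, List

def train_until_target(train_one_step: Callable[[], float],
                       max_loss_acquisitions: int = 10,
                       convergence_delta: float = 1e-3) -> List[float]:
    """Stop when the loss no longer changes (within `convergence_delta`) or when the
    preset number of loss-value acquisitions is reached, whichever comes first."""
    losses: List[float] = []
    for _ in range(max_loss_acquisitions):
        losses.append(train_one_step())
        if len(losses) >= 2 and abs(losses[-1] - losses[-2]) < convergence_delta:
            break  # loss value has reached the convergence state
    return losses

# Toy usage with a hypothetical, steadily decreasing loss.
fake_losses = iter([0.8, 0.4, 0.2, 0.1, 0.1, 0.1])
history = train_until_target(lambda: next(fake_losses))
print(history)  # [0.8, 0.4, 0.2, 0.1, 0.1] -> stops once the change is below the threshold
```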
  • The deep learning network used in the candidate segmentation model can be U-net (Convolutional Networks for Biomedical Image Segmentation), a Generative Adversarial Network (GAN), a Convolutional Neural Network (CNN), or another deep learning network.
  • the deep learning network is a strategy for region segmentation.
  • Machine learning algorithms other than deep learning can also be used, such as Principal Component Analysis (PCA); or other algorithms can be used, such as the Support Vector Machine (SVM), the maximum likelihood method, the spectral angle, spectral information divergence, the Mahalanobis distance, etc.
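  • Among the non-deep-learning alternatives listed above, the spectral angle is easy to illustrate: each pixel's spectrum is compared with a reference spectrum (for example, an average tumor spectrum obtained from labeled data), and small angles indicate similar material. The reference spectrum and the angle threshold below are assumptions for illustration.

```python
import numpy as np

def spectral_angle_map(cube: np.ndarray, reference: np.ndarray) -> np.ndarray:
    """Angle (radians) between each pixel spectrum in `cube` (H, W, B) and `reference` (B,)."""
    flat = cube.reshape(-1, cube.shape[-1]).astype(np.float64)
    cos = flat @ reference / (np.linalg.norm(flat, axis=1) * np.linalg.norm(reference) + 1e-12)
    return np.arccos(np.clip(cos, -1.0, 1.0)).reshape(cube.shape[:2])

rng = np.random.default_rng(5)
cube = rng.random((48, 64, 160))      # reflectance cube covering 900nm-1700nm
tumor_reference = rng.random(160)     # hypothetical mean tumor spectrum
angles = spectral_angle_map(cube, tumor_reference)
tumor_like = angles < 0.1             # assumed angle threshold for "tumor-like" pixels
print(angles.shape, int(tumor_like.sum()))
```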
  • a second image that can represent the current gold standard for diagnosing diseases recognized by the clinical medical community is obtained, and the second image can more accurately represent the pathological location corresponding to the pathological sample.
  • Step 840: input the sample image into the image segmentation model obtained through pre-training, and determine the difference representation of the element types.
  • After image preprocessing is performed on the sample image, the sample image is input into the pre-trained image segmentation model.
  • preprocessing 1020 is performed on the sample images 1010 .
  • the process of performing image preprocessing 1020 on the sample image includes at least one of the following: performing geometric transformation operations on the sample image, image enhancement operations, etc. (such as: image background correction, registration, denoising, etc.), so as to highlight the important features.
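  • The preprocessing 1020 step mentions background correction and denoising among other operations; the following sketch, with assumed parameters, shows one simple way these operations could look for a single band image (subtract an estimated background level, then apply a small mean filter).

```python
import numpy as np

def preprocess_band(band: np.ndarray, background_level: float = 0.02, k: int = 3) -> np.ndarray:
    """Toy preprocessing for one band image: background subtraction plus k x k mean filtering."""
    corrected = np.clip(band - background_level, 0.0, None)  # crude background correction
    pad = k // 2
    padded = np.pad(corrected, pad, mode="edge")
    out = np.empty_like(corrected)
    h, w = corrected.shape
    for i in range(h):
        for j in range(w):
            out[i, j] = padded[i:i + k, j:j + k].mean()      # simple denoising
    return out

band = np.random.default_rng(8).random((48, 64))
print(preprocess_band(band).shape)  # (48, 64)
```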
  • The preprocessed sample image 1010 is input into the image segmentation model 1030 obtained through pre-training, and the image segmentation model 1030 divides the regions in the sample image.
  • the sample image is an image with spectral information, and spectral analysis is performed on the sample image to obtain a spectral analysis result; based on the spectral analysis result, the differential representation of the element type corresponding to the sample image is determined.
  • the spectral analysis results are expressed in the form of a spectral characteristic curve.
  • the abscissa of the spectral characteristic curve is the wavelength, and the ordinate is the reflectance.
  • Different spectral curves indicate the reflectance changes of different samples to be analyzed at different wavelengths, that is, the results of the spectral analysis.
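  • A spectral characteristic curve of the kind plotted in FIG. 5 and FIGS. 11-14 is simply the reflectance of one pixel (or the mean over a labeled region) read out along the wavelength axis; the sketch below assumes the cube already holds reflectance-corrected values and uses an arbitrary mask for illustration.

```python
import numpy as np

def spectral_curve(cube: np.ndarray, row: int, col: int) -> np.ndarray:
    """Reflectance of a single pixel across all bands (one curve as in FIG. 5)."""
    return cube[row, col, :]

def mean_spectral_curve(cube: np.ndarray, mask: np.ndarray) -> np.ndarray:
    """Average reflectance curve over a labeled tissue region (e.g. a tumor mask)."""
    return cube[mask].mean(axis=0)

cube = np.random.default_rng(6).random((48, 64, 160))  # reflectance, 160 bands
wavelengths_nm = np.linspace(900, 1700, 160)
curve_m = spectral_curve(cube, row=10, col=20)          # e.g. point M
tumor_mask = np.zeros((48, 64), dtype=bool)
tumor_mask[5:15, 5:15] = True                           # placeholder labeled region
curve_tumor = mean_spectral_curve(cube, tumor_mask)
print(curve_m.shape, curve_tumor.shape)                 # (160,) (160,)
```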
  • the wavelength for distinguishing tumor tissue from normal tissue in different organs is 1296-1308 nm (the effect is better in this wavelength range).
  • Hollow viscera with tumor tissue (such as the esophagus, stomach, and colorectum) and the kidney, breast, and lung are analyzed as samples to be analyzed. The sample image is a three-dimensional hyperspectral image, and according to the data corresponding to the three-dimensional hyperspectral image, the spectral characteristic curves corresponding to the hollow viscera, kidney, breast, and lung are respectively obtained.
  • As shown in FIG. 11, the spectral characteristic curve diagram corresponds to a hollow organ 1110, wherein the wavelength curve corresponding to tumor tissue (cancerous tissue) is the tumor wavelength curve 1120, the wavelength curve corresponding to adipose tissue is the fat wavelength curve 1130, the wavelength curve corresponding to the mucous membrane is the mucous membrane wavelength curve 1140, and the wavelength curve corresponding to the muscle tissue is the muscle tissue wavelength curve 1150.
  • As shown in FIG. 12, the spectral characteristic curve diagram corresponds to the kidney 1210, wherein the wavelength curve corresponding to tumor tissue (cancerous tissue) is the tumor wavelength curve 1220, the wavelength curve corresponding to adipose tissue is the fat wavelength curve 1230, and the wavelength curve corresponding to normal mucosa is the mucosal wavelength curve 1240.
  • As shown in FIG. 13, the spectral characteristic curve diagram corresponds to the breast 1310, wherein the wavelength curve corresponding to tumor tissue (cancerous tissue) is the tumor wavelength curve 1320, the wavelength curve corresponding to adipose tissue is the fat wavelength curve 1330, and the wavelength curve corresponding to normal mucosa is the mucosal wavelength curve 1340.
  • As shown in FIG. 14, the spectral characteristic graph corresponds to the lung 1410, wherein the wavelength curve corresponding to tumor tissue (cancerous tissue) is the tumor wavelength curve 1420 and the wavelength curve corresponding to normal lung is the normal wavelength curve 1430.
  • The difference representation of the element types corresponding to the sample image is the difference between different tissues; for example, tumor tissue and adipose tissue differ.
  • Based on the analysis of Fig. 11 to Fig. 14, when the wavelength is about 1300nm, different tissues in the hollow organ tissue samples show a good degree of discrimination, and tumor tissue in solid organs (such as the breast, kidney, and lung) also shows a good degree of differentiation from the surrounding normal tissue and adipose tissue.
  • In the 1300nm hyperspectral image, the tumor tissue is gray, the normal muscle tissue is a darker gray than the tumor tissue, the fat tissue is grayish white, and the normal mucosa is a dark gray that is lighter than the muscle layer but slightly darker than the tumor tissue. The 1300nm hyperspectral image therefore shows good discrimination among fat, the muscle layer, and tumor tissue.
  • The three peaks and valleys at 1100nm, 1300nm, and 1450nm in the hyperspectral image are extracted as characteristic bands to synthesize a short-wave infrared color composite image, thereby providing a pseudo-color image that is more in line with the observation habits of the human eye, so that doctors can identify different tissues.
  • In the short-wave infrared color composite image, cancer tissue appears orange, muscle tissue appears a darker orange than the tumor tissue, normal mucosa appears a lighter orange than the cancer tissue, and adipose tissue appears bright yellow.
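  • The short-wave infrared color composite can be sketched by mapping the three characteristic bands (around 1100nm, 1300nm, and 1450nm) to the R, G, and B channels after per-band contrast stretching; the band-to-channel assignment below is an assumption for illustration, not a statement of the patent's exact mapping.

```python
import numpy as np

def band_composite(cube: np.ndarray, wavelengths_nm: np.ndarray,
                   bands_nm=(1100.0, 1300.0, 1450.0)) -> np.ndarray:
    """Build a false-color RGB image from three characteristic bands of the cube."""
    channels = []
    for target in bands_nm:
        band = cube[:, :, int(np.argmin(np.abs(wavelengths_nm - target)))]
        lo, hi = band.min(), band.max()
        channels.append((band - lo) / (hi - lo + 1e-12))  # stretch each band to 0-1
    return np.stack(channels, axis=-1)                    # (H, W, 3) false-color image

cube = np.random.default_rng(7).random((48, 64, 160))
wavelengths_nm = np.linspace(900, 1700, 160)
composite = band_composite(cube, wavelengths_nm)
print(composite.shape)  # (48, 64, 3)
```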
  • Spectral analysis is performed on the sample image, so that the spectral analysis results corresponding to the sample image can be used to determine more intuitively the reflectance changes of the sample to be analyzed at different wavelengths and, in turn, the difference representations of different tissues, which is conducive to the region-based analysis of the sample image on the basis of these differences.
  • Step 850 based on the difference representation of the element type, perform region division on the sample image, and determine the region division result corresponding to the sample image.
  • the image division model gives a corresponding area information prompt on the sample image, and the area information prompt includes at least one of the following methods.
  • contour lines are used to divide different regions in the sample image to obtain different divided regions, wherein the contour lines can be either darker curves or colored curves.
  • tumor tissue area is filled with red 1510
  • fat tissue area is filled with green 1520
  • the areas that cannot be accurately divided are filled with white or not filled.
  • As shown in FIG. 16, deep learning is performed based on the short-wave infrared hyperspectral image 1610 (the sample image) and the marked WSI 1620, and finally the prediction result 1630 for the short-wave infrared hyperspectral image 1610 is obtained. Schematically, the prediction result 1630 is prompted in a monochromatic filling manner.
  • Schematically, after the difference representation of the element types is determined with the image division model, the reflectance variation of the sample to be analyzed at different wavelengths indicated by that difference representation is fully utilized: the sample image is divided into regions, and the region division result corresponding to the sample image within the preset band is determined. Analyzing the sample image region by region refines the analysis dimension and helps improve the analysis accuracy of the sample image.
  • To sum up, the sample to be analyzed is collected within a predetermined preset band to obtain a sample image; at least one preset wavelength with a better imaging effect is selected from the preset band, and the first image corresponding to the at least one preset wavelength is determined from the sample image; after the first image is processed, a pseudo-color image is obtained, and the pseudo-color image can relatively accurately reflect the advantages of the preset wavelength. In addition, the sample image is divided into regions according to the differences of the sample element types to obtain a region division result. By combining the pseudo-color image and the region division result, the image region including the element type to be identified is determined, so that the location information of the region to be identified (for example, tumor tissue) is determined.
  • In the embodiments of this application, the training process and the application process of the region division model are described.
  • When the region division model is trained, the whole-slide digital image is used as the second image, an untrained candidate division model is trained with the second image until a convergence condition is met so as to obtain the image division model, the sample image is then divided into regions by the image division model, and the region division result corresponding to the sample image is determined according to the difference representation of the element types in the sample image.
  • Through the above method, the model learns the lesion area of the resected tissue, special areas such as the lesion area are automatically identified by the model, and the image area is specially marked on the image output by the model in the form of region information prompts, so that images can be analyzed more accurately with the help of the model and the accuracy of pathological sampling is improved.
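  • A minimal sketch of this learning step is given below, assuming hyperspectral patches paired with class masks rasterized from the annotated WSI; the tiny convolutional network, loss, and optimizer are placeholders standing in for a U-Net-style division model, and synthetic tensors stand in for real data.

```python
# Sketch: train a patch-level segmentation model on hyperspectral patches
# (C spectral bands) with class masks derived from annotated WSIs.
import torch
import torch.nn as nn

NUM_BANDS, NUM_CLASSES = 60, 4           # e.g. tumor / muscle / mucosa / fat
model = nn.Sequential(                    # placeholder for a U-Net-style network
    nn.Conv2d(NUM_BANDS, 32, 3, padding=1), nn.ReLU(),
    nn.Conv2d(32, NUM_CLASSES, 1),
)
optimizer = torch.optim.Adam(model.parameters(), lr=1e-3)
criterion = nn.CrossEntropyLoss()

for step in range(100):                   # stand-in for epochs over a real DataLoader
    patches = torch.randn(8, NUM_BANDS, 64, 64)          # hyperspectral patches
    masks = torch.randint(0, NUM_CLASSES, (8, 64, 64))    # labels from WSI annotation
    logits = model(patches)
    loss = criterion(logits, masks)
    optimizer.zero_grad()
    loss.backward()
    optimizer.step()
    if loss.item() < 0.05:                # simple illustrative convergence criterion
        break
```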
  • In an optional embodiment, the above image processing method is applied in the medical field to process pathological images. After pathological images with infrared hyperspectral information corresponding to different body parts are obtained, narrow-band synthesized pseudo-color images are combined with deep learning to predict the lesion area of the resected tissue, thereby providing a new solution for intraoperative tumor-margin determination and postoperative assisted pathological sampling.
  • Schematically, the above image processing method is applied to at least the following two identification processes: (1) identification of tumor tissue in hollow organs; (2) identification of tumor tissue in solid organs.
  • Hollow organs are lumen-shaped organs that contain a large amount of space inside, such as the stomach, intestines, bladder, and gallbladder; solid (parenchymal) organs are defined relative to hollow organs and include the heart, lungs, kidneys, liver, spleen, and so on. The difference is that solid organs are solid while hollow organs are hollow.
  • Schematically, abdominal solid organs include the liver, spleen, kidneys, adrenal glands, and pancreas, while abdominal hollow organs include the gallbladder, stomach, duodenum, jejunum, ileum, appendix, and colon.
  • In an optional embodiment, colon cancer tissue, rectal cancer tissue, gastric cancer tissue, and esophageal cancer tissue in hollow organs are studied.
  • Across the four different tumor tissues, the 1300 nm HSI image shows good discrimination, and the imaging colors are similar.
  • Compared with X-ray images, hyperspectral imaging shows a clear advantage in identifying the muscular layer of hollow organs.
  • When judging tumor boundaries, hyperspectral images are significantly clearer than conventional color images.
  • A color image synthesized from selected HSI images at 1100 nm, 1300 nm, and 1450 nm can clearly show the extent of the tumor tissue, with different tissues presenting colors of different intensities from yellow to orange.
  • Schematically, as shown in FIG. 17, which presents the image appearances of the four tissue classifications selected in hollow organs, sample 1 is colon cancer tissue, sample 2 is rectal cancer tissue, sample 3 is gastric cancer tissue, and sample 4 is esophageal cancer tissue.
  • Diagrams 1710 to 1740 shown in the first row indicate conventional color images captured by an ordinary camera (equivalent to naked-eye observation);
  • diagrams 1711 to 1741 shown in the second row indicate X-ray images obtained with X-ray equipment, which can show the general outline of the tumor area, although the effect is not clear and the muscle-layer structure cannot be distinguished;
  • diagrams 1712 to 1742 shown in the third row indicate hyperspectral images at a wavelength of 1300 nm (HSI 1300 nm images) collected by a hyperspectral camera; each hyperspectral image is a grayscale image in which different tissues show different shades distinguishable by the naked eye;
  • diagrams 1713 to 1743 shown in the fourth row indicate pseudo-color images synthesized from the hyperspectral images at wavelengths of 1100 nm, 1300 nm, and 1450 nm;
  • diagrams 1714 to 1744 shown in the fifth row indicate artificial-intelligence segmentation images, such as the output images obtained with the above region division model, which can provide more detailed sampling information. For example, color A represents tumor tissue, color B represents muscle-layer tissue, color C represents normal mucosal tissue, and color D represents adipose tissue; optionally, the darker the color, the higher the confidence (a small rendering sketch is given after this figure description);
  • diagrams 1715 to 1745 shown in the sixth row indicate the WSI images (gold standard), which show the true extent of the tumor tissue.
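  • One possible way to render the confidence-shaded overlay mentioned above from per-pixel class probabilities is sketched below; the class-to-color palette and the idea of fading low-confidence pixels toward white are illustrative assumptions, not the display scheme of this application.

```python
# Sketch: turn per-pixel class probabilities into a color overlay in which
# a deeper color means higher confidence.
# Assumptions: `probs` has shape (classes, rows, cols) and sums to 1 per pixel.
import numpy as np

CLASS_COLORS = np.array([        # illustrative palette: tumor, muscle, mucosa, fat
    [255, 0, 0],
    [0, 0, 255],
    [0, 255, 0],
    [255, 255, 0],
], dtype=np.float32)

def confidence_overlay(probs: np.ndarray) -> np.ndarray:
    labels = probs.argmax(axis=0)            # winning class per pixel
    confidence = probs.max(axis=0)           # its probability
    base = CLASS_COLORS[labels]              # (rows, cols, 3)
    white = np.full_like(base, 255.0)
    # Low confidence fades toward white, high confidence keeps the full color.
    blended = confidence[..., None] * base + (1.0 - confidence[..., None]) * white
    return blended.astype(np.uint8)

probs = np.random.dirichlet(np.ones(4), size=(64, 64)).transpose(2, 0, 1)
overlay = confidence_overlay(probs)
```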
  • In an optional embodiment, the kidney, lung, and breast among the parenchymal (solid) organs with lesions (such as tumor tissue) are studied; in samples where the tumor tissue is relatively large, naked-eye identification is not very difficult.
  • Schematically, as shown in FIG. 18, different image representations of kidney cancer are presented.
  • Diagram 1810 indicates a conventional color image of the kidney tissue (naked-eye observation), in which the tumor tissue appears grayish white, the adipose tissue appears yellow, and the normal kidney tissue appears light brown; diagram 1811 indicates an enlarged view of the tumor boundary in the conventional color image, in which the boundary between the tumor tissue and the normal kidney tissue is not easy to distinguish.
  • Diagram 1820 indicates an X-ray image acquired with X-ray equipment; although the X-ray image can show the outline of the tumor, the boundary is not clear, and normal kidney tissue cannot be well distinguished from tumor tissue.
  • Diagram 1830 indicates the hyperspectral image at a wavelength of 1300 nm, and diagram 1831 indicates an enlarged view of the tumor boundary in that image. In the 1300 nm hyperspectral image, the tumor tissue appears grayish white, the normal kidney tissue appears gray, and the adipose tissue appears bright grayish white; in the enlarged view, the tumor tissue and the surrounding normal tissue have a relatively clear boundary.
  • Diagram 1840 indicates the pseudo-color image, and diagram 1832 indicates an enlarged view of the tumor boundary in the synthesized pseudo-color image.
  • Schematically, the pseudo-color image can be obtained either by coloring a single hyperspectral image corresponding to one wavelength, or by synthesizing and then coloring multiple hyperspectral images corresponding to multiple wavelengths; the wavelengths can be selected either randomly or in a predetermined manner.
  • Schematically, based on comprehensive experimental results, at least one wavelength with a better effect is pre-selected, and the pseudo-color image is obtained from the hyperspectral image corresponding to the at least one wavelength.
  • For example, a wavelength of 1300 nm with a better effect is pre-selected and the corresponding hyperspectral image is colored to obtain the pseudo-color image; or a wavelength of 1250 nm is selected at random and the corresponding hyperspectral image is colored to obtain the pseudo-color image; or three wavelengths with better effects, namely 1300 nm, 1100 nm, and 1450 nm, are selected in advance, and the hyperspectral images corresponding to the three wavelengths are synthesized and colored to obtain the pseudo-color image.
  • In the short-wave infrared composite color image, the kidney cancer tissue appears orange, the adipose tissue appears bright yellow, and the normal kidney tissue region shows an orange color tending toward black; in the enlarged view, the boundary between the tumor tissue and the surrounding tissue is clear and easy to distinguish.
  • Diagram 1850 indicates the artificial-intelligence segmentation image, which divides the sample image into regions. Schematically, different regions are distinguished in different ways, for example, tumor tissue is shown in red, normal kidney tissue in green, and adipose tissue in yellow, and the darker the color, the higher the confidence; in this segmentation image, the outline of the tumor matches the tumor boundary of the WSI more closely.
  • Diagram 1860 indicates the outline of the WSI tumor region.
  • In an optional embodiment, the breast, a solid organ, is taken as an example for illustration. Judging the tumor boundary with the naked eye is not easy; for example, the boundary of the tumor tissue cannot be accurately identified from photographs taken with an ordinary camera. As shown in FIG. 19, diagram 1910 indicates a conventional color image taken with an ordinary camera, in which the circled part is the boundary region of the tumor tissue; in this part, the boundary between the tumor tissue and the surrounding tissue cannot be clearly distinguished in the conventional color image.
  • X-ray images are effective in judging tumor tissue and have long been used as the main auxiliary tool in pathological sampling to help pathologists locate the tumor bed area.
  • Diagram 1920 indicates the X-ray image acquired with X-ray equipment; in the displayed case, the edge of the tumor outline shown in the X-ray image is spiculated, and the outline is obviously larger than the tumor outline shown by the WSI.
  • Diagram 1930 indicates the hyperspectral image at a wavelength of 1300 nm, in which the tumor tissue appears dark gray (the part corresponding to the irregular shape) and the surrounding normal breast tissue within the circle appears a lighter gray.
  • Diagram 1940 indicates the pseudo-color image determined from the hyperspectral image corresponding to at least one selected wavelength; schematically, in the short-wave infrared synthesized pseudo-color image, the tumor tissue region is a darker orange than the surrounding breast tissue, and the adipose tissue is bright yellow.
  • Diagram 1950 indicates the artificial-intelligence segmentation image (the result of processing the sample image with the deep learning model), which provides a relatively accurate reference for the extent of the tumor tissue; diagram 1960 indicates the whole-slide image (WSI, the gold standard).
  • Optionally, in breast cases, the X-ray images obtained with X-ray equipment can show punctate calcifications.
  • Schematically, as shown in FIG. 20, diagram 2010 indicates the conventional color image taken with an ordinary camera, in which the tumor area appears grayish white and the approximate extent of the tumor tissue can be identified;
  • diagram 2020 indicates the X-ray image collected with X-ray equipment, which can roughly show the edge of the tumor tissue; the edge appears spiculated, and punctate calcifications can be seen inside it (at the position indicated by the arrow in diagram 2020);
  • diagram 2030 indicates the hyperspectral image at a wavelength of 1300 nm, in which the tumor tissue is shown in dark gray, the normal breast tissue is a lighter gray than the tumor tissue region, and the adipose tissue appears grayish white.
  • Diagram 2040 indicates the short-wave infrared color image obtained by processing the hyperspectral image corresponding to the at least one selected wavelength; this image can display a clearer tumor outline;
  • diagram 2050 indicates the displayed result of the manually segmented image;
  • diagram 2060 indicates the whole-slide image (WSI), which serves as the gold standard. Schematically, the short-wave infrared color image and the segmented image result are combined to determine the image region that includes the tumor tissue, and the contour shown in this image region has the highest degree of coincidence with the gold standard (WSI).
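  • The text does not name a specific metric for this degree of coincidence, so the Dice coefficient below is only an illustrative choice; it assumes the predicted tumor mask and the WSI-derived mask are binary images already registered to the same pixel grid.

```python
# Sketch: Dice coefficient between a predicted tumor mask and the mask derived
# from the annotated WSI (gold standard). Registration between the two
# modalities is assumed to have been done beforehand.
import numpy as np

def dice(pred: np.ndarray, gold: np.ndarray) -> float:
    pred, gold = pred.astype(bool), gold.astype(bool)
    intersection = np.logical_and(pred, gold).sum()
    denom = pred.sum() + gold.sum()
    return 2.0 * intersection / denom if denom else 1.0

pred_mask = np.zeros((256, 256), dtype=bool); pred_mask[60:180, 70:190] = True
gold_mask = np.zeros((256, 256), dtype=bool); gold_mask[65:185, 75:195] = True
print(f"Dice coincidence: {dice(pred_mask, gold_mask):.3f}")
```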
  • To sum up, the sample to be analyzed is collected within a predetermined preset band to obtain a sample image; at least one preset wavelength with a better imaging effect is selected from the preset band, and the first image corresponding to the at least one preset wavelength is determined from the sample image; after the first image is processed, a pseudo-color image is obtained, and the pseudo-color image can relatively accurately reflect the advantages of the preset wavelength. In addition, the sample image is divided into regions according to the differences of the sample element types to obtain a region division result. By combining the pseudo-color image and the region division result, the image region including the element type to be identified is determined, so that the location information of the region to be identified (for example, tumor tissue) is determined.
  • In the embodiments of this application, the above image processing method is applied in the medical field, hollow organs and solid organs are analyzed, and the benefit of the method is demonstrated.
  • On the one hand, the above image processing method is more reliable than the doctor's naked-eye observation and palpation, and the consistency of the images is better guaranteed.
  • On the other hand, the hyperspectral imaging system is non-destructive, contact-free, and free of ionizing radiation, and the hardware cost of the hyperspectral imaging system is lower than that of X-ray equipment.
  • Fig. 21 is a structural block diagram of an image processing device provided by an exemplary embodiment of the present application. As shown in Fig. 21, the device includes the following parts:
  • a sample acquisition module 2110, configured to acquire a sample image, the sample image including an image obtained by collecting a sample to be analyzed within a preset band;
  • an image acquisition module 2120, configured to acquire, from the sample image, a first image corresponding to at least one preset wavelength in the preset band, to obtain a pseudo-color image;
  • a region division module 2130, configured to perform region division on the sample image according to differences of the sample element types in the sample image to obtain a region division result, the sample element types including an element type to be identified;
  • a region determination module 2140, configured to determine, in the sample image, an image region including the element type to be identified based on the pseudo-color image and the region division result.
  • In an optional embodiment, the image acquisition module 2120 is further configured to perform coloring processing on a first image corresponding to one preset wavelength in the preset band to obtain the pseudo-color image; or to synthesize at least two first images corresponding to at least two preset wavelengths in the preset band and perform coloring processing on the synthesized image to obtain the pseudo-color image.
  • In an optional embodiment, the image acquisition module 2120 is further configured to: determine, according to the at least two preset wavelengths, at least two first images respectively corresponding to the at least two preset wavelengths, where the i-th preset wavelength corresponds to the i-th first image and i is a positive integer; perform synthesis processing on the at least two first images to obtain a candidate image; and perform coloring processing on the candidate image to obtain the pseudo-color image.
  • In an optional embodiment, the image acquisition module 2120 is further configured to average the first pixel values of corresponding pixels of the at least two first images to obtain a second pixel value for each corresponding pixel, and to determine the candidate image based on the second pixel value corresponding to each pixel.
  • In an optional embodiment, the image acquisition module 2120 is further configured to perform brightness grading on the pixels in the candidate image based on their brightness values to determine at least two brightness levels, and to color the at least two brightness levels respectively to obtain the pseudo-color image.
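  • A minimal sketch of these averaging and coloring operations is given below, assuming grayscale band images of identical size; the number of brightness levels and the level-to-color palette are illustrative assumptions.

```python
# Sketch: average two (or more) single-band images into a candidate image,
# grade its pixels into brightness levels, and assign one color per level.
import numpy as np

def candidate_image(band_images):
    """Second pixel value = mean of the first pixel values across band images."""
    return np.mean(np.stack(band_images, axis=0), axis=0)

def pseudo_color(candidate, n_levels=4, palette=None):
    """Quantize brightness into n_levels and map each level to a color."""
    if palette is None:                          # illustrative palette
        palette = np.array([[0, 0, 128], [0, 128, 255],
                            [255, 165, 0], [255, 255, 0]], dtype=np.uint8)
    lo, hi = candidate.min(), candidate.max()
    norm = (candidate - lo) / (hi - lo + 1e-8)
    levels = np.clip((norm * n_levels).astype(int), 0, n_levels - 1)
    return palette[levels]                       # (rows, cols, 3) color image

img_1100 = np.random.rand(300, 400)              # stand-in for the 1100 nm band
img_1300 = np.random.rand(300, 400)              # stand-in for the 1300 nm band
colored = pseudo_color(candidate_image([img_1100, img_1300]))
```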
  • In an optional embodiment, the region division module 2130 includes:
  • a determination unit 2131, configured to pass the sample image through a pre-trained image division model to determine the difference representation of the element types;
  • a division unit 2132, configured to perform region division on the sample image based on the difference representation of the element types and determine the region division result corresponding to the sample image.
  • In an optional embodiment, the sample image is an image with spectral information;
  • the determination unit 2131 is further configured to perform spectral analysis on the sample image to obtain a spectral analysis result, and to determine, based on the spectral analysis result, the difference representation of the element types corresponding to the sample image.
  • In an optional embodiment, the apparatus is further configured to: acquire a second image, the second image being a pre-annotated image with spectral information obtained by collecting the sample to be analyzed; train a candidate division model with the second image; and obtain an image division model in response to the training of the candidate division model achieving the training effect, the image division model being used to perform region segmentation on the first image.
  • In an optional embodiment, the sample acquisition module 2110 is further configured to perform a push-broom acquisition operation on the sample to be analyzed to obtain the sample image.
  • In an optional embodiment, the push-broom acquisition operation is performed based on an acquisition device;
  • the sample acquisition module 2110 is further configured to determine at least one wavelength within the preset band by using a tunable filter, and to perform, based on the acquisition device, a push-broom acquisition operation on the sample to be analyzed to acquire a sample image corresponding to the at least one wavelength.
  • In an optional embodiment, the region determination module 2140 is further configured to determine an overlapping region between the pseudo-color image and the region division result, and to take, in the sample image, the overlapping region as the image region including the element type to be identified.
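  • A minimal sketch of this overlap step, assuming the pseudo-color image has already been thresholded into a boolean "suspected tumor" mask (for example, pixels whose color falls in the orange range) and that the region division result yields a boolean tumor mask on the same grid; both thresholds are illustrative.

```python
# Sketch: take the overlap of the tumor evidence from the pseudo-color image
# and the tumor region in the region-division result as the final image region.
import numpy as np

def overlap_region(pseudo_color_mask: np.ndarray, division_mask: np.ndarray) -> np.ndarray:
    """Pixels flagged by both sources are kept as the region to be identified."""
    return np.logical_and(pseudo_color_mask, division_mask)

pseudo_color_mask = np.random.rand(256, 256) > 0.6   # stand-in for a color threshold
division_mask = np.random.rand(256, 256) > 0.5       # stand-in for model output == tumor
image_region = overlap_region(pseudo_color_mask, division_mask)
rows, cols = np.nonzero(image_region)                # location information of the region
```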
  • To sum up, the sample to be analyzed is collected within a predetermined preset band to obtain a sample image; at least one preset wavelength with a better imaging effect is selected from the preset band, and the first image corresponding to the at least one preset wavelength is determined from the sample image; after the first image is processed, a pseudo-color image is obtained, and the pseudo-color image can relatively accurately reflect the advantages of the preset wavelength. In addition, the sample image is divided into regions according to the differences of the sample element types to obtain a region division result. By combining the pseudo-color image and the region division result, the image region including the element type to be identified is determined, so that the location information of the region to be identified (for example, tumor tissue) is determined.
  • It should be noted that the image processing apparatus provided in the above embodiment is illustrated only by the division into the above functional modules; in practical applications, the above functions may be allocated to different functional modules as required, that is, the internal structure of the device may be divided into different functional modules to complete all or part of the functions described above.
  • In addition, the image processing apparatus provided in the above embodiment and the image processing method embodiments belong to the same concept; its specific implementation process is detailed in the method embodiments and is not repeated here.
  • Fig. 23 shows a schematic structural diagram of a server provided by an exemplary embodiment of the present application.
  • The server 2300 includes a central processing unit (CPU) 2301, a system memory 2304 including a random access memory (RAM) 2302 and a read-only memory (ROM) 2303, and a system bus 2305 connecting the system memory 2304 and the central processing unit 2301.
  • The server 2300 also includes a mass storage device 2306 for storing an operating system 2313, application programs 2314, and other program modules 2315.
  • The mass storage device 2306 is connected to the central processing unit 2301 through a mass storage controller (not shown) connected to the system bus 2305. The mass storage device 2306 and its associated computer-readable media provide non-volatile storage for the server 2300.
  • Without loss of generality, computer-readable media may include computer storage media and communication media.
  • According to various embodiments of this application, the server 2300 may be connected to a network 2312 through a network interface unit 2311 connected to the system bus 2305; in other words, the network interface unit 2311 may also be used to connect to other types of networks or remote computer systems (not shown).
  • The above memory further includes one or more programs, which are stored in the memory and configured to be executed by the CPU.
  • An embodiment of this application further provides a computer device, the computer device including a processor and a memory, the memory storing at least one instruction, at least one program, a code set, or an instruction set, which is loaded and executed by the processor to implement the image processing method provided by the above method embodiments.
  • An embodiment of this application further provides a computer-readable storage medium storing at least one instruction, at least one program, a code set, or an instruction set, which is loaded and executed by a processor to implement the image processing method provided by the above method embodiments.
  • Embodiments of the present application also provide a computer program product or computer program, where the computer program product or computer program includes computer instructions, and the computer instructions are stored in a computer-readable storage medium.
  • the processor of the computer device reads the computer instruction from the computer-readable storage medium, and the processor executes the computer instruction, so that the computer device executes the image processing method described in any one of the above embodiments.
  • Optionally, the computer-readable storage medium may include a read-only memory (ROM), a random access memory (RAM), a solid-state drive (SSD), an optical disc, or the like; the random access memory may include a resistive random access memory (ReRAM) and a dynamic random access memory (DRAM).

Landscapes

  • Engineering & Computer Science (AREA)
  • Theoretical Computer Science (AREA)
  • Physics & Mathematics (AREA)
  • General Physics & Mathematics (AREA)
  • Multimedia (AREA)
  • Computer Vision & Pattern Recognition (AREA)
  • Health & Medical Sciences (AREA)
  • General Health & Medical Sciences (AREA)
  • Medical Informatics (AREA)
  • Software Systems (AREA)
  • Nuclear Medicine, Radiotherapy & Molecular Imaging (AREA)
  • Quality & Reliability (AREA)
  • Computational Linguistics (AREA)
  • Radiology & Medical Imaging (AREA)
  • Artificial Intelligence (AREA)
  • Computing Systems (AREA)
  • Databases & Information Systems (AREA)
  • Evolutionary Computation (AREA)
  • Investigating Or Analysing Materials By Optical Means (AREA)

Abstract

An image processing method, apparatus, and device, a readable storage medium, and a program product, relating to the field of medical data processing. The method comprises: acquiring a sample image, the sample image comprising an image obtained by collecting a sample to be analyzed within a preset band (210); acquiring, from the sample image, a first image corresponding to at least one preset wavelength in the preset band, to obtain a pseudo-color image (220); performing region division on the sample image according to differences of sample element types in the sample image, to obtain a region division result (230); and determining, in the sample image, an image region comprising an element type to be identified based on the pseudo-color image and the region division result (240).

Description

图像处理方法、装置、设备、可读存储介质及程序产品
本申请要求于2022年1月25日提交的申请号为202210086842.8、发明名称为“图像处理方法、装置、设备、可读存储介质及程序产品”的中国专利申请的优先权,其全部内容通过引用结合在本申请中。
技术领域
本申请实施例涉及医学数据处理领域,特别涉及一种图像处理方法、装置、设备、可读存储介质及程序产品。
背景技术
在通过手术切除包括肿瘤组织的病理标本后,通常会进一步研究采样得到的病理标本,对病理标本中肿瘤组织的边界进行寻找和识别,从而对肿瘤组织进行更正确的分析,得到更有价值的医学分析结果。
相关技术中,通过福尔马林将手术切除的病理标本进行固定后,通常由病理医生以肉眼观察的方法确定肿瘤组织的边界;或者采用X光设备对病理标本进行扫描,由医生对X光影像进行解读并确定肿瘤组织的边界,进而进行肿瘤组织的取材工作。
然而,采用上述方法确定肿瘤组织的边界时,对于一些瘤床不明显的病变,肉眼很难识别,而X光设备也由于价格昂贵,较难达到广泛普及。
发明内容
本申请实施例提供了一种图像处理方法、装置、设备、可读存储介质及程序产品,能够利用第一样本在不同波长下的光谱特性对待分析样本进行分析,提高病理取材的准确性。所述技术方案如下。
一方面,提供了一种图像处理方法,所述方法包括:
获取样本图像,所述样本图像包括在预设波段内对待分析样本进行采集得到的图像;
获取所述样本图像中与所述预设波段中至少一个第一波长对应的第一图像,得到伪彩色图像;
根据所述样本图像中样本元素类型的差异,对所述样本图像进行区域划分,得到区域划分结果,所述样本元素类型中包括待识别的待识别元素类型;
基于所述伪彩色图像和所述区域划分结果,在所述样本图像中确定包括所述待识别元素类型的图像区域。
另一方面,提供了一种内容推荐装置,所述装置包括:
样本获取模块,用于获取样本图像,所述样本图像包括在预设波段内对待分析样本进行采集得到的图像;
图像获取模块,用于获取所述样本图像中与所述预设波段中至少一个预设波长对应的第一图像,得到伪彩色图像;
区域划分模块,用于根据所述样本图像中样本元素类型的差异,对所述样本图像进行区域划分,得到区域划分结果,所述样本元素类型中包括待识别的待识别元素类型;
区域确定模块,用于基于所述伪彩色图像和所述区域划分结果,在所述样本图像中确定包括所述待识别元素类型的图像区域。
另一方面,提供了一种计算机设备,所述计算机设备包括处理器和存储器,所述存储器中存储有至少一条指令、至少一段程序、代码集或指令集,所述至少一条指令、所述至少一段程序、所述代码集或指令集由所述处理器加载并执行以实现如上述本申请实施例中任一所述内容推荐方法。
另一方面,提供了一种计算机可读存储介质,所述存储介质中存储有至少一条指令、至少一段程序、代码集或指令集,所述至少一条指令、所述至少一段程序、所述代码集或指令集由处理器加载并执行以实现如上述本申请实施例中任一所述的内容推荐方法。
另一方面,提供了一种计算机程序产品或计算机程序,该计算机程序产品或计算机程序包括计算机指令,该计算机指令存储在计算机可读存储介质中。计算机设备的处理器从计算机可读存储介质读取该计算机指令,处理器执行该计算机指令,使得该计算机设备执行上述实施例中任一所述的内容推荐方法。
本申请实施例提供的技术方案带来的有益效果至少包括:
结合伪彩色图像和区域划分结果确定图像区域,可以避免仅仅依靠医生的裸眼观测和描述对肿瘤组织的大小、区域等进行判断,从而降低对患者肿瘤组织区域判断的不准确性。基于预设波段对待分析样本进行采集得到样本图像,从预设波段中选取至少一个效果较好的预设波长,并从样本图像中确定与预设波长对应的第一图像,对第一图像进行处理后,得到伪彩色图像,伪彩色图像能够较准确地体现预设波长的优势。此外,根据样本图像中待识别元素类型的差异,对样本图像进行区域划分后得到区域划分结果。结合伪彩色图像和区域划分结果,确定包括待识别元素类型的图像区域,从而确定待识别区域(如:肿瘤组织)的位置信息,提高病理取材的准确性,降低病理取材的难度,利用待分析样本在不同波长下对应的光谱特性进行分析,不仅操作较为简便,且成本相对较低,更容易广泛应用。
附图说明
图1是本申请一个示例性实施例提供的实施环境示意图;
图2是本申请一个示例性实施例提供的图像处理方法的流程图;
图3是本申请一个示例性实施例提供的获取样本图像的示意图;
图4是本申请一个示例性实施例提供的预设波段内的样本图像示意图;
图5是本申请一个示例性实施例提供的样本图像对应的光谱特征曲线图;
图6是本申请一个示例性实施例提供的对待分析样本进行图像处理的示意图;
图7是本申请一个示例性实施例提供的合成得到伪彩色图像的示意图;
图8是本申请另一个示例性实施例提供的图像处理方法的流程图;
图9是本申请另一个示例性实施例提供的对病理样本进行取材的示意图;
图10是本申请一个示例性实施例提供的对样本图像进行区域划分的流程图;
图11是本申请一个示例性实施例提供的空腔脏器对应的光谱特征曲线图;
图12是本申请一个示例性实施例提供的肾对应的光谱特征曲线图;
图13是本申请一个示例性实施例提供的乳腺对应的光谱特征曲线图;
图14是本申请一个示例性实施例提供的肺对应的光谱特征曲线图;
图15是本申请一个示例性实施例提供的单色填充提示的示意图;
图16是本申请一个示例性实施例提供的获得预测结果的流程图;
图17是本申请一个示例性实施例提供的四种组织分类图像表现示意图;
图18是本申请一个示例性实施例提供的肾癌的不同图像表现示意图;
图19是本申请一个示例性实施例提供的乳腺的不同图像表现示意图;
图20是本申请另一个示例性实施例提供的乳腺的不同图像表现示意图;
图21是本申请一个示例性实施例提供的图像处理装置的结构框图;
图22是本申请另一个示例性实施例提供的图像处理装置的结构框图;
图23是本申请一个示例性实施例提供的服务器的结构框图。
具体实施方式
首先,针对本申请实施例中涉及的名词进行简单介绍。
人工智能(Artificial Intelligence,AI):是利用数字计算机或者数字计算机控制的机器模 拟、延伸和扩展人的智能,感知环境、获取知识并使用知识获得最佳结果的理论、方法、技术及应用系统。换句话说,人工智能是计算机科学的一个综合技术,它企图了解智能的实质,并生产出一种新的能以人类智能相似的方式做出反应的智能机器。人工智能也就是研究各种智能机器的设计原理与实现方法,使机器具有感知、推理与决策的功能。
人工智能技术是一门综合学科,涉及领域广泛,既有硬件层面的技术也有软件层面的技术。人工智能基础技术一般包括如传感器、专用人工智能芯片、云计算、分布式存储、大数据处理技术、操作/交互系统、机电一体化等技术。人工智能软件技术主要包括计算机视觉技术、语音处理技术、自然语言处理技术以及机器学习/深度学习等几大方向。
机器学习(Machine Learning,ML):是一门多领域交叉学科,涉及概率论、统计学、逼近论、凸分析、算法复杂度理论等多门学科。专门研究计算机怎样模拟或实现人类的学习行为,以获取新的知识或技能,重新组织已有的知识结构使之不断改善自身的性能。机器学习是人工智能的核心,是使计算机具有智能的根本途径,其应用遍及人工智能的各个领域。机器学习和深度学习通常包括人工神经网络、置信网络、强化学习、迁移学习、归纳学习、示教学习等技术。
相关技术中,通常由病理医生以肉眼观察的方法确定肿瘤组织的边界;或者采用X光设备对病理标本进行扫描,由医生对X光影像进行解读并确定肿瘤组织的边界,进而进行肿瘤组织的取材工作。然而,采用上述方法确定肿瘤组织的边界时,对于一些瘤床不明显的病变,肉眼很难识别,而X光设备也由于价格昂贵,较难达到广泛普及。
本申请实施例中,提供了一种图像处理方法,利用待分析样本在不同波长下的光谱特性对待分析样本进行分析,提高病理取材的准确性。针对本申请训练得到的图像处理方法,在应用时包括如下场景中的至少一种。
一、应用于医学领域中
肿瘤切除手术中需要准确地知道肿瘤边缘位置,以实现完整切除肿瘤区域的过程,从而防止患者病情复发和避免二次手术,术后组织病理分析是肿瘤诊断的金标准。为了准确获得患者的病灶信息,医生挑选病理组织块的过程尤为重要,漏选含有病灶的组织块将限制病理医生做出更准确的判断,而过多的选取组织块则会大大增加制片的工作量,降低医疗效率。示意性的,采用上述图像处理方法,以具有病灶的组织(如:肾脏器官、乳腺等)为待分析样本,对待分析样本在预设波段内进行采集并获取得到样本图像,从样本图像中选择预设波长对应的第一图像后得到伪色彩图像,并根据样本图像的样本元素类型对样本图像进行区域划分,得到区域划分结果,综合分析区域划分结果和伪彩色图像,最终能够较为准确地确定肿瘤组织对应的图像区域,实现对图像区域的识别过程。通过上述方法,可以辅助病理医生更快速地找到病灶区域,也可以减少采用X光等影像设备的仪器成本,运用更广泛、经济的光谱仪器获取样本图像,并对具有光谱信息的样本图像进行分析,从而在降低医学成本的基础上,提高肿瘤组织的判断准确率。
二、应用于食品检测领域中
食品安全关系到生命安全,食品中往往含有不同的组成成分,不健康的成分或者不正确的成分比例,都可能会造成食品安全事故。示意性的,采用上述图像处理方法,以待检测食品为待分析样本,对待分析样本在预设波段内进行采集并获取得到样本图像,从样本图像中选择第一波长对应的第一图像后得到伪色彩图像,并根据样本图像的样本元素类型对样本图像进行区域划分,得到区域划分结果,综合分析区域划分结果和伪彩色图像,最终能够较为准确地确定待检测食品中不同成分对应的区域,并确定不健康成分对应的图像区域,实现对图像区域的识别过程。通过上述方法,可以辅助食品监管机构对食品进行更好地监督,结合根据第一波长确定伪彩色图像以及区域划分结果,更准确地对图像区域进行识别。
值得注意的是,上述应用场景仅为示意性的举例,本实施例提供的图像处理方法还可以应用于其他场景中,本申请实施例对此不加以限定。
可以理解的是,在本申请的具体实施方式中,涉及到用户信息等相关的数据,当本申请以上实施例运用到具体产品或技术中时,需要获得用户许可或者同意,且相关数据的收集、使用和处理需要遵守相关国家和地区的相关法律法规和标准。
其次,对本申请实施例中涉及的实施环境进行说明,示意性的,请参考图1,该实施环境中涉及终端110、服务器120,终端110和服务器120之间通过通信网络130连接。
在一些实施例中,终端110中安装有具有图像采集功能的应用程序。在一些实施例中,终端110用于向服务器120发送样本图像。服务器120可根据样本图像对应的光谱信息,通过图像处理模型121确定样本图像中包括待识别元素类型的图像区域,并将该图像区域以特殊方式进行标识并反馈至终端110进行显示。
其中,图像处理模型121的应用方式如下所示:从预设波段中选取预设波长,根据预设波长从样本图像中确定与预设波长对应的第一图像,对第一图像进行处理后得到伪彩色图像;此外,根据样本图像中的样本元素类型,对样本图像进行区域划分,得到样本图像对应的区域划分结果,结合区域划分结果和伪彩色图像,确定样本图像中的图像区域,该图像区域可以用于指示待识别元素类型的位置信息。例如:待分析样本为病理样本,对待分析样本进行分析后,确定的图像区域为肿瘤组织对应的区域,由此更为准确地确定肿瘤组织对应的区域信息。上述过程是图像处理模型121应用过程的不唯一情形的举例。
值得注意的是,上述终端包括但不限于手机、平板电脑、便携式膝上笔记本电脑、智能语音交互设备、智能家电、车载终端等移动终端,也可以实现为台式电脑等;上述服务器可以是独立的物理服务器,也可以是多个物理服务器构成的服务器集群或者分布式系统,还可以是提供云服务、云数据库、云计算、云函数、云存储、网络服务、云通信、中间件服务、域名服务、安全服务、内容分发网络(Content Delivery Network,CDN)、以及大数据和人工智能平台等基础云计算服务的云服务器。
其中,云技术(Cloud technology)是指在广域网或局域网内将硬件、应用程序、网络等系列资源统一起来,实现数据的计算、储存、处理和共享的一种托管技术。
在一些实施例中,上述服务器还可以实现为区块链系统中的节点。
结合上述名词简介和应用场景,对本申请提供的图像处理方法进行说明,以该方法应用于服务器为例,如图2所示,该方法包括如下步骤210至步骤240。
步骤210,获取样本图像。
其中,样本图像包括在预设波段内对待分析样本进行采集得到的图像。
波段用于指示波长的范围,例如:可见光波段用于指示380nm至750nm之间的波长范围;近红外波段用于指示750nm至2500nm之间的波长范围;中红外波段用于指示2500nm至25000nm之间的波长范围等。
可选地,提供波长的照明光源既包括卤素灯,也包括白炽灯、发光二级管(LED,Light Emitting Diode)光源等。示意性的,预设波段用于指示预先设置的波段,提供波长的照明光源中涵盖预设波段。例如,预设波段为400nm至1700nm的波段,被选择的照明光源中涵盖400nm至1700nm的波段;或者,预设波段为400nm至1700nm的波段,A照明光源可以提供400nm至1200nm的波段、B照明光源可以提供1100nm至1800nm的波段,采用A照明光源和B照明光源作为提供预设波段的照明光源等。
待分析样本用于指示待进行分析的样本。可选地,待分析样本为手术切除的病理样本,对手术切除的病理样本进行分析后,可以知悉病理样本的位置信息、性质信息等;或者,待分析样本为一份化学混合物,对该化学混合物进行分析后,可以知悉混合物中的成分信息、比例信息等;或者,待分析样本为一块宝石,对该块宝石进行分析后,可以知悉宝石中的结构信息等。
在一个可选的实施例中,对待分析样本进行推扫式采集操作,得到样本图像。
其中,推扫式采集是沿扫描线逐点扫描成像的一种采集方式,推扫式采集操作是基于采集设备进行的。示意性的,如图3所示,为一个推扫式短波红外高光谱成像系统,该系统包括样本台310、短波红外高光谱相机320、线光源330以及待分析样本340。其中,高光谱相机320是成像技术和光谱探测技术的结合,用于采集得到具有光谱信息的样本图像,样本图像为高光谱图像,高光谱图像是三维的,其中x轴与y轴用于表示二维信息的坐标值,z轴用于表示波长信息。与普通成像技术相比,高光谱图像增加了图像的光谱信息,具有更高的光谱分辨率,能够从更广泛的波段范围以及更多层次的光谱维度上反映待分析样本的样本情况,使得样本图像能够同时反映待分析样本的空间信息以及光谱信息。在针对高光谱的样本图像进行分析时,也便于从更广泛的波段范围中选择分辨率更高、分析效果更好的波长,对样本图像进行后续分析。即:样本图像为具有光谱信息的图像。
示意性的,采用有效感光范围为900nm至1700nm的反射式高光谱相机拍摄高光谱图像,光谱分辨率大约为5nm,图像分辨率为30万像素。
可选地,在确定预设波段后,以照明光源照射待分析样本,并在预设波段内,以不同的波长照射条件下,对待分析样本进行多次拍摄,从而获得多个待分析样本对应的样本图像。示意性的,预设波段为900nm~1700nm(从近红外波段中选取的波段),待分析样本为手术切除后的病理标本,采用卤素灯作为照明光源,将高光谱相机作为图像采集设备,在不同波长下对手术切除后的病理标本进行采集。例如:在预设波段内,采用高光谱相机,对每一个波长采集一个图像,从而获得不同波长下对应的多个样本图像。
如图3所示,以线光源330对待分析样本340进行照射,并采用短波红外高光谱相机320对不同波长照射下的待分析样本340进行拍摄,从而实现获取多个不同波长下多个样本图像的过程。如图4所示,为采用短波红外高光谱相机320采集到的多个样本图像410,其中,多个样本图像依据波长900nm~1700nm的范围,从上向下依次排列,多个样本图像410为具有光谱信息的三维高光谱图像。
示意性的,如图5所示,任意选取样本图像410中的M点420以及N点430,基于样本图像410对应的光谱信息,得到M点420对应的光谱特定曲线510,以及N点430对应的光谱特性曲线520。其中,光谱特性曲线图是光反射率与波长之间的关系图,横坐标为波长,纵坐标为反射率,反射率用于指示被待分析样本反射的光通量与入射到待分析样本的光通量之比。可选地,图5中的光谱特性曲线为经过反射率校正后得到的曲线。
在一个可选的实施例中,在预设波段范围内,采用可调谐滤波器,确定至少一个波长;基于采集设备,对待分析样本进行推扫式采集操作,获取至少一个波长对应的样本图像。
可选地,在高光谱相机前添加液晶可调谐滤波器(LCTF,Liquid Crystal Tunable Filter),液晶可调谐滤波器用于从涵盖预设波段的照明光源中对波长进行选择,从而能够快速而且无振动地选择可见光波段内或者近红外波段内的波长。例如:照明光源涵盖的波段为900nm~1700nm,将照明光源发射的光经过液晶可调谐滤波器后,得到1130nm波长的光,即液晶可调谐滤波器将预设波段内除1130nm波长以外的其他波长的光予以滤除。
可选地,采集设备为高光谱相机,高光谱相机为内置光栅推扫结构的相机,根据光栅的排列方式,对待分析样本进行推扫式采集操作,得到样本图像。或者,高光谱相机外置推扫结构的高光谱拍摄方式,如图3所示,移动样品台310进行推扫拍摄等。值得注意的是,以上仅为示意性的举例,本申请实施例对此不加以限定。
在一些实施例中,高光谱相机中的镜头可以是变焦镜头,即:通过光学变倍调整视野的镜头。可选地,通过物理升降光学支架进行视野匹配,并获取样本图像;或者,采用变倍镜头与升降支架相结合的方式,获取样本图像等。
步骤220,获取样本图像中与预设波段中至少一个预设波长对应的第一图像,得到伪彩色图像。
示意性的,样本图像为对待分析样本进行采集得到的多个图像,样本图像对应的波长在预设波段内。在预设波段内,从多个波长中选择至少一个预设波长,并将预设波长对应的样 本图像作为第一图像,最终得到伪彩色图像。
可选地,对于一个预设波长,存在与该预设波长对应的至少一个样本图像。示意性的,当被选择的预设波长对应存在多个样本图像时,既可以从多个样本图像中随机选择一个样本图像作为与该预设波长对应的第一图像,也可以对多个样本图像进行结合分析,从而确定与该预设波长对应的第一图像。
或者,对于一个预设波长,存在与该预设波长对应的一个样本图像,将该样本图像作为第一图像。可选地,以一个预设波长对应一个第一图像为例,根据所选择的预设波长的数量,对第一图像的处理方式存在差异。示意性的,对选择一个预设波长和选择多个预设波长的情况分别进行分析。
(1)选择一个预设波长
在一个可选的实施例中,对预设波段中一个预设波长对应的第一图像进行赋色处理,得到伪彩色图像。
示意性的,预设波段为从近红外波段中选取的900nm至1700nm的波段,由于预设波段为不可见波段,故该波段对应的图像为灰度图像。从预设波段中选择一个预设波长,该预设波长对应的第一图像为一幅灰度图像,对该灰度图像进行赋色处理,得到伪彩色图像。
伪彩色图像处理用于指示将黑白的灰度图像转换为彩色图像的技术过程,从而提高图像内容的可辨识度。示意性的,采用灰度分成法、灰度变换法等方法,进行伪彩色图像处理。
可选地,灰度图像为单通道图像,即每个像素点只有一个值表示颜色,其像素值位于0至255之间,0用于指示黑色,255用于指示白色,中间值为不同等级的灰色。或者,当灰度图像为三通道图像时,三个通道的像素值均相同。
可选地,与单通道图像相对的图像包括三通道图像,即每个像素点都有3个值表示。示意性的,RGB图像为三通道图像,是通过对红(R)、绿(G)、蓝(B)三个颜色通道的变化以及它们相互之间的叠加,从而得到各式各样的颜色。其中,每一个像素点由三个值表示。
示意性的,待分析样本为手术切除的病理样本,如图6所示,为对待分析样本进行不同图像处理的示意图。其中,图610用于指示待分析样本(为便于表示,采用常规相机进行拍摄得到);图620用于指示波长为1300nm的高光谱图像;图631至图634用于指示采用苏木素伊红染色法(HE,Hematoxylin Eosin)观察到的组织表现,其中图631用于指示癌组织(图610或图620中的A点所示)、图632用于指示脂肪组织(图610或图620中的B点所示)、图633用于指示正常黏膜组织(图610或图620中的B点所示)、图634用于指示肌组织(图610或图620中的D点所示)。
示意性的,将波长1300nm作为被选择的预设波长,将图620对应的高光谱图像作为预设波长对应的第一图像,对第一图像进行上述赋色处理,得到伪彩色图像。
(2)选择多个预设波长
示意性的,根据至少两个预设波长,确定至少两个预设波长分别对应的至少两个第一图像。其中,第i个预设波长对应第i个第一图像,i为正整数。
在一个可选的实施例中,将预设波段中至少两个预设波长对应的至少两个第一图像进行合成处理,对合成的图像以及进行赋色处理后,得到伪彩色图像。
示意性的,从预设波段中选择至少两个预设波长,每一个预设波长对应一个第一图像。可选地,对至少两个第一图像进行合成处理,得到候选图像。
示意性的,对多个第一图像进行合成处理的方式包括如下至少一种方式。
(1)像素值处理
在一个可选的实施例中,对至少两个第一图像对应像素点的第一像素值进行平均处理,得到对应像素点的第二像素值;基于各像素点对应的第二像素值确定候选图像。
示意性的,在获得至少两个预设波长对应的至少两个第一图像后,将至少两个第一图像对应像素点的第一像素值进行求和后取平均值,得到对应像素点的第二像素值,即:第二像素值为对不同第一图像对应像素点的第一像素值进行综合分析后得到的平均值。可选地,在 确定各像素点对应的第二像素值后,依据像素点的位置信息,得到候选图像,候选图像中各个像素点的像素值为对应的第二像素值。
可选地,在对不同预设波长分别对应的第一图像进行合成时,借助不同第一图像对应像素点的第一像素值,将多个第一像素值进行平均值处理后的第二像素值作为第二图像对应点的像素值,从而能够借助图像对应像素点,对多个不同的第一图像进行综合性地均衡处理,更好地体现不同第一图像的均衡水平。
(2)软件处理
在一个可选的实施例中,在确定预设波长对应的第一图像后,将至少两个第一图像通过软件进行合成处理,得到候选图像。
示意性的,将至少两个第一图像输入Photoshop中,进行对齐、拼接调色、擦除接缝、导出等操作,得到将至少两个第一图像进行合成后的候选图像。
以上仅为示意性的举例,本申请实施例对此不加以限定。可选地,当所选择的预设波长实现为预设波段中的多个预设波长时,对不同预设波长分别对应的第一图像进行综合分析,即通过上述的合成方法,将不同预设波长分别对应的第一图像进行合成,能够从多个波长维度对待分析样本进行更加全面的分析过程,借助多个波长分别对应的第一图像合成候选图像,使得候选图像蕴含有多个波长共有的样本信息,平滑了波长差异带来的图像信息差异,避免了分析的局限性。
在一个可选的实施例中,对。
示意性的,基于候选图像中像素点的亮度值,对候选图像中的像素点进行亮度分级,确定至少两个亮度级别;对至少两个亮度级别分别赋色,得到伪彩色图像。
可选地,第一图像为灰度图像,基于第一图像合成得到的候选图像为灰度图像,灰度图像中各个像素点对应的第二像素值用于指示候选图像的亮度。示意性的,第二像素值位于0至255之间,0用于指示黑色(亮度最小),255用于指示白色(亮度最大),即:第二像素值的数值越小,亮度越小;第二像素值的数值越大,亮度越大。
示意性的,如图7所示,为将三个预设波长(波长1100nm、波长1300nm以及波长1450nm)对应的第一图像进行合成处理和赋值处理后得到伪彩色图像的过程示意图。
其中,图710用于指示波长为1100nm的高光谱图像;图720用于指示波长为1300nm的高光谱图像;图730用于指示波长为1450nm的高光谱图像。可选地,将上述三个预设波长对应的高光谱图像进行合成处理和赋值处理后,得到图740所示的伪彩色图像。
示意性的,在得到一个预设波长对应的第一图像,或多个预设波长分别对应的第一图像所合成的候选图像后,根据第一图像或候选图像中像素点的亮度值,对其中的像素点进行亮度分级并确定至少两个亮度级别,从而对不同的亮度级别分别赋色以得到伪彩色图像。通过上述赋色处理,借助图像像素点的亮度变化,为不同亮度级别对应的像素点进行赋色,使得赋色处理的伪彩色图像更加符合人眼对图像的观察习惯,便于专业人士通过伪彩色图像中的不同颜色,区别不同的图像区域。
步骤230,根据样本图像中样本元素类型的差异,对样本图像进行区域划分,得到区域划分结果。
其中,样本元素类型中包括待识别的待识别元素类型。
可选地,样本元素类型用于指示样本图像中不同样本区域对应的样本性质差异。示意性的,当样本图像为针对病理样本进行拍摄得到的图像,样本元素类型包括:该病理样本中的肿瘤组织、该病理样本中的脂肪组织、该病理样本中的黏膜组织、该病理样本中的肌组织等;当样本图像为针对化学混合物(其中包括A化合物、B化合物以及杂质)进行拍摄得到的图像,样本元素类型包括:A化合物、B化合物以及杂质。
示意性的,当样本图像为针对病理样本进行拍摄得到病理图像时,待识别元素类型为预先确定的、待识别的肿瘤组织(该病理图像对应的样本元素类型中的一种);或者,待识别元素类型为预先确定的、待识别的待识别的脂肪组织等。可选地,当样本图像为针对化学混合 物进行拍摄得到的化学图像,待识别元素类型为预先确定的、待识别的B化合物(该化学图像对应的样本元素类型中的一种)等。
示意性的,样本图像为具有光谱信息的图像,根据不同物质的性质差异,光谱信息存在差异,根据样本图像对应的光谱信息,对样本图像进行区域划分,得到区域划分结果。
示意性的,光谱信息在样本图像上显示不同,例如:样本图像为灰度图像时,A样本元素对应的区域颜色最深,待分析样本元素对应的区域颜色最浅,由此得到样本图像对应的区域划分结果。
在一个可选的实施例中,为便于区分,可以对不同的区域填充不同的颜色,得到具有颜色的区域划分结果;或者,采用较深的轮廓线,对不同区域进行划分,得到具有较明显分隔的区域划分结果等。
值得注意的是,以上仅为示意性的举例,本申请实施例对此不加以限定。
步骤240,基于伪彩色图像和区域划分结果,在样本图像中确定包括待识别元素类型的图像区域。
示意性的,伪彩色图像是对预设波长对应的第一图像进行处理后得到的图像;区域划分结果是根据样本图像中样本元素类型进行区域划分后的结果。可选地,伪彩色图像中以不同的颜色对伪彩色图像进行划分,例如:样本图像为针对病理样本进行拍摄得到的图像,其中的肿瘤组织呈现为橙色;脂肪组织呈现为亮黄色、黏膜组织呈现为较肿癌组织颜色更浅的浅橙色、肌组织呈现为较肿瘤组织颜色更深的深橙色等。
在一个可选的实施例中,确定伪彩色图像与区域划分结果中的重叠区域;在样本图像中,将重叠区域作为包括待识别元素类型的图像区域。
示意性的,待分析样本为病理样本,对病理样本进行采集后得到具有光谱信息的样本图像,欲观察样本图像中的肿瘤组织,对样本图像进行上述处理过程,得到被选择的预设波长对应的伪彩色图像以及对样本图像的区域划分结果。根据伪彩色图像中确定的肿瘤组织的区域,以及区域划分结果中包括待识别元素类型(肿瘤组织)的识别结果,将重叠区域作为包括待识别元素类型(肿瘤组织)的图像区域,实现对肿瘤组织区域的识别过程。
示意性的,在得到光谱效果较好的第一波长对应的第一图像的伪彩色图像,以及对样本图像进行区域划分后的区域划分结果后,综合了赋色处理后得到的伪彩色图像以及光谱分析结果,对样本图像进行了更全面的分析过程,从而使得包括待识别元素类型的图像区域不仅蕴含了伪彩色图像所表达的图像信息,还包括了区域划分结果所表示的光谱信息,充分提高了确定图像区域的准确度。
以上仅为示意性的举例,本申请实施例对此不加以限定。
综上所述,从样本图像中获取预设波长对应的第一图像,并得到伪彩色图像,对样本图像进行区域划分后得到区域划分结果,结合伪彩色图像和区域划分结果确定图像区域。通过上述方法,可以避免仅仅依靠医生的裸眼观测和描述对肿瘤组织的大小、区域等进行判断。基于预先确定的预设波段,对待分析样本进行采集,得到样本图像,从预设波段中选取至少一个效果较好的预设波长,并根据至少一个预设波长,从样本图像中确定与预设波长对应的第一图像,对第一图像进行处理后,得到伪彩色图像,伪彩色图像能够较准确地体现预设波长的优势。根据样本图像中样本元素类型的差异,对样本图像进行区域划分后得到区域划分结果。结合伪彩色图像以及区域划分结果,确定包括待识别元素类型的图像区域,从而确定待识别区域(如:肿瘤组织)的位置信息,提高病理取材的准确性,降低病理取材的难度,利用待分析样本在不同波长下对应的光谱特性对待分析样本进行分析,不仅操作较为简便,且成本相对较低,更容易应用并广泛普及。
在一个可选的实施例中,对样本图像进行区域划分的过程是通过不同样本元素类型对应的不同光谱信息确定的。示意性的,如图8所示,上述图2所示出的实施例中的步骤230还可以实现为如下步骤810至步骤850。
步骤810,获取第二图像。
其中,第二图像是针对待分析样本进行采集得到的具有光谱信息的预先标注图像。
可选地,样本图像和第二图像是对待分析样本进行采集得到的图像,在对待分析样本进行采集时,确定样本图像的金标准,即确定第二图像。金标准用于指示当前临床医学界公认的诊断疾病的可靠方法。
在一个可选的实施例中,待分析样本为手术切除的病理样本。示意性的,如图9所示,为对病理样本进行取材的流程示意图,首先,医生通过肿瘤切除手术910对患者的病理样本进行切除(病理样本中包括肿瘤组织),之后,对病理样本进行切割920,得到适当体积的组织块,然后,采用福尔马林浸泡等方法将组织块进行固定930。例如:在手术切除病理样本离体后的30分钟内,将组织块放入足量的3.7%中性福尔马林溶液中进行固定,固定时间12h-48h。随后,对固定后的组织块切取厚度5mm±1mm的组织片(平均约为5mm),其中包括肿瘤组织及周围1-2cm的正常组织。可选地,既可以将对病理样本进行固定后的组织块作为待分析样本,也可以将对组织块进行切片后的组织片作为待分析样本。
在一个可选的实施例中,在获取待分析样本后,组织片经大体取材、常规脱水、包埋以及HE染色制片处理后,得到染色图像940,将染色图像940经过数字扫描仪950进行扫描后,得到多个全视野数字切片(WSI,Whole Slide Image)图像960。其中,WSI是在数字扫描仪950(一种电动显微镜结构)下采集的该病理样本的图像。示意性的,若单张WSI图像的尺寸较小,被分析的WSI图像可以是多张病理切片拼接而成,例如,采用WSI拼接软件对多个WSI部分(fragments)进行拼接,还原得到虚拟大切片970。
可选地,以在虚拟大切片970上运用高级系统分析程序(ASAP,Advanced Systems Analysis Program)进行标注作为金标准,得到多个针对该病理样本扫描得到的、具有标注的WSI图像,其中,标注可以采用对区域标注的方式进行,标注的区域既包括一种或者多种病灶所在的区域,还包括有提示作用的特殊区域等。示意性的,在空腔脏器中,将肿瘤组织标注为红色,将正常黏膜标记为绿色,将脂肪组织标记为黄色,将肌组织标记为蓝色;在实质脏器中,将肿瘤组织标注为红色,将正常组织标注为绿色,将脂肪组织标记为黄色等。可选地,上述颜色标记仅为示意性的举例,也可以采用不同的颜色对被选择的组织进行标记,例如:对实质脏器中的乳腺组织进行标记时,将乳腺组织中的肿瘤组织标记为红色,将脂肪组织标记为黄色,将纤维结缔组织标记为绿色等。可选地,当被观察的脏器中不存在对应颜色的组织时,可以不标注,例如:采用上述颜色标记方式标记实质脏器时,若被观察的脏器中不存在脂肪组织,则不予标记黄色。示意性的,将标注的WSI图像作为第二图像,实现对第二图像的获取过程。
步骤820,以第二图像对候选划分模型进行训练。
其中,候选划分模型为未训练的、具有一定区域划分功能的模型。示意性的,以第二图像为金标准,对候选划分模型进行训练,在大量第二图像的训练下,候选划分模型进行学习,并能够逐渐自动识别病灶区域等特殊区域,并逐渐具备区域分割功能。
步骤830,响应于对候选划分模型的训练达到训练效果,得到图像划分模型。
其中,图像划分模型用于对第一图像进行区域分割。示意性的,在对候选划分模型进行训练的过程中,会因为对候选划分模型的训练达到训练目标而得到图像划分模型,可选地,以损失值判断候选划分模型的训练效果,训练目标至少包括如下一种情况。
1、响应于损失值达到收敛状态,将最近一次迭代训练得到的候选划分模型作为图像划分模型。
示意性的,损失值达到收敛状态用于指示通过损失函数得到的损失值的数值不再变化或者变化幅度小于预设阈值。例如:第n个第二图像对应的损失值为0.1,第n+1个第二图像对应的损失值也为0.1,可以视为该损失值达到收敛状态,将第n个第二图像或者第n+1个第二图像对应的损失值调整的候选划分模型作为图像划分模型,实现对候选划分模型的训练过程。
2、响应于损失值的获取次数达到次数阈值,将最近一次迭代训练得到的候选划分模型作 为图像划分模型。
示意性的,一次获取可以得到一个损失值,预先设定用于训练图像划分模型的损失值的获取次数,当一个第二图像对应一个损失值时,损失值的获取次数即为第二图像的个数;或者,当一个第二图像对应多个损失值时,损失值的获取次数即为损失值的个数。例如:预先设定一次获取可以得到一个损失值,损失值获取的次数阈值为10次,即当达到获取次数阈值时,将最近一次损失值调整的候选划分模型作为图像划分模型,或者将损失值10次调整过程中最小损失值调整的候选划分模型作为图像划分模型,实现对候选划分模型的训练过程。
值得注意的是,以上仅为示意性的举例,本申请实施例对此不加以限定。
在一个可选的实施例中,候选划分模型中所涉及的深度学习网络可以为用于生物医学图像分割的卷积网络(Convolutional Networks for Biomedical Image Segmentation,U-net)、生成对抗网络(Generative Adversarial Networks,GAN),卷积神经网络(Convolutional Neural Networks,CNN)等深度学习网络。其中,深度学习网络是一种进行区域分割的策略。
可选地,也可以使用深度学习以外的机器学习算法,如:主成分分析方法(Principal Component Analysis,PCA)等;或者,采用其他非机器学习算法,如:支持向量机(Support Vector Machine,SVM)、最大似然法,波谱角,波谱信息散度,马氏距离等。
示意性的,通过对病理样本中的病理区域进行标注,从而得到能够代表当前临床医学界公认的诊断疾病金标准的第二图像,该第二图像能够较为准确地表示病理样本对应的病理位置,进而便于更精准地对候选划分模型进行训练,逐渐提升候选划分模型的鲁棒性,并得到符合训练效果的图像划分模型。
步骤840,将样本图像通过预先训练得到的图像划分模型,确定元素类型的差异表示。
在一个可选的实施例中,对样本图像进行图像预处理后,输入预先训练得到的图像划分模型中。
示意性的,如图10所示,在采集得到多个样本图像1010后,将样本图像1010进行预处理1020。其中,对样本图像进行图像预处理1020的过程包括如下至少一种:对样本图像进行几何变换操作、图像增强操作等(如:图像背景校正、配准、去噪等),从而突出样本图像中的重要特征。之后,将预处理后的样本图像1010通过预先训练得到的图像划分模型1030中,由图像划分模型1030对样本图像中的区域进行划分。
在一个可选的实施例中,样本图像为具有光谱信息的图像,对样本图像进行光谱分析,得到光谱分析结果;基于光谱分析结果,确定样本图像对应的元素类型的差异表示。
根据样本图像对应的待分析样本的差异,得到不同样本图像对应的不同光谱分析结果。示意性的,光谱分析结果采用光谱特征曲线图的形式表示,光谱特征曲线图的横坐标为波长,纵坐标为反射率,不同的光谱曲线用于指示不同的待分析样本在不同波长下的反射率变化情况,即光谱分析结果。
在一个可选的实施例中,对62例不同系统组织的高光谱图像进行分析后,初步确定不同器官中区分肿瘤组织与正常组织的波长在1296-1308nm(该波长范围内效果较好)。示意性的,以存在肿瘤组织的空腔脏器(如:食管,胃,结直肠)、肾、乳腺以及肺作为待分析样本为例进行分析,得到空腔脏器、肾、乳腺以及肺对应的样本图像,样本图像为三维高光谱图像,根据三维高光谱图像对应的数据,得到空腔脏器、肾、乳腺以及肺分别对应的光谱特征曲线图。
如图11所示,为空腔脏器1110对应的光谱特征曲线图,其中,肿瘤组织(癌组织)对应的波长曲线为肿瘤波长曲线1120;脂肪组织对应的波长曲线为脂肪波长曲线1130;正常黏膜对应的波长曲线为黏膜波长曲线1140;肌组织对应的波长曲线为肌组织波长曲线1150。
如图12所示,为肾1210对应的光谱特征曲线图,其中,肿瘤组织(癌组织)对应的波长曲线为肿瘤波长曲线1220;脂肪组织对应的波长曲线为脂肪波长曲线1230;正常黏膜对应的波长曲线为黏膜波长曲线1240。
如图13所示,为乳腺1310对应的光谱特征曲线图,其中,肿瘤组织(癌组织)对应的 波长曲线为肿瘤波长曲线1320;脂肪组织对应的波长曲线为脂肪波长曲线1330;正常黏膜对应的波长曲线为黏膜波长曲线1340。
如图14所示,为肺1410对应的光谱特征曲线图,其中,肿瘤组织(癌组织)对应的波长曲线为肿瘤波长曲线1420;正常的肺对应的波长曲线为正常波长曲线1430。
其中,样本图像对应的元素类型的差异表示即为不同组织之间的差异,例如:肿瘤组织和脂肪组织是不同的等。以上仅为示意性的举例,本申请实施例对此不加以限定。
综合图11至图14进行分析,在波长为1300nm左右时,在空腔脏器组织的样本中不同组织显示出较好的区分度。实质脏器(如:乳腺、肾及肺)中的肿瘤组织与周围的正常组织及脂肪组织亦显示较好的区分度。
示意性的,以结肠癌为例,通过肉眼观察1300nm高光谱图像中肿瘤组织呈灰色,正常肌组织表现出较肿瘤组织颜色深的灰黑色,脂肪组织呈灰白色,正常粘膜显示出较肌层浅,较肿瘤组织稍深的深灰色,1300nm高光谱图像显示出对脂肪、肌层及肿瘤组织较好的区分度。
在一个可选的实施例中,抽取高光谱图像中三个波峰波谷1100nm、1300nm以及1450nm作为特征波段,合成短波红外彩色合成图像,从而提供更符合人眼观察习惯的伪彩色图像,以便于医生识别不同组织。在短波红外彩色合成图像中,癌组织呈现为橙色,肌组织则表现为较肿瘤组织颜色更深的橙色,正常粘膜表现为较癌组织浅的橙色,脂肪组织则呈亮黄色。
示意性的,基于样本图像具有的光谱信息,对样本图像进行光谱分析,从而利用样本图像对应的光谱分析结果,更直观地确定待分析样本在不同波长下的反射率变化情况,进而确定代表不同组织之间的差异情况,有利于基于该差异对样本图像进行区域分析。
步骤850,基于元素类型的差异表示,对样本图像进行区域划分,确定样本图像对应的区域划分结果。
示意性的,在得到光谱分析结果后,由图像划分模型在样本图像上给出相应的区域信息提示,区域信息提示包括如下至少一种方式。
(1)轮廓线提示
示意性的,采用轮廓线对样本图像中不同的区域进行划分处理,得到不同划分区域,其中,轮廓线既可以是较深的曲线,也可以是有颜色的曲线等。
(2)热力图提示
示意性的,采用特殊高亮的方式,对肿瘤组织所在区域进行表示。
(3)单色填充提示
示意性的,如图15所示,对于不同的区域,以填充的不同颜色予以区分,例如:肿瘤组织区域填充红色1510、脂肪组织区域填充绿色1520等。可选地,对无法准确划分的区域,填充为白色或者不对其进行填充等。
在一个可选的实施例中,如图16所示,基于短波红外高光谱图像1610(样本图像)和已标注的WSI1620进行深度学习,最终得到对短波红外高光谱图像1610进行预测后的预测结果1630,示意性的,预测结果1630采用单色填充的方式进行提示。以上仅为示意性的举例,本申请实施例对此不加以限定。
示意性的,在借助图像划分模型确定元素类型的差异表示后,充分利用了元素类型的差异表示所指示的待分析样本在不同波长下的反射率变化情况,对样本图像进行区域划分,确定预设波段内的样本图像对应的区域划分结果,以区域的形式细化了样本图像的分析维度,有利于提高对样本图像的分析准确性。
综上所述,基于预先确定的预设波段,对待分析样本进行采集,得到样本图像,从预设波段中选取至少一个效果较好的预设波长,并根据至少一个预设波长,从样本图像中确定与预设波长对应的第一图像,对第一图像进行处理后,得到伪彩色图像,伪彩色图像能够较准确地体现预设波长的优势。根据样本图像中样本元素类型的差异,对样本图像进行区域划分后得到区域划分结果。结合伪彩色图像进而区域划分结果,确定包括待识别元素类型的图像区域,从而确定待识别区域(如:肿瘤组织)的位置信息。通过上述方法,可以避免仅仅依 靠医生的裸眼观测和描述对肿瘤组织的大小、区域等进行判断,降低病理取材的难度,不仅操作较为简便,且成本相对较低。
在本申请实施例中,对区域划分模型的训练过程和应用过程进行了说明。在训练区域划分模型时,将全视野数字图像作为第二图像,以第二图像对具有未训练的候选划分模型进行训练,直至达到收敛条件得到图像划分模型,以图像划分模型对样本图像进行区域划分,根据样本图像中的元素类型的差异表示,确定样本图像对应的区域划分结果。通过上述方法,以模型对切除的组织病灶区域进行学习,通过模型自动识别病灶区域等特殊区域,并在模型输出的图像上以区域信息提示的方式对图像区域进行特殊标示,从而借助模型更好地对图像进行分析,提高病理取材的准确性。
在一个可选的实施例中,将上述图像处理方法应用于医学领域,对病理图像进行处理。在获取得到不同部位对应的具有红外高光谱信息病理图像后,结合窄带合成伪彩色图像,以及深度学习对切除的组织病灶区域进行预测,从而对术中肿瘤边缘确定和术后辅助病理取材提供了新的解决方案。示意性的,将上述图像处理方法应用于如下至少两种识别过程中:(一)对空腔脏器肿瘤组织进行识别;(二)对实质脏器肿瘤组织进行识别。
(一)对空腔脏器肿瘤组织进行识别
空腔脏器,是指管腔状、脏器内部含有大量空间的脏器,如胃、肠、膀胱、胆等;实质性脏器是相对于空腔脏器而言,包括心脏,肺部,肾,肝,脾,等等。不同点就是前者是实心的而后者是空心的。示意性的,腹部实质性脏器有包括肝脏、脾脏、肾脏、肾上腺、胰腺等;腹部空腔脏器包括胆囊、胃、十二指肠、空肠、回肠、阑尾、结肠等。
在一个可选的实施例中,对空腔脏器中的结肠癌组织、直肠癌组织、胃癌组织及食管癌组织进行研究。在四种不同的肿瘤组织中,HSI1300nm显示出较好的区分度,且成像的颜色类似。
与X线图像相比,高光谱成像对空腔脏器肌层的识别显示出较大的优势。在判断肿瘤边界时,高光谱图像明显较常规彩色图像更加清晰。精选1100nm、1300nm以及1450nm的HSI图像合成彩色图像能清晰的显示肿瘤组织的范围,不同组织呈现从黄色到橙色不同强度的色彩。
示意性的,如图17所示,为空腔脏器中所选取的四种组织分类图像表现。其中,样本1为结肠癌组织,样本2为直肠癌组织,样本3为胃癌组织,样本4为食管癌组织。
其中,第一行所示的图1710至图1740用于指示采用普通相机拍摄的常规彩色图像(相当于肉眼观察);
第二行所示的图1711至图1741用于指示使用X光设备获取得到的X线图像,能显示肿瘤区域的大致轮廓,但效果并不清晰,且无法分辨肌层结构;
第三行所示的图1712至图1742用于指示采用高光谱相机采集得到的波长为1300nm的高光谱图像(HSI 1300nm图像),高光谱图像为灰度图像,且不同组织显示出深浅不一的颜色,肉眼可辨;
第四行所示的图1713至图1743用于指示使用波长为1100nm的高光谱图像、波长为1300nm的高光谱图像以及波长为1450nm的高光谱图像合成的伪彩色图像;
第五行所示的图1714至图1744用于指示人工智能分割图像,如采用上述区域划分模型得到的输出图像,该图像能够提供更多详细的取材信息。例如:A颜色代表肿瘤组织,B颜色代表肌层组织,C颜色代表正常粘膜组织,D颜色代表脂肪组织,可选地,颜色越深则置信度越高。
第六行所示的图1715至图1745指示WSI图像(金标准),用于显示肿瘤组织的真实范围。
(二)对实质脏器肿瘤组织进行识别
在一个可选的实施例中,对具有病灶的(如具有肿瘤组织)实质脏器中的肾、肺和乳腺 进行研究,在肿瘤组织相对较大的样本中,肉眼识别并不是很困难。示意性的,如图18所示,为肾癌中不同图像的表示形式。
图1810用于指示肾组织中的常规彩色图像(肉眼观察),在常规彩色图像中,肿瘤组织呈现灰白色,脂肪组织呈现黄色,正常肾组织呈现浅棕色;图1811用于指示对常规彩色图像进行放大后的肿瘤边界示意图。其中,放大图像中显示的肿瘤组织与正常肾组织交界处的界限不易分辩。
图1820用于指示采用X光设备采集得到的X线图像。其中,X线图像虽能显示出肿瘤的轮廓,但境界较不清晰,无法较好地区分出正常肾组织及肿瘤组织。
图1830用于指示波长为1300nm的高光谱图像;图1831用于指示对波长为1300nm的高光谱图像进行放大后的肿瘤边界示意图。高光谱1300nm图像成像中,肿瘤组织呈灰白色,正常肾组织呈灰色,脂肪组织呈亮灰白色,放大图中,肿瘤组织与周围正常组织有较清晰的分界。
图1840用于指示伪彩色图像;图1832用于指示对合成的伪彩色图像进行放大后的肿瘤边界示意图。
示意性的,伪彩色图像既可以由一个波长对应的一个高光谱图像进行赋色处理后得到的,也可以是对多个波长对应的多个高光谱图像进行合成处理和赋色处理后得到的。其中,波长的选择既可以是随机选择的,也可以是预先确定的。示意性的,综合实验结果判断,预先选择效果较好的至少一个波长,根据至少一个波长对应的高光谱图像,得到伪彩色图像。
例如:预先选择效果较好的波长1300nm,并对波长1300nm对应的高光谱图像进行赋色后得到伪彩色图像;或者,随机选择波长1250nm,并对波长1250nm对应的高光谱图像进行赋色后得到伪彩色图像。或者,预先选择效果较好的三个波长,分别为波长1300nm、波长1100nm以及波长1450nm,将三个波长分别对应的高光谱图像进行合成处理和赋色处理后,得到伪彩色图像。
在短红外合成彩色图像中,肾癌组织呈橘黄色,脂肪组织呈亮黄色,正常肾组织区域显示橘黄偏黑的颜色,在放大图中显示,肿瘤组织与周围组织的边界清晰易于分辨。
图1850用于指示人工智能分割图像,用于对样本图像进行区域划分。示意性的,不同区域采用不同形式进行区别,如:肿瘤组织呈现为红色,正常肾组织呈现为绿色,脂肪组织呈现为黄色等,颜色越深置信度越高。人工智能分割图像,红色为肿瘤组织,绿色为正常肾组织,黄色为脂肪组织,颜色越深则置信度越高,肿瘤的轮廓与WSI的肿瘤边界更加吻合。
图1860用于指示WSI肿瘤区域轮廓。
在一个可选的实施例中,以实质脏器中的乳腺为例进行说明。肉眼判断肿瘤的边界似乎并不容易,例如,无法通过普通相机拍摄的照片准确识别肿瘤组织的边界。如图19所示,图1910用于指示普通相机拍摄的常规彩色图像,其中,圈出的部分为肿瘤组织的边界部分,该部分中的肿瘤组织与周围组织的界限在常规彩色图像中无法较清晰地区分。
X线图像在判断肿瘤组织中具有较好的效果,一直以来在病理的取材中被作为主要的辅助工具,帮助病理医生寻找瘤床范围。图1920用于指示采用X光设备采集得到的X线图像,在显示的病例中,X线图像中显示的肿瘤轮廓的边缘呈现毛刺状,与WSI显示的肿瘤轮廓相比,明显范围较大。
图1930用于指示波长为1300nm的高光谱图像,其中,肿瘤组织呈现为深灰色(不规则形状对应的部分),圆圈中为周围正常的乳腺组织,呈现为较浅的灰色。
图1940用于指示根据至少一个被选择的波长对应的高光谱图像确定的伪彩色图像。示意性的,在短波红外合成的伪彩色图像中,与周围乳腺组织相比肿瘤组织的区域为较深的橙色,脂肪组织呈亮黄色。
图1950用于指示人工智能分割图像(对样本图像经过深度学习模型得到的处理结果),提供了较为准确的肿瘤组织范围的信息参考;图1960用于指示全视野数字切片图像(WSI为金标准)。
可选地,在乳腺病例中,经过X光设备得到的X线图像,可以显示点状钙化。示意性的,如图20所示,图2010用于指示普通相机拍摄的常规彩色图像,肿瘤区域呈现为灰白色,能够辨认肿瘤组织的大致范围;图2020用于指示采用X光设备采集得到的X线图像,该X线图像可以大致显示肿瘤组织的边缘,边缘呈现为毛刺状,且其内可见点状钙化(图2020中箭头所指示的位置);图2030用于指示波长为1300nm的高光谱图像,其中的肿瘤组织显示深灰色,正常乳腺组织相比肿瘤组织区域灰度较浅,脂肪组织呈现为灰白色。图2040用于指示对上述被选择的至少一个波长对应的高光谱图像进行处理后得到短波红外彩色图像,该短波红外彩色图像能够显示更加清晰的肿瘤轮廓;图2050用于指示人工分割的图像结果显示;图2060用于指示作为金标准的全视野数字切片(WSI)。示意性的,结合短波红外彩色图像以及人工分割的图像结果显示,确定包括肿瘤组织的图像区域,该图像区域显示的轮廓与金标准(WSI)的吻合度最高。
综上所述,基于预先确定的预设波段,对待分析样本进行采集,得到样本图像,从预设波段中选取至少一个效果较好的预设波长,并根据至少一个预设波长,从样本图像中确定与预设波长对应的第一图像,对第一图像进行处理后,得到伪彩色图像,伪彩色图像能够较准确地体现预设波长的优势。根据样本图像中样本元素类型的差异,对样本图像进行区域划分后得到区域划分结果。结合伪彩色图像进而区域划分结果,确定包括待识别元素类型的图像区域,从而确定待识别区域(如:肿瘤组织)的位置信息。通过上述方法,可以避免仅仅依靠医生的裸眼观测和描述对肿瘤组织的大小、区域等进行判断,降低病理取材的难度,不仅操作较为简便,且成本相对较低。
在本申请实施例中,将上述图像处理方法应用于医学领域中,对空腔脏器以及实质脏器进行了分析,证明了通过上述图像处理方法的有益性。一方面,上述图像处理方法相比医生裸眼观察和手感触摸的方法更可靠,图像的一致性更有保障。另一方面,高光谱拍摄系统具有无损伤、无接触、无电离辐射的特点,且高光谱拍摄系统的硬件系统成本相比X射线设备的成本更低。
图21是本申请一个示例性实施例提供的图像处理装置的结构框图,如图21所示,该装置包括如下部分:
样本获取模块2110,用于获取样本图像,所述样本图像包括在预设波段内对待分析样本进行采集得到的图像;
图像获取模块2120,用于获取所述样本图像中与所述预设波段中至少一个预设波长对应的第一图像,得到伪彩色图像;
区域划分模块2130,用于根据所述样本图像中样本元素类型的差异,对所述样本图像进行区域划分,得到区域划分结果,所述样本元素类型中包括待识别的待识别元素类型;
区域确定模块2140,用于基于所述伪彩色图像和所述区域划分结果,在所述样本图像中确定包括所述待识别元素类型的图像区域。
在一个可选的实施例中,所述图像获取模块2120还用于对所述预设波段中预设波长对应的第一图像进行赋色处理,得到所述伪彩色图像;或者,将所述预设波段中至少两个预设波长对应的至少两个第一图像进行合成处理,对合成的图像进行赋色处理,得到所述伪彩色图像。
在一个可选的实施例中,所述图像获取模块2120还用于根据所述至少两个预设波长,确定所述至少两个预设波长分别对应的至少两个第一图像,其中,第i个预设波长对应第i个第一图像,i为正整数;对所述至少两个第一图像进行合成处理,得到候选图像;对所述候选图像进行赋色处理,得到所述伪彩色图像。
在一个可选的实施例中,所述图像获取模块2120还用于对所述至少两个第一图像对应像素点的像素值进行平均处理,得到所述对应像素点的第二像素值;基于各像素点对应的第二像素值确定所述候选图像。
在一个可选的实施例中,所述图像获取模块2120还用于基于所述候选图像中像素点的亮度值,对所述候选图像中的像素点进行亮度分级,确定至少两个亮度级别;对所述至少两个亮度级别分别赋色,得到所述伪彩色图像。
如图22所示,在一个可选的实施例中,所述区域划分模块2130包括:
确定单元2131,用于将所述样本图像通过预先训练得到的图像划分模型,确定所述元素类型的差异表示;
划分单元2132,用于基于所述元素类型的差异表示,对所述样本图像进行区域划分,确定所述样本图像对应的所述区域划分结果。
在一个可选的实施例中,所述样本图像为具有光谱信息的图像;
所述确定单元2131还用于对所述样本图像进行光谱分析,得到光谱分析结果;基于所述光谱分析结果,确定所述样本图像对应的所述元素类型的差异表示。
在一个可选的实施例中,所述装置还用于获取第二图像,所述第二图像是针对所述待分析样本进行采集得到的具有光谱信息的预先标注图像;以所述第二图像对候选划分模型进行训练;响应于对所述候选划分模型的训练达到训练效果,得到图像划分模型,所述图像划分模型用于对所述第一图像进行区域分割。
在一个可选的实施例中,所述样本获取模块2110还用于对所述待分析样本进行推扫式采集操作,得到所述样本图像。
在一个可选的实施例中,所述推扫式采集操作是基于采集设备进行的,所述样本获取模块2110还用于在所述预设波段范围内,采用可调谐滤波器,确定至少一个波长;基于所述采集设备,对所述待分析样本进行推扫式采集操作,获取所述至少一个波长对应的样本图像。
在一个可选的实施例中,所述区域确定模块2140还用于确定所述伪彩色图像与所述区域划分结果中的重叠区域;在所述样本图像中,将所述重叠区域作为包括所述待识别元素类型的所述图像区域。
综上所述,基于预先确定的预设波段,对待分析样本进行采集,得到样本图像,从预设波段中选取至少一个效果较好的预设波长,并根据至少一个预设波长,从样本图像中确定与预设波长对应的第一图像,对第一图像进行处理后,得到伪彩色图像,伪彩色图像能够较准确地体现预设波长的优势。根据样本图像中样本元素类型的差异,对样本图像进行区域划分后得到区域划分结果。结合伪彩色图像进而区域划分结果,确定包括待识别元素类型的图像区域,从而确定待识别区域(如:肿瘤组织)的位置信息。通过上述装置,可以避免仅仅依靠医生的裸眼观测和描述对肿瘤组织的大小、区域等进行判断,降低病理取材的难度,不仅操作较为简便,且成本相对较低。
需要说明的是:上述实施例提供的图像处理装置,仅以上述各功能模块的划分进行举例说明,实际应用中,可以根据需要而将上述功能分配由不同的功能模块完成,即将设备的内部结构划分成不同的功能模块,以完成以上描述的全部或者部分功能。另外,上述实施例提供的图像处理装置与图像处理方法实施例属于同一构思,其具体实现过程详见方法实施例,这里不再赘述。
图23示出了本申请一个示例性实施例提供的服务器的结构示意图。该服务器2300包括中央处理单元(Central Processing Unit,CPU)2301、包括随机存取存储器(Random Access Memory,RAM)2302和只读存储器(Read Only Memory,ROM)2303的系统存储器2304,以及连接系统存储器2304和中央处理单元2301的系统总线2305。服务器2300还包括用于存储操作系统2313、应用程序2314和其他程序模块2315的大容量存储设备2306。
大容量存储设备2306通过连接到系统总线2305的大容量存储控制器(未示出)连接到中央处理单元2301。大容量存储设备2306及其相关联的计算机可读介质为服务器2300提供非易失性存储。
不失一般性,计算机可读介质可以包括计算机存储介质和通信介质。
根据本申请的各种实施例,服务器2300可以通过连接在系统总线2305上的网络接口单元2311连接到网络2312,或者说,也可以使用网络接口单元2311来连接到其他类型的网络或远程计算机系统(未示出)。
上述存储器还包括一个或者一个以上的程序,一个或者一个以上程序存储于存储器中,被配置由CPU执行。
本申请的实施例还提供了一种计算机设备,该计算机设备包括处理器和存储器,该存储器中存储有至少一条指令、至少一段程序、代码集或指令集,至少一条指令、至少一段程序、代码集或指令集由处理器加载并执行以实现上述各方法实施例提供的图像处理方法。
本申请的实施例还提供了一种计算机可读存储介质,该计算机可读存储介质上存储有至少一条指令、至少一段程序、代码集或指令集,至少一条指令、至少一段程序、代码集或指令集由处理器加载并执行,以实现上述各方法实施例提供的图像处理方法。
本申请的实施例还提供了一种计算机程序产品或计算机程序,该计算机程序产品或计算机程序包括计算机指令,该计算机指令存储在计算机可读存储介质中。计算机设备的处理器从计算机可读存储介质读取该计算机指令,处理器执行该计算机指令,使得该计算机设备执行上述实施例中任一所述的图像处理方法。
可选地,该计算机可读存储介质可以包括:只读存储器(ROM,Read Only Memory)、随机存取记忆体(RAM,Random Access Memory)、固态硬盘(SSD,Solid State Drives)或光盘等。其中,随机存取记忆体可以包括电阻式随机存取记忆体(ReRAM,Resistance Random Access Memory)和动态随机存取存储器(DRAM,Dynamic Random Access Memory)。上述本申请实施例序号仅仅为了描述,不代表实施例的优劣。

Claims (15)

  1. An image processing method, performed by a server, the method comprising:
    acquiring a sample image, the sample image comprising an image obtained by collecting a sample to be analyzed within a preset band;
    acquiring, from the sample image, a first image corresponding to at least one preset wavelength in the preset band, to obtain a pseudo-color image;
    performing region division on the sample image according to differences of sample element types in the sample image, to obtain a region division result, the sample element types comprising an element type to be identified;
    determining, in the sample image, an image region comprising the element type to be identified based on the pseudo-color image and the region division result.
  2. The method according to claim 1, wherein the acquiring, from the sample image, a first image corresponding to at least one first wavelength in the preset band to obtain a pseudo-color image comprises:
    performing coloring processing on a first image corresponding to one preset wavelength in the preset band, to obtain the pseudo-color image;
    or,
    synthesizing at least two first images corresponding to at least two preset wavelengths in the preset band, and performing coloring processing on the synthesized image, to obtain the pseudo-color image.
  3. The method according to claim 2, wherein the synthesizing at least two first images corresponding to at least two preset wavelengths in the preset band, and performing coloring processing on the synthesized image, to obtain the pseudo-color image comprises:
    determining, according to the at least two preset wavelengths, at least two first images respectively corresponding to the at least two preset wavelengths, wherein an i-th preset wavelength corresponds to an i-th first image, and i is a positive integer;
    performing synthesis processing on the at least two first images to obtain a candidate image;
    performing coloring processing on the candidate image to obtain the pseudo-color image.
  4. The method according to claim 3, wherein the performing synthesis processing on the at least two first images to obtain a candidate image comprises:
    averaging first pixel values of corresponding pixels of the at least two first images to obtain a second pixel value of each corresponding pixel;
    determining the candidate image based on the second pixel value corresponding to each pixel.
  5. The method according to claim 3, wherein the performing coloring processing on the candidate image to obtain the pseudo-color image comprises:
    performing brightness grading on pixels in the candidate image based on brightness values of the pixels in the candidate image, to determine at least two brightness levels;
    coloring the at least two brightness levels respectively to obtain the pseudo-color image.
  6. The method according to any one of claims 1 to 5, wherein the performing region division on the sample image according to differences of sample element types in the sample image, to obtain a region division result comprises:
    passing the sample image through a pre-trained image division model to determine a difference representation of the element types;
    performing region division on the sample image based on the difference representation of the element types, to determine the region division result corresponding to the sample image.
  7. The method according to claim 6, wherein the sample image is an image with spectral information; and
    the passing the sample image through a pre-trained image division model to determine a difference representation of the element types comprises:
    performing spectral analysis on the sample image to obtain a spectral analysis result;
    determining, based on the spectral analysis result, the difference representation of the element types corresponding to the sample image.
  8. The method according to claim 7, wherein the method further comprises:
    acquiring a second image, the second image being a pre-annotated image with spectral information obtained by collecting the sample to be analyzed;
    training a candidate division model with the second image;
    obtaining an image division model in response to the training of the candidate division model achieving a training effect, the image division model being used to perform region segmentation on the first image.
  9. The method according to any one of claims 1 to 5, wherein the acquiring a sample image comprises:
    performing a push-broom acquisition operation on the sample to be analyzed to obtain the sample image.
  10. The method according to claim 9, wherein the push-broom acquisition operation is performed based on an acquisition device; and
    the acquiring a sample image comprises:
    determining at least one preset wavelength within the preset band by using a tunable filter;
    performing, based on the acquisition device, a push-broom acquisition operation on the sample to be analyzed, to acquire a sample image corresponding to the at least one preset wavelength.
  11. The method according to any one of claims 1 to 5, wherein the determining, in the sample image, an image region comprising the element type to be identified based on the pseudo-color image and the region division result comprises:
    determining an overlapping region between the pseudo-color image and the region division result;
    taking, in the sample image, the overlapping region as the image region comprising the element type to be identified.
  12. An image processing apparatus, the apparatus comprising:
    a sample acquisition module, configured to acquire a sample image, the sample image comprising an image obtained by collecting a sample to be analyzed within a preset band;
    an image acquisition module, configured to acquire, from the sample image, a first image corresponding to at least one preset wavelength in the preset band, to obtain a pseudo-color image;
    a region division module, configured to perform region division on the sample image according to differences of sample element types in the sample image, to obtain a region division result, the sample element types comprising an element type to be identified;
    a region determination module, configured to determine, in the sample image, an image region comprising the element type to be identified based on the pseudo-color image and the region division result.
  13. A computer device, comprising a processor and a memory, the memory storing at least one program, the at least one program being loaded and executed by the processor to implement the image processing method according to any one of claims 1 to 11.
  14. A computer-readable storage medium, storing at least one program, the at least one program being loaded and executed by a processor to implement the image processing method according to any one of claims 1 to 11.
  15. A computer program product, comprising a computer program or instructions, which, when executed by a processor, implements the image processing method according to any one of claims 1 to 11.
PCT/CN2022/132171 2022-01-25 2022-11-16 图像处理方法、装置、设备、可读存储介质及程序产品 WO2023142615A1 (zh)

Priority Applications (1)

Application Number Priority Date Filing Date Title
US18/224,201 US20230368379A1 (en) 2022-01-25 2023-07-20 Image processing method and apparatus

Applications Claiming Priority (2)

Application Number Priority Date Filing Date Title
CN202210086842.8 2022-01-25
CN202210086842.8A CN114445362A (zh) 2022-01-25 2022-01-25 图像处理方法、装置、设备、可读存储介质及程序产品

Related Child Applications (1)

Application Number Title Priority Date Filing Date
US18/224,201 Continuation US20230368379A1 (en) 2022-01-25 2023-07-20 Image processing method and apparatus

Publications (1)

Publication Number Publication Date
WO2023142615A1 true WO2023142615A1 (zh) 2023-08-03

Family

ID=81369562

Family Applications (1)

Application Number Title Priority Date Filing Date
PCT/CN2022/132171 WO2023142615A1 (zh) 2022-01-25 2022-11-16 图像处理方法、装置、设备、可读存储介质及程序产品

Country Status (3)

Country Link
US (1) US20230368379A1 (zh)
CN (1) CN114445362A (zh)
WO (1) WO2023142615A1 (zh)

Families Citing this family (5)

* Cited by examiner, † Cited by third party
Publication number Priority date Publication date Assignee Title
CN114445362A (zh) * 2022-01-25 2022-05-06 腾讯科技(深圳)有限公司 图像处理方法、装置、设备、可读存储介质及程序产品
CN115236015B (zh) * 2022-07-21 2024-05-03 华东师范大学 基于高光谱成像技术的穿刺样本病理分析系统及方法
CN116798583A (zh) * 2023-06-28 2023-09-22 华东师范大学 病理组织宏观信息采集分析系统及其分析方法
CN116956139A (zh) * 2023-08-04 2023-10-27 深圳优立全息科技有限公司 一种基于红外波段的设备关联方法及相关装置
CN117274236B (zh) * 2023-11-10 2024-03-08 山东第一医科大学第一附属医院(山东省千佛山医院) 基于高光谱图像的尿液成分异常检测方法及系统

Citations (5)

* Cited by examiner, † Cited by third party
Publication number Priority date Publication date Assignee Title
CN110555835A (zh) * 2019-09-04 2019-12-10 郑州大学 一种脑片图像区域划分方法及装置
US20200167910A1 (en) * 2018-11-28 2020-05-28 International Business Machines Corporation Recognizing pathological images captured by alternate image capturing devices
CN112907581A (zh) * 2021-03-26 2021-06-04 山西三友和智慧信息技术股份有限公司 一种基于深度学习的mri多类脊髓肿瘤分割方法
CN113450305A (zh) * 2020-03-26 2021-09-28 太原理工大学 医疗图像的处理方法、系统、设备及可读存储介质
CN114445362A (zh) * 2022-01-25 2022-05-06 腾讯科技(深圳)有限公司 图像处理方法、装置、设备、可读存储介质及程序产品


Also Published As

Publication number Publication date
CN114445362A (zh) 2022-05-06
US20230368379A1 (en) 2023-11-16

Similar Documents

Publication Publication Date Title
WO2023142615A1 (zh) 图像处理方法、装置、设备、可读存储介质及程序产品
AU2019203346B2 (en) Optical detection of skin disease
US11984217B2 (en) Method and apparatus for processing histological image captured by medical imaging device
US11257213B2 (en) Tumor boundary reconstruction using hyperspectral imaging
CN110033456B (zh) 一种医疗影像的处理方法、装置、设备和系统
JP6885564B2 (ja) 腫瘍および/または健常組織の非侵襲的検出方法およびハイパースペクトルイメージング装置
US20150230875A1 (en) Method and system for providing recommendation for optimal execution of surgical procedures
Garnavi Computer-aided diagnosis of melanoma
JP7427289B2 (ja) 生体細胞解析装置、生体細胞解析システム、生体細胞解析プログラムおよび生体細胞解析方法
CN113450305B (zh) 医疗图像的处理方法、系统、设备及可读存储介质
Domingues et al. Computer vision in esophageal cancer: a literature review
Aggarwal et al. Applications of multispectral and hyperspectral imaging in dermatology
CN115049666A (zh) 基于彩色小波协方差深度图模型的内镜虚拟活检装置
Gavrilov et al. Deep learning based skin lesions diagnosis
CN116916812A (zh) 用于评估组织重塑的系统和方法
Bochko et al. Lower extremity ulcer image segmentation of visual and near‐infrared imagery
Li Hyperspectral imaging technology used in tongue diagnosis
WO2019092723A1 (en) System and method for determining pathological status of a laryngopharyngeal area in a patient
EP3023936B1 (en) Diagnostic apparatus and image processing method in the same apparatus
Sanchez et al. A new system of computer-aided diagnosis of skin lesions
Suárez et al. Non-invasive Melanoma Diagnosis using Multispectral Imaging.
CN114494188A (zh) 病理样本的选取方法、装置、设备、存储介质及程序产品
CN117274146A (zh) 图像处理方法、装置、设备、存储介质及程序产品
Mukku et al. A Specular Reflection Removal Technique in Cervigrams
Zhang What’s wrong with the tongue: color grading quantization based on hyperspectral images

Legal Events

Date Code Title Description
121 Ep: the epo has been informed by wipo that ep was designated in this application

Ref document number: 22923409

Country of ref document: EP

Kind code of ref document: A1