WO2023103467A1 - Image processing method, apparatus and device - Google Patents

Image processing method, apparatus and device

Info

Publication number
WO2023103467A1
Authority
WO
WIPO (PCT)
Prior art keywords
image
target
boundary
pixel
detected
Prior art date
Application number
PCT/CN2022/115375
Other languages
English (en)
French (fr)
Inventor
王晶
贺光琳
周璐璐
Original Assignee
杭州海康慧影科技有限公司
Priority date
Filing date
Publication date
Application filed by 杭州海康慧影科技有限公司
Publication of WO2023103467A1

Links

Images

Classifications

    • G PHYSICS
    • G06 COMPUTING; CALCULATING OR COUNTING
    • G06T IMAGE DATA PROCESSING OR GENERATION, IN GENERAL
    • G06T5/00 Image enhancement or restoration
    • G06T5/50 Image enhancement or restoration by the use of more than one image, e.g. averaging, subtraction
    • G06T7/00 Image analysis
    • G06T7/10 Segmentation; Edge detection
    • G06T7/12 Edge-based segmentation

Definitions

  • the present application relates to the field of medical technology, and in particular to an image processing method, device and equipment.
  • Endoscopes are a commonly used medical device consisting of a light guide structure and a set of lenses. After the endoscope enters the inside of the target object, it can be used to collect a visible light image and a fluorescence image of a specified position inside the target object, so as to generate a fused image based on the visible light image and the fluorescence image.
  • The fused image can clearly display the normal tissue and diseased tissue at the specified position inside the target object, that is, the normal tissue and the diseased tissue at the specified position inside the target object can be distinguished based on the fused image, so that the target object can be examined and treated based on the fused image and an accurate decision can be made about which tissues need to be removed.
  • However, the fluorescence image is superimposed on the visible light image for display, which may cause some areas of the visible light image to be blocked, creating visual obstruction and resulting in a poor fusion effect: the fused image cannot clearly display the normal tissue and diseased tissue at the specified position inside the target object, which affects the quality and efficiency of the doctor's operation and leads to a poor user experience.
  • the present application provides an image processing method to improve the quality and efficiency of doctors' operations.
  • The present application provides an image processing method, the method comprising: acquiring a visible light image and a fluorescence image corresponding to a designated position inside a target object, wherein the designated position includes diseased tissue and normal tissue; determining, from an image to be detected, a boundary to be cut corresponding to the diseased tissue, wherein the image to be detected is the fluorescence image, or the image to be detected is a fusion image of the visible light image and the fluorescence image; and generating a target image, the target image including the visible light image and the boundary to be cut.
  • The present application provides an image processing device, which includes: an acquisition module, configured to acquire a visible light image and a fluorescence image corresponding to a specified position inside a target object, wherein the specified position includes diseased tissue and normal tissue; a determination module, configured to determine, from an image to be detected, a boundary to be cut corresponding to the diseased tissue, wherein the image to be detected is the fluorescence image, or the image to be detected is a fusion image of the visible light image and the fluorescence image; and a generating module, configured to generate a target image, the target image including the visible light image and the boundary to be cut.
  • The present application provides an image processing device, including a processor and a machine-readable storage medium, where the machine-readable storage medium stores machine-executable instructions that can be executed by the processor; the processor executes the machine-executable instructions to implement the image processing method disclosed in the above examples of the present application.
  • the present application provides a machine-readable storage medium on which computer instructions are stored, and when the computer instructions are invoked by a processor, the processor executes the above-mentioned image processing method.
  • In the above solutions, the boundary to be cut corresponding to the diseased tissue (that is, the boundary of the lesion area) can be determined, and the target image is generated based on the visible light image and the boundary to be cut, that is, the boundary to be cut is superimposed on the visible light image for display. Because the boundary to be cut, rather than the fluorescence image, is superimposed on the visible light image, the problem of large areas of the visible light image being blocked is avoided, the problem of the visible light image being blocked by fluorescence development is improved, and visual obstruction is avoided or alleviated. The target image therefore has a better effect: it can clearly display the normal tissue and diseased tissue at the specified position inside the target object, the quality and efficiency of the doctor's operation are improved, and the user experience is better. Since the target image contains the boundary to be cut, the doctor can know the boundary to be cut corresponding to the diseased tissue and can cut along it, improving the quality and efficiency of the operation.
  • FIG. 1 is a schematic structural view of an endoscope system in an embodiment of the present application
  • FIG. 2 is a schematic diagram of the functional structure of an endoscope system in an embodiment of the present application.
  • FIG. 3A is a schematic diagram of a visible light image in an embodiment of the present application.
  • Fig. 3B is a schematic diagram of a fluorescent image in an embodiment of the present application.
  • FIG. 3C is a schematic diagram of a fused image in an embodiment of the present application.
  • FIG. 4 is a schematic flow diagram of an image processing method in an embodiment of the present application.
  • FIG. 5A is a schematic diagram of training and testing based on a target segmentation model in an embodiment of the present application
  • FIG. 5B is a schematic diagram of the network structure of the segmentation model in an embodiment of the present application.
  • Fig. 5C is a schematic diagram of a sample image and a calibration mask image in an embodiment of the present application.
  • Fig. 5D is a schematic diagram of a target image in an embodiment of the present application.
  • FIG. 6A is a schematic structural diagram of an endoscope system in an embodiment of the present application.
  • FIG. 6B is a schematic diagram of area contour detection in an embodiment of the present application.
  • FIG. 6C is a schematic diagram of contour detection using region segmentation in an embodiment of the present application.
  • FIG. 7 is a schematic structural diagram of an image processing device in an embodiment of the present application.
  • Although the terms first, second, and third may be used in the embodiments of the present application to describe various information, such information should not be limited to these terms. These terms are only used to distinguish information of the same type from one another. For example, without departing from the scope of the present application, first information may also be called second information, and similarly, second information may also be called first information. Furthermore, depending on the context, the word "if" as used herein may be interpreted as "when" or "upon" or "in response to determining".
  • FIG. 1 is a schematic structural diagram of an endoscope system, which may include an endoscope, a light source, a camera system host, a display device and a storage device; the display device and the storage device are external devices.
  • the endoscope system shown in FIG. 1 is just an example of an endoscope system, and the structure is not limited.
  • The endoscope can be inserted into a specified position inside the target object (for example, a subject such as a patient), that is, the position to be inspected, i.e. the area to be inspected inside the patient; there is no limit to this specified position. The endoscope collects an image of the specified position inside the target object and outputs the image to the display device and the storage device.
  • Users (for example, medical personnel) can check abnormal parts such as bleeding parts and tumor parts at designated positions inside the target object by observing the images displayed on the display device, and can perform postoperative review and surgical training by accessing the images stored in the storage device.
  • the endoscope can collect the image of the specified position inside the target object, and input the image to the host computer of the camera system.
  • the light source can provide a light source for the endoscope, that is, illuminating light is emitted from the front end of the endoscope, so that the endoscope can collect relatively clear images inside the target object.
  • the camera system host can input the image to the storage device, and the storage device stores the image.
  • The user can access the images in the storage device, or access the video in the storage device (a video composed of a large number of images).
  • the host computer of the camera system can also input the image to the display device, and the display device displays the image, and the user can observe the image displayed by the display device in real time.
  • FIG. 2 is a schematic diagram of the functional structure of the endoscope system
  • the endoscope may include an imaging optical system, an imaging unit, a processing unit and an operating unit.
  • the imaging optical system is used to condense the light from the observation site, and the imaging optical system is composed of one or more lenses.
  • the imaging unit is used to photoelectrically convert the light received from the imaging optical system to generate image data.
  • The imaging unit is composed of sensors such as CMOS (Complementary Metal Oxide Semiconductor) or CCD (Charge Coupled Device) sensors.
  • the processing unit is used for converting image data into digital signals, and sending the converted digital signals (such as pixel values of each pixel point) to the host computer of the camera system.
  • the operating unit may include but not limited to switches, buttons, and touch panels, and is used to receive indication signals for switching actions of the endoscope and light sources, and output the indication signals to the camera system host.
  • the light source may include an illumination control unit and an illumination unit.
  • the illumination control unit is used for receiving an indication signal from the camera system host, and based on the indication signal, controls the illumination unit to provide illumination light to the endoscope.
  • the camera system host is used to process the image data received from the endoscope and transmit it to a display device and a storage device, which are external devices of the camera system host.
  • the camera system host may include an image input unit, an image processing unit, an intelligent processing unit, a video encoding unit, a control unit and an operation unit.
  • the image input unit is used for receiving signals sent by the endoscope, and transmitting the received signals to the image processing unit.
  • The image processing unit is used to perform ISP (Image Signal Processing) operations on the image input by the image input unit, including but not limited to brightness transformation, sharpening, fluorescent dyeing, scaling, etc.; the image processed by the image processing unit is transmitted to the intelligent processing unit, the video encoding unit or the display device.
  • The intelligent processing unit intelligently analyzes the image, including but not limited to scene classification based on deep learning (such as thyroid surgery scenes, hepatobiliary surgery scenes and ENT surgery scenes), instrument head detection, gauze detection and dense fog classification (dense fog classification refers to the fact that using an electric knife to cut tissue during surgery produces smoke, and the smoke affects the field of vision).
  • The image processed by the intelligent processing unit is transmitted to the image processing unit or the video encoding unit, and the image processing unit's processing methods for the image processed by the intelligent processing unit include but are not limited to brightness transformation, overlapping frame (overlapping frame refers to superimposing a frame pattern on the image, such as a recognition frame, to mark the recognition result of the target, that is, the overlapping frame acts as a mark and prompt) and zooming.
  • the video encoding unit is used to encode and compress the image, and transmit it to the storage device.
  • the control unit is used to control each module of the endoscope system, including but not limited to the lighting mode of the light source, the image processing mode, the intelligent processing mode and the video encoding mode, etc.
  • the operation unit may include but not limited to a switch, a button and a touch panel for receiving an external indication signal and outputting the received indication signal to the control unit.
  • a white light endoscope and a fluorescence endoscope can be inserted into a designated position inside the target object and images of the designated position inside the target object can be collected.
  • An image collected by a white light endoscope is called a visible light image (also called a white light image), and an image collected by a fluorescence endoscope is called a fluorescence image.
  • A fluorescent contrast agent, such as ICG (Indocyanine Green), can be injected into the target object. The diseased tissue absorbs more of the fluorescent contrast agent and generates fluorescence, so that the diseased tissue is highlighted in the fluorescence image collected by the fluorescence endoscope. Based on the fluorescence image, normal tissue and diseased tissue can be distinguished, helping medical staff to accurately distinguish diseased tissue.
  • the visible light image can be an image generated based on visible light, as shown in Figure 3A, which is a schematic diagram of a visible light image at a specified position inside the target object
  • the fluorescence image can be an image generated based on fluorescence
  • Figure 3B is a schematic representation of a fluorescent image of a specified location inside the object of interest.
  • visible light images and fluorescence images of specified positions inside the target object can be collected, that is, the camera system host can obtain visible light images and fluorescence images.
  • a fusion image can be generated based on the visible light image and the fluorescence image.
  • the fusion image can clearly display the normal tissue and the diseased tissue at the specified position inside the target object, and the normal tissue at the specified position inside the target object can be distinguished based on the fusion image. and diseased tissue.
  • During use, the fluorescence endoscope can work in a positive development mode or a negative development mode.
  • In the positive development mode, fluorescence appears at the position of the diseased tissue, that is, the developed area of the fluorescence image corresponds to the diseased tissue. In the negative development mode, fluorescence appears at positions other than the diseased tissue and does not appear at the position of the diseased tissue, that is, the non-developed area of the fluorescence image corresponds to the diseased tissue. For the positive development mode, an injection method of the fluorescent contrast agent that matches the positive development mode needs to be adopted, ensuring that the diseased tissue is developed and the normal tissue is not developed. For the negative development mode, an injection method of the fluorescent contrast agent that matches the negative development mode needs to be adopted, ensuring that the diseased tissue is not developed and the normal tissue is developed.
  • the fluorescence image on the upper side in Figure 3C is the fluorescence image in the positive development mode.
  • the developed area of the fluorescence image corresponds to the diseased tissue
  • the non-developed area of the fluorescence image corresponds to the normal tissue.
  • By fusing this fluorescence image with the visible light image, the fused image can be obtained. In the fused image, the region corresponding to the diseased tissue is a developed region, and the region corresponding to the diseased tissue can be dyed so that medical staff can observe the region corresponding to the diseased tissue.
  • the fluorescence image on the lower side in Figure 3C is the fluorescence image in the negative development mode.
  • the developed area of the fluorescence image corresponds to the normal tissue
  • the non-developed area of the fluorescence image corresponds to the diseased tissue.
  • By fusing this fluorescence image with the visible light image, the fused image can be obtained. In the fused image, the area corresponding to the diseased tissue is a non-developed area and the area corresponding to the normal tissue is a developed area; the area corresponding to the normal tissue can be dyed to highlight the diseased tissue.
  • For example, it can be pre-configured to use the positive development mode to collect the fluorescence image; for the fluorescence image collected in the positive development mode, refer to the fluorescence image shown on the upper side of FIG. 3C.
  • Alternatively, it can be pre-configured to use the negative development mode to collect the fluorescence image; for the fluorescence image collected in the negative development mode, refer to the fluorescence image shown on the lower side of FIG. 3C.
  • the developed area of the fluorescent image can be superimposed on the visible light image to obtain a fused image, so that the position corresponding to the developed area on the visible light image is blocked, resulting in visual distraction.
  • the non-developed area of the fluorescent image can be superimposed on the visible light image to obtain a fused image, so that the position corresponding to the non-developed area on the visible light image will be blocked, causing visual interference.
  • In view of this, the embodiment of the present application proposes an image processing method, which can determine the boundary to be cut corresponding to the diseased tissue (that is, the boundary to be cut of the lesion area) and superimpose the boundary to be cut of the lesion area on the visible light image for display, in order to improve the problem of the visible light image being blocked by fluorescence development and avoid or reduce visual interference. The doctor can know the boundary to be cut of the lesion area based on the image, and can cut along the boundary of the lesion area (the developed area in the positive development mode or the non-developed area in the negative development mode), improving the quality and efficiency of the doctor's operation.
  • FIG. 4 is a schematic flow chart of the image processing method.
  • the method can include:
  • Step 401 Acquire a visible light image and a fluorescence image corresponding to a specified position inside the target object.
  • the collection time of the visible light image and the collection time of the fluorescence image may be the same.
  • the specified position inside the target object may include diseased tissue and normal tissue
  • the visible light image may include the area corresponding to the diseased tissue and the area corresponding to the normal tissue
  • The fluorescence image may include the area corresponding to the diseased tissue and the area corresponding to the normal tissue.
  • FIG. 3A is a schematic diagram of a visible light image of a specified location inside a target object
  • FIG. 3B is a schematic diagram of a fluorescence image of a specified location inside a target object.
  • The development mode can be a positive development mode or a negative development mode. If the development mode is the positive development mode, the fluorescence image acquired at the specified position inside the target object is a fluorescence image corresponding to the positive development mode, that is, the developed area of the fluorescence image corresponds to the diseased tissue and the non-developed area of the fluorescence image corresponds to the normal tissue. If the development mode is the negative development mode, the fluorescence image is a fluorescence image corresponding to the negative development mode, that is, the developed area of the fluorescence image corresponds to normal tissue and the non-developed area of the fluorescence image corresponds to diseased tissue.
  • the visible light image at the specified position inside the target object may be collected through a white light endoscope
  • the visible light image may include an area corresponding to normal tissue and an area corresponding to diseased tissue.
  • the fluorescence image may include an area corresponding to normal tissue and an area corresponding to diseased tissue.
  • the visible light image and the fluorescence image corresponding to the specified position inside the target object can be obtained.
  • the visible light image can include the area corresponding to the lesion tissue and the area corresponding to the normal tissue
  • The fluorescence image can include the area corresponding to the diseased tissue and the area corresponding to the normal tissue.
  • Step 402. Determine the boundary to be cut corresponding to the lesion tissue from the image to be detected; wherein, the image to be detected is a fluorescence image, or the image to be detected is a fusion image of a visible light image and a fluorescence image.
  • the following steps may be used to determine the boundary to be cut:
  • Step 4021 Determine the target area corresponding to the diseased tissue from the image to be detected.
  • the target area may be an area in the image to be detected, and the target area in the image to be detected corresponds to the diseased tissue.
  • the image to be detected may be a fluorescence image, in this case, a target area corresponding to the diseased tissue is determined from the fluorescence image, and the target area is an area corresponding to the diseased tissue in the fluorescence image.
  • Alternatively, the image to be detected may be a fusion image of the visible light image and the fluorescence image; in this case, a fusion image is generated based on the visible light image and the fluorescence image, and the target area corresponding to the diseased tissue is determined from the fusion image, the target area being the region corresponding to the diseased tissue in the fusion image.
  • When generating a fusion image based on the visible light image and the fluorescence image, the following method may be used (but is not limited to): First, perform contrast enhancement on the fluorescence image to enhance its contrast and obtain a contrast-enhanced fluorescence image; the contrast enhancement method may include, but is not limited to, histogram equalization, local contrast enhancement, etc., and there is no limit to the contrast enhancement method. Then, perform color mapping on the contrast-enhanced fluorescence image, that is, dye the fluorescence image, to obtain a color-mapped fluorescence image; when the fluorescence image is dyed, different fluorescence brightness values in the fluorescence image can correspond to different hues and saturations, and there is no restriction on the color mapping method. Finally, fuse the color-mapped fluorescence image and the visible light image to obtain the fused image (that is, an image obtained by dyeing the visible light image); for example, the color-mapped fluorescence image can be superimposed on the visible light image for display, so as to obtain the fusion image, that is, the visible light image is dyed based on the fluorescence image, thereby realizing the effect of displaying the developed area of the fluorescence image on the visible light image.
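  • As an illustration only (not part of the application), the following Python sketch shows one way the fusion procedure described above could look, assuming an 8-bit grayscale fluorescence image and a BGR visible light image; the histogram equalization, the JET colormap and the intensity-weighted overlay are assumed choices rather than details taken from the application.

```python
import cv2
import numpy as np

def fuse_images(visible_bgr: np.ndarray, fluorescence_gray: np.ndarray) -> np.ndarray:
    """Sketch of the fusion described above: enhance contrast, color-map, then overlay."""
    # Contrast enhancement (histogram equalization is one of the methods mentioned).
    enhanced = cv2.equalizeHist(fluorescence_gray)

    # Color mapping: different fluorescence brightness maps to different hue/saturation.
    colored = cv2.applyColorMap(enhanced, cv2.COLORMAP_JET)  # illustrative colormap choice

    # Overlay the color-mapped fluorescence on the visible light image, weighting each
    # pixel by its fluorescence intensity so non-developed areas remain mostly unchanged.
    alpha = (enhanced.astype(np.float32) / 255.0)[..., None]
    fused = (1.0 - alpha) * visible_bgr.astype(np.float32) + alpha * colored.astype(np.float32)
    return fused.astype(np.uint8)
```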
  • FIG. 3C shows an example of generating a fusion image based on a visible light image and a fluorescence image.
  • That is, the fluorescence image can be used as the image to be detected and the target area corresponding to the diseased tissue can be determined from it; or a fusion image can be generated based on the visible light image and the fluorescence image, the fusion image can be used as the image to be detected, and the target area corresponding to the diseased tissue can be determined from the image to be detected.
  • the following steps can be used to determine the target area corresponding to the lesion tissue from the image to be detected.
  • The following steps are just examples, and the determination method is not limited, as long as the target area corresponding to the diseased tissue can be obtained.
  • Step 40211 based on the pixel value corresponding to each pixel in the image to be detected, select the target pixel corresponding to the diseased tissue from all the pixels in the image to be detected.
  • In the image to be detected (such as the fluorescence image or the fusion image), the brightness of the developed area is relatively high and the brightness of the non-developed area is relatively low; therefore, based on the pixel value corresponding to each pixel in the image to be detected, the target pixels corresponding to the diseased tissue can be selected.
  • In the positive development mode, the diseased tissue corresponds to the developed area and the normal tissue corresponds to the non-developed area, that is, the brightness value corresponding to the diseased tissue is relatively large and the brightness value corresponding to the normal tissue is relatively small; therefore, pixels with large brightness values can be used as target pixels. In the negative development mode, the diseased tissue corresponds to the non-developed area and the normal tissue corresponds to the developed area, that is, the brightness value corresponding to the diseased tissue is relatively small and the brightness value corresponding to the normal tissue is relatively large; therefore, pixels with small brightness values can be used as target pixels.
  • If the fluorescence image is a fluorescence image in positive development mode (that is, a fluorescence image acquired in the positive development mode, in which the developed area of the fluorescence image corresponds to the diseased tissue), then for each pixel in the image to be detected (i.e. the fluorescence image or the fusion image): if the pixel value (such as the brightness value) corresponding to the pixel is greater than a first threshold, the pixel is determined to be a target pixel; if the pixel value corresponding to the pixel is not greater than the first threshold, the pixel is determined not to be a target pixel. In this way, target pixels can be selected from all the pixels in the image to be detected, and there may be multiple target pixels; a target pixel is a pixel in the image to be detected whose pixel value is greater than the first threshold.
  • For example, the target pixels in the image to be detected may be determined by binarization; binarization sets the pixel value of each pixel in the image to 0 or 255. For example, if the value range of the pixel value is 0-255, an appropriate binarization threshold can be set: if the pixel value of a pixel is greater than the binarization threshold, the pixel value of the pixel is set to 255; if the pixel value of the pixel is not greater than the binarization threshold, the pixel value of the pixel is set to 0. A binarized image is thereby obtained, in which the pixel value of each pixel is 0 or 255.
  • For example, a threshold can be pre-configured as the first threshold; the first threshold can be configured according to experience, without limitation. The first threshold represents the boundary between the developed area and the non-developed area and can be used to distinguish developed and non-developed areas in the image to be detected. Based on this, since the brightness value corresponding to the diseased tissue is relatively large and the brightness value corresponding to the normal tissue is relatively small, pixels with large brightness values can be used as target pixels: if the pixel value corresponding to a pixel is greater than the first threshold, it can be determined that the pixel is a target pixel; if the pixel value corresponding to the pixel is not greater than the first threshold, it can be determined that the pixel is not a target pixel. The pixel value corresponding to a pixel may be the pixel value of the pixel itself, or the local pixel mean value corresponding to the pixel; for example, a sub-block centered on the pixel is obtained, the sub-block includes M pixels (M is a positive integer), and the average of the pixel values of the M pixels is taken as the local pixel mean value corresponding to the pixel.
  • If the fluorescence image is a fluorescence image in negative development mode (that is, a fluorescence image acquired in the negative development mode, in which the non-developed area of the fluorescence image corresponds to the diseased tissue), then for each pixel in the image to be detected (i.e. the fluorescence image or the fusion image): if the pixel value (such as the brightness value) corresponding to the pixel is less than a second threshold, the pixel is determined to be a target pixel. In this way, target pixels can be selected from all the pixels in the image to be detected, that is, the pixels in the image to be detected whose pixel values are smaller than the second threshold.
  • A threshold can be pre-configured as the second threshold, and the second threshold can be configured based on experience, without limitation. The second threshold represents the boundary between the developed area and the non-developed area and can be used to distinguish developed and non-developed areas in the image to be detected. Based on this, since the brightness value corresponding to the diseased tissue is relatively small and the brightness value corresponding to the normal tissue is relatively large, pixels with small brightness values can be used as target pixels.
  • the pixel value corresponding to the pixel is less than the second threshold, it can be determined that the pixel is the target pixel, and if the pixel value corresponding to the pixel is not less than the second threshold, it can be determined that the pixel is not the target pixel.
  • the pixel value corresponding to the pixel may be the pixel value of the pixel itself, or the local pixel mean value corresponding to the pixel.
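  • The threshold-based selection of target pixels described above could be sketched as follows; this is an illustration under assumptions (OpenCV/NumPy, 8-bit images, example threshold values and kernel size), not the application's implementation.

```python
import cv2
import numpy as np

def select_target_pixels(image_gray: np.ndarray, mode: str,
                         first_threshold: int = 128, second_threshold: int = 128,
                         local_mean_ksize: int = 0) -> np.ndarray:
    """Return a binary mask (255 = target pixel corresponding to diseased tissue)."""
    values = image_gray
    if local_mean_ksize > 1:
        # Optionally use the local pixel mean of a sub-block centered on each pixel
        # instead of the raw pixel value, as mentioned above.
        values = cv2.blur(image_gray, (local_mean_ksize, local_mean_ksize))

    if mode == "positive":
        # Positive development mode: diseased tissue is the bright (developed) area,
        # so pixels whose value exceeds the first threshold are target pixels.
        mask = values > first_threshold
    else:
        # Negative development mode: diseased tissue is the dark (non-developed) area,
        # so pixels whose value is below the second threshold are target pixels.
        mask = values < second_threshold
    return mask.astype(np.uint8) * 255
```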
  • Alternatively, the image to be detected can be input to a trained target segmentation model, so that the target segmentation model determines, based on the pixel value corresponding to each pixel in the image to be detected, the predicted label value corresponding to each pixel. The predicted label value corresponding to a pixel is a first value or a second value: the first value indicates that the pixel is a pixel corresponding to the diseased tissue, and the second value indicates that the pixel is not a pixel corresponding to the diseased tissue. A pixel whose predicted label value is the first value is determined as a target pixel.
  • For example, a machine learning algorithm can be used to train the target segmentation model, and the target segmentation model can be used to realize segmentation of the lesion area (that is, the target area corresponding to the diseased tissue), that is, to distinguish the target pixels corresponding to the diseased tissue from all pixels of the image to be detected.
  • Figure 5A relates to the training process and testing process of the object segmentation model.
  • network training is performed based on sample images, calibration information, loss function and network structure, so as to obtain the target segmentation model.
  • In the testing process, the test image (that is, the image to be detected) can be input to the trained target segmentation model, and the target segmentation model performs network inference on the image to be detected to obtain the segmentation result corresponding to the image to be detected, that is, the target pixels corresponding to the diseased tissue are distinguished from the image to be detected.
  • the training process and the testing process may include the following steps:
  • Step S11 obtaining an initial segmentation model, which may be a machine learning model, such as a deep learning model or a neural network model, and the type of the initial segmentation model is not limited.
  • the initial segmentation model may be a classification model, which is used to output the predicted label value corresponding to each pixel in the image, and the predicted label value is the first value or the second value, that is, the output of the initial segmentation model corresponds to two categories, The first value is used to indicate that the pixel point corresponds to the lesion tissue, and the second value is used to indicate that the pixel point does not correspond to the lesion tissue.
  • There is no restriction on the structure of the initial segmentation model, as long as the initial segmentation model can realize the above functions.
  • For the network structure of the initial segmentation model, refer to FIG. 5B: it can include an input layer (for receiving the input image), an encoding network, a decoding network and an output layer (for outputting the segmentation result); the encoding network consists of convolutional layers, and the decoding network consists of convolutional layers and upsampling layers. For example, a U-Net network model can be used as the initial segmentation model.
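  • For illustration, a minimal U-Net-style model with an encoding network, a decoding network and a two-class output layer could be sketched as below; PyTorch is assumed, and the channel counts and depth are illustrative choices rather than values from the application.

```python
import torch
import torch.nn as nn

class TinySegmentationModel(nn.Module):
    """Minimal U-Net-style encoder/decoder producing a 2-class (lesion / non-lesion) map."""
    def __init__(self, in_channels: int = 1, num_classes: int = 2):
        super().__init__()
        # Encoding network: convolution followed by downsampling.
        self.enc1 = nn.Sequential(nn.Conv2d(in_channels, 16, 3, padding=1), nn.ReLU())
        self.down = nn.MaxPool2d(2)
        self.enc2 = nn.Sequential(nn.Conv2d(16, 32, 3, padding=1), nn.ReLU())
        # Decoding network: upsampling followed by convolution.
        self.up = nn.Upsample(scale_factor=2, mode="bilinear", align_corners=False)
        self.dec = nn.Sequential(nn.Conv2d(32 + 16, 16, 3, padding=1), nn.ReLU())
        # Output layer: per-pixel logits for the two label values.
        self.head = nn.Conv2d(16, num_classes, 1)

    def forward(self, x: torch.Tensor) -> torch.Tensor:
        # Assumes the input height and width are even so the skip connection sizes match.
        f1 = self.enc1(x)
        f2 = self.enc2(self.down(f1))
        f = self.dec(torch.cat([self.up(f2), f1], dim=1))  # skip connection, as in U-Net
        return self.head(f)  # shape (N, num_classes, H, W)
```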
  • Step S12 acquiring the sample image and the calibration information corresponding to the sample image, the calibration information including the calibration label value corresponding to each pixel in the sample image.
  • If a pixel corresponds to the diseased tissue, the calibration label value corresponding to the pixel is the first value (such as 1), indicating that the pixel corresponds to the diseased tissue; if a pixel does not correspond to the diseased tissue, the calibration label value corresponding to the pixel is the second value (such as 0), indicating that the pixel does not correspond to the diseased tissue.
  • each sample image is a fluorescence image or a fusion image inside the target object, and the sample images may also be called training images.
  • The calibration information corresponding to the sample image can be manually marked by the user; for example, the user needs to mark the lesion area (that is, the area corresponding to the diseased tissue), i.e. mark the outline of the lesion area.
  • After the marking is completed, a calibration mask image can be output, which is the calibration information corresponding to the sample image. In the calibration mask image, the value on the contour of the lesion area is the first value and the values inside the contour are also the first value, indicating that the pixels on the contour and inside the contour are all pixels corresponding to the diseased tissue; the values outside the contour are all the second value, indicating that the pixels outside the contour are not pixels corresponding to the diseased tissue.
  • As shown in FIG. 5C, the left image can be a sample image (such as a fluorescence image), and the right image can be the calibration mask image corresponding to the sample image, that is, the calibration information corresponding to the sample image; the black pixels are pixels corresponding to the diseased tissue, and the white pixels are pixels not corresponding to the diseased tissue.
  • the sample image and the calibration information corresponding to the sample image can be obtained, and the calibration information corresponding to the sample image can be a calibration mask image.
  • Step S13: Input the sample image to the initial segmentation model, so that the initial segmentation model determines, based on the pixel value corresponding to each pixel in the sample image, the predicted label value corresponding to each pixel and the predicted probability corresponding to the predicted label value; the predicted label value corresponding to a pixel can be the first value or the second value.
  • The working principle of the initial segmentation model is to determine the segmentation result of the sample image based on the pixel value corresponding to each pixel in the sample image. That is, the initial segmentation model can determine, based on the pixel value corresponding to each pixel in the sample image, the predicted label value corresponding to each pixel and the predicted probability corresponding to that predicted label value. For example, the first pixel corresponds to the first value with a predicted probability of 0.8, the second pixel corresponds to the first value with a predicted probability of 0.6, the third pixel corresponds to the second value with a predicted probability of 0.8, and so on.
  • Step S14: Determine the target loss value based on the calibration label value and predicted label value corresponding to each pixel. For example, for each pixel, a loss value corresponding to the pixel is determined based on the calibration label value corresponding to the pixel, the predicted label value corresponding to the pixel, and the predicted probability corresponding to the predicted label value.
  • For example, the loss value can be determined using the following cross-entropy formula: loss = -∑_{c=1}^{M} y_c log(p_c). The cross-entropy loss function is just an example, and the type of loss function is not limited.
  • In the above formula, loss represents the loss value corresponding to the pixel; M represents the number of categories, which is 2 in this embodiment, that is, there are two categories in total; category 1 corresponds to the first value and category 2 corresponds to the second value.
  • y_c is the value corresponding to the calibration label value and can be equal to 0 or 1: if the calibration label value corresponding to the pixel is the first value, then y_1 is 1 and y_2 is 0; if the calibration label value corresponding to the pixel is the second value, then y_1 is 0 and y_2 is 1. p_c is the predicted probability corresponding to category c: if the probability of the pixel corresponding to the first value is p_1, then the probability corresponding to the second value is 1 - p_1.
  • The above formula can be used to calculate the loss value corresponding to each pixel, and then the target loss value can be determined based on the loss values corresponding to all pixels in the sample image; for example, the target loss value is the arithmetic mean of the loss values corresponding to all pixels.
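  • As a hedged illustration, the per-pixel cross-entropy loss and its arithmetic mean over all pixels (the target loss value) could be computed as follows, assuming PyTorch and the two-category setup described above.

```python
import torch
import torch.nn.functional as F

def target_loss(logits: torch.Tensor, calibration_mask: torch.Tensor) -> torch.Tensor:
    """Per-pixel cross-entropy, averaged over all pixels of the sample image.

    logits:           (N, 2, H, W) output of the segmentation model (two categories).
    calibration_mask: (N, H, W) calibration label values, 1 = diseased tissue, 0 = not.
    """
    # F.cross_entropy computes loss = -sum_c y_c * log(p_c) per pixel and, with
    # reduction="mean", returns the arithmetic mean over all pixels (the target loss value).
    return F.cross_entropy(logits, calibration_mask.long(), reduction="mean")
```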
  • Step S15 train the initial segmentation model based on the target loss value to obtain the target segmentation model.
  • the network parameters of the initial segmentation model are adjusted based on the target loss value to obtain the adjusted segmentation model.
  • the goal of network parameter adjustment is to make the target loss value smaller and smaller.
  • If the target loss value does not meet the optimization target, the adjusted segmentation model is used as the initial segmentation model and steps S13-S14 are re-executed; when the target loss value meets the optimization target, the adjusted segmentation model is used as the target segmentation model.
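  • A minimal training-loop sketch for step S15 is shown below, assuming PyTorch; the Adam optimizer, learning rate and fixed number of epochs are assumptions introduced only for illustration, since the application only requires that the target loss value meet the optimization target.

```python
import torch
import torch.nn.functional as F

def train_segmentation_model(model, sample_images, calibration_masks,
                             num_epochs: int = 50, lr: float = 1e-3):
    """Adjust the network parameters so the target loss value decreases, then reuse the
    adjusted model for the next pass (steps S13-S15 repeated).

    sample_images:     iterable of (1, C, H, W) float tensors (fluorescence or fusion images)
    calibration_masks: iterable of (1, H, W) long tensors (1 = diseased tissue, 0 = not)
    """
    optimizer = torch.optim.Adam(model.parameters(), lr=lr)
    for _ in range(num_epochs):
        for image, mask in zip(sample_images, calibration_masks):
            logits = model(image)                                   # step S13: predicted labels/probabilities
            loss = F.cross_entropy(logits, mask, reduction="mean")  # step S14: target loss value
            optimizer.zero_grad()
            loss.backward()                                         # step S15: adjust network parameters
            optimizer.step()
    return model  # used as the target segmentation model once the loss meets the target
```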
  • the target segmentation model is obtained through training, and the testing process is performed based on the target segmentation model.
  • Step S16: In the testing process, after the image to be detected is obtained, the image to be detected can be input to the target segmentation model (for the working principle of the target segmentation model, refer to the working principle of the above-mentioned initial segmentation model), so that the target segmentation model determines, based on the pixel value corresponding to each pixel in the image to be detected, the predicted label value corresponding to each pixel. If the predicted label value corresponding to a pixel is the first value, it is determined that the pixel is a pixel corresponding to the diseased tissue, that is, the pixel is a target pixel.
  • If the predicted label value corresponding to the pixel is the second value, it is determined that the pixel is not a pixel corresponding to the diseased tissue, that is, the pixel is not a target pixel.
  • the pixel point whose predicted label value is the first value can be determined as the target pixel point, so as to find all the target pixel points corresponding to the lesion tissue from the image to be detected.
  • the target pixel points corresponding to the lesion tissue can be selected from all the pixel points of the image to be detected, and there is no restriction on the selection method.
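  • For illustration only, inference with the trained target segmentation model could look like the following sketch; it assumes a PyTorch model whose output channel 1 corresponds to the first value (diseased tissue), and the tensor shapes are assumptions.

```python
import torch

@torch.no_grad()
def predict_target_pixels(model, image_tensor: torch.Tensor) -> torch.Tensor:
    """Run the trained segmentation model on an image to be detected and return a boolean
    mask where True marks pixels whose predicted label value is the first value (lesion)."""
    logits = model(image_tensor)              # (1, 2, H, W) per-pixel class scores
    predicted_labels = logits.argmax(dim=1)   # (1, H, W), values 0 or 1
    return predicted_labels[0] == 1           # first value assumed to be encoded as 1
```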
  • Step 40212 obtain at least one connected domain composed of all target pixels in the image to be detected.
  • For example, after the target pixels are obtained, a connected region detection algorithm can be used to determine the connected domains (i.e. the connected regions of the target pixels) composed of these target pixels; the number of connected domains can be at least one, and the process of determining the connected domains is not limited. The connected region detection algorithm refers to finding and marking each connected domain in the image to be detected, that is to say, based on the connected region detection algorithm, each connected domain in the image to be detected can be known; there is no restriction on this process.
  • Step 40213 Determine the target area corresponding to the diseased tissue from the image to be detected based on the connected domain.
  • The number of connected domains may be at least one. If the number of connected domains is one, that connected domain is used as the target area corresponding to the diseased tissue in the image to be detected. If the number of connected domains is at least two, the connected domain with the largest area can be used as the target area corresponding to the diseased tissue in the image to be detected; other connected domains can also be used as the target area, multiple adjacent connected domains can be used together as the target area corresponding to the diseased tissue in the image to be detected, or all the connected domains can be used as the target area corresponding to the diseased tissue in the image to be detected; this is not limited.
  • In a possible implementation, after the connected domains composed of the target pixels are obtained, morphological processing (such as smoothing the connected domain with a 9*9 filter kernel to obtain a connected domain with smooth edges) and interference removal processing (such as removing isolated points) may also be performed on the connected domains, so as to determine the target area corresponding to the diseased tissue from the image to be detected based on the connected domains after morphological processing and interference removal processing.
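  • Steps 40212-40213 could be sketched as follows, assuming an 8-bit binary mask from the pixel-selection step; the 9x9 kernel and the largest-area rule follow the examples above, while the use of OpenCV morphology and connected-component functions is an assumption made for illustration.

```python
import cv2
import numpy as np

def largest_lesion_region(target_mask: np.ndarray) -> np.ndarray:
    """Return a mask of the target area: the largest connected domain of target pixels,
    after morphological smoothing and removal of isolated points."""
    # Morphological processing: smooth the mask with a 9x9 kernel; opening also removes
    # isolated points (interference removal), closing fills small holes.
    kernel = cv2.getStructuringElement(cv2.MORPH_ELLIPSE, (9, 9))
    smoothed = cv2.morphologyEx(target_mask, cv2.MORPH_OPEN, kernel)
    smoothed = cv2.morphologyEx(smoothed, cv2.MORPH_CLOSE, kernel)

    # Connected region detection: label each connected domain of target pixels.
    num_labels, labels, stats, _ = cv2.connectedComponentsWithStats(smoothed, connectivity=8)
    if num_labels <= 1:  # only background found
        return np.zeros_like(target_mask)

    # Use the connected domain with the largest area as the target area (one of the options above).
    areas = stats[1:, cv2.CC_STAT_AREA]
    largest = 1 + int(np.argmax(areas))
    return np.where(labels == largest, 255, 0).astype(np.uint8)
```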
  • Through step 40211 to step 40213, the target area corresponding to the diseased tissue can be obtained; step 4021 is thus completed, and the subsequent step 4022 is performed based on the target area.
  • Step 4022 Determine the area contour corresponding to the diseased tissue based on the target area corresponding to the diseased tissue. For example, all boundary pixel points in the target area are determined, and an area contour corresponding to the diseased tissue is determined based on all boundary pixel points in the target area, that is, all boundary pixel points form the area contour.
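  • A minimal sketch of step 4022 is shown below; cv2.findContours is one possible way (an assumption, not stated in the application) to collect the boundary pixels of the target area that form the region contour.

```python
import cv2
import numpy as np

def region_contour(target_area_mask: np.ndarray) -> np.ndarray:
    """Return the boundary pixels of the target area as an (N, 2) array of (x, y) points."""
    contours, _ = cv2.findContours(target_area_mask, cv2.RETR_EXTERNAL, cv2.CHAIN_APPROX_NONE)
    if not contours:
        return np.empty((0, 2), dtype=np.int32)
    # Keep the longest contour; its points are the boundary pixels forming the area contour.
    longest = max(contours, key=len)
    return longest.reshape(-1, 2)
```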
  • Step 4023 Determine the boundary to be cut corresponding to the diseased tissue based on the contour of the region.
  • In one example, after the region contour corresponding to the diseased tissue is obtained, the region contour can be directly determined as the boundary to be cut, indicating that cutting along the region contour is required; the region contour can be an irregular shape or any shape, and there is no restriction on the region contour.
  • In another example, after the region contour corresponding to the diseased tissue is obtained, it may also be determined whether there is a coincident boundary between the region contour and the organ contour of the organ corresponding to the diseased tissue. If there is no coincident boundary between the region contour and the organ contour, the region contour is determined as the boundary to be cut, indicating that cutting along the region contour is required. If there is a coincident boundary between the region contour and the organ contour, the non-coincident boundary between the region contour and the organ contour (that is, the remaining boundary in the region contour other than the coincident boundary) is determined as the boundary to be cut, indicating that cutting along the non-coincident boundary is required.
  • the organ contour of the organ corresponding to the diseased tissue may be obtained first.
  • This embodiment does not limit the acquisition method, as long as the organ contour of the organ corresponding to the diseased tissue can be obtained.
  • a deep learning model for recognizing organ contours can be pre-trained, and the training process of the deep learning model is not limited.
  • the above-mentioned visible light images can be input to the deep learning model, and the deep learning model can output the organ contour of the organ corresponding to the diseased tissue.
  • There is no limit to the working principle of the deep learning model, as long as the deep learning model can output the organ contour.
  • After the region contour corresponding to the diseased tissue and the organ contour of the organ corresponding to the diseased tissue are obtained, it can be judged whether there is a coincident boundary between the region contour and the organ contour, that is, whether there is a coincident line segment. If there is no coincident boundary between the region contour and the organ contour, the region contour can be determined as the boundary to be cut.
  • If there is a coincident boundary between the region contour and the organ contour, it means that the edge of the organ is already diseased tissue; for example, the entire left side (or right side, or upper side, or lower side, etc.) of the lesion area coincides with the left edge of the corresponding organ. In this case, the diseased tissue can be completely cut only by cutting along the non-coincident boundary; therefore, the non-coincident boundary between the region contour and the organ contour is determined as the boundary to be cut. The coincident boundary between the region contour and the organ contour is the edge of the organ corresponding to the diseased tissue, and when cutting along the non-coincident boundary, the region corresponding to the coincident boundary is also cut.
  • In the above manner, the boundary to be cut can be determined. The above are just two examples, and the determination method is not limited, as long as the diseased tissue can be completely cut based on the boundary to be cut.
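  • The coincident-boundary check described above could be sketched as follows; the point-distance tolerance used to decide that a region-contour point coincides with the organ contour is a hypothetical definition introduced only for illustration.

```python
import numpy as np

def non_coincident_boundary(region_contour: np.ndarray, organ_contour: np.ndarray,
                            tolerance: float = 2.0) -> np.ndarray:
    """Keep the region-contour points that do NOT coincide with the organ contour."""
    # A region-contour point is treated as coincident if it lies within `tolerance`
    # pixels of some organ-contour point (an assumed definition of "coincident").
    diffs = region_contour[:, None, :] - organ_contour[None, :, :]
    dists = np.sqrt((diffs ** 2).sum(axis=-1)).min(axis=1)
    return region_contour[dists > tolerance]  # these points form the boundary to be cut
```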
  • At this point, step 402 is completed, the boundary to be cut is obtained, and the subsequent step 403 is performed based on the boundary to be cut.
  • Step 403 generating a target image, which may include a visible light image and a boundary to be cut.
  • the to-be-cut boundary can be superimposed on the visible light image to obtain a target image, and the target image includes the visible light image and the to-be-cut boundary.
  • When superimposing the boundary to be cut on the visible light image, the position of the boundary to be cut on the visible light image is the same as the position of the boundary to be cut on the image to be detected.
  • The target image can be displayed on the screen for medical staff to view. Since the boundary to be cut is the boundary of the lesion area, the medical staff can clearly view the boundary of the lesion area from the target image, and the boundary of the lesion area can provide a reference for cutting.
  • generating the target image based on the visible light image and the boundary to be cut may include the following steps:
  • Step 4031 Determine the target boundary feature corresponding to the boundary to be cut.
  • the target boundary feature may include but not limited to target color and/or target line type, and there is no limitation on this target boundary feature, and may be any feature used to display the boundary to be cut.
  • the target color can be any color, such as blue, red, green, etc., and there is no restriction on the target color, as long as the target color is different from the target image itself and can highlight the boundary to be cut.
  • the target line type can be any line type, such as solid line type, dotted line type, etc. There is no restriction on this target line type, as long as the target line type can highlight the boundary to be cut.
  • Step 4032: Generate a target cutting boundary based on the boundary to be cut and the target boundary feature.
  • If the target boundary feature is the target color, color adjustment is performed on the boundary to be cut to obtain the target cutting boundary, and the color of the target cutting boundary is the target color. For example, if the target color is blue, the color of the boundary to be cut is adjusted to blue to obtain a blue target cutting boundary.
  • If the target boundary feature is the target line type, line type adjustment is performed on the boundary to be cut to obtain the target cutting boundary, and the line type of the target cutting boundary is the target line type. For example, if the target line type is a dotted line type, the line type of the boundary to be cut is adjusted to a dotted line type to obtain a dotted-line target cutting boundary.
  • If the target boundary feature includes both the target color and the target line type, the color and line type of the boundary to be cut are adjusted to obtain the target cutting boundary; the color of the target cutting boundary is the target color and the line type of the target cutting boundary is the target line type. For example, if the target color is blue and the target line type is a dotted line type, the color of the boundary to be cut can be adjusted to blue and the line type of the boundary to be cut can be adjusted to a dotted line, obtaining a target cutting boundary whose color is blue and whose line type is a dotted line type.
  • Step 4033 superimposing the target cutting boundary on the visible light image to obtain the target image.
  • the target cutting boundary can be superimposed on the visible light image to obtain the target image, which includes the visible light image and the target cutting boundary.
  • The target image can be displayed on the screen for medical staff to view. Since the target cutting boundary is the boundary of the lesion area and is a highlighted boundary (for example, it is highlighted with the target color and/or the target line type), the medical staff can clearly view the boundary of the lesion area from the target image.
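  • Steps 4031-4033 could be sketched as follows, assuming an OpenCV-style contour from the earlier step; the blue target color, dash length and line thickness are illustrative assumptions rather than values from the application.

```python
import cv2
import numpy as np

def draw_cut_boundary(visible_bgr: np.ndarray, contour: np.ndarray,
                      color=(255, 0, 0), dashed: bool = True, dash_len: int = 10) -> np.ndarray:
    """Superimpose the (color / line-type adjusted) boundary to be cut on the visible light
    image to obtain the target image. `contour` is an (N, 2) int32 array of boundary points."""
    target = visible_bgr.copy()
    if not dashed:
        # Solid line type: draw the whole contour in the target color.
        cv2.polylines(target, [contour.reshape(-1, 1, 2)], True, color, thickness=2)
        return target
    # Dotted/dashed line type: draw every other run of `dash_len` contour points.
    for start in range(0, len(contour), 2 * dash_len):
        segment = contour[start:start + dash_len]
        if len(segment) >= 2:
            cv2.polylines(target, [segment.reshape(-1, 1, 2)], False, color, thickness=2)
    return target
```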
  • In a possible implementation, if the fluorescence image is a fluorescence image in positive development mode and the fluorescence image includes a developed area on the periphery of the boundary to be cut, the developed area on the periphery of the boundary to be cut may be superimposed on the target image, that is, the low-intensity fluorescence developed area on the periphery of the boundary to be cut is retained on the target image.
  • If the fluorescence image is a fluorescence image in negative development mode and the fluorescence image includes a developed area on the inner periphery of the boundary to be cut, the developed area on the inner periphery of the boundary to be cut may be superimposed on the target image, that is, the low-intensity fluorescence developed area on the inner periphery of the boundary to be cut is retained on the target image.
  • For example, if the fluorescence image is a fluorescence image in positive development mode, the developed area of the fluorescence image corresponds to the diseased tissue, the boundary to be cut is the boundary of the target area corresponding to the diseased tissue, and the inner periphery of the boundary to be cut is the developed area. In the fluorescence image, there may also be a developed area on the periphery of the boundary to be cut, but its intensity is lower than that of the developed area on the inner periphery of the boundary to be cut. After the target image is obtained by superimposing the boundary to be cut on the visible light image, the developed area on the periphery of the boundary to be cut may not be superimposed on the target image, or it may be superimposed on the target image, that is, the developed area is expanded outward from the boundary to be cut, and the low-intensity fluorescence developed area on the periphery of the boundary to be cut in the fluorescence image is retained in the target image.
  • For example, if the fluorescence image is a fluorescence image in negative development mode, the developed area of the fluorescence image corresponds to normal tissue and the non-developed area corresponds to diseased tissue, the boundary to be cut is the boundary of the target area corresponding to the diseased tissue, and the periphery of the boundary to be cut is the developed area. In the fluorescence image, there may also be a developed area on the inner periphery of the boundary to be cut, but its intensity is lower than that of the developed area on the periphery of the boundary to be cut. After the target image is obtained by superimposing the boundary to be cut on the visible light image, the developed area on the inner periphery of the boundary to be cut may not be superimposed on the target image, or it may be superimposed on the target image, that is, the developed area is expanded inward from the boundary to be cut, and the low-intensity fluorescence developed area on the inner periphery of the boundary to be cut in the fluorescence image is retained in the target image.
The above process is illustrated with reference to FIG. 5D, in which, for ease of description, the boundary to be cut is taken to be the region contour. In FIG. 5D the image on the left is the visible light image. Among the five images in the upper row, the first image is a fluorescence image in positive development mode. The second image is a fused image of the fluorescence image and the visible light image; the fused image shows the developed area of the fluorescence image (that is, the developed area is superimposed on the visible light image) rather than the region contour. The third image is the target image under contour display 1: it shows the first region contour, i.e. the first region contour is superimposed on the visible light image to obtain the target image, where contour display 1 means that the color of the first region contour is the target color (the target color is not shown in FIG. 5D) and its line type is a solid line. The fourth image is the target image under contour display 2: it shows the second region contour, whose color is the target color and whose line type is a dotted line. The fifth image is the target image under contour display 3: it shows the third region contour together with the low-intensity fluorescence developed area outside that contour.

Among the five images in the lower row of FIG. 5D, the first image is a fluorescence image in negative development mode, and the second image is a fused image of the fluorescence image and the visible light image. The third image is the target image under contour display 1: it shows the fourth region contour, whose color is the target color and whose line type is a solid line. The fourth image is the target image under contour display 2: it shows the fifth region contour, whose color is the target color and whose line type is a dotted line. The fifth image is the target image under contour display 3: it shows the sixth region contour together with the low-intensity fluorescence developed area inside that contour.

Of course, the display manners of the target image in FIG. 5D are only a few examples, and no limitation is imposed on the display manner.
In a possible implementation, regarding the trigger timing of step 401 to step 403: if a display switching command for the fluorescence image is received, the display switching command being used to instruct the display of the boundary to be cut, then step 401 to step 403 can be executed, that is, the boundary to be cut corresponding to the lesion tissue is determined from the image to be detected, the boundary to be cut is superimposed on the visible light image to obtain the target image, and the target image is displayed. If no display switching command for the fluorescence image is received, a fused image may be generated based on the visible light image and the fluorescence image and displayed; this process is not described in detail in this embodiment.

When the operation on the target object enters the resection stage, the medical staff can issue the display switching command for the fluorescence image; once the command is received, the image processing method of this embodiment is used and the boundary to be cut is displayed superimposed on the visible light image.
As can be seen from the above technical solution, in the embodiments of the present application the boundary to be cut corresponding to the lesion tissue (that is, the boundary of the lesion area) can be determined, and the target image is generated based on the visible light image and the boundary to be cut, i.e. the boundary to be cut is superimposed on the visible light image for display. Because the boundary to be cut, rather than the fluorescence image, is superimposed on the visible light image, the problem of large areas of the visible light image being blocked is avoided, the occlusion of the visible light image by fluorescence development is alleviated, and visual interference is avoided or reduced, so the display effect of the target image is better. The target image can clearly show the normal tissue and the lesion tissue at the designated position inside the target object, which improves the quality and efficiency of the doctor's operation and gives a better user experience. Since the boundary to be cut is displayed on the target image, the doctor can know the boundary to be cut corresponding to the lesion tissue and can cut along it, further improving the quality and efficiency of the operation.
The image processing method of the embodiments of the present application is described below with reference to a specific application scenario. As shown in FIG. 6A, the endoscope system may include an image acquisition unit, an image processing unit, and an image display unit. The image acquisition unit is used to acquire endoscopic video, which includes visible light images and fluorescence images. The image processing unit is used to fuse a visible light image and a fluorescence image to obtain a fused image and detect the contour of the lesion area on the fused image to obtain the region contour (that is, the contour of the region corresponding to the lesion tissue), or to detect the contour of the lesion area directly on the fluorescence image to obtain the region contour. The image display unit is used to superimpose the region contour on the visible light image for display and provide the result to the doctor.
1. Image acquisition unit. Visible light reflected from the designated position of the target object and the excited fluorescence are collected by two sensors located at the front end of the scope tube, image signals are generated and transmitted to the back end, and the image signals are then processed to obtain endoscopic images. The two sensors are a sensor for collecting visible light (referred to as the white light endoscope) and a sensor for collecting fluorescence (referred to as the fluorescence endoscope). The white light endoscope collects visible light and generates a corresponding white light image signal, and the endoscopic image generated from that signal is the visible light image. The fluorescence endoscope collects fluorescence and generates a corresponding fluorescence image signal, and the endoscopic image generated from that signal is the fluorescence image. In summary, the image acquisition unit can acquire visible light images and fluorescence images.
2. Image processing unit. As shown in FIG. 6B, the image processing unit may obtain a real-time video stream, which may include visible light images and fluorescence images. On the basis of the fused image (i.e. the image obtained by fusing the visible light image and the fluorescence image) or of the fluorescence image itself, the image processing unit can detect the contour of the lesion area, obtain the region contour, and save the region contour of the lesion area.

In one feasible implementation, as shown in FIG. 6C, the region segmentation method can be used for contour detection. For example, for an endoscopic image (such as a fluorescence image or a fused image), a threshold is set and local mean binarization is used to obtain a binarized image; morphological processing is then applied (for example, smoothing with a 9*9 filter kernel to obtain smooth edges and remove interfering islands), yielding the region contour of the lesion area. FIG. 6C takes the fluorescence image in positive development mode as an example; for a fluorescence image in negative development mode the implementation is the same except that black and white are inverted during binarization, which is not repeated here. For the implementation of contour detection using the region segmentation method, refer to step 402; method 1 and method 2 in step 402 are region segmentation methods, which are not repeated here.
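A minimal sketch of this region segmentation procedure, assuming a single-channel 8-bit fluorescence (or fused) image and the OpenCV and NumPy libraries; the function name, the threshold value, and the mode flag are illustrative assumptions rather than values prescribed above:

```python
import cv2
import numpy as np

def lesion_contour_by_region_segmentation(gray, threshold=60, positive_mode=True):
    """Return the largest lesion contour found in a single-channel image."""
    # Local mean binarization: compare the local neighborhood mean of each
    # pixel against the configured threshold.
    local_mean = cv2.blur(gray, (9, 9))
    if positive_mode:
        # Positive development mode: the bright (developed) area is the lesion.
        binary = (local_mean > threshold).astype(np.uint8) * 255
    else:
        # Negative development mode: black and white are inverted, the dark
        # (non-developed) area is the lesion.
        binary = (local_mean < threshold).astype(np.uint8) * 255

    # Morphological open/close with a 9x9 kernel smooths the edges and removes
    # small interfering "islands".
    kernel = cv2.getStructuringElement(cv2.MORPH_ELLIPSE, (9, 9))
    binary = cv2.morphologyEx(binary, cv2.MORPH_OPEN, kernel)
    binary = cv2.morphologyEx(binary, cv2.MORPH_CLOSE, kernel)

    # Extract external contours and keep the largest one as the region contour.
    contours, _ = cv2.findContours(binary, cv2.RETR_EXTERNAL, cv2.CHAIN_APPROX_SIMPLE)
    if not contours:
        return None
    return max(contours, key=cv2.contourArea)
```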
In another feasible implementation, machine learning can be used for contour detection. For example, a target segmentation model is first obtained by training; then, for an endoscopic image (such as a fluorescence image or a fused image), the endoscopic image is input to the target segmentation model, and the target segmentation model outputs the region contour of the lesion area. For the implementation of contour detection using machine learning, refer to step 402; method 3 in step 402 is a machine learning method, which is not repeated here.
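A sketch of the inference step under the machine learning route, assuming a trained PyTorch model that outputs two-class per-pixel scores for an input endoscopic image; the model interface and all names are illustrative assumptions:

```python
import cv2
import numpy as np
import torch

def lesion_contour_by_segmentation_model(model, gray):
    """Run the trained segmentation model on one image and return the contour."""
    model.eval()
    with torch.no_grad():
        # Shape the image as a 1x1xHxW float tensor in [0, 1].
        x = torch.from_numpy(gray.astype(np.float32) / 255.0)[None, None]
        logits = model(x)                 # 1 x 2 x H x W class scores
        labels = logits.argmax(dim=1)[0]  # H x W predicted label values (0 or 1)

    # Pixels whose predicted label value is the first value (1) are target pixels.
    mask = labels.numpy().astype(np.uint8) * 255
    contours, _ = cv2.findContours(mask, cv2.RETR_EXTERNAL, cv2.CHAIN_APPROX_SIMPLE)
    return max(contours, key=cv2.contourArea) if contours else None
```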
3. Image display unit. The image display unit is used to superimpose the region contour obtained by the image processing unit on the visible light image to obtain the target image, and to display the target image on the screen for the doctor to view as a reference for cutting. For the specific superposition method, refer to step 403 and FIG. 5D, which are not repeated here.
Based on the same application concept as the above method, an embodiment of the present application provides an image processing apparatus. FIG. 7 is a schematic structural diagram of the image processing apparatus, and the apparatus may include:

an acquisition module 71, configured to acquire a visible light image and a fluorescence image corresponding to a designated position inside a target object, where the designated position includes lesion tissue and normal tissue;

a determination module 72, configured to determine, from an image to be detected, a boundary to be cut corresponding to the lesion tissue, where the image to be detected is the fluorescence image, or the image to be detected is a fused image of the visible light image and the fluorescence image; and

a generation module 73, configured to generate a target image, where the target image includes the visible light image and the boundary to be cut.
Exemplarily, when the determination module 72 determines the boundary to be cut corresponding to the lesion tissue from the image to be detected, it is specifically configured to: determine a target area corresponding to the lesion tissue from the image to be detected; determine a region contour corresponding to the lesion tissue based on the target area; and determine the boundary to be cut corresponding to the lesion tissue based on the region contour.

Exemplarily, when the determination module 72 determines the target area corresponding to the lesion tissue from the image to be detected, it is specifically configured to: select target pixels corresponding to the lesion tissue from all pixels of the image to be detected based on the pixel value corresponding to each pixel in the image to be detected; obtain at least one connected domain composed of the target pixels in the image to be detected; and determine the target area from the image to be detected based on the at least one connected domain.
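A sketch of the connected-domain step, assuming the selected target pixels are given as a binary OpenCV mask and that, as one of the options described, the largest connected domain is kept as the target area; names are illustrative assumptions:

```python
import cv2
import numpy as np

def target_area_from_target_pixels(target_mask):
    """target_mask: uint8 image, 255 where a pixel was selected as a target pixel."""
    num, labels, stats, _ = cv2.connectedComponentsWithStats(target_mask, connectivity=8)
    if num <= 1:  # label 0 is the background; no connected domain was found
        return None
    # Choose the connected domain with the largest area as the target area.
    areas = stats[1:, cv2.CC_STAT_AREA]
    best = 1 + int(np.argmax(areas))
    return (labels == best).astype(np.uint8) * 255
```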
Exemplarily, when the determination module 72 selects, based on the pixel value corresponding to each pixel in the image to be detected, the target pixels corresponding to the lesion tissue from all pixels of the image to be detected, it is specifically configured to: if the fluorescence image is a fluorescence image in positive development mode, determine that a pixel in the image to be detected is a target pixel when the pixel value corresponding to that pixel is greater than a first threshold, where in positive development mode the developed area of the fluorescence image corresponds to the lesion tissue; or, if the fluorescence image is a fluorescence image in negative development mode, determine that a pixel in the image to be detected is a target pixel when the pixel value corresponding to that pixel is smaller than a second threshold, where in negative development mode the non-developed area of the fluorescence image corresponds to the lesion tissue.

Exemplarily, when the determination module 72 selects, based on the pixel value corresponding to each pixel in the image to be detected, the target pixels corresponding to the lesion tissue from all pixels of the image to be detected, it may also be specifically configured to: input the image to be detected into a trained target segmentation model, so that the target segmentation model determines, based on the pixel value corresponding to each pixel in the image to be detected, a predicted label value corresponding to each pixel, where the predicted label value corresponding to a pixel in the image to be detected is a first value or a second value; and determine the pixels whose predicted label value is the first value as the target pixels.
Exemplarily, when the determination module 72 trains the target segmentation model, it is specifically configured to: obtain a sample image and calibration information corresponding to the sample image, where the calibration information includes a calibration label value corresponding to each pixel in the sample image, the calibration label value of a pixel being the first value if that pixel corresponds to the lesion tissue and the second value otherwise; input the sample image into an initial segmentation model, so that the initial segmentation model determines, based on the pixel value corresponding to each pixel in the sample image, a predicted label value corresponding to each pixel, the predicted label value being the first value or the second value; and determine a target loss value based on the calibration label value and the predicted label value corresponding to each pixel, and train the initial segmentation model based on the target loss value to obtain the target segmentation model.
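A sketch of one training step under these definitions, assuming PyTorch and a pixel-wise two-class cross-entropy loss between the predicted label scores and the calibration label values; the tiny convolutional network stands in for the encoder-decoder segmentation network and is an illustrative assumption:

```python
import torch
import torch.nn as nn

class TinySegmentationNet(nn.Module):
    """Stand-in for the encoder-decoder segmentation network (e.g. a U-Net-like model)."""
    def __init__(self):
        super().__init__()
        self.body = nn.Sequential(
            nn.Conv2d(1, 16, 3, padding=1), nn.ReLU(),
            nn.Conv2d(16, 2, 3, padding=1),  # two classes: lesion / background
        )

    def forward(self, x):
        return self.body(x)

def train_step(model, optimizer, sample_image, calibration_mask):
    """sample_image: 1xHxW float tensor; calibration_mask: HxW long tensor of 0/1."""
    model.train()
    optimizer.zero_grad()
    logits = model(sample_image[None])  # 1 x 2 x H x W predicted label scores
    # Pixel-wise cross-entropy between predictions and calibration label values.
    loss = nn.functional.cross_entropy(logits, calibration_mask[None])
    loss.backward()
    optimizer.step()
    return loss.item()

# Usage: repeat train_step over the sample images until the target loss value
# meets the optimization goal, then keep the model as the target segmentation model.
model = TinySegmentationNet()
optimizer = torch.optim.Adam(model.parameters(), lr=1e-3)
```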
Exemplarily, when the determination module 72 determines the region contour corresponding to the lesion tissue based on the target area, it is specifically configured to: determine boundary pixels in the target area, and determine the region contour corresponding to the lesion tissue based on the boundary pixels.

Exemplarily, when the determination module 72 determines the boundary to be cut corresponding to the lesion tissue based on the region contour, it is specifically configured to: determine the region contour as the boundary to be cut; or, if the region contour has a coincident boundary with the organ contour of the organ corresponding to the lesion tissue, determine the non-coincident boundary between the region contour and the organ contour as the boundary to be cut.
Exemplarily, when the generation module 73 generates the target image, it is specifically configured to: superimpose the boundary to be cut on the visible light image to obtain the target image; or determine a target boundary feature, generate a target cutting boundary based on the boundary to be cut and the target boundary feature, and superimpose the target cutting boundary on the visible light image to obtain the target image.

Exemplarily, when the generation module 73 generates the target cutting boundary based on the boundary to be cut and the target boundary feature, it is specifically configured to: if the target boundary feature is a target color, adjust the color of the boundary to be cut to obtain the target cutting boundary, the color of the target cutting boundary being the target color; if the target boundary feature is a target line type, adjust the line type of the boundary to be cut to obtain the target cutting boundary, the line type of the target cutting boundary being the target line type; and if the target boundary features are a target color and a target line type, adjust both the color and the line type of the boundary to be cut to obtain the target cutting boundary, the color of the target cutting boundary being the target color and its line type being the target line type.
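A sketch of generating and superimposing the target cutting boundary, assuming the boundary is available as an OpenCV contour on the same pixel grid as the visible light image; the particular color value (blue in BGR) and dash pattern are illustrative assumptions:

```python
import cv2
import numpy as np

def overlay_cutting_boundary(visible, contour, color=(255, 0, 0),
                             line_type="solid", thickness=2, dash=10):
    """Return the target image: the visible light image with the boundary drawn on it."""
    target = visible.copy()
    if line_type == "solid":
        # Solid line type: draw the whole contour in the target color.
        cv2.drawContours(target, [contour], -1, color, thickness)
    else:
        # Dotted/dashed line type: draw every other short segment along the contour.
        pts = contour.reshape(-1, 2)
        for i in range(0, len(pts) - dash, 2 * dash):
            seg = pts[i:i + dash]
            cv2.polylines(target, [seg.reshape(-1, 1, 2)], False, color, thickness)
    return target
```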
Exemplarily, after generating the target image, the generation module 73 is further configured to: if the fluorescence image is a fluorescence image in positive development mode and includes a developed area outside the boundary to be cut, superimpose that developed area outside the boundary to be cut on the target image; and if the fluorescence image is a fluorescence image in negative development mode and includes a developed area inside the boundary to be cut, superimpose that developed area inside the boundary to be cut on the target image.
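A sketch of retaining the low-intensity developed area on the target image, assuming a grayscale fluorescence image aligned with the target image; the pseudo-color and blending weight are illustrative assumptions:

```python
import cv2
import numpy as np

def overlay_low_intensity_development(target, fluorescence, contour,
                                      positive_mode=True, alpha=0.3):
    """target: HxWx3 target image; fluorescence: HxW grayscale fluorescence image."""
    h, w = fluorescence.shape[:2]
    inside = np.zeros((h, w), np.uint8)
    cv2.drawContours(inside, [contour], -1, 255, thickness=cv2.FILLED)
    # Positive mode keeps the area outside the boundary; negative mode keeps the inside.
    region = cv2.bitwise_not(inside) if positive_mode else inside

    # Keep only the fluorescence signal in the chosen region, map it to a green
    # pseudo-color, and blend it onto the target image with a low weight.
    fl = cv2.bitwise_and(fluorescence, fluorescence, mask=region)
    pseudo = np.zeros_like(target)
    pseudo[:, :, 1] = fl
    return cv2.addWeighted(target, 1.0, pseudo, alpha, 0)
```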
Exemplarily, when the determination module 72 determines the boundary to be cut corresponding to the lesion tissue from the image to be detected, it is specifically configured to: if a display switching command for the fluorescence image is received, the display switching command being used to instruct the display of the boundary to be cut, determine the boundary to be cut corresponding to the lesion tissue from the image to be detected.
Based on the same application concept as the above method, an embodiment of the present application provides an image processing device (i.e., the camera system host of the above embodiments). The image processing device may include a processor and a machine-readable storage medium, the machine-readable storage medium storing machine-executable instructions that can be executed by the processor; the processor is configured to execute the machine-executable instructions to implement the image processing method disclosed in the above examples of the present application.

Based on the same application concept as the above method, an embodiment of the present application further provides a machine-readable storage medium on which several computer instructions are stored; when the computer instructions are executed by a processor, the image processing method disclosed in the above examples of the present application can be implemented.

The above machine-readable storage medium may be any electronic, magnetic, optical, or other physical storage device, and may contain or store information such as executable instructions and data. For example, the machine-readable storage medium may be a RAM (Random Access Memory), a volatile memory, a non-volatile memory, a flash memory, a storage drive (such as a hard disk drive), a solid state drive, any type of storage disk (such as a CD or DVD), or a similar storage medium, or a combination thereof.

The systems, apparatuses, modules, or units described in the above embodiments may be implemented by a computer chip or an entity, or by a product having a certain function. A typical implementing device is a computer, which may take the form of a personal computer, laptop computer, cellular phone, camera phone, smart phone, personal digital assistant, media player, navigation device, e-mail device, game console, tablet computer, wearable device, or any combination of these devices.
Those skilled in the art should understand that the embodiments of the present application may be provided as a method, a system, or a computer program product. Accordingly, the present application may take the form of an entirely hardware embodiment, an entirely software embodiment, or an embodiment combining software and hardware aspects. Furthermore, the embodiments of the present application may take the form of a computer program product implemented on one or more computer-usable storage media (including but not limited to disk storage, CD-ROM, optical storage, and the like) containing computer-usable program code.

The present application is described with reference to flowcharts and/or block diagrams of the method, device (system), and computer program product according to the embodiments of the present application. It should be understood that each flow and/or block in the flowcharts and/or block diagrams, and combinations of flows and/or blocks therein, can be implemented by computer program instructions. These computer program instructions may be provided to a processor of a general-purpose computer, a special-purpose computer, an embedded processor, or other programmable data processing device to produce a machine, so that the instructions executed by the processor of the computer or other programmable data processing device produce an apparatus for implementing the functions specified in one or more flows of the flowcharts and/or one or more blocks of the block diagrams.

These computer program instructions may also be stored in a computer-readable memory capable of directing a computer or other programmable data processing device to operate in a specific manner, so that the instructions stored in the computer-readable memory produce an article of manufacture including an instruction apparatus, the instruction apparatus implementing the functions specified in one or more flows of the flowcharts and/or one or more blocks of the block diagrams.

These computer program instructions may also be loaded onto a computer or other programmable data processing device, so that a series of operation steps are performed on the computer or other programmable device to produce computer-implemented processing, such that the instructions executed on the computer or other programmable device provide steps for implementing the functions specified in one or more flows of the flowcharts and/or one or more blocks of the block diagrams.

Abstract

An image processing method, apparatus, and device, the method comprising: acquiring a visible light image and a fluorescence image corresponding to a designated position inside a target object (401), wherein the designated position inside the target object includes lesion tissue and normal tissue; determining, from an image to be detected, a boundary to be cut corresponding to the lesion tissue, wherein the image to be detected is the fluorescence image, or a fused image of the visible light image and the fluorescence image (402); and generating a target image, the target image including the visible light image and the boundary to be cut (403).

Description

图像处理方法、装置及设备 技术领域
本申请涉及医疗技术领域,尤其涉及一种图像处理方法、装置及设备。
背景技术
内窥镜(Endoscopes)是一种常用的医疗器械,由导光束结构及一组镜头组成,在内窥镜进入目标对象内部后,可以使用内窥镜采集目标对象内部指定位置的可见光图像和荧光图像,基于可见光图像和荧光图像生成融合图像。融合图像能够清晰显示目标对象内部指定位置的正常组织和病变组织,即基于融合图像能够区分目标对象内部指定位置的正常组织和病变组织,从而基于融合图像对目标对象进行检查及治疗,能够准确决定哪些组织需要被切除。
在基于可见光图像和荧光图像生成融合图像时,是将荧光图像叠加到可见光图像上进行显示,从而可能导致可见光图像的部分区域被遮挡,造成一定的视觉障碍,继而导致融合图像的效果较差,无法清晰显示目标对象内部指定位置的正常组织和病变组织,影响医生手术的质量和效率,使用感受较差。
有鉴于此,本申请提供一种图像处理方法,以提高医生手术的质量和效率。
发明内容
本申请提供一种图像处理方法,所述方法包括:获取目标对象内部指定位置对应的可见光图像和荧光图像;其中,所述指定位置包括病变组织和正常组织;从待检测图像中确定所述病变组织对应的待切割边界;其中,所述待检测图像为所述荧光图像,或,所述待检测图像为所述可见光图像与所述荧光图像的融合图像;生成目标图像,所述目标图像包括所述可见光图像和所述待切割边界。
本申请提供一种图像处理装置,所述装置包括:获取模块,用于获取目标对象内部指定位置对应的可见光图像和荧光图像;其中,所述指定位置包括病变组织和正常组织;确定模块,用于从待检测图像中确定所述病变组织对应的待切割边界;其中,所述待检测图像为所述荧光图像,或者,所述待检测图像为所述可见光图像与所述荧光图像的融合图像;生成模块,用于生成目标图像,所述目标图像包括所述可见光图像和所述待切割边界。
本申请提供一种图像处理设备,包括:处理器和机器可读存储介质,所述机器可读存储介质存储有能够被所述处理器执行的机器可执行指令;所述处理器用于执行机器可执行指令,以实现本申请上述示例公开的图像处理方法。
本申请提供一种机器可读存储介质,其上存储有计算机指令,当所述计算机指令被处理器调用时,所述处理器执行上述图像处理方法。
由以上技术方案可见,本申请实施例中,可以确定出病变组织对应的待切割边界(即病灶区域的边界),并基于可见光图像和待切割边界生成目标图像,即将该待切割边界叠加到可见光图像上进行显示,由于将待切割边界叠加到可见光图像上进行显示,而不是将荧光图像叠加到可见光图像上进行显示,避免可见光图像的大部分区域被遮挡的问题,改善可见光图像被荧光显影遮挡的问题,避免或减轻视觉障碍,目标图像的效果较好,能够清晰显示目标对象内部指定位置的正常组织和病变组织,提高医生手术的质量和效率,使用感受较好。由于目标图像存在待切割边界,使得医生可以获知病变组织对应的待切割边界,可以根据待切割边界进行切割,提高医生手术的质量和效率。
附图说明
为了更加清楚地说明本申请实施例或者现有技术中的技术方案,下面将对本申请实施例或者现有技术描述中所需要使用的附图作简单地介绍,下面描述中的附图仅仅是本申请中记载的一些实施例,对于本领域普通技术人员来讲,还可以根据本申请实施例的这些附图获得其他的附图。
图1是本申请一种实施方式中的内窥镜系统的结构示意图;
图2是本申请一种实施方式中的内窥镜系统的功能结构示意图;
图3A是本申请一种实施方式中的可见光图像的示意图;
图3B是本申请一种实施方式中的荧光图像的示意图;
图3C是本申请一种实施方式中的融合图像的示意图;
图4是本申请一种实施方式中的图像处理方法的流程示意图;
图5A是本申请一种实施方式中的基于目标分割模型的训练和测试示意图;
图5B是本申请一种实施方式中的分割模型的网络结构示意图;
图5C是本申请一种实施方式中的样本图像和标定掩模图的示意图;
图5D是本申请一种实施方式中的目标图像的示意图;
图6A是本申请一种实施方式中的内窥镜系统的结构示意图;
图6B是本申请一种实施方式中的区域轮廓检测的示意图;
图6C是本申请一种实施方式中的采用区域分割进行轮廓检测的示意图;
图7是本申请一种实施方式中的图像处理装置的结构示意图。
具体实施方式
在本申请实施例使用的术语仅仅是出于描述特定实施例的目的,而非限制本申请。本申请和权利要求书中所使用的单数形式的“一种”、“所述”和“该”也旨在包括多数形式,除非上下文清楚地表示其它含义。还应当理解,本文中使用的术语“和/或”是指包含一个或多个相关联的列出项目的任何或所有可能组合。
应当理解,尽管在本申请实施例可能采用术语第一、第二、第三等来描述各种信息,但这些信息不应限于这些术语。这些术语仅用来将同一类型的信息彼此区分开。例如,在不脱离本申请范围的情况下,第一信息也可以被称为第二信息,类似地,第二信息也可以被称为第一信息。取决于语境,此外,所使用的词语“如果”可以被解释成为“在……时”或“当……时”或“响应于确定”。
参见图1所示,为内窥镜系统的结构示意图,该内窥镜系统可以包括:内窥镜、光源、摄像系统主机、显示装置和存储装置,显示装置和存储装置为外置设备。图1所示的内窥镜系统只是内窥镜系统的一个示例,对此结构不做限制。
示例性的,内窥镜可以插入到目标对象(例如,患者等被检体)内部指定位置(即待检查位置,也就是患者内部需要检查的区域,对此指定位置不做限制),采集目标对象内部指定位置的图像,并将目标对象内部指定位置的图像输出到显示装置和存储装置。使用者(例如,医护人员等)通过观察显示装置显示的图像,来检查目标对象内部指定位置的出血部位、肿瘤部位等异常部位。使用者通过访问存储装置中存储的图像,进行术后回顾和手术培训等。
内窥镜可以采集目标对象内部指定位置的图像,并将图像输入给摄像系统主机。光源可以为内窥镜提供光源,即从内窥镜的前端射出照明光,使得内窥镜可以采集目标对象内部的比较清晰的图像。摄像系统主机在接收到图像之后,可以将图像输入给存储装置,由存储装置存储图像,在后续过程中,使用者可以访问存储装置中的图像,或者,访问存储装置中的视频(由大量图像组成的视频)。摄像系统主机在接收到图像之后,还可以将图像输入给显示装置,由显示装置显示图像,使用者可以实时观察由显示装置 显示的图像。
如图2所示,图2为内窥镜系统的功能结构示意图,内窥镜可以包括摄像光学系统、成像单元、处理单元和操作单元。摄像光学系统用于对来自观察部位的光进行聚光,摄像光学系统由一个或多个透镜构成。成像单元用于对从摄像光学系统接收到的光进行光电转换以生成图像数据,成像单元由CMOS(Complementary Metal Oxide Semiconductor,互补金属氧化物半导体)或CCD(Charge Coupled Device,电荷耦合器件)等传感器组成。处理单元用于将图像数据转换成数字信号,将转换后的数字信号(如各像素点的像素值)发送到摄像系统主机。操作单元可以包括但不限于开关、按钮和触摸面板等,用于接收内窥镜的切换动作的指示信号、光源的切换动作的指示信号等,并将指示信号输出到摄像系统主机。
光源可以包括照明控制单元和照明单元,照明控制单元用于接收摄像系统主机的指示信号,基于该指示信号控制照明单元向内窥镜提供照明光。
摄像系统主机用于对从内窥镜接收到的图像数据进行处理并传输给显示装置和存储装置,显示装置和存储装置为摄像系统主机的外置设备。
摄像系统主机可以包括图像输入单元、图像处理单元、智能处理单元、视频编码单元、控制单元和操作单元。其中,图像输入单元用于接收内窥镜发送的信号,并将接收到的信号传输给图像处理单元。图像处理单元用于对图像输入单元输入的图像进行ISP(Image Signal Processing,图像信号处理)操作,包括但不限于亮度变换、锐化、荧光染色、缩放等,图像处理单元处理后的图像传输给智能处理单元、视频编码单元或显示装置。智能处理单元对图像进行智能分析,包括但不限于基于深度学习的场景分类(例如甲状腺外科手术场景、肝胆外科手术场景、耳鼻喉外科手术场景)、器械头检测、纱布检测和浓雾分类(浓雾分类是指,手术中用电刀切割组织会产生烟雾,烟雾会影响视野,通过对烟雾的浓度进行分类,例如,无雾类型、淡雾类型、浓雾类型,并在后续可联动气腹机或去雾算法进行除雾处理。),智能处理单元处理后的图像传输给图像处理单元或视频编码单元,图像处理单元对智能处理单元处理后的图像的处理方式包括但不限于亮度变换、叠框(叠框是指在图像上叠加框型图案,例如识别框,以标记目标的识别结果,即叠框起标记和提示作用)和缩放。视频编码单元用于对图像进行编码压缩,并传输给存储装置。控制单元用于控制内窥镜系统的各个模块,包括但不限于光源的照明方式、图像处理方式、智能处理方式和视频编码方式等。操作单元可以包括但不限于开关、按钮和触摸面板,用于接收外部指示信号,将接收到的指示信号输出到控制单元。
在实际使用过程中发现,可见光图像对于病变组织(如癌变组织等)的区分度不够,往往需要医生根据经验来进行病变组织的判断与切除,容易造成正常组织切除过多或病变组织未切除干净等问题。为了区分正常组织与病变组织,在一种可能的实施方式中,可以将白光内窥镜和荧光内窥镜插入到目标对象内部指定位置并采集目标对象内部指定位置的图像,为了区分方便,将白光内窥镜采集的图像称为可见光图像(也称为白光图像),将荧光内窥镜采集的图像称为荧光图像。
其中,预先向目标对象内部指定位置注射荧光显影剂,如ICG(Indocyanine Green,吲哚青绿)等,病变组织对荧光显影剂的吸收较多,当激发光照射到病变组织时,荧光显影剂会产生荧光,使得荧光内窥镜采集到的荧光图像中突出显示病变组织,基于荧光图像就可以区分正常组织与病变组织,帮助医护人员准确区分病变组织。
其中,可见光图像可以是基于可见光产生的图像,如图3A所示,图3A是目标对象内部指定位置的可见光图像的示意图,荧光图像可以是基于荧光产生的图像,如图3B所示,图3B是目标对象内部指定位置的荧光图像的示意图。
综上可以看出,在内窥镜系统的实际使用过程中,可以采集目标对象内部指定位置的可见光图像和荧光图像,即摄像系统主机可以得到可见光图像和荧光图像。在得到可见光图像和荧光图像之后,可以基于可见光图像和荧光图像生成融合图像,融合图像能 够清晰显示目标对象内部指定位置的正常组织和病变组织,基于融合图像能够区分目标对象内部指定位置的正常组织和病变组织。
在采集目标对象内部指定位置的荧光图像时,会存在多种显影模式,不同显影模式下采集的荧光图像的形式不同。比如说,荧光内窥镜在使用过程中分为正显影模式(正显影模式也可以称为正向显影模式)和负显影模式(负显影模式也可以称为负向显影模式)。在正显影模式下,病变组织位置显现荧光,即荧光图像的显影区域对应病变组织。在负显影模式下,病变组织以外位置显现荧光,病变组织位置不显现荧光,即荧光图像的非显影区域对应病变组织。
其中,在正显影模式下,需要采用与正显影模式匹配的荧光显影剂的注射方式,基于与正显影模式匹配的荧光显影剂的注射方式,需要保证病变组织显影,正常组织不显影。在负显影模式下,需要采用与负显影模式匹配的荧光显影剂的注射方式,基于与负显影模式匹配的荧光显影剂的注射方式,需要保证病变组织不显影,正常组织显影。
如图3C所示,图3C中上侧的荧光图像是正显影模式下的荧光图像,荧光图像的显影区域对应病变组织,荧光图像的非显影区域对应正常组织,通过将可见光图像和荧光图像进行融合,就可以得到融合图像。在融合图像中,病变组织对应区域是显影区域,可以对病变组织对应区域进行染色处理,以便于医护人员观察病变组织对应区域。
图3C中下侧的荧光图像是负显影模式下的荧光图像,荧光图像的显影区域对应正常组织,荧光图像的非显影区域对应病变组织,通过将可见光图像和荧光图像进行融合,就可以得到融合图像。在融合图像中,病变组织对应区域是非显影区域,正常组织对应区域是显影区域,可以对正常组织对应区域进行染色处理以凸显病变组织。
由于存在多种显影模式,如正显影模式和负显影模式,因此,可以预先配置采用哪种显影模式采集荧光图像。比如说,可以预先配置采用正显影模式采集荧光图像,在正显影模式下采集的荧光图像,可以参见图3C上侧所示的荧光图像。或者,可以预先配置采用负显影模式采集荧光图像,在负显影模式下采集的荧光图像,可以参见图3C下侧所示的荧光图像。
无论采用哪种显影模式采集荧光图像,在基于可见光图像和荧光图像生成融合图像时,均是将荧光图像叠加到可见光图像上进行显示,导致可见光图像的部分区域被遮挡,造成视觉障碍,导致融合图像的效果较差,无法清晰显示目标对象内部指定位置的正常组织和病变组织,影响医生手术的质量和效率。
比如说,参见图3C所示,针对正显影模式下的荧光图像,可以将荧光图像的显影区域叠加到可见光图像上得到融合图像,从而导致可见光图像上与该显影区域对应的位置被遮挡,造成视觉干扰。针对负显影模式下的荧光图像,可以将荧光图像的非显影区域叠加到可见光图像上得到融合图像,从而导致可见光图像上与该非显影区域对应的位置被遮挡,造成视觉干扰。
有鉴于此,本申请实施例提出一种图像处理方法,可以确定出病变组织对应的待切割边界(即病灶区域的待切割边界),并将病灶区域的待切割边界叠加在可见光图像上进行显示,以改善可见光图像被荧光显影遮挡的问题,避免或减轻视觉干扰,医生可以基于图像获知病灶区域的待切割边界,可以根据病灶区域(正显影模式中的显影区域或负显影模式中的非显影区域)的待切割边界进行切割,提高医生手术的质量和效率。
以下结合具体实施例,对本申请实施例的技术方案进行说明。
本申请实施例中提出一种图像处理方法,该方法可以应用于摄像系统主机,参见图4所示,为该图像处理方法的流程示意图,该方法可以包括:
步骤401、获取目标对象内部指定位置对应的可见光图像和荧光图像,该可见光图像的采集时刻与该荧光图像的采集时刻可以相同。
示例性的,目标对象内部指定位置可以包括病变组织和正常组织,可见光图像可以包括与该病变组织对应的区域和与该正常组织对应的区域,荧光图像可以包括与该病变 组织对应的区域和与该正常组织对应的区域。
示例性的,当需要采集目标对象(如患者等被检体)内部指定位置(即待检查位置,如患者内部需要检查的区域)的图像时,可以将内窥镜(如白光内窥镜和荧光内窥镜)插入到目标对象内部指定位置,由白光内窥镜采集目标对象内部指定位置的可见光图像,由荧光内窥镜采集目标对象内部指定位置的荧光图像,且摄像系统主机可以从白光内窥镜和荧光内窥镜得到该可见光图像和该荧光图像。图3A是目标对象内部指定位置的可见光图像的示意图,图3B是目标对象内部指定位置的荧光图像的示意图。
示例性的,显影模式可以为正显影模式或负显影模式,若显影模式是正显影模式,则获取目标对象内部指定位置对应的荧光图像时,荧光图像是与正显影模式对应的荧光图像,即荧光图像的显影区域对应病变组织,荧光图像的非显影区域对应正常组织。若显影模式是负显影模式,则获取目标对象内部指定位置对应的荧光图像时,荧光图像是与负显影模式对应的荧光图像,即荧光图像的显影区域对应正常组织,荧光图像的非显影区域对应病变组织。
示例性的,在获取目标对象内部指定位置对应的可见光图像和荧光图像时,由于目标对象内部指定位置可能存在病变组织和正常组织,因此,在通过白光内窥镜采集目标对象内部指定位置的可见光图像时,可见光图像可以包括与正常组织对应的区域和与病变组织对应的区域。在通过荧光内窥镜采集目标对象内部指定位置的荧光图像时,荧光图像可以包括与正常组织对应的区域和与病变组织对应的区域。
综上所述,可以得到目标对象内部指定位置对应的可见光图像和荧光图像,可见光图像可以包括与病变组织对应的区域和与正常组织对应的区域,荧光图像可以包括与病变组织对应的区域和与正常组织对应的区域。
步骤402、从待检测图像中确定病变组织对应的待切割边界;其中,待检测图像为荧光图像,或,待检测图像为可见光图像与荧光图像的融合图像。
在一种可能的实施方式中,可以采用如下步骤确定待切割边界:
步骤4021、从待检测图像中确定病变组织对应的目标区域,该目标区域可以是待检测图像中的区域,且待检测图像中的该目标区域与该病变组织对应。
示例性的,待检测图像可以为荧光图像,在该情况下,从荧光图像中确定病变组织对应的目标区域,该目标区域是荧光图像中与病变组织对应的区域。或者,待检测图像可以为可见光图像与荧光图像的融合图像,在该情况下,基于可见光图像和荧光图像生成融合图像,并从融合图像中确定病变组织对应的目标区域,该目标区域是融合图像中与病变组织对应的区域。
其中,在基于可见光图像和荧光图像生成融合图像时,可以包括但不限于如下方式:对荧光图像进行对比度增强,以加强荧光图像的对比度,得到对比度增强后的荧光图像,对比度增强的方式可以包括但不限于:直方图均值化(Histogram Equalization)、局部对比度提升等,对此对比度增强方式不做限制。然后,对对比度增强后的荧光图像进行色彩映射,即对荧光图像进行染色处理,得到色彩映射后的荧光图像,在对荧光图像进行染色处理时,荧光图像中的不同荧光亮度可以对应不同的色调和饱和度,对此色彩映射方式不做限制。然后,对色彩映射后的荧光图像和可见光图像进行融合,得到融合图像(即对可见光图像进行染色后的图像)。比如说,可以将色彩映射后的荧光图像叠加到可见光图像上进行显示,从而得到该融合图像,即基于荧光图像对可见光图像进行染色处理,进而实现在可见光图像上显示荧光图像的显影区域的效果。
当然,上述方式只是生成融合图像的示例,对此生成方式不做限制,图3C示出了基于可见光图像和荧光图像生成融合图像的示例。
本实施例中,可以将荧光图像作为待检测图像,从待检测图像中确定病变组织对应的目标区域,或者,基于可见光图像和荧光图像生成融合图像,并将融合图像作为待检测图像,从待检测图像中确定病变组织对应的目标区域。
在一种可能的实施方式中,在得到待检测图像后,可以采用如下步骤,从待检测图像中确定病变组织对应的目标区域,当然,如下步骤只是示例,对此确定方式不做限制,只要能够得到病变组织对应的目标区域即可。
步骤40211、基于待检测图像中每个像素点分别对应的像素值,从该待检测图像的所有像素点中选取与病变组织对应的目标像素点。
示例性的,针对待检测图像来说,基于待检测图像的特点可知,待检测图像的显影区域(如荧光图像或者融合图像的显影区域)的亮度比较大,待检测图像的非显影区域的亮度比较小,因此,基于荧光图像中每个像素点分别对应的像素值,就能够选取出与病变组织对应的目标像素点。比如说,在正显影模式下,病变组织对应显影区域,正常组织对应非显影区域,即病变组织对应的亮度值比较大,正常组织对应的亮度值比较小,因此,可以将亮度值大的像素点作为目标像素点。在负显影模式下,病变组织对应非显影区域,正常组织对应显影区域,即病变组织对应的亮度值比较小,正常组织对应的亮度值比较大,因此,可以将亮度值小的像素点作为目标像素点。基于上述原理,为了选取与病变组织对应的目标像素点,本实施例中可以采用如下方式:
方式1、若荧光图像是正显影模式下的荧光图像,即在正显影模式下获取的荧光图像,其中,在正显影模式下,荧光图像的显影区域对应病变组织,那么,针对待检测图像(即荧光图像或者融合图像)中的每个像素点,若该像素点对应的像素值(如亮度值)大于第一阈值,则确定该像素点是目标像素点,若该像素点对应的像素值不大于第一阈值,则确定该像素点不是目标像素点。
在对待检测图像中的所有像素点进行上述处理后,就可以从待检测图像的所有像素点中选取出目标像素点,目标像素点的数量可以为多个。综上可以看出,目标像素点是待检测图像中像素值大于第一阈值的像素点。
示例性的,可以采用二值化方式确定待检测图像中的目标像素点,二值化是将图像上像素点的像素值设为0或255。比如说,像素值的取值范围是0-255,可以设置适当的二值化阈值,若像素点的像素值大于二值化阈值,则将该像素点的像素值设为255,若像素点的像素值不大于二值化阈值,则将该像素点的像素值设为0,从而得到二值化图像,二值化图像中像素点的像素值为0或255。
基于上述原理,可以预先配置一个阈值作为第一阈值,第一阈值可以根据经验进行配置,对此不做限制,基于第一阈值能够表示显影区域和非显影区域的边界,并且能够基于该第一阈值区分待检测图像中的显影区域和非显影区域。基于此,由于病变组织对应的亮度值比较大,正常组织对应的亮度值比较小,可以将亮度值大的像素点作为目标像素点,因此,针对待检测图像中的每个像素点,若该像素点对应的像素值大于第一阈值,则可以确定该像素点是目标像素点,若该像素点对应的像素值不大于第一阈值,则可以确定该像素点不是目标像素点。
在上述实施例中,针对待检测图像中的每个像素点来说,该像素点对应的像素值可以是该像素点本身的像素值,也可以是该像素点对应的局部像素均值,比如说,得到以该像素点为中心的子块,该子块包括M个像素点,M为正整数,将M个像素点的像素值的平均值作为该像素点对应的局部像素均值。
方式2、若荧光图像是负显影模式下的荧光图像,即在负显影模式下获取的荧光图像,其中,在负显影模式下,荧光图像的非显影区域对应病变组织,那么,针对待检测图像(即荧光图像或融合图像)中的每个像素点,若该像素点对应的像素值(如亮度值)小于第二阈值,则确定该像素点是目标像素点,若该像素点对应的像素值不小于第二阈值,则确定该像素点不是目标像素点。在对待检测图像中的所有像素点进行上述处理后,就可以从待检测图像的所有像素点中选取目标像素点,即待检测图像中像素值小于第二阈值的像素点。
比如说,可以预先配置一个阈值作为第二阈值,第二阈值可以根据经验进行配置,对此不做限制,基于第二阈值能够表示显影区域和非显影区域的边界,并且能够区分待检测图像中的显影区域和非显影区域。基于此,由于病变组织对应的亮度值比较小,正常组织对应的亮度值比较大,可以将亮度值小的像素点作为目标像素点,因此,针对待检测图像中的每个像素点,若该像素点对应的像素值小于第二阈值,则可以确定该像素点是目标像素点,若该像素点对应的像素值不小于第二阈值,则可以确定该像素点不是目标像素点。
在上述实施例中,针对待检测图像中的每个像素点来说,该像素点对应的像素值可以是该像素点本身的像素值,也可以是该像素点对应的局部像素均值。
方式3、可以将待检测图像输入给已训练的目标分割模型,以使目标分割模型基于待检测图像中每个像素点分别对应的像素值,确定每个像素点分别对应的预测标签值,像素点对应的预测标签值为第一取值或第二取值,第一取值用于表示像素点是与病变组织对应的像素点,第二取值用于表示像素点不是与病变组织对应的像素点。将预测标签值为第一取值的像素点确定为目标像素点。
示例性的,在方式3中,可以采用机器学习算法训练目标分割模型,并通过目标分割模型实现病灶区域(即病变组织对应的目标区域)的分割,即从待检测图像的所有像素点中区分出与病变组织对应的目标像素点。
图5A涉及目标分割模型的训练过程和测试过程。在训练过程中,基于样本图像、标定信息、损失函数和网络结构进行网络训练,从而得到目标分割模型。在测试过程中,可以将测试图像(即待检测图像)输入给已训练的目标分割模型,由目标分割模型对待检测图像进行网络推理,得到待检测图像对应的分割结果,即从待检测图像中区分出病变组织对应的目标像素点。
在一种可能的实施方式中,训练过程和测试过程可以包括以下步骤:
步骤S11、获取初始分割模型,该初始分割模型可以是机器学习模型,如深度学习模型或神经网络模型等,对此初始分割模型的类型不做限制。该初始分割模型可以是分类模型,用于输出图像中每个像素点分别对应的预测标签值,预测标签值为第一取值或第二取值,即初始分割模型的输出对应两种类别,第一取值用于表示像素点与病变组织对应,第二取值用于表示像素点不与病变组织对应,对此初始分割模型的结构不做限制,只要初始分割模型能够实现上述功能即可。
比如说,初始分割模型的网络结构可以参见图5B所示,可以包括输入层(用于接收输入图像)、编码网络、解码网络和输出层(用于输出分割结果),编码网络由卷积层和下采样层组成,解码网络由卷积层和上采样层组成,对此初始分割模型的网络结构不做限制,如采用unet网络模型作为初始分割模型。
步骤S12、获取样本图像和样本图像对应的标定信息,该标定信息包括样本图像中每个像素点分别对应的标定标签值。其中,若像素点是与病变组织对应的像素点,则该像素点对应的标定标签值为第一取值(如1等),以表示像素点与病变组织对应。若像素点不是与病变组织对应的像素点,则该像素点对应的标定标签值为第二取值(如0等),以表示像素点不与病变组织对应。
其中,可以获取大量样本图像,每个样本图像均是目标对象内部的荧光图像或融合图像,样本图像也可以称为训练图像。针对每个样本图像来说,可以由用户手动标定该样本图像对应的标定信息。比如说,需要对病灶区域(即与病变组织对应的区域)进行打标(即标记出病灶区域的轮廓),打标后可以输出标定掩模图,该标定掩模图也就是样本图像对应的标定信息。其中,在该标定掩模图中,病灶区域的轮廓的取值均为第一取值,且轮廓内部的取值均为第一取值,表示轮廓以及轮廓内部均是与病变组织对应的像素点。轮廓外部的取值均为第二取值,表示轮廓外部均不是与病变组织对应的像素点。
参见图5C所示,左侧图像可以是样本图像(如荧光图像),右侧图像可以是样 本图像对应的标定掩模图,即该样本图像对应的标定信息,黑色像素点均是与病变组织对应的像素点,白色像素点均不是与病变组织对应的像素点。
综上可以看出,针对每个样本图像来说,可以得到该样本图像和该样本图像对应的标定信息,该样本图像对应的标定信息可以是标定掩模图。
步骤S13、将样本图像输入给初始分割模型,以使初始分割模型基于样本图像中每个像素点分别对应的像素值,确定每个像素点分别对应的预测标签值及该预测标签值对应的预测概率,像素点对应的预测标签值可以为第一取值或第二取值。
示例性的,由于样本图像(如荧光图像)的显影区域的亮度比较大,非显影区域的亮度比较小,因此,初始分割模型的工作原理是,基于样本图像中每个像素点分别对应的像素值,确定样本图像的分割结果,对此初始分割模型的工作过程不做限制,只要初始分割模型能够输出分割结果即可。综上所述,在将样本图像输入给初始分割模型之后,初始分割模型能够基于该样本图像中每个像素点分别对应的像素值,确定每个像素点分别对应的预测标签值及该预测标签值对应的预测概率,如第1个像素点对应第一取值及该像素点对应第一取值的预测概率为0.8,第2个像素点对应第一取值及该像素点对应第一取值的预测概率为0.6,第3个像素点对应第二取值及该像素点对应第二取值的预测概率为0.8,以此类推。
步骤S14、基于每个像素点分别对应的标定标签值和预测标签值确定目标损失值。比如说,针对每个像素点,基于该像素点对应的标定标签值、该像素点对应的预测标签值和该预测标签值对应的预测概率,确定目标损失值。
示例性的,假设损失函数是交叉熵损失函数,则可以采用如下公式确定损失值,
当然,交叉熵损失函数只是一个示例,对此损失函数的类型不做限制。
$$\mathrm{loss} = -\sum_{c=1}^{M} y_c \log(p_c)$$
在上述公式中,针对每个像素点来说,loss表示该像素点对应的损失值,M表示类别数量,本实施例中为2,即一共存在两个类别,类别1对应第一取值,类别2对应第二取值。yc是标定标签值对应的数值,可以等于0或1,若该像素点对应的标定标签值是第一取值,则y1为1,y2为0,若该像素点对应的标定标签值是第二取值,则y1为0,y2为1。pc是预测标签值对应的预测概率,若该像素点对应的第一取值的概率是p_1,则对应第二取值的概率是1-p_1。
综上所述,针对样本图像中的每个像素点,可以采用上述公式计算该像素点对应的损失值,然后,基于样本图像中的所有像素点对应的损失值确定目标损失值,例如,基于所有像素点对应的损失值之和的算数平均确定目标损失值。
步骤S15、基于目标损失值对初始分割模型进行训练,得到目标分割模型。
比如说,基于目标损失值对初始分割模型的网络参数进行调整,得到调整后分割模型,网络参数调整的目标是使目标损失值越来越小。在得到调整后的分割模型之后,将调整后的分割模型作为初始分割模型,重新执行步骤S13-步骤S14,一直到目标损失值符合优化目标,将调整后的分割模型作为目标分割模型。
至此,训练得到目标分割模型,基于目标分割模型执行测试过程。
步骤S16、在测试过程中,在得到待检测图像之后,可以将待检测图像输入给目标分割模型(目标分割模型的工作原理参见上述初始分割模型的工作原理),以使目标分割模型基于待检测图像中每个像素点分别对应的像素值,确定每个像素点分别对应的预测标签值。若像素点对应的预测标签值为第一取值,则确定该像素点是与病变组织对应 的像素点,即该像素点是目标像素点。若像素点对应的预测标签值为第二取值,则确定该像素点不是与病变组织对应的像素点,即该像素点不是目标像素点。综上所述,可以将预测标签值为第一取值的像素点确定为目标像素点,从而从待检测图像中找到与病变组织对应的所有目标像素点。
综上可以看出,基于方式1、方式2或方式3,均可以从待检测图像的所有像素点中选取与病变组织对应的目标像素点,对此选取方式不做限制。
步骤40212、获取待检测图像中的所有目标像素点组成的至少一个连通域。
示例性的,在从待检测图像中确定出目标像素点之后,就可以采用连通区域检测算法,确定这些目标像素点组成的连通域(即目标像素点的连通区域),连通域的数量可以为至少一个,对此连通域的确定过程不做限制。
其中,位置相邻的目标像素点组成的区域称为连通域,连通区域检测算法是指将待检测图像中的各个连通域找出并标记,也就是说,基于连通区域检测算法可以获知待检测图像中的各个连通域,对此过程不做限制。
步骤40213、基于连通域从待检测图像中确定病变组织对应的目标区域。
示例性的,连通域的数量可以为至少一个,若连通域的数量为一个,则将该连通域作为待检测图像中的病变组织对应的目标区域。若连通域的数量为至少两个,则将面积最大的连通域作为待检测图像中的病变组织对应的目标区域,也可以将其它连通域作为目标区域,还可以将多个邻近的连通域作为待检测图像中病变组织对应的目标区域,或者,还可以将所有连通域均作为待检测图像中的病变组织对应的目标区域,对此不做限制。
在一种可能的实施方式中,在得到目标像素点组成的连通域之后,还可以对连通域进行形态学处理(如使用9*9的滤波核对连通域进行平滑处理,得到平滑边缘的连通域)以及干扰去除处理(如去除孤立点),得到形态学处理以及干扰去除处理后的连通域,从而基于形态学处理以及干扰去除处理后的连通域,从待检测图像中确定病变组织对应的目标区域。
综上所述,基于步骤40211-步骤40213,就可以得到病变组织对应的目标区域,至此完成步骤4021,基于目标区域执行后续步骤4022。
步骤4022、基于病变组织对应的目标区域确定病变组织对应的区域轮廓。比如说,确定该目标区域中的所有边界像素点,并基于目标区域中的所有边界像素点确定病变组织对应的区域轮廓,即所有边界像素点组成区域轮廓。
步骤4023、基于该区域轮廓确定病变组织对应的待切割边界。
在一种可能的实施方式中,在得到病变组织对应的区域轮廓之后,可以直接将该区域轮廓确定为待切割边界,表示需要沿着该区域轮廓进行切割,该区域轮廓可以是一个不规则形状,可以是任意形状,对此区域轮廓不做限制。
在另一种可能的实施方式中,在得到病变组织对应的区域轮廓之后,还可以判断该区域轮廓与病变组织对应器官的器官轮廓是否存在重合边界。若该区域轮廓与该器官轮廓不存在重合边界,则将该区域轮廓确定为待切割边界,表示需要沿着该区域轮廓进行切割。若该区域轮廓与该器官轮廓存在重合边界,则将该区域轮廓与该器官轮廓的非重合边界(即区域轮廓中除该重合边界之外的剩余边界)确定为待切割边界,表示需要沿着该非重合边界进行切割。
在该实施方式中,可以先获取病变组织对应器官的器官轮廓,本实施例对此获取方式不做限制,只要能够得到病变组织对应器官的器官轮廓即可。比如说,可以预先训练一个用于识别器官轮廓的深度学习模型,对此深度学习模型的训练过程不做限制。在此基础上,可以将上述可见光图像输入给深度学习模型,由深度学习模型输出病变组织对应器官的器官轮廓,对此深度学习模型的工作原理不做限制,只要深度学习模型能够输出该器官轮廓即可。
在得到病变组织对应的区域轮廓和病变组织对应的器官轮廓之后,就可以判断该区域轮廓与该器官轮廓是否存在重合边界,即是否存在重合的线段。
若不存在重合边界,则表示需要沿着该区域轮廓进行切割,才能够将病变组织完全切割,因此,可以将该区域轮廓确定为待切割边界。
若存在重合边界,则表示该器官的边缘已经是病变组织,如病变组织对应器官的整个左侧(或右侧、或上侧、或下侧等)已经是病变组织,区域轮廓会与病变组织对应器官的左侧边缘重合。在该情况下,只需要沿非重合边界切割就能够完成病变组织的完全切割,因此,将该区域轮廓与该器官轮廓的非重合边界确定为待切割边界。该区域轮廓与该器官轮廓的重合边界是病变组织对应器官的边缘,在沿着非重合边界进行切割时,重合边界对应区域也被切割。
综上所述,可以确定出待切割边界,当然,上述只是两个示例,对此确定方式不做限制,只要基于待切割边界能够完成病变组织的完全切割即可。
至此,完成步骤402,得到待切割边界,基于待切割边界执行后续步骤403。
步骤403、生成目标图像,该目标图像可以包括可见光图像和待切割边界。
在一种可能的实施方式中,从待检测图像中确定出病变组织对应的待切割边界之后,可以将该待切割边界叠加到可见光图像上,得到目标图像,该目标图像包括可见光图像和该待切割边界,该待切割边界在可见光图像上的位置与该待切割边界在待检测图像上的位置相同。
在得到该目标图像之后,就可以将该目标图像显示在屏幕上供医护人员查看,由于该待切割边界是病灶区域的边界,使得医护人员可以从该目标图像上清晰的查看到病灶区域的边界,病灶区域的边界可以提供切割的参考。
在另一种可能的实施方式中,为了更加突出的显示目标图像中的该待切割边界,基于可见光图像和待切割边界生成目标图像,可以包括以下步骤:
步骤4031、确定与该待切割边界对应的目标边界特征。
示例性的,该目标边界特征可以包括但不限于目标颜色和/或目标线型,对此目标边界特征不做限制,可以是用于显示待切割边界的任意特征。
其中,目标颜色可以是任意颜色,如蓝色、红色、绿色等,对此目标颜色不做限制,只要目标颜色与目标图像本身颜色不同,能够突出待切割边界即可。
其中,目标线型可以是任意线型,如实线型、虚线型等,对此目标线型不做限制,只要目标线型能够突出待切割边界即可。
步骤4032、基于该待切割边界和该目标边界特征生成目标切割边界。
示例性的,若目标边界特征为目标颜色,则对该待切割边界进行颜色调整,得到目标切割边界,且目标切割边界的颜色为该目标颜色。比如说,若目标颜色是蓝色,则将待切割边界的颜色调整为蓝色,得到蓝色的目标切割边界。
若目标边界特征为目标线型,则对该待切割边界进行线型调整,得到目标切割边界,且目标切割边界的线型为该目标线型。比如说,若目标线型是虚线型,则将待切割边界的线型调整为虚线型,得到虚线型的目标切割边界。
若目标边界特征为目标颜色和目标线型,则对待切割边界进行颜色调整和线型调整,得到目标切割边界,且目标切割边界的颜色为该目标颜色,且目标切割边界的线型为该目标线型。比如说,若该目标颜色是蓝色,该目标线型是虚线型,则可以将该待切割边界的颜色调整为蓝色,并将该待切割边界的线型调整为虚线型,得到颜色是蓝色且线型是虚线型的目标切割边界。
当然,上述只是生成目标切割边界的几个示例,对此不做限制。
步骤4033、在可见光图像上叠加该目标切割边界得到该目标图像。
比如说,可以将目标切割边界叠加到可见光图像上,得到目标图像,该目标图像包括可见光图像和目标切割边界,在得到该目标图像之后,就可以将该目标图像显示在屏 幕上供医护人员查看。由于目标切割边界是病灶区域的边界,且目标切割边界是突出显示的边界,如目标切割边界突出显示为具有目标颜色和/或目标线型的边界,因此,医护人员可以从该目标图像上清晰的查看到病灶区域的边界。
在一种可能的实施方式中,为了更加突出的显示目标图像中的待切割边界和待切割边界周围信息,基于可见光图像和待切割边界生成目标图像之后,若荧光图像是正显影模式下的荧光图像,且荧光图像包括待切割边界外围的显影区域,在目标图像上叠加待切割边界外围的显影区域,即在目标图像上保留待切割边界外围低强度的荧光显影区域。若荧光图像是负显影模式下的荧光图像,且荧光图像包括待切割边界内围的显影区域,在目标图像上叠加待切割边界内围的显影区域,即在目标图像上保留待切割边界内围低强度的荧光显影区域。
比如说,若荧光图像是正显影模式下的荧光图像,那么,荧光图像的显影区域对应病变组织,而待切割边界就是病变组织对应目标区域的边界,在待切割边界内围均是显影区域,在可见光图像上叠加待切割边界得到目标图像之后,不在目标图像上叠加待切割边界内围的显影区域。针对荧光图像来说,待切割边界外围也可能存在显影区域,只是待切割边界外围的显影区域的强度低于待切割边界内围的显影区域的强度,在可见光图像上叠加待切割边界得到目标图像之后,可以不在目标图像上叠加待切割边界外围的显影区域,也可以在目标图像上叠加待切割边界外围的显影区域,即从该待切割边界开始向外围扩展显影区域,在目标图像保留荧光图像中待切割边界外围低强度的荧光显影区域。
若荧光图像是负显影模式下的荧光图像,荧光图像的显影区域对应正常组织,非显影区域对应病变组织,而待切割边界就是病变组织对应目标区域的边界,在待切割边界外围均是显影区域,在可见光图像上叠加待切割边界得到目标图像后,不在目标图像上叠加待切割边界外围的显影区域。针对荧光图像来说,待切割边界内围也可能存在显影区域,只是待切割边界内围的显影区域的强度低于待切割边界外围的显影区域的强度,在可见光图像上叠加待切割边界得到目标图像后,可以不在目标图像上叠加待切割边界内围的显影区域,或在目标图像上叠加待切割边界内围的显影区域,即从待切割边界开始向内围扩展显影区域,在目标图像保留荧光图像中待切割边界内围低强度的荧光显影区域。
以下结合图5D对上述过程进行说明,为了方便描述,在图5D中,是以待切割边界是区域轮廓为例进行说明。参见图5D,左侧图像为可见光图像,针对上侧的5个图像,第一个图像为正显影模式下的荧光图像。第二个图像为荧光图像与可见光图像的融合图像,该融合图像显示荧光图像的显影区域(即将荧光图像的显影区域叠加到可见光图像),而不是显示区域轮廓。第三个图像为轮廓显示1下的目标图像,该目标图像显示第一区域轮廓,即将第一区域轮廓叠加到可见光图像上得到目标图像,轮廓显示1表示第一区域轮廓的颜色是目标颜色(图5D中未示出目标颜色),且第一区域轮廓的线型是实线型。第四个图像为轮廓显示2下的目标图像,该目标图像显示第二区域轮廓,轮廓显示2表示第二区域轮廓的颜色是目标颜色,且第二区域轮廓的线型是虚线型。第五个图像为轮廓显示3下的目标图像,该目标图像显示第三区域轮廓,且该目标图像显示第三区域轮廓外围低强度的荧光显影区域。
针对图5D中下侧的5个图像,第一个图像为负显影模式下的荧光图像。第二个图像为荧光图像与可见光图像的融合图像。第三个图像为轮廓显示1下的目标图像,该目标图像显示第四区域轮廓,轮廓显示1表示第四区域轮廓的颜色是目标颜色,且第四区域轮廓的线型是实线型。第四个图像为轮廓显示2下的目标图像,该目标图像显示第五区域轮廓,轮廓显示2表示第五区域轮廓的颜色是目标颜色,且第五区域轮廓的线型是虚线型。第五个图像为轮廓显示3下的目标图像,该目标图像是显示第六区域轮廓,且该目标图像显示第六区域轮廓内围低强度的荧光显影区域。
当然,图5D的目标图像显示方式只是几个示例,对此显示方式不做限制。
在一种可能的实施方式中,关于步骤401-步骤403的触发时机,若接收到针对荧光图像的显示切换命令,该显示切换命令用于指示显示待切割边界,则可以执行步骤401-步骤403,也就是说,从待检测图像中确定病变组织对应的待切割边界,并将待切割边界叠加到可见光图像上得到目标图像,并显示目标图像。若未接收到针对荧光图像的显示切换命令,则可以基于可见光图像和荧光图像生成融合图像,并显示融合图像,本实施例对此过程不再赘述。
其中,在针对目标对象的手术进入到切除阶段时,医护人员可以发出针对荧光图像的显示切换命令,这样,可以接收针对荧光图像的显示切换命令,继而采用本实施例的图像处理方法,在可见光图像上叠加显示待切割边界。
由以上技术方案可见,本申请实施例中,可以确定出病变组织对应的待切割边界(即病灶区域的边界),并基于可见光图像和待切割边界生成目标图像,即将该待切割边界叠加到可见光图像上进行显示,由于是将待切割边界叠加到可见光图像上进行显示,而不是将荧光图像叠加到可见光图像上进行显示,因此避免了可见光图像的大部分区域被遮挡的问题,改善可见光图像被荧光显影遮挡的问题,避免或减轻视觉干扰,目标图像的显示效果较好,能够清晰显示目标对象内部指定位置的正常组织和病变组织,提高医生手术的质量和效率,使用感受较好。由于目标图像上显示待切割边界,使得医生可以获知病变组织对应的待切割边界,并可以根据待切割边界进行切割,提高医生手术的质量和效率。
以下结合具体应用场景,对本申请实施例的图像处理方法进行说明。
如图6A所示,内窥镜系统可以包括图像采集部、图像处理部和图像显示部。图像采集部用于获取内窥镜视频,该内窥镜视频包括可见光图像和荧光图像。图像处理部用于对可见光图像和荧光图像进行融合,得到融合图像,在融合图像上对病灶区域的轮廓进行检测,得到区域轮廓(即病变组织对应的区域轮廓),或者,直接在荧光图像上对病灶区域的轮廓进行检测,得到区域轮廓。图像显示部用于将区域轮廓叠加到可见光图像上进行显示,提供给医生使用。
1、图像采集部。通过位于镜管前端的两个传感器采集目标对象的指定位置反射的可见光和激发的荧光,并生成图像信号,再将图像信号传递到后端,然后对该图像信号进行处理得到内窥镜图像。其中,这两个传感器分别为用于采集可见光的传感器(记为白光内窥镜)和用于采集荧光的传感器(记为荧光内窥镜)。白光内窥镜用于采集可见光并生成相应的白光图像信号,基于该白光图像信号生成的内窥镜图像是可见光图像。荧光内窥镜用于采集荧光并生成相应的荧光图像信号,基于该荧光图像信号生成的内窥镜图像是荧光图像。综上所述,图像采集部可以采集可见光图像和荧光图像。
2、图像处理部。参见图6B所示,图像处理部可以得到实时视频流,该实时视频流可以包括可见光图像和荧光图像。在融合图像(即对可见光图像和荧光图像进行融合后的图像)或者荧光图像的基础上,图像处理部可以对病灶区域的轮廓进行检测,得到区域轮廓,并保存病灶区域的区域轮廓。
在一种可行的实施方式中,参见图6C所示,可以采用区域分割方式进行轮廓检测,比如说,针对内窥镜图像(如荧光图像或者融合图像)来说,设定一个阈值,使用局部均值二值化得到二值化图像,然后进行形态学处理(如使用一个9*9的滤波核进行平滑处理,得到平滑边缘及去除干扰孤岛等),就可以得到病灶区域的区域轮廓。
在图6C中,是以正显影模式下的荧光图像为例,关于负显影模式下的荧光图像,其实现流程相同,二值化时进行黑白反色即可,在此不再赘述。
针对采用区域分割方式进行轮廓检测的实现过程,可以参见步骤402,且步骤402中的方式1和方式2是区域分割方式,在此不再赘述。
在另一种可行的实施方式中,可以采用机器学习方式进行轮廓检测,比如说,先训 练得到一个目标分割模型,针对内窥镜图像(如荧光图像或者融合图像)来说,可以将内窥镜图像输入给目标分割模型,由目标分割模型输出病灶区域的区域轮廓。针对采用机器学习方式进行轮廓检测的实现过程,可以参见步骤402,且步骤402中的方式3是机器学习方式,在此不再赘述。
3、图像显示部。图像显示部用于将图像处理部得到的区域轮廓叠加至可见光图像中,得到目标图像,并将目标图像显示在屏幕上供医生查看,提供切割的参考,具体叠加方式可以参见步骤403以及图5D,在此不再重复赘述。
基于与上述方法同样的申请构思,本申请实施例中提出一种图像处理装置,参见图7所示,为所述图像处理装置的结构示意图,所述装置可以包括:
获取模块71,用于获取目标对象内部指定位置对应的可见光图像和荧光图像;其中,所述指定位置包括病变组织和正常组织;
确定模块72,用于从待检测图像中确定所述病变组织对应的待切割边界;其中,所述待检测图像为所述荧光图像,或者,所述待检测图像为所述可见光图像与所述荧光图像的融合图像;
生成模块73,用于生成目标图像,所述目标图像包括所述可见光图像和所述待切割边界。
示例性的,所述确定模块72从所述待检测图像中确定所述病变组织对应的待切割边界时具体用于:从待检测图像中确定所述病变组织对应的目标区域;基于所述目标区域确定所述病变组织对应的区域轮廓;基于所述区域轮廓确定所述病变组织对应的所述待切割边界。
示例性的,所述确定模块72从待检测图像中确定病变组织对应的目标区域时具体用于:基于待检测图像中每个像素点分别对应的像素值,从待检测图像的所有像素点中选取与所述病变组织对应的目标像素点;获取待检测图像中的所述目标像素点组成的至少一个连通域;基于所述至少一个连通域从待检测图像中确定目标区域。
示例性的,所述确定模块72基于所述待检测图像中每个像素点对应的像素值,从所述待检测图像的所有像素点中选取与所述病变组织对应的目标像素点时具体用于:若所述荧光图像是正显影模式下的荧光图像,当所述待检测图像中的像素点对应的像素值大于第一阈值时,确定该像素点是目标像素点;其中,在所述正显影模式下,所述荧光图像的显影区域对应所述病变组织;或者,若所述荧光图像是负显影模式下的荧光图像,当所述待检测图像中的像素点对应的像素值小于第二阈值时,确定该像素点是目标像素点;其中,在所述负显影模式下,所述荧光图像的非显影区域对应所述病变组织。
示例性的,所述确定模块72基于所述待检测图像中每个像素点对应的像素值,从所述待检测图像的所有像素点中选取与所述病变组织对应的目标像素点时具体用于:将所述待检测图像输入给已训练的目标分割模型,以使所述目标分割模型基于所述待检测图像中每个像素点分别对应的像素值,确定所述待检测图像中每个像素点分别对应的预测标签值;其中,所述待检测图像中像素点对应的预测标签值为第一取值或第二取值;将所述待检测图像中预测标签值为第一取值的像素点确定为所述目标像素点。
示例性的,所述确定模块72训练所述目标分割模型时具体用于:获取样本图像和所述样本图像对应的标定信息,所述标定信息包括所述样本图像中每个像素点分别对应的标定标签值;其中,若所述待检测图像中像素点是与病变组织对应的像素点,则该像素点对应的标定标签值为所述第一取值,若所述待检测图像中像素点不是与病变组织对应的像素点,则该像素点对应的标定标签值为所述第二取值;将所述样本图像输入给初始分割模型,以使所述初始分割模型基于所述样本图像中每个像素点分别对应的像素值,确定所述待检测图像中每个像素点分别对应的预测标签值;其中,所述待检测图像中像素点对应的预测标签值为所述第一取值或所述第二取值;基于所述待检测图像中每个像素点分别对应的标定标签值和预测标签值确定目标损失值,基于目标损失值对所述初始 分割模型进行训练,得到所述目标分割模型。
示例性的,所述确定模块72基于所述目标区域确定所述病变组织对应的区域轮廓时具体用于:确定所述目标区域中的边界像素点,并基于所述边界像素点确定所述病变组织对应的区域轮廓。
示例性的,所述确定模块72基于所述区域轮廓确定所述病变组织对应的待切割边界时具体用于:将所述区域轮廓确定为所述待切割边界;或者,若所述区域轮廓与所述病变组织对应器官的器官轮廓存在重合边界,则将所述区域轮廓与所述器官轮廓的非重合边界确定为所述待切割边界。
示例性的,所述生成模块73生成目标图像时具体用于:在所述可见光图像上叠加所述待切割边界得到所述目标图像;或者,确定目标边界特征,基于所述待切割边界和所述目标边界特征生成目标切割边界,并在所述可见光图像上叠加所述目标切割边界得到所述目标图像。
示例性的,所述生成模块73基于所述待切割边界和所述目标边界特征生成目标切割边界时具体用于:若所述目标边界特征为目标颜色,则对所述待切割边界进行颜色调整,得到所述目标切割边界,所述目标切割边界的颜色为所述目标颜色;若所述目标边界特征为目标线型,则对所述待切割边界进行线型调整,得到所述目标切割边界,所述目标切割边界的线型为所述目标线型;若所述目标边界特征为目标颜色和目标线型,则对所述待切割边界进行颜色调整和线型调整,得到所述目标切割边界,所述目标切割边界的颜色为所述目标颜色,且所述目标切割边界的线型为所述目标线型。
示例性的,所述生成模块73生成目标图像之后还用于:若所述荧光图像是正显影模式下的荧光图像,且所述荧光图像包括所述待切割边界外围的显影区域,则在所述目标图像上叠加所述待切割边界外围的该显影区域;若所述荧光图像是负显影模式下的荧光图像,且所述荧光图像包括所述待切割边界内围的显影区域,则在所述目标图像上叠加所述待切割边界内围的该显影区域。
示例性的,所述确定模块72从待检测图像中确定所述病变组织对应的待切割边界时具体用于:若接收到针对所述荧光图像的显示切换命令,显示切换命令用于指示显示待切割边界,从所述待检测图像中确定所述病变组织对应的所述待切割边界。
基于与上述方法同样的申请构思,本申请实施例提出一种图像处理设备(即上述实施例的摄像系统主机),图像处理设备可以包括处理器和机器可读存储介质,机器可读存储介质存储有能够被处理器执行的机器可执行指令;所述处理器用于执行机器可执行指令,以实现本申请上述示例公开的图像处理方法。
基于与上述方法同样的申请构思,本申请实施例还提供一种机器可读存储介质,所述机器可读存储介质上存储有若干计算机指令,所述计算机指令被处理器执行时,能够实现本申请上述示例公开的图像处理方法。
其中,上述机器可读存储介质可以是任何电子、磁性、光学或其它物理存储装置,可以包含或存储信息,如可执行指令、数据,等等。例如,机器可读存储介质可以是:RAM(Radom Access Memory,随机存取存储器)、易失存储器、非易失性存储器、闪存、存储驱动器(如硬盘驱动器)、固态硬盘、任何类型的存储盘(如光盘、dvd等),或者类似的存储介质,或者它们的组合。
上述实施例阐明的系统、装置、模块或单元,具体可以由计算机芯片或实体实现,或者由具有某种功能的产品来实现。一种典型的实现设备为计算机,计算机的具体形式可以是个人计算机、膝上型计算机、蜂窝电话、相机电话、智能电话、个人数字助理、媒体播放器、导航设备、电子邮件收发设备、游戏控制台、平板计算机、可穿戴设备或者这些设备中的任意几种设备的组合。
为了描述的方便,描述以上装置时以功能分为各种单元分别描述。当然,在实施本申请时可以把各单元的功能在同一个或多个软件和/或硬件中实现。
本领域内的技术人员应明白,本申请的实施例可提供为方法、系统、或计算机程序产品。因此,本申请可采用完全硬件实施例、完全软件实施例、或结合软件和硬件方面的实施例的形式。而且,本申请实施例可采用在一个或多个其中包含有计算机可用程序代码的计算机可用存储介质(包括但不限于磁盘存储器、CD-ROM、光学存储器等)上实施的计算机程序产品的形式。
本申请是参照根据本申请实施例的方法、设备(系统)、和计算机程序产品的流程图和/或方框图来描述的。应理解可以由计算机程序指令实现流程图和/或方框图中的每一流程和/或方框、以及流程图和/或方框图中的流程和/或方框的结合。可提供这些计算机程序指令到通用计算机、专用计算机、嵌入式处理机或其它可编程数据处理设备的处理器以产生一个机器,使得通过计算机或其它可编程数据处理设备的处理器执行的指令产生用于实现在流程图一个流程或多个流程和/或方框图一个方框或多个方框中指定的功能的装置。
而且,这些计算机程序指令也可以存储在能引导计算机或其它可编程数据处理设备以特定方式工作的计算机可读存储器中,使得存储在该计算机可读存储器中的指令产生包括指令装置的制造品,该指令装置实现在流程图一个流程或者多个流程和/或方框图一个方框或者多个方框中指定的功能。
这些计算机程序指令也可装载到计算机或其它可编程数据处理设备上,使得在计算机或者其它可编程设备上执行一系列操作步骤以产生计算机实现的处理,从而在计算机或其它可编程设备上执行的指令提供用于实现在流程图一个流程或多个流程和/或方框图一个方框或多个方框中指定的功能的步骤。
以上所述仅为本申请的实施例而已,并不用于限制本申请。对于本领域技术人员来说,本申请可以有各种更改和变化。凡在本申请的精神和原理之内所作的任何修改、等同替换、改进等,均应包含在本申请的权利要求范围之内。

Claims (15)

  1. 一种图像处理方法,其特征在于,所述方法包括:
    获取目标对象内部指定位置对应的可见光图像和荧光图像;其中,所述指定位置包括病变组织和正常组织;
    从待检测图像中确定所述病变组织对应的待切割边界;其中,所述待检测图像为所述荧光图像,或,所述待检测图像为所述可见光图像与所述荧光图像的融合图像;
    生成目标图像,所述目标图像包括所述可见光图像和所述待切割边界。
  2. 根据权利要求1所述的方法,其特征在于,从所述待检测图像中确定所述病变组织对应的所述待切割边界,包括:
    从所述待检测图像中确定所述病变组织对应的目标区域;
    基于所述目标区域确定所述病变组织对应的区域轮廓;
    基于所述区域轮廓确定所述病变组织对应的所述待切割边界。
  3. 根据权利要求2所述的方法,其特征在于,从所述待检测图像中确定所述病变组织对应的所述目标区域,包括:
    基于所述待检测图像中每个像素点分别对应的像素值,从所述待检测图像的所有像素点中选取与所述病变组织对应的目标像素点;
    获取所述待检测图像中的所述目标像素点组成的至少一个连通域;
    基于所述至少一个连通域从所述待检测图像中确定所述目标区域。
  4. 根据权利要求3所述的方法,其特征在于,基于所述待检测图像中每个像素点分别对应的像素值,从所述待检测图像的所有像素点中选取与所述病变组织对应的所述目标像素点,包括:
    若所述荧光图像是正显影模式下的荧光图像,当所述待检测图像中的像素点对应的像素值大于第一阈值时,确定该像素点是目标像素点;其中,在所述正显影模式下,所述荧光图像的显影区域对应所述病变组织;或,
    若所述荧光图像是负显影模式下的荧光图像,当所述待检测图像中的像素点对应的像素值小于第二阈值时,确定该像素点是目标像素点;其中,在所述负显影模式下,所述荧光图像的非显影区域对应所述病变组织。
  5. 根据权利要求3所述的方法,其特征在于,基于所述待检测图像中每个像素点分别对应的像素值,从所述待检测图像的所有像素点中选取与所述病变组织对应的所述目标像素点,包括:
    将所述待检测图像输入给已训练的目标分割模型,以使所述目标分割模型基于所述待检测图像中每个像素点分别对应的像素值,确定所述待检测图像中每个像素点分别对应的预测标签值;其中,所述待检测图像中每个像素点对应的预测标签值为第一取值或第二取值;
    将所述待检测图像中预测标签值为第一取值的像素点确定为所述目标像素点。
  6. 根据权利要求5所述的方法,其特征在于,所述目标分割模型的训练过程,包括:
    获取样本图像和所述样本图像对应的标定信息,所述标定信息包括所述样本图像中每个像素点分别对应的标定标签值;其中,若所述样本图像中的像素点是与病变组织对应的像素点,则该像素点对应的标定标签值为所述第一取值,若所述样本图像中的像素点不是与病变组织对应的像素点,则该像素点对应的标定标签值为所述第二取值;
    将所述样本图像输入给初始分割模型,以使所述初始分割模型基于所述样本图像中每个像素点分别对应的像素值,确定所述样本图像中每个像素点分别对应的预测标签值;其中,所述样本图像中每个像素点对应的预测标签值为所述第一取值或 所述第二取值;
    基于所述样本图像中每个像素点分别对应的标定标签值和预测标签值确定目标损失值,基于目标损失值对所述初始分割模型进行训练,得到所述目标分割模型。
  7. 根据权利要求2-6任一所述的方法,其特征在于,基于所述目标区域确定所述病变组织对应的所述区域轮廓,包括:
    确定所述目标区域中的边界像素点,并基于所述边界像素点确定所述病变组织对应的区域轮廓。
  8. 根据权利要求2-7任一所述的方法,其特征在于,基于所述区域轮廓确定所述病变组织对应的所述待切割边界,包括:
    将所述区域轮廓确定为所述待切割边界;或者,
    若所述区域轮廓与所述病变组织对应器官的器官轮廓存在重合边界,则将所述区域轮廓与所述器官轮廓的非重合边界确定为所述待切割边界。
  9. 根据权利要求1-8任一所述的方法,其特征在于,生成所述目标图像,包括:
    在所述可见光图像上叠加所述待切割边界得到所述目标图像;或者,
    确定目标边界特征,基于所述待切割边界和所述目标边界特征生成目标切割边界,并在所述可见光图像上叠加所述目标切割边界得到所述目标图像。
  10. 根据权利要求9所述的方法,其特征在于,基于所述待切割边界和所述目标边界特征生成所述目标切割边界,包括:
    若所述目标边界特征为目标颜色,则对所述待切割边界进行颜色调整,得到所述目标切割边界,所述目标切割边界的颜色为所述目标颜色;
    若所述目标边界特征为目标线型,则对所述待切割边界进行线型调整,得到所述目标切割边界,所述目标切割边界的线型为所述目标线型;
    若所述目标边界特征为目标颜色和目标线型,则对所述待切割边界进行颜色调整和线型调整,得到所述目标切割边界,所述目标切割边界的颜色为所述目标颜色,且所述目标切割边界的线型为所述目标线型。
  11. 根据权利要求1-10任一所述的方法,其特征在于,生成所述目标图像之后,所述方法还包括:
    若所述荧光图像是正显影模式下的荧光图像,且所述荧光图像包括所述待切割边界外围的显影区域,则在所述目标图像上叠加所述待切割边界外围的该显影区域;
    若所述荧光图像是负显影模式下的荧光图像,且所述荧光图像包括所述待切割边界内围的显影区域,则在所述目标图像上叠加所述待切割边界内围的该显影区域。
  12. 根据权利要求1-11任一所述的方法,其特征在于,从所述待检测图像中确定所述病变组织对应的所述待切割边界,包括:
    若接收到针对所述荧光图像的显示切换命令,且所述显示切换命令用于指示显示待切割边界,则从所述待检测图像中确定所述病变组织对应的所述待切割边界。
  13. 一种图像处理装置,其特征在于,所述装置包括:
    获取模块,用于获取目标对象内部指定位置对应的可见光图像和荧光图像;其中,所述指定位置包括病变组织和正常组织;
    确定模块,用于从待检测图像中确定所述病变组织对应的待切割边界;其中,所述待检测图像为所述荧光图像,或者,所述待检测图像为所述可见光图像与所述荧光图像的融合图像;
    生成模块,用于生成目标图像,所述目标图像包括所述可见光图像和所述待切割边界。
  14. 一种图像处理设备,其特征在于,包括:处理器和机器可读存储介质,所述机器可读存储介质存储有能够被所述处理器执行的机器可执行指令;所述处理器用于执行 机器可执行指令,以实现权利要求1-12任一所述的方法步骤。
  15. 一种机器可读存储介质,其上存储有计算机指令,当所述计算机指令被处理器调用时,所述处理器执行权利要求1-12任一所述的方法步骤。
PCT/CN2022/115375 2021-12-09 2022-08-29 图像处理方法、装置及设备 WO2023103467A1 (zh)

Applications Claiming Priority (2)

Application Number Priority Date Filing Date Title
CN202111501127.8A CN114298980A (zh) 2021-12-09 2021-12-09 一种图像处理方法、装置及设备
CN202111501127.8 2021-12-09

Publications (1)

Publication Number Publication Date
WO2023103467A1 true WO2023103467A1 (zh) 2023-06-15

Family

ID=80967319

Family Applications (1)

Application Number Title Priority Date Filing Date
PCT/CN2022/115375 WO2023103467A1 (zh) 2021-12-09 2022-08-29 图像处理方法、装置及设备

Country Status (2)

Country Link
CN (1) CN114298980A (zh)
WO (1) WO2023103467A1 (zh)


Families Citing this family (6)

* Cited by examiner, † Cited by third party
Publication number Priority date Publication date Assignee Title
CN114298980A (zh) * 2021-12-09 2022-04-08 杭州海康慧影科技有限公司 一种图像处理方法、装置及设备
CN114913124B (zh) * 2022-04-13 2023-04-07 中南大学湘雅医院 一种用于肿瘤手术的切缘路径生成方法、系统及存储介质
CN114569874A (zh) * 2022-05-09 2022-06-03 精微致远医疗科技(武汉)有限公司 一种应用于可视化导丝的成像控制器主机及图像处理方法
CN115330624A (zh) * 2022-08-17 2022-11-11 华伦医疗用品(深圳)有限公司 一种获取荧光图像的方法、装置及内窥镜系统
CN115115755B (zh) * 2022-08-30 2022-11-08 南京诺源医疗器械有限公司 基于数据处理的荧光三维成像方法及装置
CN115308004B (zh) * 2022-10-12 2022-12-23 天津云检医学检验所有限公司 激光捕获显微切割方法

Citations (9)

* Cited by examiner, † Cited by third party
Publication number Priority date Publication date Assignee Title
US20120323072A1 (en) * 2010-03-09 2012-12-20 Olympus Corporation Fluorescence endoscope device
US20130044126A1 (en) * 2011-08-16 2013-02-21 Fujifilm Corporation Image display method and apparatus
US20150016705A1 (en) * 2012-04-04 2015-01-15 Olympus Medical Systems Corp. Fluoroscopy apparatus and fluoroscopy apparatus operating method
WO2020095987A2 (en) * 2018-11-07 2020-05-14 Sony Corporation Medical observation system, signal processing apparatus, and medical observation method
CN111513660A (zh) * 2020-04-28 2020-08-11 深圳开立生物医疗科技股份有限公司 一种应用于内窥镜的图像处理方法、装置及相关设备
WO2021075418A1 (ja) * 2019-10-18 2021-04-22 国立大学法人鳥取大学 画像処理方法、教師データ生成方法、学習済みモデル生成方法、発病予測方法、画像処理装置、画像処理プログラム、およびそのプログラムを記録した記録媒体
CN112837325A (zh) * 2021-01-26 2021-05-25 南京英沃夫科技有限公司 医学影像图像处理方法、装置、电子设备及介质
CN113208567A (zh) * 2021-06-07 2021-08-06 上海微创医疗机器人(集团)股份有限公司 多光谱成像系统、成像方法和存储介质
CN114298980A (zh) * 2021-12-09 2022-04-08 杭州海康慧影科技有限公司 一种图像处理方法、装置及设备


Cited By (4)

* Cited by examiner, † Cited by third party
Publication number Priority date Publication date Assignee Title
CN116703798A (zh) * 2023-08-08 2023-09-05 西南科技大学 基于自适应干扰抑制的食管多模态内镜图像增强融合方法
CN116703798B (zh) * 2023-08-08 2023-10-13 西南科技大学 基于自适应干扰抑制的食管多模态内镜图像增强融合方法
CN117575999A (zh) * 2023-11-01 2024-02-20 广州盛安医学检验有限公司 一种基于荧光标记技术的病灶预测系统
CN117575999B (zh) * 2023-11-01 2024-04-16 广州盛安医学检验有限公司 一种基于荧光标记技术的病灶预测系统

Also Published As

Publication number Publication date
CN114298980A (zh) 2022-04-08

Similar Documents

Publication Publication Date Title
WO2023103467A1 (zh) 图像处理方法、装置及设备
CN110325100B (zh) 内窥镜系统及其操作方法
KR102028780B1 (ko) 영상 내시경 시스템
US20150339817A1 (en) Endoscope image processing device, endoscope apparatus, image processing method, and information storage device
CN107847117B (zh) 图像处理装置及图像处理方法
US11910994B2 (en) Medical image processing apparatus, medical image processing method, program, diagnosis supporting apparatus, and endoscope system
CN113543694B (zh) 医用图像处理装置、处理器装置、内窥镜系统、医用图像处理方法、及记录介质
JP5011452B2 (ja) 医用画像処理装置および医用画像処理装置の制御方法
US20220125280A1 (en) Apparatuses and methods involving multi-modal imaging of a sample
JP7278202B2 (ja) 画像学習装置、画像学習方法、ニューラルネットワーク、及び画像分類装置
JP6273640B2 (ja) 撮影画像表示装置
WO2020036109A1 (ja) 医用画像処理装置及び内視鏡システム並びに医用画像処理装置の作動方法
JP2023014380A (ja) 内視鏡システム
CN117481579A (zh) 内窥镜系统及其工作方法
JP7146925B2 (ja) 医用画像処理装置及び内視鏡システム並びに医用画像処理装置の作動方法
CN103975364A (zh) 针对宫颈的光学检查的图像选择
TW201121489A (en) Endoscope navigation method and endoscopy navigation system
KR20160118037A (ko) 의료 영상으로부터 병변의 위치를 자동으로 감지하는 장치 및 그 방법
JP7130043B2 (ja) 医用画像処理装置及び内視鏡システム並びに医用画像処理装置の作動方法
JP2020124495A (ja) 強度または輝度に依存する疑似色パターン特性の変化を使用する画像処理装置、画像処理方法およびそのような画像処理装置を備えた医療用観察装置
WO2019087969A1 (ja) 内視鏡システム、報知方法、及びプログラム
CN116134363A (zh) 内窥镜系统及其工作方法
CN114305298A (zh) 一种图像处理方法、装置及设备
WO2022065301A1 (ja) 医療画像装置及びその作動方法
WO2017117710A1 (zh) 内视镜成像系统及方法

Legal Events

Date Code Title Description
121 Ep: the epo has been informed by wipo that ep was designated in this application

Ref document number: 22902899

Country of ref document: EP

Kind code of ref document: A1