WO2022105902A1 - Fluorescence endoscope system, control method and storage medium - Google Patents

Fluorescence endoscope system, control method and storage medium

Info

Publication number
WO2022105902A1
Authority
WO
WIPO (PCT)
Prior art keywords
video stream
brightness
image
current
current frame
Prior art date
Application number
PCT/CN2021/131950
Other languages
English (en)
French (fr)
Inventor
梁向南
毛昊阳
曹伦
何裕源
何超
Original Assignee
上海微创医疗机器人(集团)股份有限公司
Priority date (The priority date is an assumption and is not a legal conclusion. Google has not performed a legal analysis and makes no representation as to the accuracy of the date listed.)
Filing date
Publication date
Application filed by 上海微创医疗机器人(集团)股份有限公司
Priority to EP21894049.2A (published as EP4248835A4)
Publication of WO2022105902A1

Classifications

    • A — HUMAN NECESSITIES
    • A61B 1/00 Instruments for performing medical examinations of the interior of cavities or tubes of the body by visual or photographical inspection, e.g. endoscopes; Illuminating arrangements therefor
    • A61B 1/00009 Operational features of endoscopes characterised by electronic signal processing of image signals during a use of endoscope
    • A61B 1/043 Endoscopes combined with photographic or television appliances, for fluorescence imaging
    • A61B 1/046 Endoscopes combined with photographic or television appliances, for infrared imaging
    • A61B 1/05 Endoscopes combined with photographic or television appliances, characterised by the image sensor, e.g. camera, being in the distal end portion
    • A61B 1/06 Endoscopes with illuminating arrangements
    • A61B 1/0638 Illuminating arrangements providing two or more wavelengths
    • A61B 1/0655 Control therefor
    • A61B 5/0071 Measuring for diagnostic purposes using light, by measuring fluorescence emission
    • G — PHYSICS
    • G06T 5/50 Image enhancement or restoration using two or more images, e.g. averaging or subtraction
    • G06T 2207/10068 Image acquisition modality: endoscopic image
    • G06T 2207/20221 Image fusion; image merging
    • G06T 2207/30096 Subject of image: tumor; lesion
    • H — ELECTRICITY
    • H04N 23/11 Cameras or camera modules comprising electronic image sensors, for generating image signals from visible and infrared light wavelengths
    • H04N 23/56 Cameras or camera modules comprising electronic image sensors, provided with illuminating means

Definitions

  • the invention relates to the technical field of medical devices, and in particular to a fluorescence endoscope system, a control method and a storage medium.
  • the endoscope, an instrument that integrates traditional optics, ergonomics, precision machinery, modern electronics, mathematics, and software, has come into increasingly wide use.
  • an endoscope can enter the body of the subject to be examined (e.g., via the esophagus) to obtain images of the site of interest, so as to determine whether a lesion is present there.
  • an endoscope can reveal lesions that X-rays cannot, which makes it very useful to doctors.
  • for example, an endoscopist can examine ulcers or tumors in the stomach and use the findings to determine the best treatment plan.
  • an endoscope system generally has a component that can be inserted into a living body; after this component is inserted through the oral cavity or another natural orifice, or through a small surgical incision, it acquires image information from inside the body, which is then transmitted to and shown on a display.
  • endoscope systems are capable of ordinary-light (visible-light) imaging.
  • ordinary-light imaging images the inside of the organism under illumination by ordinary light, that is, white light.
  • ordinary-light images are formed by capturing the light these beams reflect. Doctors can make a diagnosis from ordinary-light images, but such images have limitations: some lesions, such as squamous cell carcinoma, are difficult to recognize visually and therefore hard to identify in ordinary-light images; likewise, in endometrial cancer surgery, sentinel lymph nodes are difficult to identify in ordinary-light images.
  • for this reason, special-light (e.g., fluorescence) imaging technology for endoscope systems has been developed; it can provide the observer with information that ordinary-light imaging cannot distinguish, offering a richer reference for diagnosis and treatment.
  • under fluorescence imaging, the sentinel lymph node is in strong contrast with the surrounding normal target tissue (for example, one is processed and displayed as nearly white while the other appears nearly black).
  • the ordinary-light image and the special-light image are generally both obtained through the fluorescence endoscope system, and the two are displayed side by side or superimposed.
  • the purpose of the present invention is to provide a fluorescence endoscope system, a control method and a storage medium, which can not only obtain, by imaging specific tissue through fluorescence, information that visible light cannot provide, but also address the degraded visible-light environment caused by intraoperative hemorrhage and the like.
  • the traditional method of fusing front and rear frames for this purpose causes smearing and aggravates imaging noise.
  • the present invention provides a fluorescence endoscope system, including an endoscope, an illumination module, an endoscope drive module and a scene fusion module;
  • the working modes of the fluorescence endoscope system include a first mode and a second mode
  • the endoscope includes a visible light image sensor and a near-infrared light image sensor;
  • the illumination module is used to provide visible light to illuminate the target tissue, and to provide excitation light to stimulate the target tissue to generate fluorescence
  • the visible light image sensor is used to acquire a visible light scene image of the target tissue, and output in the form of a first video stream
  • the near-infrared light image sensor is used to acquire the fluorescent scene image of the target tissue, and output in the form of a second video stream
  • the illumination module is configured to provide visible light and near-infrared light to illuminate the target tissue
  • the visible light image sensor is configured to acquire a visible light scene image of the target tissue and output it in the form of a first video stream
  • the near-infrared light image sensor is used to obtain the near-infrared light scene image of the target tissue, and output it in the form of a second video stream;
  • the endoscope driving module includes a first driving unit and a second driving unit; the first driving unit is configured to drive the visible light image sensor, in the second mode, to acquire the visible light scene image according to a first exposure time and a first gain, and the second driving unit is configured to drive the near-infrared light image sensor, in the second mode, to acquire the near-infrared light scene image according to a second exposure time and a second gain;
  • the scene fusion module is used, when the current brightness of the first video stream is within a preset first target brightness range and/or the current brightness of the second video stream is within a preset second target brightness range, to fuse the current frame image of the first video stream with the current frame image of the second video stream, based on the brightness information of both, to obtain a luminance fusion image, and then, based on the chrominance information of the current frame image of the first video stream, to fuse the luminance fusion image with the current frame image of the first video stream to obtain a scene fusion image.
  • the fluorescence endoscope system further includes an endoscope control module comprising a first control unit and/or a second control unit; the first control unit is used to make, in the second mode, the current brightness of the first video stream fall within the preset first target brightness range, and the second control unit is used to make, in the second mode, the current brightness of the second video stream fall within the preset second target brightness range.
  • the first control unit includes:
  • a first brightness acquiring unit configured to acquire the current brightness of the first video stream
  • a first exposure control unit configured to determine whether the current brightness of the first video stream is within the first target brightness range, and, if it is not, to adjust the first exposure amount of the visible light image sensor so that the current brightness of the first video stream is within the first target brightness range;
  • the second control unit includes:
  • a second brightness acquiring unit configured to acquire the current brightness of the second video stream
  • a second exposure control unit configured to determine whether the current brightness of the second video stream is within the second target brightness range, and, if it is not, to adjust the second exposure amount of the near-infrared light image sensor so that the current brightness of the second video stream is within the second target brightness range.
  • the first exposure control unit increases the first exposure amount of the visible light image sensor to adjust the current brightness of the first video stream;
  • the second exposure control unit increases the second exposure amount of the near-infrared light image sensor to adjust the current brightness of the second video stream.
  • the first exposure control unit is configured to determine whether the maximum first exposure time combined with the minimum first gain can meet the exposure required for the first target brightness range;
  • the second exposure control unit is configured to determine whether the maximum second exposure time combined with the minimum second gain can meet the exposure required for the second target brightness range;
  • the second gain is adjusted on the basis of the maximum second exposure time, so that the current brightness of the second video stream is within the second target brightness range.
  • the first exposure control unit reduces the first exposure amount of the visible light image sensor to adjust the current brightness of the first video stream;
  • the second exposure control unit reduces the second exposure amount of the near-infrared light image sensor to adjust the current brightness of the second video stream.
  • the first exposure control unit is configured to determine whether the maximum first exposure time combined with the minimum first gain can meet the exposure required for the first target brightness range;
  • the second exposure control unit is configured to determine whether the maximum second exposure time combined with the minimum second gain can meet the exposure required for the second target brightness range;
  • the second exposure time is adjusted on the basis of the minimum second gain, so that the current brightness of the second video stream is within the second target brightness range.
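The time-before-gain split described in the bullets above can be sketched as follows. This is an illustrative reconstruction, not the patent's implementation; the function name, units, and limit values are assumptions. The idea is that lengthening the exposure time adds less noise than raising the gain, so gain is touched only once the maximum exposure time at minimum gain is insufficient.

```python
def split_exposure(required_amount, t_max, g_min, g_max):
    """Split a required exposure amount (time x gain) into (time, gain).

    Prefers a longer exposure time at minimum gain; raises the gain only
    when the maximum exposure time at minimum gain cannot reach the target.
    """
    if required_amount <= t_max * g_min:
        # Maximum time with minimum gain already suffices: keep gain minimal.
        return required_amount / g_min, g_min
    # Pin the time at its maximum and supply the remainder with gain.
    return t_max, min(required_amount / t_max, g_max)
```

Whether `required_amount` should grow or shrink would come from comparing the stream's current brightness against its target range, as the control units above describe.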
  • the lighting module includes a first light source module for providing the visible light and a third light source module for providing the near-infrared light;
  • the first control unit further includes a first lighting adjustment part, which is configured, when adjusting the first gain and the first exposure time cannot bring the current brightness of the first video stream within the first target brightness range, to control the lighting module to adjust the output power of the first light source module so that the current brightness of the first video stream is within the first target brightness range;
  • the second control unit further includes a second lighting adjustment part, which is configured, when adjusting the second gain and the second exposure time cannot bring the current brightness of the second video stream within the second target brightness range, to control the lighting module to adjust the output power of the third light source module so that the current brightness of the second video stream is within the second target brightness range.
  • the first luminance acquisition unit is used to take the average or weighted value of the Y values of all or some of the pixels of the current frame image of the first video stream as the current brightness;
  • alternatively, the first brightness acquisition unit is configured to obtain each pixel's brightness from the RGB values of the pixels of the current frame image of the first video stream, and then take the average or weighted value of the brightness of all or some of those pixels as the current brightness.
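The two brightness measures just described can be sketched as follows; this is an illustration only, and the function names and the BT.601 luma coefficients are assumptions rather than details from the patent.

```python
import numpy as np

def brightness_from_y(y_plane):
    """Current brightness as the mean Y (luma) value of a frame or subregion."""
    return float(np.mean(y_plane))

def brightness_from_rgb(rgb):
    """Current brightness via per-pixel luma derived from RGB (BT.601 weights
    assumed), then averaged over the frame."""
    luma = 0.299 * rgb[..., 0] + 0.587 * rgb[..., 1] + 0.114 * rgb[..., 2]
    return float(np.mean(luma))
```

A weighted average over a region of interest (e.g. the image center) would slot in where `np.mean` is used.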
  • the endoscope further includes a dichroic prism group; in the second mode, the dichroic prism group separates the reflected visible light from the reflected near-infrared light irradiating the target tissue, and in the first mode it separates the reflected visible light from the fluorescence generated by excitation, so that the reflected visible light can be captured by the photosensitive surface of the visible light image sensor and the reflected near-infrared light or the fluorescence can be captured by the photosensitive surface of the near-infrared light image sensor.
  • the dichroic prism group includes a first dichroic prism, a second dichroic prism, a visible light bandpass filter and a near-infrared bandpass filter
  • the visible light bandpass filter is used to allow visible light to pass through and to cut off light of other wavelengths
  • the near-infrared bandpass filter is used to allow near-infrared light to pass through and cut off light of other wavelengths
  • the near-infrared bandpass filter is arranged between the first dichroic prism and the second dichroic prism; the adjacent surfaces of the first dichroic prism and the second dichroic prism are provided with a transflective film, so that part of the incident light is reflected and part is transmitted; the photosensitive surface of the visible light image sensor faces the exit surface of the first dichroic prism, with the visible light bandpass filter arranged between that exit surface and the photosensitive surface, and the photosensitive surface of the near-infrared light image sensor faces the exit surface of the second dichroic prism.
  • the scene fusion module includes:
  • the image fusion unit is configured to fuse, based on the brightness information of the current frame image of the first video stream and the brightness information of the current frame image of the second video stream, the pixels of the current frame image of the first video stream with the corresponding pixels of the current frame image of the second video stream to obtain a brightness fusion image, and then, based on the chrominance information of the current frame image of the first video stream, to fuse the brightness fusion image with the current frame image of the first video stream to obtain a scene fusion image.
  • the image fusion unit is configured to obtain, according to a preset normal distribution of brightness values and weights, a weight for the brightness of each pixel of the current frame image of the first video stream and a weight for the brightness of the corresponding pixel of the current frame image of the second video stream, and then to weight the two brightness values together to obtain the brightness of the corresponding pixel of the brightness fusion image.
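A minimal sketch of this normal-distribution weighting follows. The curve's mean (128) and spread (64), 8-bit luma, and the function names are assumptions, since the patent text does not specify them: mid-range luma values receive high weight, values near the under- and over-exposed extremes receive low weight, and the two streams' luma values are blended per pixel by their normalized weights.

```python
import numpy as np

def gaussian_weight(y, mu=128.0, sigma=64.0):
    """Weight of a luma value under an assumed normal curve centered at mu."""
    return np.exp(-((y - mu) ** 2) / (2.0 * sigma ** 2))

def fuse_luma(y_visible, y_nir):
    """Per-pixel weighted blend of visible-light and NIR luma planes."""
    w_vis = gaussian_weight(y_visible)
    w_nir = gaussian_weight(y_nir)
    # Normalize so the weights at each pixel sum to one.
    return (w_vis * y_visible + w_nir * y_nir) / (w_vis + w_nir)
```

A pixel near mid-exposure in one stream therefore dominates a nearly clipped pixel in the other, which is the stated point of the weighting.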
  • the scene fusion module further includes an image mode conversion unit, which is configured, when the output format of the first video stream is RAW or RGB, to convert the pixels of the current frame image to YUV space or YCbCr space, taking the Y value of each pixel of the current frame image of the first video stream as that pixel's brightness, and taking its U and V values (or Cb and Cr values) as the chromaticity of the pixels of the current frame image of the first video stream.
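For illustration, here is a full-range BT.601 RGB-to-YCbCr conversion of the kind such a conversion unit might perform; the exact matrix the system uses is not specified in the source, so these coefficients are an assumption.

```python
import numpy as np

def rgb_to_ycbcr(rgb):
    """Convert an (..., 3) RGB array to YCbCr (full-range BT.601 assumed)."""
    r, g, b = rgb[..., 0], rgb[..., 1], rgb[..., 2]
    y  =  0.299 * r + 0.587 * g + 0.114 * b            # brightness
    cb = -0.168736 * r - 0.331264 * g + 0.5 * b + 128.0  # blue chroma
    cr =  0.5 * r - 0.418688 * g - 0.081312 * b + 128.0  # red chroma
    return np.stack([y, cb, cr], axis=-1)
```

After conversion, the Y plane feeds the brightness fusion above while Cb/Cr carry the chromaticity of the visible-light frame.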
  • the endoscope is a three-dimensional endoscope
  • the visible light image sensor includes a first visible light image sensor and a second visible light image sensor;
  • the near-infrared light image sensor includes a first near-infrared light image sensor and a second near-infrared light image sensor;
  • the first video stream includes a first visible light video stream and a second visible light video stream;
  • the second video stream includes a first near-infrared optical video stream and a second near-infrared optical video stream;
  • the first visible light image sensor is used to acquire the first visible light scene image of the target tissue and output it in the form of a first visible light video stream
  • the second visible light image sensor is used to acquire the second visible light scene image of the target tissue and output it in the form of a second visible light video stream
  • the first near-infrared light image sensor is used to acquire the first near-infrared light scene image of the target tissue and output it in the form of a first near-infrared light video stream;
  • the second near-infrared light image sensor is used to acquire a second near-infrared light scene image of the target tissue, and output in the form of a second near-infrared light video stream;
  • the scene fusion module is configured to fuse, based on the brightness information of the current frame images of the first visible light video stream and the first near-infrared light video stream, the current frame image of the first visible light video stream with the current frame image of the first near-infrared light video stream to obtain a first luminance fusion image; to fuse, based on the chromaticity information of the current frame image of the first visible light video stream, the first luminance fusion image with the current frame image of the first visible light video stream to obtain a first scene fusion image; to fuse, based on the brightness information of the current frame images of the second visible light video stream and the second near-infrared light video stream, the current frame image of the second visible light video stream with the current frame image of the second near-infrared light video stream to obtain a second luminance fusion image; and to fuse, based on the chromaticity information of the current frame image of the second visible light video stream, the second luminance fusion image with the current frame image of the second visible light video stream to obtain a second scene fusion image.
  • the fluorescence endoscope system further includes a central controller.
  • when the central controller accepts the first mode command, it either controls the first control unit and the second control unit of the endoscope control module to adjust the current brightness of the first video stream and the second video stream according to the first target brightness range and the second target brightness range preset in the endoscope control module, or it sends the first target brightness range and the second target brightness range to the first control unit and the second control unit respectively, so that the first control unit and the second control unit adjust the current brightness of the first video stream and the second video stream accordingly.
  • the fluorescence endoscope system further includes a central controller that includes a video overlay unit; the video overlay unit is used to overlay the scene fusion images output by the scene fusion module, generate a three-dimensional image, and transmit it to the display for display.
  • the first driving unit is further configured to drive the visible light image sensor to acquire the visible light scene image according to the third exposure time and the third gain in the first mode; the second driving unit is further configured to In the first mode, the near-infrared light image sensor is driven according to a fourth exposure time and a fourth gain to acquire the fluorescent scene image.
  • the first control unit is further configured to make the current brightness of the first video stream within the preset third target brightness range in the first mode; and/or the second control unit is further configured to for making the current brightness of the second video stream within the preset fourth target brightness range in the first mode.
  • the present invention also provides a control method for a fluorescence endoscope system, the fluorescence endoscope system includes a first mode and a second mode, and the control method includes:
  • visible light and near-infrared light are provided to illuminate the target tissue
  • the current frame image of the first video stream and the current frame image of the second video stream are fused to obtain a luminance fusion image
  • image fusion is performed on the current frame image of the first video stream and the luminance fusion image to obtain a scene fusion image.
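The method steps above can be sketched end to end. All names here are hypothetical, and the Gaussian weighting parameters are assumptions carried over from the fusion description: the visible frame's luma is blended with the NIR frame's luma, and the visible frame's chromaticity is copied over unchanged to form the scene fusion image in YCbCr.

```python
import numpy as np

def scene_fuse(visible_ycbcr, nir_y, mu=128.0, sigma=64.0):
    """Fuse a visible-light YCbCr frame with an NIR luma plane.

    Luma channels are blended with assumed Gaussian exposure weights;
    the Cb/Cr chromaticity of the visible frame is kept as-is.
    """
    y_vis = visible_ycbcr[..., 0]
    w_vis = np.exp(-((y_vis - mu) ** 2) / (2 * sigma ** 2))
    w_nir = np.exp(-((nir_y - mu) ** 2) / (2 * sigma ** 2))
    y_fused = (w_vis * y_vis + w_nir * nir_y) / (w_vis + w_nir)

    out = visible_ycbcr.copy()
    out[..., 0] = y_fused  # fused brightness replaces Y
    # Cb/Cr (chromaticity) of the visible frame remain untouched.
    return out
```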
  • the first exposure amount is the product of the first exposure time and the first gain
  • the second exposure amount is increased to adjust the current brightness of the second video stream.
  • increasing the first exposure amount to adjust the current brightness of the first video stream includes:
  • increasing the second exposure amount to adjust the current brightness of the second video stream includes:
  • the second gain is adjusted based on the maximum second exposure time, so that the current brightness of the second video stream is within the second target brightness range.
  • the second exposure amount is reduced to adjust the current brightness of the second video stream.
  • reducing the first exposure amount to adjust the current brightness of the first video stream includes:
  • reducing the second exposure amount to adjust the current brightness of the second video stream includes:
  • the second exposure time is adjusted based on the minimum second gain, so that the current brightness of the second video stream is within the second target brightness range.
  • if the current brightness of the first video stream cannot be brought within the first target brightness range by adjusting the first gain and the first exposure time, then the output power of the first light source module providing the visible light is adjusted so that the current brightness of the first video stream is within the first target brightness range;
  • if the current brightness of the second video stream cannot be brought within the second target brightness range by adjusting the second gain and the second exposure time, then the output power of the third light source module providing the near-infrared light is adjusted.
  • fusing the current frame image of the first video stream with the current frame image of the second video stream based on their brightness information to obtain a brightness fusion image includes:
  • performing image fusion on the pixels of the current frame image of the first video stream and the corresponding pixels of the current frame image of the second video stream to obtain the brightness fusion image.
  • performing image fusion on the current frame image of the first video stream and the luminance fusion image based on the chrominance information of the current frame image of the first video stream to obtain a scene fusion image including:
  • the chromaticity information of the pixel of the current frame image of the first video stream is assigned to the corresponding pixel of the luminance fusion image as the chromaticity of the corresponding pixel.
  • the control method also includes:
  • the performing image fusion on the pixels of the current frame image of the first video stream and the corresponding pixels of the current frame image of the second video stream includes:
  • based on the brightness of the pixels of the current frame image of the first video stream and the brightness of the corresponding pixels of the current frame image of the second video stream, respectively obtaining the weight of the brightness of the pixels of the current frame image of the first video stream and the weight of the brightness of the corresponding pixels of the current frame image of the second video stream;
  • the brightness of the pixels of the current frame image of the first video stream is weighted and summed with the brightness of the corresponding pixels of the current frame image of the second video stream to obtain the brightness of the pixels of the brightness fusion image.
  • the endoscope is a three-dimensional endoscope
  • the acquired visible light scene image and the near-infrared light scene image of the target tissue are respectively output in the form of a first video stream and a second video stream, including:
  • the judging whether the current brightness of the first video stream is within the preset first target brightness range includes:
  • the judging whether the current brightness of the second video stream is within the preset second target brightness range includes:
  • the fusing of the current frame image of the first video stream with the current frame image of the second video stream, based on the brightness information of the current frame image of the first video stream and the brightness information of the current frame image of the second video stream, to obtain a brightness fusion image includes:
  • based on the brightness information of the current frame image of the first visible light video stream and the brightness information of the current frame image of the first near-infrared light video stream, fusing the current frame image of the first visible light video stream with the current frame image of the first near-infrared light video stream to obtain a first brightness fusion image; and based on the brightness information of the current frame image of the second visible light video stream and the brightness information of the current frame image of the second near-infrared light video stream, fusing the current frame image of the second visible light video stream with the current frame image of the second near-infrared light video stream to obtain a second brightness fusion image;
  • the performing image fusion on the current frame image of the first video stream and the luminance fusion image based on the chrominance information of the current frame image of the first video stream to obtain a scene fusion image including:
  • based on the chromaticity information of the current frame image of the first visible light video stream, image fusion is performed between the current frame image of the first visible light video stream and the first luminance fusion image to obtain a first scene fusion image; and based on the chromaticity information of the current frame image of the second visible light video stream, the current frame image of the second visible light video stream is image-fused with the second luminance fusion image to obtain a second scene fusion image.
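The scene fusion step (keep the visible frame's chromaticity, replace its luminance with the fused luminance) can be sketched in a luma/chroma colour space. This sketch uses full-range BT.601 RGB↔YCbCr coefficients as an assumed colour model; the patent does not specify which conversion is used, and the function names are illustrative.

```python
import numpy as np

def rgb_to_ycbcr(rgb):
    """Full-range BT.601 conversion; rgb is an HxWx3 float array in [0, 255]."""
    r, g, b = rgb[..., 0], rgb[..., 1], rgb[..., 2]
    y  = 0.299 * r + 0.587 * g + 0.114 * b
    cb = 128.0 - 0.168736 * r - 0.331264 * g + 0.5 * b
    cr = 128.0 + 0.5 * r - 0.418688 * g - 0.081312 * b
    return y, cb, cr

def scene_fusion(rgb_visible, y_fused):
    """Assign the visible frame's chromaticity (Cb/Cr) to the brightness
    fusion image, then convert back to RGB to form the scene fusion image."""
    rgb = rgb_visible.astype(np.float64)
    _, cb, cr = rgb_to_ycbcr(rgb)          # chromaticity from the visible frame
    y_fused = np.asarray(y_fused, dtype=np.float64)
    r = y_fused + 1.402 * (cr - 128.0)     # inverse BT.601
    g = y_fused - 0.344136 * (cb - 128.0) - 0.714136 * (cr - 128.0)
    b = y_fused + 1.772 * (cb - 128.0)
    out = np.stack([r, g, b], axis=-1)
    return np.clip(out, 0, 255).astype(np.uint8)
```

For a three-dimensional endoscope this would simply be applied twice, once per eye, matching the first/second stream pairing described above.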
  • the present invention also provides a storage medium, where a computer program is stored in the storage medium, and the computer program implements the above-mentioned control method when executed by a processor.
  • the fluorescence endoscope system, control method and storage medium provided by the present invention have the following advantages: when in the first mode, the illumination module can provide visible light and excitation light, and the excitation light excites the target tissue to generate fluorescence, helping the operator observe tissue information that cannot be observed under visible light alone; when in the second mode, the illumination module can provide visible light and near-infrared light to illuminate the target tissue, and, when the current brightness of the first video stream and/or the second video stream is within the target brightness range, the scene fusion module fuses the current frame image captured under visible light with the current frame image captured under near-infrared light to obtain a scene fusion image with appropriate brightness and rich detail.
  • the fluorescence endoscope system of the present invention can effectively reduce the image blur caused by the motion of the camera, avoid the user's misjudgment of the lesion information caused by the blurred image, and improve the accuracy and safety of the operation.
  • the signal-to-noise ratio can be effectively improved, and more detailed image information can be obtained;
  • the fluorescence endoscope system of the present invention is based on existing hardware; with only slight improvements it can realize various functions, meet the needs of doctors in different operations, and improve the convenience of surgical operation.
  • FIG. 1 is a schematic structural diagram of a fluorescence endoscope system in an embodiment of the present invention
  • FIG. 2 is a schematic diagram of the control of the fluorescence endoscope system during mode switching in an embodiment of the present invention
  • FIG. 3 is a schematic diagram of a light splitting principle of a light splitting prism group in an embodiment of the present invention.
  • FIG. 5 is a spectral diagram of a near-infrared light bandpass filter in an embodiment of the present invention.
  • FIG. 6 is a schematic diagram of a workflow for adjusting the first exposure amount in the second mode so that the current brightness B1 of the first video stream is located in the first target brightness range (B1min, B1max) according to an embodiment of the present invention
  • FIG. 7 is a schematic diagram of a normal distribution of brightness and weight in an embodiment of the present invention.
  • FIG. 8 is a schematic diagram of image fusion in a second mode according to an embodiment of the present invention.
  • FIG. 9 is a schematic flowchart of a control method of a fluorescence endoscope system in a second mode according to an embodiment of the present invention.
  • the core idea of the present invention is to provide a fluorescence endoscope system, a control method and a storage medium, which can not only obtain, through fluorescence imaging of specific tissue, information that cannot be obtained under visible light, but also solve the problems caused by intraoperative hemorrhage and the like, in which visible-light imaging is impaired and the traditional fusion of front and rear frames causes smearing and aggravates imaging noise.
  • FIG. 1 schematically shows the structure of the fluorescence endoscope system provided by an embodiment of the present invention. As shown in FIG. 1, the fluorescence endoscope system includes an endoscope 100, an illumination module 200, an endoscope driving module 300 and a scene fusion module 500.
  • the working modes of the fluorescence endoscope system include a first mode (ie, a fluorescence mode) and a second mode (ie, a bleeding mode).
  • the lighting module 200 is used to provide visible light, excitation light and near-infrared light.
  • the visible light and the near-infrared light are used to illuminate the target tissue 600 to form reflected light; the excitation light is used to stimulate the target tissue to generate fluorescence.
  • the present invention has no particular limitation on the specific position of the lighting module 200 .
  • the output light provided by the illumination module 200 may be delivered to the end of the endoscope 100 and to the target tissue 600 through a connector, such as an optical fiber, accommodated in the illumination channel of the endoscope 100 .
  • When the fluorescence endoscope system is in the fluorescence mode, the illumination module 200 emits visible light and excitation light to the target tissue 600, so that the target tissue 600 reflects the visible light and, excited by the excitation light, emits fluorescence.
  • When the fluorescence endoscope system is in the bleeding mode, the illumination module 200 emits visible light and near-infrared light to the target tissue 600; likewise, the target tissue 600 reflects both the visible light and the near-infrared light.
  • the spectral distribution of the excitation light is 803 nm-812 nm
  • the spectral distribution of the fluorescence emitted after excitation is 830 nm-840 nm
  • the spectral distribution of the near-infrared light is 935 nm-945 nm.
  • FIG. 2 schematically shows a control diagram of a fluorescence endoscope system during mode switching provided by an embodiment of the present invention.
  • the lighting module 200 includes a light source unit 210 and a lighting controller 220 .
  • the light source unit 210 includes a first light source module 211 for providing visible light, a second light source module 212 for providing excitation light, and a third light source module 213 for providing near-infrared light.
  • the lighting controller 220 includes a mode switching unit 221 for switching between the fluorescence mode and the bleeding mode, and a power control unit 222 for controlling the output power of the light source unit 210 .
  • the mode switching unit 221 is connected to the central controller 800 described below, so that the operator can control the mode switching unit 221 through the central controller 800 to switch between the fluorescence mode and the bleeding mode.
  • When the fluorescence endoscope system is in the fluorescence mode, under the control of the lighting controller 220, the first light source module 211 and the second light source module 212 are turned on; when in the bleeding mode, under the control of the lighting controller 220, the first light source module 211 and the third light source module 213 are turned on.
  • the endoscope 100 includes a visible light image sensor 111 and a near-infrared light image sensor 121. The visible light image sensor 111 is used to capture the reflected visible light to obtain a visible light scene image of the target tissue 600, which is output in the form of the first video stream.
  • the fluorescence generated by the target tissue after being stimulated and the near-infrared light generated by the illumination module 200 are both within the near-infrared light spectral range. Therefore, in this embodiment, the fluorescence generated by the excited target tissue and the near-infrared light reflected by the target tissue are both captured by the near-infrared light image sensor 121 .
  • In the fluorescence mode, the near-infrared light image sensor 121 is used to capture the fluorescence carrying the scene information of the target tissue 600 and photoelectrically convert it to obtain a fluorescence scene image of the target tissue 600, which is output in the form of the second video stream;
  • In the bleeding mode, the near-infrared light image sensor 121 is used to capture the reflected near-infrared light carrying the scene information of the target tissue 600 and photoelectrically convert it to obtain a near-infrared light scene image of the target tissue 600, which is output in the form of the second video stream.
  • the visible light image sensor 111 and the near-infrared light image sensor 121 may be Complementary Metal Oxide Semiconductor (CMOS) or Charge Coupled Device (CCD) sensors.
  • the endoscope 100 further includes a dichroic prism group 130. In the bleeding mode, the dichroic prism group 130 separates the visible light and the near-infrared light reflected by the target tissue 600; in the fluorescence mode, it separates the visible light reflected by the target tissue 600 from the fluorescence generated by excitation, so that the reflected visible light can be captured by the photosensitive surface of the visible light image sensor 111, and the reflected near-infrared light or the fluorescence can be captured by the photosensitive surface of the near-infrared light image sensor 121.
  • FIG. 3 schematically shows a schematic diagram of the spectroscopic principle of the beam splitting prism group provided by an embodiment of the present invention
  • FIG. 5 schematically shows the spectral diagram of the near-infrared light bandpass filter provided by an embodiment of the present invention.
  • the dichroic prism group 130 includes a first dichroic prism 133 , a second dichroic prism 134 , a visible light bandpass filter 131 and a near infrared bandpass filter 132 .
  • the visible light bandpass filter 131 is used for allowing visible light to pass through and blocking light of other wavelengths.
  • the near-infrared bandpass filter 132 is used for allowing near-infrared light to pass and blocking light of other wavelengths.
  • the near-infrared bandpass filter 132 is disposed between the first dichroic prism 133 and the second dichroic prism 134 .
  • A transflective film is provided on the adjacent surfaces of the first dichroic prism 133 and the second dichroic prism 134, so that part of the incident light is reflected and part is transmitted.
  • the photosensitive surface of the visible light image sensor 111 is adjacent to the exit surface of the first dichroic prism 133, and the visible light bandpass filter 131 is disposed between the exit surface of the first dichroic prism 133 and the photosensitive surface of the visible light image sensor 111.
  • the photosensitive surface of the near-infrared light image sensor 121 is adjacent to the exit surface of the second beam splitter prism 134 .
  • Here, "the exit surface of the first dichroic prism 133" refers to the surface of the first dichroic prism 133 through which the reflected light passes when leaving the first dichroic prism 133; "the exit surface of the second dichroic prism 134" refers to the surface of the second dichroic prism 134 through which the transmitted light passes when leaving the second dichroic prism 134.
  • the photosensitive surface of the visible light image sensor 111 and the photosensitive surface of the near-infrared light image sensor 121 are parallel to the optical axis; such an arrangement reduces the limitation that the size of the endoscope places on the size of the image sensors, which helps to improve image quality.
  • The mixed light (in the fluorescence mode, the visible light reflected by the target tissue 600 and the fluorescence generated by excitation of the target tissue 600; in the bleeding mode, the visible light and near-infrared light reflected by the target tissue 600) enters the first dichroic prism 133 along the optical axis; a part is reflected, and a part is transmitted out of the first dichroic prism 133. The reflected mixed light, after being reflected (for example, once, as in FIG. 3), leaves the first dichroic prism 133 from its exit surface and then passes through the visible light bandpass filter 131, which allows the 380 nm-780 nm portion of the mixed light to pass and be captured by the visible light image sensor 111. The transmitted mixed light passes through the near-infrared bandpass filter 132, which allows the 830 nm-840 nm and 935 nm-945 nm portions to pass, and enters the second dichroic prism 134 along the optical axis; after being reflected (for example, once, as in FIG. 3), it leaves the second dichroic prism 134 from its exit surface and is captured by the near-infrared light image sensor 121.
  • In this embodiment, the visible light bandpass filter 131 passes light with a spectrum of 380 nm-780 nm, thereby ensuring that the reflected visible light generated by the lighting module 200 enters the visible light image sensor 111; the near-infrared bandpass filter 132 passes light with spectra of 830 nm-840 nm and 935 nm-945 nm, thereby ensuring that the reflected near-infrared light generated by the lighting module 200 and the fluorescence enter the near-infrared light image sensor 121.
  • the endoscope is a three-dimensional endoscope, that is, the fluorescence endoscope system is a three-dimensional endoscope system.
  • the visible light scene image includes a first visible light scene image and a second visible light scene image
  • the near-infrared scene image includes a first near-infrared scene image and a second near-infrared scene image.
  • the first visible light image sensor 111A is used to acquire the first visible light scene image of the target tissue and output it in the form of a first visible light video stream
  • the second visible light image sensor 111B is used to acquire the second visible light scene image of the target tissue , and output it in the form of a second visible light video stream.
  • In the fluorescence mode, the first near-infrared light image sensor 121A is used to acquire a first fluorescence scene image of the target tissue and output it in the form of a first fluorescence video stream; the second near-infrared light image sensor 121B is used to acquire a second fluorescence scene image of the target tissue and output it in the form of a second fluorescence video stream.
  • In the bleeding mode, the first near-infrared light image sensor 121A is used to acquire a first near-infrared light scene image of the target tissue and output it in the form of a first near-infrared light video stream; the second near-infrared light image sensor 121B is used to acquire a second near-infrared light scene image of the target tissue and output it in the form of a second near-infrared light video stream.
  • the components named "first" and "second" in this embodiment do not represent the sequence relationship between the components.
  • For example, the first visible light scene image may be the visible light scene image on the left side of the endoscope, and the second visible light scene image may be the visible light scene image on the right side of the endoscope.
  • the endoscope driving module 300 includes a first driving unit 310 and a second driving unit 320 .
  • the first driving unit 310 is used for driving the visible light image sensor 111 to acquire the visible light scene image according to the first exposure time T 1 and the first gain G 1 ;
  • the second driving unit 320 is used for The near-infrared light image sensor 121 is driven according to the second exposure time T 2 and the second gain G 2 to acquire the near-infrared light scene image.
  • In the fluorescence mode, the first driving unit 310 is configured to drive the visible light image sensor 111 to acquire the visible light scene image according to the third exposure time T 3 and the third gain G 3 ; the second driving unit 320 is used for driving the near-infrared light image sensor 121 to acquire the fluorescence scene image according to the fourth exposure time T 4 and the fourth gain G 4 .
  • the fluorescence endoscope system further includes an endoscope control module 400 , and the endoscope drive module 300 is connected in communication with the endoscope control module 400 .
  • the endoscope control module 400 includes a first control unit 410 and/or a second control unit 420 .
  • the first control unit 410 is configured to make the current brightness B 1 of the first video stream within the preset first target brightness range (B 1min , B 1max ) in the bleeding mode; the second control unit 420 It is used to make the current brightness B 2 of the second video stream within the preset second target brightness range (B 2min , B 2max ) in the bleeding mode.
  • When the fluorescence endoscopy system is in the bleeding mode, it needs to control the brightness of the first video stream and/or the brightness of the second video stream within a desired brightness range, so as to facilitate subsequent image fusion. Therefore, in the bleeding mode, the first control unit 410 and/or the second control unit 420 determine whether the current brightness B 1 of the first video stream is within the first target brightness range (B 1min , B 1max ) and/or whether the current brightness B 2 of the second video stream is within the second target brightness range (B 2min , B 2max ); if not, the current brightness is adjusted by adjusting the exposure amount of the image sensor (i.e., the product of the exposure time T and the gain G).
  • the fluorescence endoscope is a fixed aperture endoscope, so the exposure amount can be regarded as the product of the exposure time T and the gain G.
  • In other embodiments, the fluorescence endoscope may be a variable-aperture endoscope, in which case the exposure amount can be regarded as the product of the aperture, the exposure time T and the gain G.
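The two definitions of exposure amount just given (fixed aperture: T x G; variable aperture: aperture x T x G) can be captured in one helper. This is purely illustrative; the name and default are assumptions, and the units depend on the sensor driver.

```python
def exposure_amount(exposure_time, gain, aperture=1.0):
    """Exposure amount as described in the text.

    For a fixed-aperture endoscope, leave aperture at its default of 1.0
    so the result reduces to the product of exposure time T and gain G;
    for a variable-aperture endoscope, pass the (relative) aperture too.
    """
    return aperture * exposure_time * gain
```

Because brightness scales with this product, the control units can trade exposure time against gain while holding the exposure amount, and hence the image brightness, roughly constant.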
  • Specifically, the first exposure amount of the visible light image sensor 111 (here, the product of the first exposure time T 1 and the first gain G 1 ) is adjusted by the first control unit 410, so that the current brightness B 1 of the first video stream is within the preset first target brightness range (B 1min , B 1max ); and/or the second exposure amount of the near-infrared light image sensor 121 (here, the product of the second exposure time T 2 and the second gain G 2 ) is adjusted by the second control unit 420, so that the current brightness B 2 of the second video stream is within the preset second target brightness range (B 2min , B 2max ).
  • "current brightness” refers to the brightness of the image of the current frame in the video stream.
  • the current brightness B 1 of the first video stream is the brightness of the current frame image in the first video stream
  • the current brightness B 2 of the second video stream is the brightness of the current frame image in the second video stream.
  • In this way, the first driving unit 310 receives parameters such as the first exposure time T 1 and the first gain G 1 output by the first control unit 410, enabling the visible light image sensor 111 to acquire a visible light scene image that meets the brightness requirement; the second driving unit 320 receives parameters such as the second exposure time T 2 and the second gain G 2 output by the second control unit 420, enabling the near-infrared light image sensor 121 to acquire a near-infrared light scene image that meets the brightness requirement.
  • the first control unit 410 is further configured to make the current brightness B 1 of the first video stream in the fluorescent mode to be within a preset third target brightness range (B 3min , B 3max ); and/or , the second control unit 420 is configured to make the current brightness B 2 of the second video stream within the preset fourth target brightness range (B 4min , B 4max ) in the fluorescent mode.
  • In one embodiment, the fluorescence endoscope system adjusts the current brightness of the video stream only in the bleeding mode.
  • the first target brightness range (B 1min , B 1max ) and the second target brightness range (B 2min , B 2max ) may be set in the first control unit 410 and the second control unit 420 respectively.
  • When the endoscope control module 400 receives an instruction from the central controller 800 (described in detail below) to turn on the bleeding mode, the first control unit 410 and the second control unit 420 adjust the current brightness of the first video stream and the second video stream according to the first target brightness range (B 1min , B 1max ) and the second target brightness range (B 2min , B 2max ).
  • In another embodiment, the fluorescence endoscope system adjusts the current brightness of the video stream in both the bleeding mode and the fluorescence mode.
  • the first control unit 410 and the second control unit 420 are always turned on.
  • the first target brightness range (B 1min , B 1max ) and the third target brightness range (B 3min , B 3max ) can be set in the first control unit 410, and the second target brightness range (B 2min , B 2max ) and the fourth target brightness range (B 4min , B 4max ) can be set in the second control unit 420.
  • When the central controller 800 receives an instruction for the bleeding mode, it controls the first control unit 410 and the second control unit 420 to adjust the current brightness of the first video stream and the second video stream according to the first target brightness range (B 1min , B 1max ) and the second target brightness range (B 2min , B 2max ); when the central controller 800 receives an instruction for the fluorescence mode, it controls the first control unit 410 and the second control unit 420 to adjust the current brightness of the first video stream and the second video stream according to the third target brightness range (B 3min , B 3max ) and the fourth target brightness range (B 4min , B 4max ).
  • the fourth target brightness range (B 4min , B 4max ) can also be transmitted to the endoscope control module 400 by the central controller 800 when the endoscope system mode is changed.
  • the first control unit 410 includes a first brightness acquisition unit 411 and a first exposure control unit 412 .
  • the first brightness acquisition part 411 is connected to the visible light image sensor 111
  • the first exposure control part 412 is connected to the first brightness acquisition part 411 and the first driving unit 310 .
  • the first brightness obtaining unit 411 is configured to receive the first video stream output by the visible light image sensor 111, acquire in real time the brightness of the current frame image in the first video stream, that is, the current brightness B 1 of the first video stream, and send the current brightness B 1 to the first exposure control unit 412.
  • the first exposure control unit 412 is used to determine whether the received current brightness B 1 of the first video stream is within the preset first target brightness range (B 1min , B 1max ); when it is not, the first exposure amount of the visible light image sensor 111 is adjusted and output to the first driving unit 310, so that the current brightness B 1 of the first video stream falls within the preset first target brightness range (B 1min , B 1max ).
  • Since the current frame of the first video stream changes dynamically, that is, the current frame image of the first video stream changes with time, "making the current brightness B 1 of the first video stream be within the preset first target brightness range" should be understood as follows: the first exposure time T 1 and the first gain G 1 obtained based on the current frame image adjust the visible light scene image newly acquired by the visible light image sensor 111, so that when a subsequent frame (i.e., the newly acquired visible light scene image) becomes the current frame, the brightness B 1 of that current frame image is within the preset first target brightness range (B 1min , B 1max ).
  • the second control unit 420 includes a second luminance acquisition unit 421 and a second exposure control unit 422 .
  • the second brightness acquisition part 421 is connected to the near-infrared light image sensor 121
  • the second exposure control part 422 is connected to the second brightness acquisition part 421 and the second driving unit 320 .
  • the second brightness acquiring unit 421 is configured to receive the second video stream output by the near-infrared light image sensor 121, acquire in real time the brightness of the current frame image in the second video stream, that is, the current brightness B 2 of the second video stream, and send the current brightness B 2 to the second exposure control unit 422.
  • the second exposure control unit 422 is configured to determine whether the current brightness B 2 of the received second video stream is within a preset second target brightness range (B 2min , B 2max ), and when the received second video stream is When the current brightness B 2 of the video stream is not within the preset second target brightness range (B 2min , B 2max ), adjust the second exposure amount of the near-infrared light image sensor 121 and output it to the second driving unit 320, so that the current brightness B 2 of the second video stream is within a preset second target brightness range (B 2min , B 2max ).
  • Since the current frame of the second video stream changes dynamically, that is, the current frame image of the second video stream changes with time, "making the current brightness B 2 of the second video stream be within the preset second target brightness range" should be understood as follows: the second exposure time T 2 and the second gain G 2 obtained based on the current frame image adjust the near-infrared light scene image newly acquired by the near-infrared light image sensor 121, so that when a subsequent frame (i.e., the newly acquired near-infrared light scene image) becomes the current frame, the brightness B 2 of that current frame image is within the preset second target brightness range (B 2min , B 2max ).
  • the present invention has no particular limitation on the specific method for obtaining the current brightness B1 of the first video stream.
  • For example, the first luminance obtaining unit 411 may take the average or weighted value of the Y values of all or some pixels of the current frame of the first video stream as the current brightness B 1 .
  • When the current frame image is in RGB format, the first brightness obtaining unit 411 first obtains the Y value of each pixel of the current frame image of the first video stream according to the RGB values of that pixel.
  • the method for obtaining the current brightness B 2 of the second video stream may refer to the above-mentioned method for obtaining the current brightness B 1 of the first video stream, or other methods may be used to obtain the current brightness B 2 according to the characteristics of the near-infrared image, which will not be described in detail here. Repeat.
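The brightness computation just described (per-pixel Y from RGB, then an average over all pixels) can be sketched as follows. The BT.601 luma weights are an assumed choice of RGB-to-Y conversion, since the patent does not name one, and the function name is illustrative; a weighted (e.g. center-weighted) mean could be substituted for the plain mean.

```python
import numpy as np

def current_brightness(frame_rgb):
    """Current brightness B of a video-stream frame.

    Computes the Y (luminance) value of every pixel from its RGB values
    using the BT.601 weights, then averages over all pixels.
    frame_rgb: HxWx3 uint8 array.
    """
    rgb = frame_rgb.astype(np.float64)
    y = 0.299 * rgb[..., 0] + 0.587 * rgb[..., 1] + 0.114 * rgb[..., 2]
    return float(y.mean())
```

The exposure control unit would compare this value against the target range (B min , B max ) each frame.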
  • the first control unit 410 further includes a first illumination adjustment part 413. The first illumination adjustment part 413 is communicatively connected with the first exposure control part 412 and is used, when the first gain G 1 and the first exposure time T 1 output by the first exposure control part 412 cannot make the current brightness B 1 of the first video stream fall within the first target brightness range (B 1min , B 1max ), to control the lighting module 200 to adjust the output power of the first light source module 211 (i.e., the luminous flux of the visible light), so that the current brightness B 1 of the first video stream is within the first target brightness range (B 1min , B 1max ).
  • Similarly, the second control unit 420 further includes a second illumination adjustment part 423. The second illumination adjustment part 423 is communicatively connected with the second exposure control part 422 and is used, when the second gain G 2 and the second exposure time T 2 output by the second exposure control part 422 cannot make the current brightness B 2 of the second video stream fall within the second target brightness range (B 2min , B 2max ), to control the lighting module 200 to adjust the output power of the third light source module 213 (i.e., the luminous flux of the near-infrared light), so that the current brightness B 2 of the second video stream is within the second target brightness range (B 2min , B 2max ).
  • In a specific embodiment, the first exposure control unit 412 receives the current brightness B 1 of the first video stream from the first brightness acquisition unit 411. Further, the first exposure control unit 412 determines whether the acquired current brightness B 1 is within the preset first target brightness range (B 1min , B 1max ): if the current brightness B 1 is within the preset first target brightness range (B 1min , B 1max ), the first exposure time T 1 and the first gain G 1 are output to the first driving unit 310, or no data communication occurs between the first exposure control unit 412 and the first driving unit 310 and the status quo is maintained; if the current brightness B 1 is outside the preset first target brightness range (B 1min , B 1max ), the first exposure amount of the visible light image sensor 111 is adjusted so that the current brightness B 1 falls within the preset first target brightness range (B 1min , B 1max ).
• the first exposure control unit 412 feeds back information to the first lighting adjustment part 413 connected to the lighting module 200; the first lighting adjustment part 413 adjusts the output power of the first light source module 211, that is, the luminous flux of the visible light, so as to adjust the current brightness B 1 of the first video stream, so that the current brightness B 1 falls within the first target brightness range (B 1min , B 1max ).
• the second exposure control unit 422 receives the current brightness B 2 of the second video stream from the second brightness acquisition unit 421 . Further, the second exposure control unit 422 determines whether the acquired current brightness B 2 is within the preset second target brightness range (B 2min , B 2max ): if the current brightness B 2 is within the preset second target brightness range (B 2min , B 2max ), the second exposure time T 2 and the second gain G 2 are output to the second driving unit 320 , or no data communication occurs between the second exposure control part 422 and the second driving unit 320 and the status quo is maintained; if the current brightness B 2 is outside the preset second target brightness range (B 2min , B 2max ), the second exposure of the near-infrared light image sensor 121 is adjusted so that the current brightness B 2 falls within the preset second target brightness range (B 2min , B 2max ).
• the second exposure control part 422 feeds back information to the second lighting adjustment part 423 connected to the lighting module 200; the second lighting adjustment part 423 adjusts the output power of the third light source module 213, that is, the luminous flux of the near-infrared light, so as to adjust the current brightness B 2 of the second video stream, so that the current brightness B 2 of the second video stream falls within the second target brightness range (B 2min , B 2max ).
• FIG. 6 schematically shows a flow chart of adjusting the first exposure amount so that the current brightness B 1 of the first video stream falls within the first target brightness range (B 1min , B 1max ) according to an embodiment of the present invention; as shown in FIG. 6, the process specifically includes:
• the first exposure control part 412 increases the first exposure amount of the visible light image sensor 111 to raise the current brightness B 1 of the first video stream.
• specifically, first increase the first exposure time T 1 , and determine whether the maximum first exposure time T 1max together with the minimum first gain G 1min can meet the exposure requirement of the first target brightness range (B 1min , B 1max ):
• if so, the first exposure time T 1 is adjusted based on the minimum first gain G 1min so that the current brightness B 1 of the first video stream falls within the first target brightness range (B 1min , B 1max ), and the minimum first gain G 1min and the adjusted first exposure time T 1 are output to the first driving unit 310;
• if not, determine whether the maximum first exposure time T 1max together with the maximum first gain G 1max can meet the exposure requirement: if satisfied, the first gain G 1 is adjusted based on the maximum first exposure time T 1max so that the current brightness B 1 of the first video stream falls within the first target brightness range (B 1min , B 1max ), and the maximum first exposure time T 1max and the adjusted first gain G 1 are output to the first driving unit 310 ; if not, the first exposure control unit 412 feeds back the information that the current brightness B 1 cannot be kept within the first target brightness range (B 1min , B 1max ) (for example, the information includes that the first exposure time T 1 is the maximum first exposure time T 1max and the first gain G 1 is the maximum first gain G 1max ) to the first illumination adjustment part 413 connected to the lighting module 200.
• the first exposure control unit 412 reduces the first exposure amount of the visible light image sensor 111 to lower the current brightness B 1 of the first video stream.
• specifically, first reduce the first gain G 1 , and determine whether the maximum first exposure time T 1max together with the minimum first gain G 1min can meet the exposure requirement of the first target brightness range (B 1min , B 1max ): if so, the first gain G 1 is adjusted based on the maximum first exposure time T 1max so that the current brightness B 1 of the first video stream falls within the first target brightness range (B 1min , B 1max ), and the maximum first exposure time T 1max and the adjusted first gain G 1 are output to the first driving unit 310;
• if not, determine whether the minimum first exposure time T 1min together with the minimum first gain G 1min can meet the exposure requirement: if satisfied, the first exposure time T 1 is adjusted based on the minimum first gain G 1min so that the current brightness B 1 of the first video stream falls within the first target brightness range (B 1min , B 1max ), and the minimum first gain G 1min and the adjusted first exposure time T 1 are output to the first driving unit 310 ; if not, the first exposure control unit 412 feeds back the information that the current brightness B 1 remains outside the first target brightness range (B 1min , B 1max ) (for example, the information includes that the first exposure time T 1 is the minimum first exposure time T 1min and the first gain G 1 is the minimum first gain G 1min ) to the first lighting adjustment part 413 connected to the lighting module 200.
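The decision order described above (exposure time first when brightening, gain first when dimming, with a fallback to illumination adjustment when both limits are reached) can be sketched as follows. This is a minimal illustrative sketch, not the patent's implementation: the function name, the numeric limits, and the assumption that brightness scales linearly with the exposure amount T 1 × G 1 are all hypothetical.

```python
# Minimal sketch of the exposure-priority logic described above. The limits,
# the function name, and the linear model (brightness proportional to the
# exposure amount E = T1 * G1) are illustrative assumptions.

T1_MIN, T1_MAX = 1.0, 33.0   # first exposure time limits in ms (hypothetical)
G1_MIN, G1_MAX = 1.0, 16.0   # first gain limits (hypothetical)

def adjust_exposure(b1, b1_target, t1, g1):
    """Return (T1, G1, need_light_adjust) moving brightness b1 toward target."""
    e = t1 * g1 * (b1_target / b1)       # required exposure amount
    if b1_target > b1:
        # Too dark: raise the exposure time first, then the gain.
        if e <= T1_MAX * G1_MIN:
            return e / G1_MIN, G1_MIN, False
        if e <= T1_MAX * G1_MAX:
            return T1_MAX, e / T1_MAX, False
        # Even T1max with G1max is not enough: fall back to illumination.
        return T1_MAX, G1_MAX, True
    else:
        # Too bright: lower the gain first, then the exposure time.
        if e >= T1_MAX * G1_MIN:
            return T1_MAX, e / T1_MAX, False
        if e >= T1_MIN * G1_MIN:
            return e / G1_MIN, G1_MIN, False
        # Even T1min with G1min is too bright: fall back to illumination.
        return T1_MIN, G1_MIN, True
```

When the third returned value is True, the caller would hand control to the first illumination adjustment part 413 so that the light-source output power is changed instead.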
  • the method of adjusting the second exposure time T 2 and the second gain G 2 so that the current brightness B 2 is within the preset second target brightness range (B 2min , B 2max ) specifically includes:
• the second exposure control unit 422 increases the second exposure amount of the near-infrared light image sensor 121 to raise the current brightness B 2 of the second video stream.
• specifically, first increase the second exposure time T 2 , and determine whether the maximum second exposure time T 2max together with the minimum second gain G 2min can meet the exposure requirement of the second target brightness range (B 2min , B 2max ):
• if so, the second exposure time T 2 is adjusted based on the minimum second gain G 2min so that the current brightness B 2 of the second video stream falls within the second target brightness range (B 2min , B 2max ), and the minimum second gain G 2min and the adjusted second exposure time T 2 are output to the second driving unit 320;
• if not, determine whether the maximum second exposure time T 2max together with the maximum second gain G 2max can meet the exposure requirement: if satisfied, the second gain G 2 is adjusted based on the maximum second exposure time T 2max so that the current brightness B 2 of the second video stream falls within the second target brightness range (B 2min , B 2max ), and the maximum second exposure time T 2max and the adjusted second gain G 2 are output to the second driving unit 320 ; if not, the second exposure control part 422 feeds back the information that the current brightness B 2 cannot be kept within the second target brightness range (B 2min , B 2max ) (for example, the information includes that the second exposure time T 2 is the maximum second exposure time T 2max and the second gain G 2 is the maximum second gain G 2max ) to the second illumination adjustment part 423 connected to the lighting module 200.
• the second exposure control unit 422 reduces the second exposure amount of the near-infrared light image sensor 121 to lower the current brightness B 2 of the second video stream.
• specifically, first reduce the second gain G 2 , and determine whether the maximum second exposure time T 2max together with the minimum second gain G 2min can meet the exposure requirement of the second target brightness range (B 2min , B 2max ): if so, the second gain G 2 is adjusted based on the maximum second exposure time T 2max so that the current brightness B 2 of the second video stream falls within the second target brightness range (B 2min , B 2max ), and the maximum second exposure time T 2max and the adjusted second gain G 2 are output to the second driving unit 320;
• if not, determine whether the minimum second exposure time T 2min together with the minimum second gain G 2min can meet the exposure requirement: if satisfied, the second exposure time T 2 is adjusted based on the minimum second gain G 2min so that the current brightness B 2 of the second video stream falls within the second target brightness range (B 2min , B 2max ), and the minimum second gain G 2min and the adjusted second exposure time T 2 are output to the second driving unit 320 ; if not, the second exposure control unit 422 feeds back the information that the current brightness B 2 remains outside the second target brightness range (B 2min , B 2max ) (for example, the information includes that the second exposure time T 2 is the minimum second exposure time T 2min and the second gain G 2 is the minimum second gain G 2min ) to the second lighting adjustment part 423 connected to the lighting module 200.
• the first exposure control unit 412 adjusts the current brightness B 1 of the first video stream by adjusting the first gain G 1 and the first exposure time T 1 , so that the current brightness B 1 of the first video stream falls within the first target brightness range (B 1min , B 1max ), and outputs the corresponding first gain G 1 and first exposure time T 1 to the first driving unit 310 ; the first driving unit 310 uses the received first gain G 1 and first exposure time T 1 to drive the visible light image sensor 111 to acquire the visible light scene of the target tissue 600, so that the brightness of the subsequently acquired visible light scene images is within the first target brightness range (B 1min , B 1max );
• the second exposure control unit 422 adjusts the current brightness B 2 of the second video stream by adjusting the second gain G 2 and the second exposure time T 2 , so that the current brightness B 2 of the second video stream falls within the second target brightness range (B 2min , B 2max ), and outputs the corresponding second gain G 2 and second exposure time T 2 to the second driving unit 320 .
  • the endoscope is a three-dimensional endoscope.
  • the first exposure control unit 412 includes a first visible light control unit and a second visible light control unit;
  • the second exposure control unit 422 includes a first near-infrared light control unit and a second near-infrared light control unit.
  • the first driving unit 310 includes a first visible light driving unit 311 and a second visible light driving unit 312 ;
  • the second driving unit 320 includes a first near-infrared light driving unit 321 and a second near-infrared light driving unit 322 .
  • the first visible light driving unit 311 and the second visible light driving unit 312 are respectively used to drive the first visible light image sensor 111A and the second visible light image sensor 111B to acquire the first visible light scene image and the second visible light scene image.
• the first near-infrared light driving unit 321 and the second near-infrared light driving unit 322 are respectively used to drive the first near-infrared light image sensor 121A and the second near-infrared light image sensor 121B to acquire the first near-infrared light scene image and the second near-infrared light scene image.
• only the first visible light control unit or the second visible light control unit may be turned on, and the current brightness B 11 of the first visible light video stream or the current brightness B 12 of the second visible light video stream is adjusted by adjusting the first exposure time T 1 and the first gain G 1 ; alternatively, the first visible light control unit and the second visible light control unit may be turned on at the same time, and the current brightness B 11 of the first visible light video stream and the current brightness B 12 of the second visible light video stream are adjusted respectively, each through its own set of first exposure time T 1 and first gain G 1 .
• similarly, only the first near-infrared light control unit or the second near-infrared light control unit may be turned on, and the current brightness B 21 of the first near-infrared light video stream or the current brightness B 22 of the second near-infrared light video stream is adjusted by adjusting the second exposure time T 2 and the second gain G 2 ; alternatively, the first near-infrared light control unit and the second near-infrared light control unit may be turned on at the same time, and the current brightness B 21 of the first near-infrared light video stream and the current brightness B 22 of the second near-infrared light video stream are adjusted respectively, each through its own set of second exposure time T 2 and second gain G 2 .
• the scene fusion module 500 is used for, when the current brightness B 1 of the first video stream is within the preset first target brightness range (B 1min , B 1max ) and/or the current brightness B 2 of the second video stream is within the preset second target brightness range (B 2min , B 2max ), fusing the current frame image of the first video stream with the current frame image of the second video stream based on the brightness information of the current frame image of the first video stream and the brightness information of the current frame image of the second video stream to obtain a luminance fusion image, and fusing the luminance fusion image with the current frame image of the first video stream based on the chrominance information of the current frame image of the first video stream to obtain a scene fusion image.
  • the scene fusion module 500 fuses the current frame image of the first video stream with the current frame image of the second video stream. Further, the scene fusion module 500 is connected in communication with the central controller 800 . After receiving the bleeding mode instruction, the central controller 800 controls the scene fusion module 500 to fuse the current frame image of the first video stream with the current frame image of the second video stream to obtain a scene fusion image.
• the central controller 800 controls the scene fusion module 500 to directly output the received first video stream and second video stream. Therefore, in this embodiment, by fusing the scene images obtained by different types of sensors to obtain a fused image of the target tissue 600, image blur caused by the motion of the camera can be effectively reduced, the user's misjudgment of lesion information caused by blurred images can be avoided, and the accuracy and safety of surgery can be improved; at the same time, the signal-to-noise ratio can be effectively improved and more image details can be obtained.
• the scene fusion module 500 includes an image fusion unit configured to, based on the brightness information of the current frame image of the first video stream and the brightness information of the current frame image of the second video stream, perform image fusion between the pixels of the current frame image of the first video stream and the corresponding pixels of the current frame image of the second video stream to obtain a luminance fusion image, and, based on the chromaticity information of the current frame image of the first video stream, perform image fusion between the luminance fusion image and the current frame image of the first video stream to obtain a scene fusion image.
  • the scene fusion module 500 is connected in communication with the visible light image sensor 111 and the near-infrared light image sensor 121 to obtain a first video stream and a second video stream, respectively.
• the image fusion unit is configured to take the Y value of a pixel of the current frame image of the first video stream as the brightness of that pixel.
• the U value and V value, or the C b value and C r value, of a pixel in the current frame image of the first video stream are taken as the chromaticity of that pixel.
• the scene fusion module further includes an image mode conversion unit configured to convert the current frame image of the first video stream into the YUV space or the YC b C r space, take the Y value of a pixel of the converted current frame image as the brightness of that pixel, and take the U value and V value, or the C b value and C r value, of the pixel as the chromaticity of that pixel.
• the specific conversion method is similar to the method of converting a video stream in RGB format into YUV format or YC b C r format used when the first brightness obtaining unit 411 obtains the current brightness B 1 of the first video stream, and is not repeated here.
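The RGB-to-YC b C r conversion referred to above can be sketched with the standard BT.601 full-range coefficients; the patent does not name a particular standard, so the coefficient choice here is an assumption for illustration.

```python
def rgb_to_ycbcr(r, g, b):
    """BT.601 full-range RGB -> YCbCr (coefficients assumed, not from the
    patent). Y is used as the pixel brightness, (Cb, Cr) as its chromaticity."""
    y  =  0.299 * r + 0.587 * g + 0.114 * b
    cb = -0.168736 * r - 0.331264 * g + 0.5 * b + 128
    cr =  0.5 * r - 0.418688 * g - 0.081312 * b + 128
    return y, cb, cr
```

A pure gray input (r = g = b) yields Cb = Cr = 128, i.e. zero chromaticity, which is why the near-infrared frames contribute only luminance.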
• the brightness information of the current frame image of the second video stream may be obtained with reference to the above-mentioned method for obtaining the brightness information of the current frame image of the first video stream, or other methods may be used according to the characteristics of the near-infrared image; the detailed description is omitted.
• the algorithm for fusion based on the brightness of the pixels of the current frame image of the first video stream and the brightness of the corresponding pixels of the current frame image of the second video stream is not particularly limited; examples include the arithmetic average method, the weighted average method, and the extreme value method (taking the maximum or minimum of the two).
• in an embodiment, according to the brightness of each pixel of the current frame image of the first video stream and the brightness of the corresponding pixel of the current frame image of the second video stream, the weight of the brightness of the pixel in the current frame image of the first video stream and the weight of the brightness of the corresponding pixel in the current frame image of the second video stream are respectively obtained; then, according to the obtained weights, the brightness of the pixel of the current frame image of the first video stream and the brightness of the corresponding pixel of the current frame image of the second video stream are weighted to obtain the brightness of the corresponding pixel of the luminance fusion image.
• FIG. 7 schematically shows a normal distribution curve of brightness and weight in an embodiment of the present invention, wherein P 1 is the first brightness, P 2 is the second brightness, W 1 is the weight of the first brightness P 1 , and W 2 is the weight of the second brightness P 2 .
• the brightness P 3 of the luminance fusion image, the first brightness P 1 of a pixel of the current frame image of the first video stream, and the second brightness P 2 of the corresponding pixel of the current frame image of the second video stream satisfy the following relationship:
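The equation itself is not reproduced in this text. Based on the surrounding description (normal-distribution weight curves and a weighted combination of P 1 and P 2 ), a plausible sketch is the normalized weighted average P 3 = (W 1 P 1 + W 2 P 2 ) / (W 1 + W 2 ); the Gaussian center and width below are assumed values, not taken from the patent.

```python
import math

MU, SIGMA = 128.0, 40.0  # center and width of the weight curve (assumed)

def weight(p):
    """Normal-distribution weight of a brightness value, as in FIG. 7."""
    return math.exp(-((p - MU) ** 2) / (2 * SIGMA ** 2))

def fuse_brightness(p1, p2):
    """Weighted fusion of visible brightness p1 and NIR brightness p2,
    assuming the normalized form P3 = (W1*P1 + W2*P2) / (W1 + W2)."""
    w1, w2 = weight(p1), weight(p2)
    return (w1 * p1 + w2 * p2) / (w1 + w2)
```

With such weights, brightness values near the middle of the range dominate the result, so over- and under-exposed pixels from either sensor are suppressed.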
• the scene fusion module 500 is configured to: based on the brightness information of the current frame image of the first visible light video stream and the brightness information of the current frame image of the first near-infrared light video stream, fuse the current frame image of the first visible light video stream with the current frame image of the first near-infrared light video stream to obtain a first luminance fusion image, and, based on the chrominance information of the current frame image of the first visible light video stream, fuse the first luminance fusion image with the current frame image of the first visible light video stream to obtain a first scene fusion image; and, based on the brightness information of the current frame image of the second visible light video stream and the brightness information of the current frame image of the second near-infrared light video stream, fuse the current frame image of the second visible light video stream with the current frame image of the second near-infrared light video stream to obtain a second luminance fusion image, and, based on the chromaticity information of the current frame image of the second visible light video stream, fuse the second luminance fusion image with the current frame image of the second visible light video stream to obtain a second scene fusion image.
• since the first visible light scene image and the first near-infrared light scene image are acquired through the same optical channel, and the second visible light scene image and the second near-infrared light scene image are acquired through another shared optical channel, there is lateral parallax between the first visible light scene image and the second visible light scene image, and between the first near-infrared light scene image and the second near-infrared light scene image; accordingly, there is also horizontal parallax between the first scene fusion image and the second scene fusion image obtained after image fusion, which conforms to the characteristics of the human eyes and achieves a three-dimensional effect.
• the image fusion unit may include a first image fusion unit 510 and a second image fusion unit 520. The first image fusion unit 510 is configured to, based on the brightness information of the current frame image of the first visible light video stream and the brightness information of the current frame image of the first near-infrared video stream, perform image fusion between the pixels of the current frame image of the first visible light video stream and the corresponding pixels of the current frame image of the first near-infrared video stream to obtain a first brightness fusion image, and, based on the chromaticity information of the current frame image of the first visible light video stream, fuse the first brightness fusion image with the current frame image of the first visible light video stream to obtain the first scene fusion image.
• the second image fusion unit 520 is configured to, based on the brightness information of the current frame image of the second visible light video stream and the brightness information of the current frame image of the second near-infrared video stream, perform image fusion between the pixels of the current frame image of the second visible light video stream and the corresponding pixels of the current frame image of the second near-infrared video stream to obtain a second brightness fusion image, and, based on the chrominance information of the current frame image of the second visible light video stream, fuse the second brightness fusion image with the current frame image of the second visible light video stream to obtain a second scene fusion image.
  • the fluorescence endoscope system further includes a video pipeline 700 and a central controller 800 .
  • the central controller 800 includes a video overlay unit 810 , a user input device 820 and a user interface 830 .
• the scene fusion images output by the scene fusion module 500 are transmitted to the video pipeline 700, and the video overlay unit 810 superimposes the scene fusion images from the video pipeline 700 to obtain a three-dimensional image.
  • the user input device 820 is used to receive an operator's operation instruction, such as switching the working mode, and transmit the operation instruction to the user interface 830 .
• the user interface 830 controls the scene fusion module 500 , the endoscope control module 400 , the lighting module 200 and so on according to the received operation instructions and the control instructions generated by the internal control conditions of the system. The user interface 830 is also fed into the video overlay unit 810, superimposed with the three-dimensional image, and transmitted to the display 900 in the surgeon's console for display.
  • the present invention also provides a control method of the fluorescence endoscope system.
  • FIG. 9 schematically shows a flow chart of a control method of a fluorescence endoscope system in a bleeding mode provided by an embodiment of the present invention.
  • the working modes of the fluorescence endoscope system include a first mode and a second mode, and the control method includes the following steps:
• Step S1: in the second mode, providing visible light and near-infrared light to illuminate the target tissue;
• Step S2: acquiring the visible light scene image and the near-infrared light scene image of the target tissue, and outputting them in the form of a first video stream and a second video stream, respectively;
• Step S3: judging whether the current brightness B 1 of the first video stream is within the preset first target brightness range (B 1min , B 1max ) and/or whether the current brightness B 2 of the second video stream is within the preset second target brightness range (B 2min , B 2max );
• Step S4: based on the brightness information of the current frame image of the first video stream and the brightness information of the current frame image of the second video stream, fusing the current frame image of the first video stream with the current frame image of the second video stream to obtain a luminance fusion image;
• Step S5: performing image fusion on the current frame image of the first video stream and the luminance fusion image based on the chrominance information of the current frame image of the first video stream to obtain a scene fusion image.
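Steps S3 to S5 can be sketched as a per-frame routine. Everything below is an illustrative stand-in: the tuple-based "frames", the brightness ranges, and the plain averaging used for the luminance fusion are assumptions, not the patent's concrete method.

```python
# Per-frame sketch of steps S3-S5. Frames are lists of (Y, Cb, Cr) pixels;
# ranges and the averaging fusion are illustrative assumptions only.

B1_RANGE = (80, 180)   # first target brightness range (B1min, B1max), assumed
B2_RANGE = (60, 160)   # second target brightness range (B2min, B2max), assumed

def mean_brightness(frame):
    """Average Y value of a frame; p = (Y, Cb, Cr)."""
    return sum(p[0] for p in frame) / len(frame)

def in_range(b, rng):
    return rng[0] < b < rng[1]

def process_frame(frame_vis, frame_nir):
    """S3: gate on brightness; S4: fuse luminance; S5: reuse visible chroma."""
    b1, b2 = mean_brightness(frame_vis), mean_brightness(frame_nir)
    if not (in_range(b1, B1_RANGE) and in_range(b2, B2_RANGE)):
        return None                  # S3 failed: exposure is re-adjusted first
    fused = []
    for (y1, cb, cr), (y2, _, _) in zip(frame_vis, frame_nir):
        y3 = (y1 + y2) / 2           # S4: luminance fusion (simple average)
        fused.append((y3, cb, cr))   # S5: chromaticity from the visible frame
    return fused
```

Note how the near-infrared frame contributes only its Y values; the chromaticity of the scene fusion image comes entirely from the visible (first) video stream, as step S5 specifies.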
• the fluorescence endoscope system in this embodiment includes two modes; in the second mode, when the current brightness of the first video stream and/or the second video stream is within the target brightness range, the current frame image captured under visible light and the current frame image captured under near-infrared light are image-fused. The resulting scene fusion image has suitable brightness and rich detail, can effectively improve the signal-to-noise ratio, and can effectively reduce image blur caused by the motion of the camera, avoiding users' misjudgment of lesion information due to blurred images and improving the accuracy and safety of surgery.
• the fluorescence endoscope system is based on existing hardware, and various functions can be realized with slight improvements, which can satisfy doctors' different surgical needs and improve the convenience of surgical operation.
  • the acquired visible light scene image and the near-infrared light scene image of the target tissue are respectively output in the form of a first video stream and a second video stream, including:
• judging whether the current brightness B 2 of the second video stream is within the preset second target brightness range (B 2min , B 2max ) includes:
• fusing the current frame image of the first video stream with the current frame image of the second video stream based on the brightness information of the current frame image of the first video stream and the brightness information of the current frame image of the second video stream to obtain a luminance fusion image includes:
• based on the brightness information of the current frame image of the first visible light video stream and the brightness information of the current frame image of the first near-infrared light video stream, fusing the current frame image of the first visible light video stream with the current frame image of the first near-infrared light video stream to obtain a first luminance fusion image; and, based on the brightness information of the current frame image of the second visible light video stream and the brightness information of the current frame image of the second near-infrared light video stream, fusing the current frame image of the second visible light video stream with the current frame image of the second near-infrared light video stream to obtain a second luminance fusion image;
• performing image fusion on the current frame image of the first video stream and the luminance fusion image based on the chrominance information of the current frame image of the first video stream to obtain a scene fusion image includes:
• based on the chromaticity information of the current frame image of the first visible light video stream, performing image fusion between the current frame image of the first visible light video stream and the first luminance fusion image to obtain a first scene fusion image; and, based on the chromaticity information of the current frame image of the second visible light video stream, performing image fusion between the current frame image of the second visible light video stream and the second luminance fusion image to obtain a second scene fusion image.
• the first exposure amount is adjusted so that the current brightness B 1 of the first video stream is within the first target brightness range (B 1min , B 1max ), wherein the first exposure amount is the product of the first exposure time T 1 and the first gain G 1 ;
  • the second exposure amount is the product of the second exposure time T 2 and the second gain G 2 .
• the first exposure amount of the visible light image sensor is increased to adjust the current brightness B 1 of the first video stream.
• increasing the first exposure amount of the visible light image sensor 111 to adjust the current brightness B 1 of the first video stream includes: judging whether the maximum first exposure time T 1max and the minimum first gain G 1min can meet the exposure requirement within the first target brightness range (B 1min , B 1max );
• if so, the first exposure time T 1 is adjusted based on the minimum first gain G 1min so that the current brightness B 1 of the first video stream is within the first target brightness range (B 1min , B 1max );
• if not, the first gain G 1 is adjusted based on the maximum first exposure time T 1max , so that the current brightness B 1 of the first video stream is within the first target brightness range.
• the first exposure amount of the visible light image sensor 111 is reduced to adjust the current brightness B 1 of the first video stream.
• reducing the first exposure amount of the visible light image sensor 111 to adjust the current brightness B 1 of the first video stream includes: judging whether the maximum first exposure time T 1max and the minimum first gain G 1min can meet the exposure requirement within the first target brightness range (B 1min , B 1max );
• if not, the first exposure time T 1 is adjusted based on the minimum first gain G 1min , so that the current brightness B 1 of the first video stream is within the first target brightness range.
• if the current brightness B 2 of the second video stream is lower than the lower limit of the second target brightness range (B 2min , B 2max ), the second exposure amount of the near-infrared light image sensor 121 is increased to adjust the current brightness B 2 of the second video stream.
• increasing the second exposure amount of the near-infrared light image sensor 121 to adjust the current brightness B 2 of the second video stream includes: judging whether the maximum second exposure time T 2max and the minimum second gain G 2min can meet the exposure requirement within the second target brightness range (B 2min , B 2max );
• if so, the second exposure time T 2 is adjusted based on the minimum second gain G 2min so that the current brightness B 2 of the second video stream is within the second target brightness range (B 2min , B 2max );
• if the current brightness B 2 of the second video stream is higher than the upper limit B 2max of the second target brightness range (B 2min , B 2max ), the second exposure amount of the near-infrared light image sensor 121 is reduced to adjust the current brightness B 2 of the second video stream.
• reducing the second exposure amount of the near-infrared light image sensor 121 to adjust the current brightness B 2 of the second video stream includes: judging whether the maximum second exposure time T 2max and the minimum second gain G 2min can meet the exposure requirement within the second target brightness range (B 2min , B 2max );
• if adjusting the first exposure amount cannot bring the current brightness B 1 of the first video stream within the first target brightness range (B 1min , B 1max ), the output power of the first light source module for providing the visible light is adjusted, thereby adjusting the current brightness B 1 of the first video stream so that the current brightness B 1 of the first video stream falls within the first target brightness range (B 1min , B 1max ).
  • the current brightness B 2 of the second video stream cannot be made within the second target brightness range (B 2min , B 2max ) ), then adjust the output power of the third light source module for providing the near-infrared light, and then adjust the current brightness B 2 of the second video stream, so that the current brightness B of the second video stream 2 is within the second target luminance range (B 2min , B 2max ).
  • The current frame image of the first video stream and the current frame image of the second video stream are fused.
  • Fusing the current frame images to obtain a brightness-fused image includes: based on the brightness information of the current frame image of the first video stream and the brightness information of the current frame image of the second video stream, performing image fusion between the pixels of the current frame image of the first video stream and the corresponding pixels of the current frame image of the second video stream to obtain the brightness-fused image.
  • Performing image fusion between the current frame image of the first video stream and the brightness-fused image based on the chrominance information of the current frame image of the first video stream to obtain a scene-fused image includes:
  • assigning the chrominance information of each pixel of the current frame image of the first video stream to the corresponding pixel of the brightness-fused image as that pixel's chrominance.
  • The control method further includes: converting the current frame image of the first video stream into YUV space or YCbCr space.
  • Performing image fusion between the pixels of the current frame image of the first video stream and the corresponding pixels of the current frame image of the second video stream includes:
  • obtaining, according to the brightness of the pixels of the current frame image of the first video stream and the brightness of the corresponding pixels of the current frame image of the second video stream, a weight for the brightness of each pixel of the current frame image of the first video stream and a corresponding weight for the brightness of each pixel of the current frame image of the second video stream;
  • weighting, according to those weights, the brightness of the pixels of the current frame image of the first video stream and the brightness of the corresponding pixels of the current frame image of the second video stream to obtain the brightness of the pixels of the brightness-fused image.
  • The present invention also provides a storage medium in which a computer program is stored; the computer program implements the above-described control method when executed by a processor.
  • The fluorescence endoscope system in this embodiment includes two modes. In the second mode, when the current brightness of the first video stream and/or the second video stream is within the target brightness range, the current frame image captured under visible light and the current frame image captured under near-infrared light are fused. The resulting scene-fused image has suitable brightness and rich detail, effectively improves the signal-to-noise ratio, and effectively reduces the image blur caused by camera motion, avoiding misjudgment of lesion information due to blurred images and improving the accuracy and safety of surgery.
  • The fluorescence endoscope system is based on existing hardware; with slight improvements it can realize multiple functions, satisfy doctors' needs in different operations, and improve the convenience of surgical operation.
  • the storage medium in the embodiments of the present invention may adopt any combination of one or more computer-readable mediums.
  • the readable medium may be a computer-readable signal medium or a computer-readable storage medium.
  • the computer-readable storage medium can be, for example, but not limited to, an electrical, magnetic, optical, electromagnetic, infrared, or semiconductor system, apparatus or device, or a combination of any of the above.
  • a computer-readable storage medium can be any tangible medium that contains or stores a program that can be used by or in combination with an instruction execution system, apparatus, or device.
  • a computer-readable signal medium may include a propagated data signal in baseband or as part of a carrier wave, with computer-readable program code embodied thereon. Such propagated data signals may take a variety of forms, including but not limited to electromagnetic signals, optical signals, or any suitable combination of the foregoing.
  • A computer-readable signal medium can also be any computer-readable medium other than a computer-readable storage medium that can transmit, propagate, or transport the program for use by or in connection with the instruction execution system, apparatus, or device.
  • Computer program code for carrying out operations of the present invention may be written in one or more programming languages, including object-oriented programming languages such as Java, Smalltalk, and C++, as well as conventional procedural programming languages such as the "C" language or similar programming languages.
  • The program code may execute entirely on the user's computer, partly on the user's computer, as a stand-alone software package, partly on the user's computer and partly on a remote computer, or entirely on the remote computer or server.
  • The remote computer may be connected to the user's computer through any kind of network, including a local area network (LAN) or wide area network (WAN), or may be connected to an external computer (e.g., through the Internet using an Internet service provider).
  • The fluorescence endoscope system, control method and storage medium provided by the present invention have the following advantages. In the first mode, the lighting module can provide visible light and excitation light; the excitation light is irradiated on the target tissue to excite fluorescence, helping the operator observe tissue information that cannot be observed under visible-light conditions. In the second mode, the illumination module can provide visible light and near-infrared light to illuminate the target tissue, and the scene fusion module fuses the current frame image captured under visible light with the current frame image captured under near-infrared light when the current brightness of the first video stream and/or the second video stream is within the target brightness range; the resulting scene-fused image has suitable brightness and rich detail.
  • The fluorescence endoscope system of the present invention can effectively reduce the image blur caused by camera motion, avoid the user's misjudgment of lesion information due to blurred images, and improve the accuracy and safety of the operation.
  • The signal-to-noise ratio can be effectively improved, and more detailed image information can be obtained.
  • The fluorescence endoscope system of the present invention is based on existing hardware; slight improvements suffice to realize multiple functions, meeting doctors' needs in different operations and improving the convenience of surgical operation.
  • Each block in the flowchart or block diagrams may represent a module, program segment, or portion of code that contains one or more executable instructions for implementing the specified logical function(s).
  • Each functional module in the various embodiments herein may be integrated to form an independent part, each module may exist alone, or two or more modules may be integrated to form an independent part.
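The luminance-then-chrominance fusion described in the bullets above can be sketched as follows. This is an illustrative sketch only: the function and array names are assumptions, and the per-pixel weights are taken as a given input here (the patent derives them from a brightness-to-weight normal distribution).

```python
# Sketch of the second-mode fusion pipeline: fuse the two luma planes by a
# per-pixel weighted sum, then carry the visible frame's chrominance over
# unchanged. Names and shapes are illustrative, not from the patent.
import numpy as np

def fuse_scene(visible_yuv, nir_luma, w_visible):
    """visible_yuv: (Y, U, V) HxW float arrays in [0, 1];
    nir_luma: HxW float array; w_visible: HxW weights in [0, 1]."""
    y_vis, u, v = visible_yuv
    # Luminance fusion: per-pixel weighted sum of the two luma planes.
    y_fused = w_visible * y_vis + (1.0 - w_visible) * nir_luma
    # Chrominance transfer: the visible frame's U/V are assigned to the
    # corresponding pixels of the luminance-fused image.
    return y_fused, u, v

# Tiny 1x2 example: equal weights average the two luma planes.
y, u, v = fuse_scene(
    (np.array([[0.2, 0.8]]), np.array([[0.5, 0.5]]), np.array([[0.4, 0.4]])),
    np.array([[0.6, 0.4]]),
    np.array([[0.5, 0.5]]),
)
```

With equal weights the fused luma is the mean of the two planes, while the chrominance planes pass through untouched.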


Abstract

A fluorescence endoscope system whose working modes include a first mode and a second mode, comprising an endoscope (100), an illumination module (200), an endoscope drive module (300), and a scene fusion module (500). The endoscope (100) comprises a visible-light image sensor (111) and a near-infrared image sensor (121). In the second mode, the illumination module (200) provides visible light and near-infrared light; the visible-light image sensor (111) acquires a visible-light scene image and outputs it as a first video stream, and the near-infrared image sensor (121) acquires a near-infrared scene image and outputs it as a second video stream. The scene fusion module (500) fuses the current frame image of the first video stream with the current frame image of the second video stream when the current brightness of the first video stream and/or the second video stream is within a target brightness range. Also disclosed are a control method for the fluorescence endoscope system and a storage medium. The method can effectively improve the signal-to-noise ratio and obtain more image detail information.

Description

Fluorescence Endoscope System, Control Method and Storage Medium

Technical Field

The present invention relates to the technical field of medical devices, and in particular to a fluorescence endoscope system, a control method, and a storage medium.
Background

With the continuous development of medical technology, the endoscope, as an inspection instrument integrating traditional optics, ergonomics, precision machinery, modern electronics, mathematics, software and more, is being used ever more widely. An endoscope can enter the body of a subject (for example, the esophagus) to acquire images of the site to be examined and thereby determine whether a lesion exists there. With an endoscope, lesions that X-rays cannot show can be seen, which makes it very useful to physicians. For example, with an endoscope a doctor can observe an ulcer or tumor in the stomach and accordingly devise the best treatment plan.

Specifically, an endoscope system generally has a component that can be inserted into a living body. After this component is introduced into the body through the mouth or another natural orifice, or through a small surgical incision, it acquires image information of the interior of the body, which is then transmitted out and displayed on a monitor.

Typically, an endoscope system can perform ordinary-light (visible-light) imaging. As the name implies, ordinary-light imaging images the interior of a living body illuminated by ordinary, or white, light: for example, the endoscope system sequentially applies light beams of three colors, R, G and B, to the target tissue inside the body and forms an ordinary-light image from the images produced by the reflections of these beams. Doctors can make diagnoses from ordinary-light images, but such images have limitations. Some lesions, such as squamous cell carcinoma, are visually difficult to recognize, i.e., difficult to identify in an ordinary-light image; likewise, in endometrial cancer surgery, sentinel lymph nodes are difficult to identify in an ordinary-light image.

For this reason, special-light (e.g., fluorescence) imaging technology for endoscope systems has been developed. It can provide the observer with information that ordinary-light imaging cannot distinguish, giving richer reference for diagnosis and treatment. For example, when special-light imaging is applied to target tissue related to endometrial cancer, the sentinel lymph nodes contrast strongly with the surrounding normal target tissue in the resulting special-light image (for example, one is rendered nearly white and the other nearly black), so they can be distinguished easily. But precisely because the diseased and normal target tissue contrast so strongly in the special-light image (one rendered essentially white, the other essentially black), the special-light image lacks detail about the structure and morphology of the target tissue itself. Therefore, a fluorescence endoscope system generally acquires both an ordinary-light image and a special-light image and displays them side by side or superimposed.

When bleeding occurs during surgery, the light-absorbing property of hemoglobin lowers the brightness inside the cavity, making imaging details unclear. The traditional remedy is to raise the endoscope's exposure time and gain, but this causes smearing and aggravates imaging noise.

Summary of the Invention

The purpose of the present invention is to provide a fluorescence endoscope system, a control method, and a storage medium that, while imaging specific tissue by fluorescence to obtain information unavailable under visible light, also solve the problem that when intraoperative bleeding or the like makes imaging details unclear under visible light, the traditional method of fusing preceding and following frames causes smearing and aggravates imaging noise.
To achieve the above purpose, the present invention provides a fluorescence endoscope system comprising an endoscope, an illumination module, an endoscope drive module, and a scene fusion module;

the working modes of the fluorescence endoscope system include a first mode and a second mode;

the endoscope comprises a visible-light image sensor and a near-infrared image sensor;

in the first mode, the illumination module provides visible light to illuminate the target tissue and provides excitation light to excite the target tissue to produce fluorescence; the visible-light image sensor acquires a visible-light scene image of the target tissue and outputs it as a first video stream, and the near-infrared image sensor acquires a fluorescence scene image of the target tissue and outputs it as a second video stream;

in the second mode, the illumination module provides visible light and near-infrared light to illuminate the target tissue; the visible-light image sensor acquires a visible-light scene image of the target tissue and outputs it as a first video stream, and the near-infrared image sensor acquires a near-infrared scene image of the target tissue and outputs it as a second video stream;

the endoscope drive module comprises a first drive unit and a second drive unit; in the second mode the first drive unit drives the visible-light image sensor to acquire the visible-light scene image according to a first exposure time and a first gain, and the second drive unit drives the near-infrared image sensor to acquire the near-infrared scene image according to a second exposure time and a second gain;

the scene fusion module is configured to, when the current brightness of the first video stream is within a preset first target brightness range and/or the current brightness of the second video stream is within a preset second target brightness range, fuse the current frame image of the first video stream with the current frame image of the second video stream based on the brightness information of the two current frame images to obtain a brightness-fused image, and then fuse the brightness-fused image with the current frame image of the first video stream based on the chrominance information of that frame to obtain a scene-fused image.
Optionally, the fluorescence endoscope system further comprises an endoscope control module comprising a first control unit and/or a second control unit; in the second mode, the first control unit keeps the current brightness of the first video stream within the preset first target brightness range, and the second control unit keeps the current brightness of the second video stream within the preset second target brightness range.

Optionally, the first control unit comprises:

a first brightness acquisition part for acquiring the current brightness of the first video stream; and

a first exposure control part for judging whether the current brightness of the first video stream is within the first target brightness range and, if not, adjusting the first exposure amount of the visible-light image sensor so that the current brightness of the first video stream comes within the first target brightness range;

the second control unit comprises:

a second brightness acquisition part for acquiring the current brightness of the second video stream; and

a second exposure control part for judging whether the current brightness of the second video stream is within the second target brightness range and, if not, adjusting the second exposure amount of the near-infrared image sensor so that the current brightness of the second video stream comes within the second target brightness range.
Optionally, when the current brightness of the first video stream is below the lower limit of the first target brightness range, the first exposure control part adjusts the current brightness of the first video stream by increasing the first exposure amount of the visible-light image sensor;

when the current brightness of the second video stream is below the lower limit of the second target brightness range, the second exposure control part adjusts the current brightness of the second video stream by increasing the second exposure amount of the near-infrared image sensor.

Optionally, the first exposure control part is configured to judge whether the maximum first exposure time and the minimum first gain can meet the exposure requirement of the first target brightness range;

if so, the first exposure time is adjusted on the basis of the minimum first gain so that the current brightness of the first video stream is within the first target brightness range;

if not, it is judged whether the maximum first exposure time and the maximum first gain can meet the exposure requirement of the first target brightness range;

if so, the first gain is adjusted on the basis of the maximum first exposure time so that the current brightness of the first video stream is within the first target brightness range;

the second exposure control part is configured to judge whether the maximum second exposure time and the minimum second gain can meet the exposure requirement of the second target brightness range;

if so, the second exposure time is adjusted on the basis of the minimum second gain so that the current brightness of the second video stream is within the second target brightness range;

if not, it is judged whether the maximum second exposure time and the maximum second gain can meet the exposure requirement of the second target brightness range;

if so, the second gain is adjusted on the basis of the maximum second exposure time so that the current brightness of the second video stream is within the second target brightness range.
Optionally, when the current brightness of the first video stream is above the upper limit of the first target brightness range, the first exposure control part adjusts the current brightness of the first video stream by decreasing the first exposure amount of the visible-light image sensor;

when the current brightness of the second video stream is above the upper limit of the second target brightness range, the second exposure control part adjusts the current brightness of the second video stream by decreasing the second exposure amount of the near-infrared image sensor.

Optionally, the first exposure control part is configured to judge whether the maximum first exposure time and the minimum first gain can meet the exposure requirement of the first target brightness range;

if so, the first gain is adjusted on the basis of the maximum first exposure time so that the current brightness of the first video stream is within the first target brightness range;

if not, it is judged whether the minimum first exposure time and the minimum first gain meet the exposure requirement of the first target brightness range;

if so, the first exposure time is adjusted on the basis of the minimum first gain so that the current brightness of the first video stream is within the first target brightness range;

the second exposure control part is configured to judge whether the maximum second exposure time and the minimum second gain can meet the exposure requirement of the second target brightness range;

if so, the second gain is adjusted on the basis of the maximum second exposure time so that the current brightness of the second video stream is within the second target brightness range;

if not, it is judged whether the minimum second exposure time and the minimum second gain meet the exposure requirement of the second target brightness range;

if so, the second exposure time is adjusted on the basis of the minimum second gain so that the current brightness of the second video stream is within the second target brightness range.
Optionally, the illumination module comprises a first light source module for providing the visible light and a third light source module for providing the near-infrared light; the first control unit further comprises a first illumination adjustment part which, when adjusting the first gain and the first exposure time cannot bring the current brightness of the first video stream within the first target brightness range, controls the illumination module to adjust the output power of the first light source module so that the current brightness of the first video stream comes within the first target brightness range;

the second control unit further comprises a second illumination adjustment part which, when adjusting the second gain and the second exposure time cannot bring the current brightness of the second video stream within the second target brightness range, controls the illumination module to adjust the output power of the third light source module so that the current brightness of the second video stream comes within the second target brightness range.

Optionally, if the first video stream is YUV-encoded or YCbCr-encoded, the first brightness acquisition part takes the mean or a weighted value of the Y values of all or some pixels of the current frame image of the first video stream as the current brightness;

if the first video stream is RAW-encoded or RGB-encoded, the first brightness acquisition part obtains each pixel's brightness from its RGB values in the current frame image of the first video stream, and then takes the mean or a weighted value of the brightness of all or some pixels of that frame as the current brightness.

Optionally, the endoscope further comprises a beam-splitting prism group which, in the second mode, separates the reflected visible light and the reflected near-infrared light returning from the target tissue and, in the first mode, separates the reflected visible light and the excited fluorescence, so that the reflected visible light can be captured by the photosensitive surface of the visible-light image sensor and the reflected near-infrared light or the fluorescence can be captured by the photosensitive surface of the near-infrared image sensor.

Optionally, the beam-splitting prism group comprises a first beam-splitting prism, a second beam-splitting prism, a visible-light band-pass filter and a near-infrared band-pass filter. The visible-light band-pass filter passes visible light and blocks light of other wavelengths; the near-infrared band-pass filter passes near-infrared light and blocks light of other wavelengths, and is arranged between the first and second beam-splitting prisms. The face where the first beam-splitting prism adjoins the second beam-splitting prism carries a half-transmissive, half-reflective coating, so that part of the incident light is reflected and part is transmitted. The photosensitive surface of the visible-light image sensor is adjacent to the exit face of the first beam-splitting prism, with the visible-light band-pass filter arranged between the exit face of the first beam-splitting prism and the photosensitive surface of the visible-light image sensor, and the photosensitive surface of the near-infrared image sensor is adjacent to the exit face of the second beam-splitting prism.
Optionally, the scene fusion module comprises:

an image fusion unit for performing, based on the brightness information of the current frame images of the first and second video streams, image fusion between the pixels of the current frame image of the first video stream and the corresponding pixels of the current frame image of the second video stream to obtain a brightness-fused image, and for performing, based on the chrominance information of the current frame image of the first video stream, image fusion between the brightness-fused image and the current frame image of the first video stream to obtain a scene-fused image.

Optionally, the image fusion unit obtains, according to a preset normal distribution relating brightness values to weights, the brightness of each pixel of the current frame image of the first video stream, and the brightness of the corresponding pixel of the current frame image of the second video stream, a weight for the brightness of each pixel in the current frame image of the first video stream and a corresponding weight for the brightness of each pixel in the current frame image of the second video stream;

according to the obtained weights, the brightness of each pixel of the current frame image of the first video stream and the brightness of the corresponding pixel of the current frame image of the second video stream are weighted to obtain the brightness of each pixel of the brightness-fused image.

Optionally, the scene fusion module further comprises an image mode conversion unit which, when the output format of the first video stream is RAW or RGB, converts the pixels of the current frame image of the first video stream into YUV space or YCbCr space, takes the Y value of each pixel of the current frame image of the first video stream as that pixel's brightness, and takes the U and V values, or the Cb and Cr values, of each pixel as that pixel's chrominance.
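The conversion performed by the image mode conversion unit can be sketched as follows. This is an illustrative sketch only: the patent does not prescribe particular conversion coefficients, so the BT.709 luma weights (which match the Y = 0.2126R + 0.7152G + 0.0722B formula given later in the description) and the Cb/Cr scale factors are assumptions.

```python
# A minimal RGB -> Y/Cb/Cr split for one pixel (values in [0, 1]).
# Y becomes the pixel's brightness; Cb/Cr become its chrominance.
def rgb_to_ycbcr(r, g, b):
    y = 0.2126 * r + 0.7152 * g + 0.0722 * b   # BT.709 luma
    cb = (b - y) / 1.8556                      # blue-difference chroma
    cr = (r - y) / 1.5748                      # red-difference chroma
    return y, cb, cr

y, cb, cr = rgb_to_ycbcr(1.0, 1.0, 1.0)        # pure white: full luma, zero chroma
```

For a neutral (gray or white) pixel the chrominance components are zero, so only the luma plane participates in the brightness fusion.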
Optionally, the endoscope is a three-dimensional endoscope;

the visible-light image sensor comprises a first visible-light image sensor and a second visible-light image sensor;

the near-infrared image sensor comprises a first near-infrared image sensor and a second near-infrared image sensor;

the first video stream comprises a first visible-light video stream and a second visible-light video stream;

the second video stream comprises a first near-infrared video stream and a second near-infrared video stream;

in the second mode, the first visible-light image sensor acquires a first visible-light scene image of the target tissue and outputs it as the first visible-light video stream, the second visible-light image sensor acquires a second visible-light scene image of the target tissue and outputs it as the second visible-light video stream, the first near-infrared image sensor acquires a first near-infrared scene image of the target tissue and outputs it as the first near-infrared video stream, and the second near-infrared image sensor acquires a second near-infrared scene image of the target tissue and outputs it as the second near-infrared video stream;

the scene fusion module fuses the current frame image of the first visible-light video stream with the current frame image of the first near-infrared video stream based on the brightness information of the two frames to obtain a first brightness-fused image, fuses the first brightness-fused image with the current frame image of the first visible-light video stream based on the chrominance information of that frame to obtain a first scene-fused image, fuses the current frame image of the second visible-light video stream with the current frame image of the second near-infrared video stream based on the brightness information of the two frames to obtain a second brightness-fused image, and fuses the second brightness-fused image with the current frame image of the second visible-light video stream based on the chrominance information of that frame to obtain a second scene-fused image.
Optionally, the fluorescence endoscope system further comprises a central controller. After receiving a first-mode instruction, the central controller controls the first control unit and the second control unit to adjust the current brightness of the first and second video streams according to the first and second target brightness ranges preset in the endoscope control module; or, after receiving a first-mode instruction, the central controller sends the first target brightness range and the second target brightness range to the first control unit and the second control unit respectively, so that they adjust the current brightness of the first and second video streams accordingly.

Optionally, the fluorescence endoscope system further comprises a central controller comprising a video superposition unit for superimposing the scene-fused images output by the scene fusion module and transmitting the generated three-dimensional image to a display for display.

Optionally, the first drive unit also drives, in the first mode, the visible-light image sensor to acquire the visible-light scene image according to a third exposure time and a third gain; and the second drive unit also drives, in the first mode, the near-infrared image sensor to acquire the fluorescence scene image according to a fourth exposure time and a fourth gain.

Optionally, the first control unit also keeps, in the first mode, the current brightness of the first video stream within a preset third target brightness range; and/or the second control unit also keeps, in the first mode, the current brightness of the second video stream within a preset fourth target brightness range.
To achieve the above purpose, the present invention also provides a control method for a fluorescence endoscope system whose modes include a first mode and a second mode, the control method comprising:

in the second mode, providing visible light and near-infrared light to illuminate the target tissue;

acquiring a visible-light scene image and a near-infrared scene image of the target tissue and outputting them as a first video stream and a second video stream respectively;

judging whether the current brightness of the first video stream is within a preset first target brightness range and/or whether the current brightness of the second video stream is within a preset second target brightness range;

if the current brightness of the first video stream is within the preset first target brightness range and/or the current brightness of the second video stream is within the preset second target brightness range, then

fusing the current frame image of the first video stream with the current frame image of the second video stream based on the brightness information of the two current frame images to obtain a brightness-fused image; and

fusing the current frame image of the first video stream with the brightness-fused image based on the chrominance information of the current frame image of the first video stream to obtain a scene-fused image.

Optionally, if the current brightness of the first video stream is not within the first target brightness range, a first exposure amount is adjusted so that it is, the first exposure amount being the product of a first exposure time and a first gain;

if the current brightness of the second video stream is not within the second target brightness range, a second exposure amount is adjusted so that it is, the second exposure amount being the product of a second exposure time and a second gain.
Optionally, when the current brightness of the first video stream is below the lower limit of the first target brightness range, the first exposure amount is increased to adjust the current brightness of the first video stream;

when the current brightness of the second video stream is below the lower limit of the second target brightness range, the second exposure amount is increased to adjust the current brightness of the second video stream.

Optionally, increasing the first exposure amount to adjust the current brightness of the first video stream comprises:

judging whether the maximum first exposure time and the minimum first gain can meet the exposure requirement of the first target brightness range;

if so, adjusting the first exposure time on the basis of the minimum first gain so that the current brightness of the first video stream is within the first target brightness range;

if not, judging whether the maximum first exposure time and the maximum first gain can meet the exposure requirement of the first target brightness range;

if so, adjusting the first gain on the basis of the maximum first exposure time so that the current brightness of the first video stream is within the first target brightness range;

increasing the second exposure amount to adjust the current brightness of the second video stream comprises:

judging whether the maximum second exposure time and the minimum second gain can meet the exposure requirement of the second target brightness range;

if so, adjusting the second exposure time on the basis of the minimum second gain so that the current brightness of the second video stream is within the second target brightness range;

if not, judging whether the maximum second exposure time and the maximum second gain can meet the exposure requirement of the second target brightness range;

if so, adjusting the second gain on the basis of the maximum second exposure time so that the current brightness of the second video stream is within the second target brightness range.
Optionally, when the current brightness of the first video stream is above the upper limit of the first target brightness range, the first exposure amount is decreased to adjust the current brightness of the first video stream;

when the current brightness of the second video stream is above the upper limit of the second target brightness range, the second exposure amount is decreased to adjust the current brightness of the second video stream.

Optionally, decreasing the first exposure amount to adjust the current brightness of the first video stream comprises:

judging whether the maximum first exposure time and the minimum first gain can meet the exposure requirement of the first target brightness range;

if so, adjusting the first gain on the basis of the maximum first exposure time so that the current brightness of the first video stream is within the first target brightness range;

if not, judging whether the minimum first exposure time and the minimum first gain meet the exposure requirement of the first target brightness range;

if so, adjusting the first exposure time on the basis of the minimum first gain so that the current brightness of the first video stream is within the first target brightness range;

decreasing the second exposure amount to adjust the current brightness of the second video stream comprises:

judging whether the maximum second exposure time and the minimum second gain can meet the exposure requirement of the second target brightness range;

if so, adjusting the second gain on the basis of the maximum second exposure time so that the current brightness of the second video stream is within the second target brightness range;

if not, judging whether the minimum second exposure time and the minimum second gain meet the exposure requirement of the second target brightness range;

if so, adjusting the second exposure time on the basis of the minimum second gain so that the current brightness of the second video stream is within the second target brightness range.
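One way to condense the increase/decrease branches above into a single allocation step is sketched below: given a required exposure amount (the product of exposure time and gain needed to reach the target brightness), prefer exposure time at the minimum gain, fall back to raising the gain at the maximum exposure time, and report failure when even the extremes cannot supply the requirement (in which case the light-source output power is adjusted instead). The function name, bounds and the "required exposure" value are illustrative assumptions, not values from the patent.

```python
# Time-then-gain exposure allocation: return (t, g, ok) with t*g as close to
# e_required as the limits allow; ok=False means the light source must be
# adjusted instead. All bounds are illustrative.
def allocate_exposure(e_required, t_min, t_max, g_min, g_max):
    if e_required <= t_max * g_min:
        # Exposure time alone (at minimum gain) covers the requirement.
        t = max(t_min, min(t_max, e_required / g_min))
        return t, g_min, True
    if e_required <= t_max * g_max:
        # Hold the maximum exposure time, raise the gain to make up the rest.
        g = min(g_max, e_required / t_max)
        return t_max, g, True
    # Out of range: caller adjusts light-source output power instead.
    return t_max, g_max, False

t, g, ok = allocate_exposure(e_required=40.0, t_min=1.0, t_max=16.0,
                             g_min=1.0, g_max=8.0)
```

Here 40.0 exceeds the maximum time at minimum gain (16.0), so the allocation pins the exposure time at its maximum and raises the gain to 2.5; this mirrors the "if the maximum exposure time and minimum gain do not suffice, adjust the gain on the basis of the maximum exposure time" branch.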
Optionally, if adjusting the first gain and the first exposure time cannot bring the current brightness of the first video stream within the first target brightness range, the output power of the first light source module for providing the visible light is adjusted so that the current brightness of the first video stream comes within the first target brightness range;

if adjusting the second gain and the second exposure time cannot bring the current brightness of the second video stream within the second target brightness range, the output power of the third light source module for providing the near-infrared light is adjusted so that the current brightness of the second video stream comes within the second target brightness range.

Optionally, fusing the current frame image of the first video stream with the current frame image of the second video stream based on the brightness information of the two current frame images to obtain a brightness-fused image comprises:

performing, based on the brightness information of the two current frame images, image fusion between the pixels of the current frame image of the first video stream and the corresponding pixels of the current frame image of the second video stream to obtain the brightness-fused image.

Optionally, fusing the current frame image of the first video stream with the brightness-fused image based on the chrominance information of the current frame image of the first video stream to obtain a scene-fused image comprises:

assigning the chrominance information of each pixel of the current frame image of the first video stream to the corresponding pixel of the brightness-fused image as that pixel's chrominance.

Optionally, if the output format of the first video stream is RAW or RGB, then before fusing the current frame image of the first video stream with the current frame image of the second video stream, the control method further comprises:

converting the current frame image of the first video stream into YUV space or YCbCr space.
Optionally, performing image fusion between the pixels of the current frame image of the first video stream and the corresponding pixels of the current frame image of the second video stream based on the brightness information of the two current frame images comprises:

obtaining, according to a preset normal distribution relating brightness values to weights, the brightness of each pixel of the current frame image of the first video stream, and the brightness of the corresponding pixel of the current frame image of the second video stream, a weight for the brightness of each pixel in the current frame image of the first video stream and a corresponding weight for the brightness of each pixel in the current frame image of the second video stream;

weighting, according to the obtained weights, the brightness of each pixel of the current frame image of the first video stream and the brightness of the corresponding pixel of the current frame image of the second video stream to obtain the brightness of each pixel of the brightness-fused image.
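The weighting scheme above can be sketched with a Gaussian ("normal distribution") centred on a preferred mid-range brightness: well-exposed pixels get high weight, under- and over-exposed pixels get low weight. The centre mu and spread sigma are assumed tuning parameters; the patent specifies only that the brightness-to-weight mapping is a preset normal distribution.

```python
# Per-pixel weighted luma fusion using a Gaussian brightness-to-weight map.
# mu/sigma are illustrative assumptions, not values from the patent.
import math

def gaussian_weight(luma, mu=0.5, sigma=0.2):
    return math.exp(-((luma - mu) ** 2) / (2 * sigma ** 2))

def fuse_luma(l_visible, l_nir):
    """Weighted average of the two luma values (both in [0, 1])."""
    w_v = gaussian_weight(l_visible)
    w_n = gaussian_weight(l_nir)
    return (w_v * l_visible + w_n * l_nir) / (w_v + w_n)

fused = fuse_luma(0.5, 0.5)   # equal lumas fuse to the same value
```

A near-overexposed near-infrared pixel (e.g. 0.95) receives a small weight, so the fused brightness stays close to the well-exposed visible value rather than to the plain mean of the two.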
Optionally, the endoscope is a three-dimensional endoscope;

acquiring the visible-light scene image and the near-infrared scene image of the target tissue and outputting them as the first video stream and the second video stream respectively comprises:

acquiring a first visible-light scene image, a second visible-light scene image, a first near-infrared scene image and a second near-infrared scene image of the target tissue and outputting them as a first visible-light video stream, a second visible-light video stream, a first near-infrared video stream and a second near-infrared video stream respectively;

judging whether the current brightness of the first video stream is within the preset first target brightness range comprises:

judging whether the current brightness of the first visible-light video stream and/or the second visible-light video stream is within the preset first target brightness range;

judging whether the current brightness of the second video stream is within the preset second target brightness range comprises:

judging whether the current brightness of the first near-infrared video stream and/or the second near-infrared video stream is within the preset second target brightness range;

fusing the current frame image of the first video stream with the current frame image of the second video stream based on the brightness information of the two current frame images to obtain a brightness-fused image comprises:

fusing the current frame image of the first visible-light video stream with the current frame image of the first near-infrared video stream based on the brightness information of the two frames to obtain a first brightness-fused image, and fusing the current frame image of the second visible-light video stream with the current frame image of the second near-infrared video stream based on the brightness information of the two frames to obtain a second brightness-fused image;

fusing the current frame image of the first video stream with the brightness-fused image based on the chrominance information of the current frame image of the first video stream to obtain a scene-fused image comprises:

fusing the current frame image of the first visible-light video stream with the first brightness-fused image based on the chrominance information of that frame to obtain a first scene-fused image, and fusing the current frame image of the second visible-light video stream with the second brightness-fused image based on the chrominance information of that frame to obtain a second scene-fused image.
To achieve the above purpose, the present invention also provides a storage medium storing a computer program which, when executed by a processor, implements the control method described above.

Compared with the prior art, the fluorescence endoscope system, control method and storage medium provided by the present invention have the following advantages. In the first mode, the illumination module can provide visible light and excitation light; the excitation light irradiates the target tissue to excite fluorescence, helping the operator observe tissue information that cannot be observed under visible-light conditions. In the second mode, the illumination module can provide visible light and near-infrared light to illuminate the target tissue, and the scene fusion module fuses the current frame image captured under visible light with the current frame image captured under near-infrared light when the current brightness of the first video stream and/or the second video stream is within the target brightness range, yielding a scene-fused image of suitable brightness and rich detail. Thus, on the one hand, the fluorescence endoscope system of the present invention can effectively reduce the image blur caused by camera motion, avoid the user's misjudgment of lesion information due to blurred images, and improve the accuracy and safety of surgery, while effectively improving the signal-to-noise ratio and obtaining more image detail information; on the other hand, the fluorescence endoscope system of the present invention is based on existing hardware, and slight improvements suffice to realize multiple functions, satisfying doctors' different surgical needs and improving the convenience of surgical operation.
Brief Description of the Drawings

Fig. 1 is a schematic structural diagram of a fluorescence endoscope system in an embodiment of the present invention;

Fig. 2 is a control schematic diagram of the fluorescence endoscope system during mode switching in an embodiment of the present invention;

Fig. 3 is a schematic diagram of the beam-splitting principle of the beam-splitting prism group in an embodiment of the present invention;

Fig. 4 is a spectrum diagram of the visible-light band-pass filter in an embodiment of the present invention;

Fig. 5 is a spectrum diagram of the near-infrared band-pass filter in an embodiment of the present invention;

Fig. 6 is a schematic workflow diagram, in the second mode, of bringing the current brightness B1 of the first video stream into the first target brightness range (B1min, B1max) by adjusting the first exposure amount, in an embodiment of the present invention;

Fig. 7 is a schematic diagram of the normal distribution of brightness versus weight in an embodiment of the present invention;

Fig. 8 is a schematic diagram of image fusion in the second mode in an embodiment of the present invention;

Fig. 9 is a schematic flowchart of the control method of the fluorescence endoscope system in the second mode in an embodiment of the present invention.

The reference numerals are as follows:

endoscope-100; illumination module-200; endoscope drive module-300; endoscope control module-400; scene fusion module-500; target tissue-600; visible-light image sensor-111; first visible-light image sensor-111A; second visible-light image sensor-111B; near-infrared image sensor-121; first near-infrared image sensor-121A; second near-infrared image sensor-121B; first control unit-410; first drive unit-310; first visible-light drive unit-311; second visible-light drive unit-312; second drive unit-320; first near-infrared drive unit-321; second near-infrared drive unit-322; first brightness acquisition part-411; first exposure control part-412; second brightness acquisition part-421; second exposure control part-422; first illumination adjustment part-413; second illumination adjustment part-423; beam-splitting prism group-130; visible-light band-pass filter-131; near-infrared band-pass filter-132; first beam-splitting prism-133; second beam-splitting prism-134; first image fusion unit-510; second image fusion unit-520; image mode conversion unit-530; light source unit-210; illumination controller-220; first light source module-211; second light source module-212; third light source module-213; mode switching unit-221; power control unit-222; video pipeline-700; central controller-800; video superposition unit-810; user input device-820; user interface-830; display-900.
Detailed Description of the Embodiments

The fluorescence endoscope system, control method and storage medium proposed by the present invention are described in further detail below with reference to Figs. 1 to 9 and specific embodiments. The advantages and features of the present invention will become clearer from the following description. It should be noted that the drawings are in greatly simplified form and use imprecise scales, serving only to assist in conveniently and clearly explaining the embodiments of the present invention. To make the purposes, features and advantages of the present invention easier to understand, please refer to the drawings. It should be understood that the structures, proportions, sizes and the like depicted in the drawings of this specification are intended only to accompany the content disclosed in the specification for the understanding of those familiar with this technology; they do not limit the conditions under which the present invention may be implemented and thus have no substantive technical significance. Any structural modification, change of proportional relationship or adjustment of size that does not affect the effects the present invention can produce and the purposes it can achieve should still fall within the scope covered by the technical content disclosed by the present invention.

It should be noted that, herein, relational terms such as first and second are used only to distinguish one entity or operation from another, and do not necessarily require or imply any such actual relationship or order between these entities or operations. Moreover, the terms "comprise", "include" or any other variant thereof are intended to cover non-exclusive inclusion, so that a process, method, article or device including a series of elements includes not only those elements but also other elements not explicitly listed, or elements inherent to such a process, method, article or device. Without further limitation, an element defined by the phrase "comprising a ..." does not exclude the presence of additional identical elements in the process, method, article or device that includes it.

The core idea of the present invention is to provide a fluorescence endoscope system, a control method and a storage medium that, while imaging specific tissue by fluorescence to obtain information unavailable under visible light, also solve the problem that when intraoperative bleeding or the like makes imaging details unclear under visible light, the traditional method of fusing preceding and following frames causes smearing and aggravates imaging noise.
To realize the above idea, the present invention provides a fluorescence endoscope system. Please refer to Fig. 1, which schematically shows the structure of a fluorescence endoscope system provided by an embodiment of the present invention. As shown in Fig. 1, the fluorescence endoscope system includes an endoscope 100, an illumination module 200, an endoscope drive module 300 and a scene fusion module 500. The working modes of the fluorescence endoscope system include a first mode (i.e., fluorescence mode) and a second mode (i.e., bleeding mode).

The illumination module 200 provides visible light, excitation light and near-infrared light. The visible light and the near-infrared light illuminate the target tissue 600 and form reflected light; the excitation light excites the target tissue to produce fluorescence. The present invention places no particular restriction on the specific position of the illumination module 200. For example, the output light provided by the illumination module 200 may be conveyed to the distal end of the endoscope 100 and onto the target tissue 600 through a connector housed in the illumination channel of the endoscope 100, the connector being, for example, an optical fiber.

When the fluorescence endoscope system is in fluorescence mode, the illumination module 200 emits visible light and excitation light toward the target tissue 600, so that the target tissue 600 reflects the visible light and, excited by the excitation light, emits fluorescence. When the fluorescence endoscope system is in bleeding mode, the illumination module 200 emits visible light and near-infrared light toward the target tissue 600; likewise, the target tissue 600 reflects the visible light and the near-infrared light. In this embodiment, the spectrum of the excitation light lies in 803 nm-812 nm, the fluorescence produced by stimulated emission lies in 830 nm-840 nm, and the spectrum of the near-infrared light lies in 935 nm-945 nm. Please refer to Figs. 1 and 2, of which Fig. 2 schematically shows the control of the fluorescence endoscope system during mode switching in an embodiment of the present invention. As shown in Figs. 1 and 2, the illumination module 200 includes a light source unit 210 and an illumination controller 220. The light source unit 210 includes a first light source module 211 for providing visible light, a second light source module 212 for providing excitation light, and a third light source module 213 for providing near-infrared light. The illumination controller 220 includes a mode switching unit 221 for switching between fluorescence mode and bleeding mode and a power control unit 222 for controlling the output power of the light source unit 210. The mode switching unit 221 is connected to the central controller 800 described below, so that the operator can control the mode switching unit 221 through the central controller 800 to switch between fluorescence mode and bleeding mode. When the fluorescence endoscope system is in fluorescence mode, the first light source module 211 and the second light source module 212 are turned on under the control of the illumination controller 220; when the fluorescence endoscope is in bleeding mode, the first light source module 211 and the third light source module 213 are turned on under the control of the illumination controller 220.
As shown in Fig. 1, the endoscope 100 includes a visible-light image sensor 111 and a near-infrared image sensor 121. The visible-light image sensor 111 captures the reflected visible light, obtains a visible-light scene image of the target tissue 600, and outputs it as a first video stream. In this embodiment, both the fluorescence produced by the excited target tissue and the near-infrared light produced by the illumination module 200 lie within the near-infrared spectral range; therefore, both the fluorescence excited from the target tissue and the near-infrared light reflected by it are captured by the near-infrared image sensor 121. That is, when the fluorescence endoscope system is in fluorescence mode, the near-infrared image sensor 121 captures the fluorescence carrying the scene information of the target tissue 600, photoelectrically converts that information to obtain a fluorescence scene image of the target tissue 600, and outputs it as a second video stream; when the system is in bleeding mode, the near-infrared image sensor 121 captures the reflected near-infrared light carrying the scene information of the target tissue 600, photoelectrically converts that information to obtain a near-infrared scene image of the target tissue 600, and outputs it as a second video stream.

The visible-light image sensor 111 and the near-infrared image sensor 121 may be complementary metal-oxide-semiconductor (CMOS) sensors or charge-coupled devices (CCD).

Further, the endoscope 100 also includes a beam-splitting prism group 130 which, in bleeding mode, separates the reflected visible light and the reflected near-infrared light returning from the target tissue 600 and, in fluorescence mode, separates the reflected visible light and the excited fluorescence, so that the reflected visible light can be captured by the photosensitive surface of the visible-light image sensor 111 and the reflected near-infrared light or the fluorescence can be captured by the photosensitive surface of the near-infrared image sensor 121.

Preferably, please refer to Figs. 3 to 5: Fig. 3 schematically shows the beam-splitting principle of the beam-splitting prism group provided by an embodiment of the present invention; Fig. 4 shows the spectrum of the visible-light band-pass filter; Fig. 5 shows the spectrum of the near-infrared band-pass filter. As shown in Fig. 3, the beam-splitting prism group 130 includes a first beam-splitting prism 133, a second beam-splitting prism 134, a visible-light band-pass filter 131 and a near-infrared band-pass filter 132. The visible-light band-pass filter 131 passes visible light and blocks light of other wavelengths. The near-infrared band-pass filter 132 passes near-infrared light and blocks light of other wavelengths, and is arranged between the first beam-splitting prism 133 and the second beam-splitting prism 134. The face where the first beam-splitting prism 133 adjoins the second beam-splitting prism 134 carries a half-transmissive, half-reflective coating, so that part of the incident light is reflected and part is transmitted. The photosensitive surface of the visible-light image sensor 111 is adjacent to the exit face of the first beam-splitting prism 133, with the visible-light band-pass filter 131 arranged between that exit face and the photosensitive surface of the visible-light image sensor 111. Likewise, the photosensitive surface of the near-infrared image sensor 121 is adjacent to the exit face of the second beam-splitting prism 134. Here, "the exit face of the first beam-splitting prism 133" means the face of the first beam-splitting prism through which the reflected light passes when leaving it; "the exit face of the second beam-splitting prism 134" means the face of the second beam-splitting prism through which the transmitted light passes when leaving it. As shown in Fig. 3, the photosensitive surfaces of the visible-light image sensor 111 and the near-infrared image sensor 121 are parallel to the optical axis; this arrangement reduces the constraint that the size of the endoscope places on the size of the image sensors and helps improve imaging quality. When the mixed light (in fluorescence mode, the visible light reflected by the target tissue 600 and the fluorescence excited from it; in bleeding mode, the visible and near-infrared light reflected by the target tissue 600) enters the first beam-splitting prism 133 along the optical axis, part of it is reflected and part is transmitted out of the first beam-splitting prism 133. The reflected mixed light, after reflection (for example, one reflection as shown in Fig. 3), leaves the first beam-splitting prism 133 through its exit face and then passes through the visible-light band-pass filter 131, which lets the 380 nm-780 nm portion (i.e., visible light) through to be captured by the visible-light image sensor 111. The transmitted mixed light passes through the near-infrared band-pass filter 132, which lets the 830 nm-840 nm and 935 nm-945 nm portions through; this light enters the second beam-splitting prism 134 along the optical axis and, after reflection (for example, one reflection as shown in Fig. 3), leaves the second beam-splitting prism 134 through its exit face and is captured by the near-infrared image sensor 121. As shown in Figs. 4 and 5, in this embodiment the visible-light band-pass filter 131 passes light of 380 nm-780 nm, which guarantees that the reflected visible light produced by the illumination module 200 reaches the visible-light image sensor 111; the near-infrared band-pass filter 132 passes light of 830 nm-840 nm and 935 nm-945 nm, which guarantees that the reflected near-infrared light and the fluorescence reach the near-infrared image sensor 121. In an alternative embodiment, the endoscope is a three-dimensional endoscope, i.e., the fluorescence endoscope system is a three-dimensional endoscope system. In that case, the visible-light scene image includes a first visible-light scene image and a second visible-light scene image, and the near-infrared scene image includes a first near-infrared scene image and a second near-infrared scene image. There are two beam-splitting prism groups 130. Correspondingly, there are two visible-light image sensors, namely a first visible-light image sensor 111A and a second visible-light image sensor 111B. The first visible-light image sensor 111A acquires the first visible-light scene image of the target tissue and outputs it as a first visible-light video stream; the second visible-light image sensor 111B acquires the second visible-light scene image and outputs it as a second visible-light video stream. There are likewise two near-infrared image sensors, a first near-infrared image sensor 121A and a second near-infrared image sensor 121B. In fluorescence mode, the first near-infrared image sensor 121A acquires a first fluorescence scene image of the target tissue and outputs it as a first fluorescence video stream, and the second near-infrared image sensor 121B acquires a second fluorescence scene image and outputs it as a second fluorescence video stream. In bleeding mode, the first near-infrared image sensor 121A acquires a first near-infrared scene image and outputs it as a first near-infrared video stream, and the second near-infrared image sensor 121B acquires a second near-infrared scene image and outputs it as a second near-infrared video stream. It should be emphasized that components named "first" and "second" in this embodiment do not imply any order between them; for example, the first visible-light scene image may be the visible-light scene image on the left side of the endoscope, or the one on the right side.
The endoscope drive module 300 includes a first drive unit 310 and a second drive unit 320. In bleeding mode, the first drive unit 310 drives the visible-light image sensor 111 to acquire the visible-light scene image according to a first exposure time T1 and a first gain G1, and the second drive unit 320 drives the near-infrared image sensor 121 to acquire the near-infrared scene image according to a second exposure time T2 and a second gain G2. Preferably, in fluorescence mode, the first drive unit 310 drives the visible-light image sensor 111 to acquire the visible-light scene image according to a third exposure time T3 and a third gain G3, and the second drive unit 320 drives the near-infrared image sensor 121 to acquire the fluorescence scene image according to a fourth exposure time T4 and a fourth gain G4.

Preferably, the fluorescence endoscope system further includes an endoscope control module 400, and the endoscope drive module 300 is communicatively connected to the endoscope control module 400. The endoscope control module 400 includes a first control unit 410 and/or a second control unit 420. In bleeding mode, the first control unit 410 keeps the current brightness B1 of the first video stream within a preset first target brightness range (B1min, B1max), and the second control unit 420 keeps the current brightness B2 of the second video stream within a preset second target brightness range (B2min, B2max). When the fluorescence endoscope system is in bleeding mode, it needs to control the brightness of the first video stream and/or the second video stream to be within the desired range, to facilitate subsequent processing such as image fusion. Therefore, in bleeding mode, the first control unit 410 and/or the second control unit 420 judge whether the current brightness B1 of the first video stream is within the first target brightness range (B1min, B1max) and/or whether the current brightness B2 of the second video stream is within the second target brightness range (B2min, B2max); if not, the current brightness is adjusted by adjusting the exposure amount of the image sensor (i.e., the product of exposure time T and gain G). In this embodiment, the fluorescence endoscope is a fixed-aperture endoscope, so the exposure amount can be regarded as the product of exposure time T and gain G; in other embodiments, where the fluorescence endoscope has a variable aperture, the exposure amount can be regarded as the product of aperture, exposure time T and gain G. That is, the first control unit 410 adjusts the first exposure amount of the visible-light image sensor 111 (here, the product of the first exposure time T1 and the first gain G1) so that the current brightness B1 of the first video stream is within the preset first target brightness range (B1min, B1max); and/or the second control unit 420 adjusts the second exposure amount of the near-infrared image sensor 121 (here, the product of the second exposure time T2 and the second gain G2) so that the current brightness B2 of the second video stream is within the preset second target brightness range (B2min, B2max). In this embodiment, "current brightness" means the brightness of the current frame image of the video stream; accordingly, the current brightness B1 of the first video stream is the brightness of the current frame image of the first video stream, and the current brightness B2 of the second video stream is the brightness of the current frame image of the second video stream. When the system is in bleeding mode, the first drive unit 310 receives the first exposure time T1, first gain G1 and other parameters output by the first control unit 410, so that the visible-light image sensor 111 acquires a visible-light scene image meeting the brightness requirement; the second drive unit 320 receives the second exposure time T2, second gain G2 and other parameters output by the second control unit 420, so that the near-infrared image sensor 121 acquires a near-infrared scene image meeting the brightness requirement. Preferably, the first control unit 410 also keeps, in fluorescence mode, the current brightness B1 of the first video stream within a preset third target brightness range (B3min, B3max); and/or the second control unit 420 keeps, in fluorescence mode, the current brightness B2 of the second video stream within a preset fourth target brightness range (B4min, B4max).

In one embodiment, the fluorescence endoscope system adjusts the current brightness of the video streams only in bleeding mode. In that case, the first target brightness range (B1min, B1max) and the second target brightness range (B2min, B2max) may be set in the first control unit 410 and the second control unit 420 respectively. When the endoscope control module 400 receives from the central controller 800 (described in detail below) the instruction to enable bleeding mode, the first control unit 410 and the second control unit 420 adjust the current brightness of the first and second video streams according to the first target brightness range (B1min, B1max) and the second target brightness range (B2min, B2max). In another embodiment, the system adjusts the current brightness of the video streams in both bleeding mode and fluorescence mode. In that case, the first control unit 410 and the second control unit 420 are always on. The first target brightness range (B1min, B1max) and the third target brightness range (B3min, B3max) may be set in the first control unit 410, and the second target brightness range (B2min, B2max) and the fourth target brightness range (B4min, B4max) in the second control unit 420. When the central controller 800 receives a bleeding-mode instruction, it controls the first control unit 410 and the second control unit 420 to adjust the current brightness of the first and second video streams according to the first target brightness range (B1min, B1max) and the second target brightness range (B2min, B2max); when it receives a fluorescence-mode instruction, it controls them to adjust the current brightness according to the third target brightness range (B3min, B3max) and the fourth target brightness range (B4min, B4max). Alternatively, the first, second, third and fourth target brightness ranges may be transmitted to the endoscope control module 400 by the central controller 800 when the mode of the endoscope system is changed.
Preferably, the first control unit 410 includes a first brightness acquisition part 411 and a first exposure control part 412. The first brightness acquisition part 411 is connected to the visible-light image sensor 111, and the first exposure control part 412 is connected to the first brightness acquisition part 411 and the first drive unit 310. The first brightness acquisition part 411 receives the first video stream output by the visible-light image sensor 111, acquires in real time the brightness of the current frame image of the first video stream, i.e., the current brightness B1 of the first video stream, and passes the current brightness B1 to the first exposure control part 412. The first exposure control part 412 judges whether the received current brightness B1 of the first video stream is within the preset first target brightness range (B1min, B1max); if not, it adjusts the first exposure amount of the visible-light image sensor 111 and outputs it to the first drive unit 310 so that the current brightness B1 of the first video stream comes within the preset first target brightness range (B1min, B1max). Those skilled in the art should understand that the current frame of the first video stream changes dynamically, i.e., the current frame image of the first video stream changes over time. "Making the current brightness B1 of the first video stream lie within the preset first target brightness range" should be understood to mean that the first exposure time T1 and first gain G1 obtained on the basis of the current frame image adjust the visible-light scene images newly acquired by the visible-light image sensor 111, so that when a subsequent frame (a newly acquired visible-light scene image) becomes the current frame, the brightness B1 of that current frame image lies within the preset first target brightness range (B1min, B1max).

Similarly, the second control unit 420 includes a second brightness acquisition part 421 and a second exposure control part 422. The second brightness acquisition part 421 is connected to the near-infrared image sensor 121, and the second exposure control part 422 is connected to the second brightness acquisition part 421 and the second drive unit 320. The second brightness acquisition part 421 receives the second video stream output by the near-infrared image sensor 121, acquires in real time the brightness of the current frame image of the second video stream, i.e., the current brightness B2 of the second video stream, and passes the current brightness B2 to the second exposure control part 422. The second exposure control part 422 judges whether the received current brightness B2 of the second video stream is within the preset second target brightness range (B2min, B2max); if not, it adjusts the second exposure amount of the near-infrared image sensor 121 and outputs it to the second drive unit 320 so that the current brightness B2 of the second video stream comes within the preset second target brightness range (B2min, B2max). Likewise, those skilled in the art should understand that the current frame of the second video stream changes dynamically over time; "making the current brightness B2 of the second video stream lie within the preset second target brightness range" should be understood to mean that the second exposure time T2 and second gain G2 obtained on the basis of the current frame image adjust the near-infrared scene images newly acquired by the near-infrared image sensor 121, so that when a subsequent frame (a newly acquired near-infrared scene image) becomes the current frame, the brightness B2 of that near-infrared scene image lies within the preset second target brightness range (B2min, B2max).

The present invention places no particular restriction on the specific method of obtaining the current brightness B1 of the first video stream. For example, if the first video stream is YUV-encoded or YCbCr-encoded, the first brightness acquisition part 411 may take the mean or a weighted value of the Y values of all or some pixels of the current frame image of the first video stream as the current brightness B1. As another example, when the first video stream output by the visible-light image sensor 111 is RAW-encoded or RGB-encoded, the first brightness acquisition part 411 first obtains each pixel's brightness from its RGB values in the current frame image of the first video stream, and then takes the mean or a weighted value of the brightness of all or some pixels of that frame as the current brightness B1. Obtaining a pixel's brightness Y from its RGB values may be done by taking only the G value, to which the human eye is most sensitive, as the brightness, or by weighting the R, G and B values, for example Y = 0.2126*R + 0.7152*G + 0.0722*B, or Y = 0.299*R + 0.587*G + 0.114*B. The current brightness B2 of the second video stream may be obtained with reference to the above methods for the current brightness B1 of the first video stream, or by other methods suited to the characteristics of near-infrared images, which will not be detailed here.
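The RGB-based current-brightness computation above can be sketched as follows, using the first of the two weighted formulas given in the description (per-pixel luma, then the frame mean). The function name and array layout are illustrative assumptions.

```python
# "Current brightness" of a frame: per-pixel luma from RGB with the
# Y = 0.2126*R + 0.7152*G + 0.0722*B weights, then the mean over the frame.
import numpy as np

def current_brightness(rgb):
    """rgb: HxWx3 float array in [0, 1]; returns the frame's mean luma."""
    r, g, b = rgb[..., 0], rgb[..., 1], rgb[..., 2]
    luma = 0.2126 * r + 0.7152 * g + 0.0722 * b
    return float(luma.mean())

frame = np.array([[[1.0, 1.0, 1.0], [0.0, 0.0, 0.0]]])  # one white, one black pixel
b1 = current_brightness(frame)
```

A frame that is half white and half black yields a mean brightness of 0.5; the exposure control part would then compare this value against the target range (B1min, B1max).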
Preferably, the first control unit 410 further includes a first illumination adjustment part 413 communicatively connected to the first exposure control part 412. When the first gain G1 and first exposure time T1 output by the first exposure control part 412 cannot bring the current brightness B1 of the first video stream within the first target brightness range (B1min, B1max), the first illumination adjustment part 413 controls the illumination module 200 to adjust the output power of the first light source module 211 (i.e., the luminous flux of the visible light) so that the current brightness B1 of the first video stream comes within the first target brightness range (B1min, B1max). The second control unit 420 further includes a second illumination adjustment part 423 communicatively connected to the second exposure control part 422. When the second gain G2 and second exposure time T2 output by the second exposure control part 422 cannot bring the current brightness B2 of the second video stream within the second target brightness range (B2min, B2max), the second illumination adjustment part 423 controls the illumination module 200 to adjust the output power of the third light source module 213 (i.e., the luminous flux of the near-infrared light) so that the current brightness B2 of the second video stream comes within the second target brightness range (B2min, B2max).

Specifically, the first exposure control part 412 receives the current brightness B1 of the first video stream from the first brightness acquisition part 411 and judges whether it is within the preset first target brightness range (B1min, B1max). If the current brightness B1 is within the preset first target brightness range (B1min, B1max), the first exposure time T1 and first gain G1 are output to the first drive unit 310, or no data communication occurs between the first exposure control part 412 and the first drive unit 310 and the status quo is kept. If the current brightness B1 is outside the preset first target brightness range (B1min, B1max), the first exposure amount of the visible-light image sensor 111 is adjusted to bring the current brightness B1 within the range. If adjusting the first exposure time T1 and first gain G1 cannot bring the current brightness B1 within the preset target brightness range (B1min, B1max), the first exposure control part 412 feeds this information back to the first illumination adjustment part 413 connected to the illumination module 200, so that the output power of the first light source module 211, i.e., the luminous flux of the visible light, is adjusted, thereby adjusting the current brightness B1 of the first video stream until it is within the first target brightness range (B1min, B1max).

Similarly, the second exposure control part 422 receives the current brightness B2 of the second video stream from the second brightness acquisition part 421 and judges whether it is within the preset second target brightness range (B2min, B2max). If the current brightness B2 is within the preset second target brightness range (B2min, B2max), the second exposure time T2 and second gain G2 are output to the second drive unit 320, or no data communication occurs between the second exposure control part 422 and the second drive unit 320 and the status quo is kept. If the current brightness B2 is outside the preset second target brightness range (B2min, B2max), the second exposure amount of the near-infrared image sensor 121 is adjusted to bring the current brightness B2 within the range. If adjusting the second exposure time T2 and second gain G2 cannot bring the current brightness B2 within the preset target brightness range (B2min, B2max), the second exposure control part 422 feeds this information back to the second illumination adjustment part 423 connected to the illumination module 200, so that the output power of the third light source module 213, i.e., the luminous flux of the near-infrared light, is adjusted, thereby adjusting the current brightness B2 of the second video stream until it is within the second target brightness range (B2min, B2max).
Please refer to Fig. 6, which schematically shows the workflow, provided by an embodiment of the present invention, of bringing the current brightness B1 of the first video stream into the first target brightness range (B1min, B1max) by adjusting the first exposure amount. As shown in Fig. 6, it specifically includes:

On the one hand, when the current brightness B1 of the first video stream is below the lower limit B1min of the first target brightness range, the first exposure control part 412 adjusts the current brightness B1 of the first video stream by increasing the first exposure amount of the visible-light image sensor 111. First, the first exposure time T1 is increased, and it is judged whether the maximum first exposure time T1max and the minimum first gain G1min can meet the exposure requirement of the first target brightness range (B1min, B1max):

If so, the first exposure time T1 is adjusted on the basis of the minimum first gain G1min so that the current brightness B1 of the first video stream is within the first target brightness range (B1min, B1max), and the minimum first gain G1min and the adjusted first exposure time T1 are output to the first drive unit 310;

If not, the first gain G1 is increased, and it is judged whether the maximum first exposure time T1max and the maximum first gain G1max can meet the exposure requirement of the first target brightness range (B1min, B1max). If not, the current first exposure time T1 and first gain G1 are kept unchanged, and the first exposure control part 412 feeds back to the first illumination adjustment part 413 the information that the current brightness B1 is still outside the first target brightness range (B1min, B1max) (for example, the information includes that the first exposure time T1 is the maximum first exposure time T1max and the first gain G1 is the maximum first gain G1max). If so, the first gain G1 is adjusted on the basis of the maximum first exposure time T1max so that the current brightness B1 of the first video stream is within the first target brightness range (B1min, B1max), and the maximum first exposure time T1max and the adjusted first gain G1 are output to the first drive unit 310.

Exemplarily, adjusting the first exposure time T1 on the basis of the minimum first gain G1min proceeds as follows: starting from the previous frame's first exposure amount (T1' × G1'), obtain T1c = T1' × G1' / G1min, and then adjust the first exposure time T1 upward from T1c until the current brightness B1 of the first video stream is within the first target brightness range (B1min, B1max). Exemplarily, adjusting the first gain G1 on the basis of the maximum first exposure time T1max proceeds as follows: starting from the previous frame's first exposure amount (T1' × G1'), obtain G1c = T1' × G1' / T1max, and then adjust the first gain G1 upward from G1c until the current brightness B1 of the first video stream is within the first target brightness range (B1min, B1max).
另一方面,当所述第一视频流的当前亮度B 1高于所述第一目标亮度范围(B 1min,B 1max)的上限值B 1max时,所述第一曝光控制部412通过减小所述可见光图像传感器111的第一曝光量以调节所述第一视频流的当前亮度B 1。具体而言,首先减小第一增益G 1,判断最大第一曝光时间T 1max和最小第一增益G 1min是否能够满足所述第一目标亮度范围(B 1min,B 1max)内曝光量需求:如果满足,则以所述最大第一曝光时间T 1max为基础,调整所述第一增益G 1,以使所述第一视频流的当前亮度B 1处于第一目标亮度范围(B 1min,B 1max)内,并将所述最大第一曝光时间T 1max以及调整得到的第一增益G 1输出至所述第一驱动单元310;
如果不满足,则减小所述第一曝光时间T 1,判断最小第一曝光时间T 1min和最小第一增益G 1min是否满足第一目标亮度范围(B 1min,B 1max)内曝光量需求:如果不满足,则维持当前第一曝光时间T 1和第一视频流增益G 1不变,所述第一曝光控制部412将当前亮度B 1仍在目标亮度范围(B 1min,B 1max)外的信息(例如,信息包括第一曝光时间T 1为最小曝光时间T 1min,第一增益G 1为最小第一增益G 1min)反馈至与照明模块200连接的第一照明调节部413;如果满足,则以所述最小第一增益G 1min为基础,调整所述第一曝光时间T 1,以使所述第一视频流的当前亮度B 1处于所述第一目标亮度范围(B 1min,B 1max)内,并将所述最小第一增益G 1min以及调整得到的第一曝光时间T 1输出至所述第一驱动单元310。
示范性的,以所述最大第一曝光时间T 1max为基础,调整所述第一增益G 1的方法为以上一帧的第一曝光量(T 1’×G 1’)为基础获得G 1c=T 1’×G 1’/T 1max,然后以G 1c为基础往下调整第一增益G 1直至所述第一视频流的当前亮度B 1处于所述第一目标亮度范围(B 1min,B 1max)内。示范性的,以所述最小第一增益G 1min为基础,调整所述第一曝光时间T 1的方法为以上一帧的第一曝光量(T 1’×G 1’)为基础获得T 1c=T 1’×G 1’/G 1min,然后以T 1c为基础往下调整第一曝光时间T 1直至所述第一视频流的当前亮度B 1处于所述第一目标亮度范围(B 1min, B 1max)内。
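上述先调曝光时间、再调增益(或相反)的判断流程,可用如下Python片段示意(仅为帮助理解的示意性草图:假设使亮度落入第一目标亮度范围所需的曝光量e_req可由上一帧亮度与目标亮度估算得到,函数名与参数均为本示意自行引入的假设,并非本发明的具体实现):

```python
def adjust_exposure(e_req, t_prev, g_prev, t_min, t_max, g_min, g_max):
    """按正文的判断顺序,在限定范围内分配曝光时间T与增益G。

    e_req:使当前亮度落入目标亮度范围所需的曝光量(T×G),
    此处假设其可由上一帧曝光量与亮度偏差估算得到。
    返回 (新T, 新G, 是否需反馈照明调节部)。
    """
    e_prev = t_prev * g_prev
    if e_req > e_prev:
        # 亮度偏低:先判断最大曝光时间×最小增益能否满足需求
        if e_req <= t_max * g_min:
            # 以最小增益为基础,从 T_c = E'/G_min 往上调整曝光时间
            return min(e_req / g_min, t_max), g_min, False
        # 否则提高增益:判断最大曝光时间×最大增益能否满足需求
        if e_req <= t_max * g_max:
            # 以最大曝光时间为基础,从 G_c = E'/T_max 往上调整增益
            return t_max, min(e_req / t_max, g_max), False
        # 仍不满足:维持现状,并反馈照明调节部调整光源输出功率
        return t_prev, g_prev, True
    else:
        # 亮度偏高:先减小增益,判断最大曝光时间×最小增益能否满足需求
        if e_req >= t_max * g_min:
            return t_max, max(e_req / t_max, g_min), False
        # 否则减小曝光时间:判断最小曝光时间×最小增益能否满足需求
        if e_req >= t_min * g_min:
            return max(e_req / g_min, t_min), g_min, False
        return t_prev, g_prev, True
```

例如,在T 1max=100、G 1min=1、G 1max=16的假设下,adjust_exposure(150, 50, 2, 1, 100, 1, 16) 返回 (100, 1.5, False),即以最大曝光时间为基础将增益调整为1.5。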
同理,通过调整第二曝光时间T 2、第二增益G 2以使当前亮度B 2在预设的第二目标亮度范围(B 2min,B 2max)内的方法,具体包括:
一方面,当所述第二视频流的当前亮度B 2低于所述第二目标亮度范围(B 2min,B 2max)的下限值B 2min时,所述第二曝光控制部422通过增大所述近红外光图像传感器121的第二曝光量以调节所述第二视频流的当前亮度B 2。首先,提高第二曝光时间T 2,判断最大第二曝光时间T 2max和最小第二增益G 2min是否能够满足第二目标亮度范围(B 2min,B 2max)内曝光量需求:
如果满足需求,则以最小第二增益G 2min为基础,调整所述第二曝光时间T 2以使所述第二视频流的当前亮度B 2处于所述第二目标亮度范围(B 2min,B 2max)内,并将最小第二增益G 2min、调整得到的第二曝光时间T 2输出至所述第二驱动单元320;
如果不满足需求,则提高所述第二增益G 2,判断最大第二曝光时间T 2max和最大第二增益G 2max是否能够满足所述第二目标亮度范围(B 2min,B 2max)内曝光量需求:如果不满足,则维持当前的第二曝光时间T 2和第二增益G 2不变,此时所述第二曝光控制部422将当前亮度B 2仍在第二目标亮度范围(B 2min,B 2max)外的信息(例如,信息包括第二曝光时间T 2为最大第二曝光时间T 2max,第二增益G 2为最大第二增益G 2max)反馈至所述第二照明调节部423;如果满足,则以所述最大第二曝光时间T 2max为基础,调整所述第二增益G 2,以使所述第二视频流的当前亮度B 2处于所述第二目标亮度范围(B 2min,B 2max)内,并将所述最大第二曝光时间T 2max、调整得到的第二增益G 2输出至所述第二驱动单元320。
示范性的,以最小第二增益G 2min为基础,调整所述第二曝光时间T 2的方法为以上一帧的第二曝光量(T 2’×G 2’)为基础获得T 2c=T 2’×G 2’/G 2min,然后以T 2c为基础往上调整第二曝光时间T 2直至所述第二视频流的当前亮度B 2处于所述第二目标亮度范围(B 2min,B 2max)内。示范性的,以所述最大第二曝光时间T 2max为基础,调整所述第二增益G 2的方法为以上一帧的第二曝光量(T 2’×G 2’)为基础获得G 2c=T 2’×G 2’/T 2max,然后以G 2c为基础往上调整第二增益 G 2直至所述第二视频流的当前亮度B 2处于所述第二目标亮度范围(B 2min,B 2max)内。
另一方面,当所述第二视频流的当前亮度B 2高于所述第二目标亮度范围(B 2min,B 2max)的上限值B 2max时,所述第二曝光控制部422通过减小所述近红外光图像传感器121的第二曝光量以调节所述第二视频流的当前亮度B 2。具体而言,首先减小第二增益G 2,判断最大第二曝光时间T 2max和最小第二增益G 2min是否能够满足所述第二目标亮度范围(B 2min,B 2max)内曝光量需求:如果满足,则以所述最大第二曝光时间T 2max为基础,调整所述第二增益G 2,以使所述第二视频流的当前亮度B 2处于第二目标亮度范围(B 2min,B 2max)内,并将所述最大第二曝光时间T 2max以及调整得到的第二增益G 2输出至所述第二驱动单元320;
如果不满足,则减小所述第二曝光时间T 2,判断最小第二曝光时间T 2min和最小第二增益G 2min是否满足第二目标亮度范围(B 2min,B 2max)内曝光量需求:如果不满足,则维持当前第二曝光时间T 2和第二增益G 2不变,所述第二曝光控制部422将当前亮度B 2仍在第二目标亮度范围(B 2min,B 2max)外的信息(例如,信息包括第二曝光时间T 2为最小第二曝光时间T 2min,第二增益G 2为最小第二增益G 2min)反馈至与照明模块200连接的第二照明调节部423;如果满足,则以所述最小第二增益G 2min为基础,调整所述第二曝光时间T 2,以使所述第二视频流的当前亮度B 2处于所述第二目标亮度范围(B 2min,B 2max)内,并将所述最小第二增益G 2min以及调整得到的第二曝光时间T 2输出至所述第二驱动单元320。
示范性的,以所述最大第二曝光时间T 2max为基础,调整所述第二增益G 2的方法为以上一帧的第二曝光量(T 2’×G 2’)为基础获得G 2c=T 2’×G 2’/T 2max,然后以G 2c为基础往下调整第二增益G 2直至所述第二视频流的当前亮度B 2处于所述第二目标亮度范围(B 2min,B 2max)内。示范性的,以所述最小第二增益G 2min为基础,调整所述第二曝光时间T 2的方法为以上一帧的第二曝光量(T 2’×G 2’)为基础获得T 2c=T 2’×G 2’/G 2min,然后以T 2c为基础往下调整第二曝光时间T 2直至所述第二视频流的当前亮度B 2处于所述第二目标亮度范围(B 2min,B 2max)内。
也就是说,本实施方式中,第一曝光控制部412通过调整第一增益G 1、第一曝光时间T 1来调节第一视频流的当前亮度B 1,使第一视频流的当前亮度B 1在第一目标亮度范围(B 1min,B 1max)内,并输出对应的第一增益G 1、第一曝光时间T 1至所述第一驱动单元310,所述第一驱动单元310根据接收到的第一增益G 1和第一曝光时间T 1,驱动所述可见光图像传感器111获取目标组织600的可见光场景,从而使得后续获取的可见光场景图像的亮度位于第一目标亮度范围(B 1min,B 1max)内;第二曝光控制部422通过调整第二增益G 2、第二曝光时间T 2来调节第二视频流的当前亮度B 2,使第二视频流的当前亮度B 2在第二目标亮度范围(B 2min,B 2max)内,并输出对应的第二增益G 2、第二曝光时间T 2至所述第二驱动单元320,所述第二驱动单元320根据接收到的第二增益G 2和第二曝光时间T 2,驱动所述近红外光图像传感器121获取目标组织600的近红外光场景,从而使得后续获取的所述近红外光场景图像的亮度位于第二目标亮度范围(B 2min,B 2max)内。在本发明的其他实施方式中也可以采用其他方式调节所述第一视频流和所述第二视频流的当前亮度。在一个替代性实施例中,所述内窥镜为三维内窥镜。此时,所述第一曝光控制部412包括第一可见光控制单元和第二可见光控制单元;第二曝光控制部422包括第一近红外光控制单元和第二近红外光控制单元。所述第一驱动单元310包括第一可见光驱动单元311和第二可见光驱动单元312;第二驱动单元320包括第一近红外光驱动单元321和第二近红外光驱动单元322。所述第一可见光驱动单元311和第二可见光驱动单元312分别用于驱动第一可见光图像传感器111A、第二可见光图像传感器111B,获取所述第一可见光场景图像、第二可见光场景图像。优选的,所述第一可见光场景图像、第二可见光场景图像之间存在横向视差,以符合人眼特点实现三维效果。第一近红外光驱动单元321和第二近红外光驱动单元322分别用于驱动第一近红外光图像传感器121A、第二近红外光图像传感器121B,获取所述第一近红外光场景图像、第二近红外光场景图像。优选的,所述第一近红外光场景图像、第二近红外光场景图像之间存在横向视差,以符合人眼特点实现三维效果。在本实施例中, 在出血模式下,可以是只开启第一可见光控制单元或第二可见光控制单元,通过调整第一曝光时间T 1、第一增益G 1来调整第一可见光视频流的当前亮度或第二可见光视频流的当前亮度;可以是只开启第一可见光控制单元或第二可见光控制单元,通过调整第一曝光时间T 1、第一增益G 1来调整第一可见光视频流的当前亮度B 11和第二可见光视频流的当前亮度B 12;可以是同时开启第一可见光控制单元和第二可见光控制单元,分别通过一组第一曝光时间T 1、第一增益G 1来调整第一可见光视频流的当前亮度B 11和第二可见光视频流的当前亮度B 12。同理,可以只开启第一近红外光控制单元或第二近红外光控制单元,通过调整第二曝光时间T 2、第二增益G 2来调整第一近红外光视频流的当前亮度B 21或第二近红外光视频流的当前亮度B 22;可以是只开启第一近红外光控制单元或第二近红外光控制单元,通过调整第二曝光时间T 2、第二增益G 2来调整第一近红外光视频流的当前亮度B 21和第二近红外光视频流的当前亮度B 22;可以是同时开启第一近红外光控制单元和第二近红外光控制单元,分别通过一组第二曝光时间T 2、第二增益G 2来调整第一近红外光视频流的当前亮度B 21和第二近红外光视频流的当前亮度B 22
所述场景融合模块500用于当所述第一视频流的当前亮度B 1在预设的第一目标亮度范围(B 1min,B 1max)内和/或所述第二视频流的当前亮度B 2在预设的第二目标亮度范围(B 2min,B 2max)内时,基于所述第一视频流的当前帧图像的亮度信息与所述第二视频流的当前帧图像的亮度信息,将所述第一视频流的当前帧图像与所述第二视频流的当前帧图像进行融合,得到亮度融合图像,并基于所述第一视频流的当前帧图像的色度信息,将所述亮度融合图像与所述第一视频流的当前帧图像进行融合,获得场景融合图像。如果所述荧光内窥镜系统处于出血模式,则当所述第一视频流的当前亮度B 1处于第一目标亮度范围(B 1min,B 1max)内和/或所述第二视频流的当前亮度B 2处于第二目标亮度范围(B 2min,B 2max)内时,所述场景融合模块500将所述第一视频流的当前帧图像与所述第二视频流的当前帧图像进行融合。进一步,所述场景融合模块500与所述中央控制器800通信连接。所述中央控制器800在接收到出血模式指令后,控制场景融合模块500将所述第一视频流的当前帧图像 与所述第二视频流的当前帧图像进行融合,获得场景融合图像。进一步,接收到荧光模式指令后,所述中央控制器800控制所述场景融合模块500将接收到的第一视频流和第二视频流直接输出。由此,本实施例通过将不同类型的传感器获得的场景图像进行融合,以获得目标组织600的融合图像,可以有效降低由摄像机的运动而导致的图像模糊,避免因图像模糊而造成用户对病灶信息的误判断,提高手术的精准性和安全性,同时可以有效提高信噪比,获取更多的图像细节信息。
具体而言,所述场景融合模块500包括图像融合单元,用于基于所述第一视频流的当前帧图像的亮度信息和所述第二视频流的当前帧图像的亮度信息,对所述第一视频流的当前帧图像的像素点和对应的所述第二视频流的当前帧图像的像素点进行图像融合,以获得亮度融合图像,并基于所述第二视频流的当前帧图像的色度信息,对所述亮度融合图像和所述第二视频流的当前帧图像进行图像融合,以获得场景融合图像。
进一步,所述场景融合模块500与所述可见光图像传感器111和所述近红外光图像传感器121通信连接,以分别获取第一视频流和第二视频流。如果所述第一视频流的输出格式为YUV格式或者YC bC r格式,所述图像融合单元用于取所述第一视频流的当前帧图像的像素点的Y值作为所述像素点的亮度,取所述第一视频流的当前帧图像中的像素点的U值、V值或者C b值、C r值作为所述像素点的色度。如果所述第一视频流的输出格式为RAW格式或者RGB格式,所述场景融合模块还包括图像模式转换单元,所述图像模式转换单元用于将所述第一视频流的当前帧图像转换至YUV空间或者YC bC r空间内,再将所述第一视频流的当前帧图像的像素点的Y值作为所述像素点的亮度,并将所述第一视频流的当前帧图像中的像素点的U值、V值或者C b值、C r值作为所述像素点的色度。具体的转换方法,与上述第一亮度获取部411获取第一视频流的当前亮度B 1时将RGB格式的视频流转换为YUV格式或者YC bC r格式的方法类似,在这里不再赘述。所述第二视频流的当前帧图像的亮度信息可以参照上述的所述第一视频流的当前帧图像的亮度信息的获取方法,或者根据近红外图像特点采用其他方式获取亮度信息,在此不再赘述。
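作为示意,若第一视频流为RGB格式,将像素转换至YC bC r空间并取Y为亮度、C b/C r为色度的过程可写为如下Python片段(专利正文未限定具体转换系数,此处采用常见的JPEG/BT.601全量程公式,仅作假设性示意):

```python
def rgb_to_ycbcr(r, g, b):
    """将RGB像素转换到YCbCr空间,取Y为亮度、Cb/Cr为色度。

    专利正文未给出具体转换系数,此处采用常见的JPEG/BT.601
    全量程公式,仅作示意。
    """
    y = 0.299 * r + 0.587 * g + 0.114 * b
    cb = 128 - 0.168736 * r - 0.331264 * g + 0.5 * b
    cr = 128 + 0.5 * r - 0.418688 * g - 0.081312 * b
    return y, cb, cr
```

按此公式,纯白像素(255,255,255)的Y约为255,C b、C r约为128;纯黑像素(0,0,0)的Y为0,色度同样落在中间值128附近。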
本实施例中,基于所述第一视频流的当前帧图像的像素点的亮度和对应的所述第二视频流的当前帧图像的像素点的亮度进行融合的算法没有特别的限制,例如算术平均方法,加权平均法,取极值法(取两者最大值或最小值)等。优选的,根据预设的关于亮度值与权重的正态分布、所述第一视频流的当前帧图像的像素点的亮度以及对应的所述第二视频流的当前帧图像的像素点的亮度,分别获取所述第一视频流的当前帧图像中的像素点的亮度的权重以及对应的所述第二视频流的当前帧图像中的像素点的亮度的权重;以及根据获取的所述第一视频流的当前帧图像中的像素点的亮度的权重以及对应的所述第二视频流的当前帧图像中的像素点的亮度的权重,分别对所述第一视频流的当前帧图像的像素点的亮度与对应的第二视频流的当前帧图像的像素点的亮度进行加权,获得亮度融合图像的像素点的亮度。
请参考图7,其示意性地给出了本发明一实施方式中的亮度与权重的正态分布曲线示意图,其中,P 1为第一亮度,P 2为第二亮度,W 1为第一亮度P 1的权重,W 2为第二亮度P 2的权重。所述正态分布曲线示意图为人为设定的,优选正态分布曲线的数学期望u=128,方差δ=50。亮度融合图像的亮度P 3与所述第一视频流的当前帧图像的像素点的第一亮度P 1和所述第二视频流的当前帧图像的像素点的第二亮度P 2之间满足如下关系:
P 3=W 1P 1+W 2P 2
如图7所示,亮度值越趋于数学期望,其所占的权重越大,由此,通过采用上述的正态分布的方式对所述第一视频流的当前帧图像的亮度信息和所述第二视频流的当前帧图像的亮度信息进行融合,可以使得获得的亮度融合图像具有合适的亮度。
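上述正态分布加权的亮度融合可用如下Python片段示意(取数学期望u=128、方差δ=50;其中将权重归一化使W 1+W 2=1,是本示意为保证融合亮度落在合理区间而作的假设):

```python
import math

def gaussian_weight(p, u=128.0, sigma=50.0):
    # 亮度值越接近数学期望u,权重越大;取正态分布密度的指数部分即可,
    # 比例系数在归一化后不影响结果
    return math.exp(-(p - u) ** 2 / (2 * sigma ** 2))

def fuse_luma(p1, p2, u=128.0, sigma=50.0):
    """P3 = W1*P1 + W2*P2,W1、W2按正态分布取得并归一化(归一化为本示意的假设)。"""
    w1 = gaussian_weight(p1, u, sigma)
    w2 = gaussian_weight(p2, u, sigma)
    s = w1 + w2
    return (w1 * p1 + w2 * p2) / s
```

例如,两像素亮度关于128对称时(如100与156),两者权重相等,融合结果恰为128;而偏离128较远的亮度(如0)权重较小,融合结果向较接近128的一方靠拢。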
如图1所示,当所述内窥镜为三维内窥镜时,所述场景融合模块500用于基于第一可见光视频流的当前帧图像的亮度信息与第一近红外光视频流的当前帧图像的亮度信息,将所述第一可见光视频流的当前帧图像与所述第一近红外光视频流的当前帧图像进行融合,得到第一亮度融合图像,并基于所述第一可见光视频流的当前帧图像的色度信息,将所述第一亮度融合图像与所述第一可见光视频流的当前帧图像进行融合,获得第一场景融合图像;基于第二可见光视频流的当前帧图像的亮度信息与第二近红外光视频流的当前帧图像的亮度信息,将所述第二可见光视频流的当前帧图像与所述第二近红外光视频流的当前帧图像进行融合,得到第二亮度融合图像,并基于所述第二可见光视频流的当前帧图像的色度信息,将所述第二亮度融合图像与所述第二可见光视频流的当前帧图像进行融合,获得第二场景融合图像。由于第一可见光场景图像、第一近红外光场景图像通过同一光通道获取,第二可见光场景图像、第二近红外光场景图像通过同一光通道获取,且所述第一可见光场景图像、第二可见光场景图像之间,所述第一近红外光场景图像、第二近红外光场景图像之间存在横向视差,所以图像融合后获得的第一场景融合图像、第二场景融合图像之间存在横向视差,符合人眼的特点实现三维效果。
具体地,请参考图8,其示意性地给出了本发明一实施方式提供的出血模式下的图像融合示意图。如图8所示,所述图像融合单元可包括第一图像融合单元510和第二图像融合单元520,所述第一图像融合单元510用于基于所述第一可见光视频流的当前帧图像的亮度信息和所述第一近红外光视频流的当前帧图像的亮度信息,将所述第一可见光视频流的当前帧图像的像素点和对应的所述第一近红外光视频流的当前帧图像的像素点进行图像融合,以获得第一亮度融合图像,并基于所述第一可见光视频流的当前帧图像的色度信息,将所述第一亮度融合图像和所述第一可见光视频流的当前帧图像进行融合,以获得第一场景融合图像。所述第二图像融合单元520用于基于所述第二可见光视频流的当前帧图像的亮度信息和所述第二近红外光视频流的当前帧图像的亮度信息,将所述第二可见光视频流的当前帧图像的像素点和对应的所述第二近红外光视频流的当前帧图像的像素点进行图像融合,以获得第二亮度融合图像,并基于所述第二可见光视频流的当前帧图像的色度信息,将所述第二亮度融合图像和所述第二可见光视频流的当前帧图像进行融合,以获得第二场景融合图像。具体实现场景融合图像的方法可参考上述实施例,这里不再冗赘描述。
优选的,所述荧光内窥镜系统还包括视频流水线700和中央控制器800。所述中央控制器800包括视频叠加单元810、用户输入装置820和用户界面830。所述场景融合模块500输出的场景融合图像传递至所述视频流水线700,所述视频叠加单元810将所述视频流水线700上的场景融合图像进行叠加以获得三维图像。所述用户输入装置820用于接收操作者的操作指令,例如切换工作模式,并将操作指令传输至用户界面830。用户界面830根据接收到的操作指令以及系统内部控制条件产生控制指令,控制场景融合模块500、内窥镜控制模块400和照明模块200等。用户界面830的界面信息进入视频叠加单元810,与三维图像叠加后输送至外科医生控制台中的显示器900并予以显示。
与上述的荧光内窥镜系统相对应,本发明还提供一种荧光内窥镜系统的控制方法。请参考图9,其示意性地给出了本发明一实施方式提供的出血模式下的荧光内窥镜系统的控制方法的流程示意图。如图9所示,所述荧光内窥镜系统的工作模式包括第一模式和第二模式,所述控制方法包括如下步骤:
步骤S1、在第二模式下,提供可见光和近红外光,以照明目标组织;
步骤S2、获取目标组织的可见光场景图像和近红外光场景图像,并分别以第一视频流和第二视频流的形式输出;
步骤S3、判断所述第一视频流的当前亮度B 1是否在预设的第一目标亮度范围(B 1min,B 1max)内和/或所述第二视频流的当前亮度B 2是否在预设的第二目标亮度范围(B 2min,B 2max)内;
若所述第一视频流的当前亮度B 1在预设的第一目标亮度范围(B 1min,B 1max)内和/或所述第二视频流的当前亮度B 2在预设的第二目标亮度范围(B 2min,B 2max)内,则执行下述步骤:
步骤S4、基于所述第一视频流的当前帧图像的亮度信息与所述第二视频流的当前帧图像的亮度信息,将所述第一视频流的当前帧图像与所述第二视频流的当前帧图像进行融合,获得亮度融合图像;
步骤S5、基于所述第一视频流的当前帧图像的色度信息,将所述第一视频流的当前帧图像与所述亮度融合图像进行图像融合,获得场景融合图像。
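步骤S4、S5的逐像素处理可用如下Python片段示意(仅为说明性草图:假设两路图像已逐像素配准,fuse_luma为任意亮度融合函数(算术平均、加权平均、正态分布加权等),均非本发明限定的实现):

```python
def fuse_frames(vis_ycbcr, nir_y, fuse_luma):
    """步骤S4、S5的逐像素示意:亮度融合后沿用可见光帧的色度。

    vis_ycbcr:可见光当前帧,元素为 (Y, Cb, Cr);
    nir_y:近红外光当前帧对应像素的亮度;
    fuse_luma:任意亮度融合函数。
    """
    fused = []
    for (y1, cb, cr), y2 in zip(vis_ycbcr, nir_y):
        y3 = fuse_luma(y1, y2)        # 步骤S4:基于两路亮度信息得到融合亮度
        fused.append((y3, cb, cr))    # 步骤S5:色度取自第一视频流(可见光)当前帧
    return fused
```

以算术平均作为亮度融合函数时,可见光像素(Y=100, C b=120, C r=130)与近红外亮度200融合后得到(150, 120, 130):亮度来自两路的平均,色度完全沿用可见光帧。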
由此,本实施例中的荧光内窥镜系统包括两种模式,其中在第二模式下通过在所述第一视频流和/或第二视频流的当前亮度在目标亮度范围内时,将通过可见光捕获的当前帧图像和通过近红外光捕获的当前帧图像进行图像融合,得到的场景融合图像亮度合适,可以有效提高信噪比,细节丰富,可以有效降低由摄像机的运动而导致的图像模糊,避免因图像模糊而造成用户对病灶信息的误判断,提高手术的精准性和安全性,同时荧光内窥镜系统基于现有的硬件,做稍微的改进就可以实现多种功能,满足医生不同的手术需求,提高手术操作的便利性。
优选的,当所述内窥镜为三维内窥镜时,所述获取目标组织的可见光场景图像和近红外光场景图像,并分别以第一视频流和第二视频流的形式输出,包括:
获取目标组织的第一可见光场景图像、第二可见光场景图像、第一近红外光场景图像和第二近红外光场景图像,并分别以第一可见光视频流、第二可见光视频流、第一近红外光视频流和第二近红外光视频流的形式输出;
所述判断所述第一视频流的当前亮度B 1是否在预设的第一目标亮度范围(B 1min,B 1max)内,包括:
判断所述第一可见光视频流和/或所述第二可见光视频流的当前亮度是否在预设的第一目标亮度范围(B 1min,B 1max)内;
所述判断所述第二视频流的当前亮度B 2是否在预设的第二目标亮度范围(B 2min,B 2max)内,包括:
判断所述第一近红外光视频流和/或所述第二近红外光视频流的当前亮度是否在预设的第二目标亮度范围(B 2min,B 2max)内;
所述基于所述第一视频流的当前帧图像的亮度信息与所述第二视频流的当前帧图像的亮度信息,将所述第一视频流的当前帧图像与所述第二视频流的当前帧图像进行融合,获得亮度融合图像,包括:
基于所述第一可见光视频流的当前帧图像的亮度信息与所述第一近红外光视频流的当前帧图像的亮度信息,将所述第一可见光视频流的当前帧图像与所述第一近红外光视频流的当前帧图像进行融合,获得第一亮度融合图像, 基于所述第二可见光视频流的当前帧图像的亮度信息与所述第二近红外光视频流的当前帧图像的亮度信息,将所述第二可见光视频流的当前帧图像与所述第二近红外光视频流的当前帧图像进行融合,获得第二亮度融合图像;
所述基于所述第一视频流的当前帧图像的色度信息,将所述第一视频流的当前帧图像与所述亮度融合图像进行图像融合,获得场景融合图像,包括:
基于所述第一可见光视频流的当前帧图像的色度信息,将所述第一可见光视频流的当前帧图像与所述第一亮度融合图像进行图像融合,获得第一场景融合图像,基于所述第二可见光视频流的当前帧图像的色度信息,将所述第二可见光视频流的当前帧图像与所述第二亮度融合图像进行图像融合,获得第二场景融合图像。
优选的,如果所述第一视频流的当前亮度B 1不在所述第一目标亮度范围(B 1min,B 1max)内,则调整第一曝光量,以使所述第一视频流的当前亮度B 1在所述第一目标亮度范围(B 1min,B 1max)内,其中所述第一曝光量为第一曝光时间T 1和第一增益G 1的乘积;
如果所述第二视频流的当前亮度B 2不在所述第二目标亮度范围(B 2min,B 2max)内,则调整第二曝光量,以使所述第二视频流的当前亮度B 2在所述第二目标亮度范围(B 2min,B 2max)内,其中所述第二曝光量为第二曝光时间T 2和第二增益G 2的乘积。
优选的,当所述第一视频流的当前亮度B 1低于所述第一目标亮度范围(B 1min,B 1max)的下限值B 1min时,增大所述可见光图像传感器的第一曝光量以调节所述第一视频流的当前亮度B 1
优选的,所述增大所述可见光图像传感器111的第一曝光量以调节所述第一视频流的当前亮度B 1,包括:判断最大第一曝光时间T 1max和最小第一增益G 1min是否能够满足所述第一目标亮度范围(B 1min,B 1max)内曝光量需求;
如果满足需求,则以最小第一增益G 1min为基础,调整所述第一曝光时间T 1以使所述第一视频流的当前亮度B 1处于所述第一目标亮度范围(B 1min,B 1max)内;
如果不满足需求,则提高所述第一增益G 1,判断最大第一曝光时间T 1max 和最大第一增益G 1max是否能够满足所述第一目标亮度范围(B 1min,B 1max)内曝光量需求;
如果满足,则以所述最大第一曝光时间T 1max为基础,调整所述第一增益G 1,以使所述第一视频流的当前亮度B 1处于所述第一目标亮度范围内。
优选的,当所述第一视频流的当前亮度B 1高于所述第一目标亮度范围(B 1min,B 1max)的上限值B 1max时,减小所述可见光图像传感器111的第一曝光量以调节所述第一视频流的当前亮度B 1
优选的,所述减小所述可见光图像传感器111的第一曝光量以调节所述第一视频流的当前亮度B 1,包括:判断最大第一曝光时间T 1max和最小第一增益G 1min是否能够满足所述第一目标亮度范围(B 1min,B 1max)内曝光量需求;
如果满足,则以所述最大第一曝光时间T 1max为基础,调整所述第一增益G 1,以使所述第一视频流的当前亮度B 1处于所述第一目标亮度范围(B 1min,B 1max)内;
如果不满足,则减小所述第一曝光时间T 1,判断最小第一曝光时间T 1min和最小第一增益G 1min是否满足所述第一目标亮度范围(B 1min,B 1max)内曝光量需求;
如果满足,则以所述最小第一增益G 1min为基础,调整所述第一曝光时间T 1,以使所述第一视频流的当前亮度B 1处于所述第一目标亮度范围内。
优选的,当所述第二视频流的当前亮度B 2低于所述第二目标亮度范围(B 2min,B 2max)的下限值时,增大所述近红外光图像传感器121的第二曝光量以调节所述第二视频流的当前亮度B 2
优选的,所述增大所述近红外光图像传感器121的第二曝光量以调节所述第二视频流的当前亮度B 2,包括:判断最大第二曝光时间T 2max和最小第二增益G 2min是否能够满足所述第二目标亮度范围(B 2min,B 2max)内曝光量需求;
如果满足需求,则以所述最小第二增益G 2min为基础,调整所述第二曝光时间T 2以使所述第二视频流的当前亮度B 2处于所述第二目标亮度范围(B 2min,B 2max)内;
如果不满足需求,则提高所述第二增益G 2,判断最大第二曝光时间T 2max 和最大第二增益G 2max是否能够满足所述第二目标亮度范围(B 2min,B 2max)内曝光量需求;
如果满足,则以所述最大第二曝光时间T 2max为基础,调整所述第二增益G 2,以使所述第二视频流的当前亮度B 2处于所述第二目标亮度范围(B 2min,B 2max)内。
优选的,当所述第二视频流的当前亮度B 2高于所述第二目标亮度范围(B 2min,B 2max)的上限值B 2max时,通过减小所述近红外光图像传感器121的第二曝光量以调节所述第二视频流的当前亮度B 2
优选的,所述减小所述近红外光图像传感器121的第二曝光量以调节所述第二视频流的当前亮度B 2,包括:判断最大第二曝光时间T 2max和最小第二增益G 2min是否能够满足所述第二目标亮度范围(B 2min,B 2max)内曝光量需求;
如果满足,则以所述最大第二曝光时间T 2max为基础,调整所述第二增益G 2,以使所述第二视频流的当前亮度B 2处于所述第二目标亮度范围(B 2min,B 2max)内;
如果不满足,则减小所述第二曝光时间T 2,判断最小第二曝光时间T 2min和最小第二增益G 2min是否满足所述第二目标亮度范围(B 2min,B 2max)内曝光量需求;
如果满足,则以所述最小第二增益G 2min为基础,调整所述第二曝光时间T 2,以使所述第二视频流的当前亮度B 2处于所述第二目标亮度范围(B 2min,B 2max)内。
优选的,如果通过调整所述第一增益G 1和所述第一曝光时间T 1,不能使所述第一视频流的当前亮度B 1在所述第一目标亮度范围(B 1min,B 1max)内时,则调整用于提供所述可见光的第一光源模组的输出功率,进而调整所述第一视频流的当前亮度B 1,以使所述第一视频流的当前亮度B 1在所述第一目标亮度范围(B 1min,B 1max)内。
优选的,如果通过调整所述第二增益G 2和所述第二曝光时间T 2,不能使所述第二视频流的当前亮度B 2在所述第二目标亮度范围(B 2min,B 2max)内时,则调整用于提供所述近红外光的第三光源模组的输出功率,进而调整所述第 二视频流的当前亮度B 2,以使所述第二视频流的当前亮度B 2在所述第二目标亮度范围(B 2min,B 2max)内。
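当曝光时间与增益均已达到极限仍无法达标时,上述通过光源输出功率进行调节的思路可用如下Python片段示意(假设画面亮度与光源光通量近似成正比,函数名与参数名均为本示意自行引入的假设):

```python
def adjust_light_power(p_cur, b_cur, b_min, b_max, p_lo, p_hi):
    """按比例估算新的光源输出功率,使亮度趋向目标范围。

    假设:画面亮度与光源光通量近似成正比(本示意的简化假设),
    以目标亮度范围的中值为调节目标,并将功率限制在[p_lo, p_hi]内。
    """
    b_tgt = (b_min + b_max) / 2
    p_new = p_cur * b_tgt / b_cur
    return min(max(p_new, p_lo), p_hi)
```

例如,当前亮度为目标中值的一半时,输出功率约提高一倍;若估算结果超出光源模组的功率上限,则按上限输出(此时亮度可能仍略低于目标)。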
优选的,所述基于所述第一视频流的当前帧图像的亮度信息与所述第二视频流的当前帧图像的亮度信息,将所述第一视频流的当前帧图像与所述第二视频流的当前帧图像进行融合,获得亮度融合图像,包括:基于所述第一视频流的当前帧图像的亮度信息与所述第二视频流的当前帧图像的亮度信息,对所述第一视频流的当前帧图像的像素点和对应的所述第二视频流的当前帧图像的像素点进行图像融合,获得亮度融合图像。
优选的,所述基于所述第一视频流的当前帧图像的色度信息,将所述第一视频流的当前帧图像与所述亮度融合图像进行图像融合,获得场景融合图像,包括:将所述第一视频流的当前帧图像的像素点的色度信息,赋予给所述亮度融合图像对应的像素点作为对应像素点的色度。
优选的,若所述第一视频流的输出格式为RAW格式或者RGB格式,则在将所述第一视频流的当前帧图像与所述第二视频流的当前帧图像进行融合之前,所述控制方法还包括:将所述第一视频流的当前帧图像转换至YUV空间或者YC bC r空间内。
优选的,所述基于所述第一视频流的当前帧图像的亮度信息与所述第二视频流的当前帧图像的亮度信息,对所述第一视频流的当前帧图像的像素点和对应的所述第二视频流的当前帧图像的像素点进行图像融合,包括:
根据预设的关于亮度值与权重的正态分布、所述第一视频流的当前帧图像的像素点的亮度以及对应的所述第二视频流的当前帧图像的像素点的亮度,分别获取所述第一视频流的当前帧图像中的像素点的亮度的权重以及对应的所述第二视频流的当前帧图像中的像素点的亮度的权重;
根据获取的所述第一视频流的当前帧图像中的像素点的亮度的权重以及对应的所述第二视频流的当前帧图像中的像素点的亮度的权重,分别对所述第一视频流的当前帧图像的像素点的亮度与对应的所述第二视频流的当前帧图像的像素点的亮度进行加权,获得亮度融合图像的像素点的亮度。
为实现上述思想,本发明还提供一种存储介质,所述存储介质内存储有 计算机程序,所述计算机程序在被处理器执行时实现如上文所述的控制方法。
由此,本实施例中的荧光内窥镜系统包括两种模式,其中在第二模式下通过在所述第一视频流和/或第二视频流的当前亮度在目标亮度范围内时,将通过可见光捕获的当前帧图像和通过近红外光捕获的当前帧图像进行图像融合,得到的场景融合图像亮度合适,可以有效提高信噪比,细节丰富,可以有效降低由摄像机的运动而导致的图像模糊,避免因图像模糊而造成用户对病灶信息的误判断,提高手术的精准性和安全性,同时荧光内窥镜系统基于现有的硬件,做稍微的改进就可以实现多种功能,满足医生不同的手术需求,提高手术操作的便利性。
本发明实施方式的存储介质,可以采用一个或多个计算机可读的介质的任意组合。可读介质可以是计算机可读信号介质或者计算机可读存储介质。计算机可读存储介质例如可以是但不限于电、磁、光、电磁、红外线或半导体的系统、装置或器件,或者任意以上的组合。计算机可读存储介质的更具体的例子(非穷举的列表)包括:具有一个或多个导线的电连接、便携式计算机硬盘、硬盘、随机存取存储器(RAM)、只读存储器(ROM)、可擦式可编程只读存储器(EPROM或闪存)、光纤、便携式紧凑磁盘只读存储器(CD-ROM)、光存储器件、磁存储器件、或者上述的任意合适的组合。在本文中,计算机可读存储介质可以是任何包含或存储程序的有形介质,该程序可以被指令执行系统、装置或者器件使用或者与其组合使用。
计算机可读的信号介质可以包括在基带中或者作为载波一部分传播的数据信号,其中承载了计算机可读的程序代码。这种传播的数据信号可以采用多种形式,包括但不限于电磁信号、光信号或上述的任意合适的组合。计算机可读的信号介质还可以是计算机可读存储介质以外的任何计算机可读介质,该计算机可读介质可以发送、传播或者传输用于由指令执行系统、装置或者器件使用或者与其结合使用的程序。
可以以一种或多种程序设计语言或其组合来编写用于执行本发明操作的计算机程序代码,所述程序设计语言包括面向对象的程序设计语言-诸如Java、Smalltalk、C++,还包括常规的过程式程序设计语言-诸如“C”语言或类似的 程序设计语言。程序代码可以完全地在用户计算机上执行、部分地在用户计算机上执行、作为一个独立的软件包执行、部分在用户计算机上部分在远程计算机上执行、或者完全在远程计算机或服务器上执行。在涉及远程计算机的情形中,远程计算机可以通过任意种类的网络——包括局域网(LAN)或广域网(WAN)连接到用户计算机,或者可以连接到外部计算机(例如利用因特网服务提供商来通过因特网连接)。
综上所述,与现有技术相比,本发明提供的荧光内窥镜系统、控制方法和存储介质具有以下优点:当处于第一模式时,所述照明模块可以提供可见光和激励光,激励光照射于目标组织以激励产生荧光,以帮助操作者观察可见光条件下无法观察到的组织信息;当处于第二模式时,所述照明模块可以提供可见光和近红外光,以照明目标组织,所述场景融合模块用于当所述第一视频流和/或第二视频流的当前亮度在目标亮度范围内时,将通过可见光捕获的当前帧图像和通过近红外光捕获的当前帧图像进行图像融合,得到的场景融合图像亮度合适,细节丰富。由此,一方面,本发明的荧光内窥镜系统可以有效降低由摄像机的运动而导致的图像模糊,避免因图像模糊而造成用户对病灶信息的误判断,提高手术的精准性和安全性,同时可以有效提高信噪比,获取更多的图像细节信息;另一方面,本发明的荧光内窥镜系统基于现有的硬件,做稍微的改进就可以实现多种功能,满足医生不同的手术需求,提高手术操作的便利性。
应当注意的是,在本文的实施方式中所揭露的装置和方法,也可以通过其他的方式实现。以上所描述的装置实施方式仅仅是示意性的,例如,附图中的流程图和框图显示了根据本文的多个实施方式的装置、方法和计算机程序产品的可能实现的体系架构、功能和操作。在这点上,流程图或框图中的每个方框可以代表一个模块、程序段或代码的一部分,所述模块、程序段或代码的一部分包含一个或多个用于实现规定的逻辑功能的可执行指令。也应当注意,在有些作为替换的实现方式中,方框中所标注的功能也可以以不同于附图中所标注的顺序发生。例如,两个连续的方框实际上可以基本并行地执行,它们有时也可以按相反的顺序执行,这依所涉及的功能而定。也要注意的是,框图和/或流程图中的每个方框、以及框图和/或流程图中的方框的组合,可以用执行规定的功能或动作的专用的基于硬件的系统来实现,或者可以用专用硬件与计算机指令的组合来实现。
另外,在本文各个实施方式中的各功能模块可以集成在一起形成一个独立的部分,也可以是各个模块单独存在,也可以两个或两个以上模块集成形成一个独立的部分。
上述描述仅是对本发明较佳实施方式的描述,并非对本发明范围的任何限定,本发明领域的普通技术人员根据上述揭示内容做的任何变更、修饰,均属于权利要求书的保护范围。显然,本领域的技术人员可以对本发明进行各种改动和变型而不脱离本发明的精神和范围。这样,倘若本发明的这些修改和变型属于本发明权利要求及其等同技术的范围之内,则本发明也意图包括这些改动和变型在内。

Claims (32)

  1. 一种荧光内窥镜系统,其特征在于,包括内窥镜、照明模块、内窥镜驱动模块和场景融合模块;
    所述荧光内窥镜系统的工作模式包括第一模式和第二模式;
    所述内窥镜包括可见光图像传感器和近红外光图像传感器;
    在第一模式下,所述照明模块用于提供可见光,以照明目标组织,并用于提供激励光,以激励目标组织产生荧光,所述可见光图像传感器用于获取所述目标组织的可见光场景图像,并以第一视频流的形式输出,所述近红外光图像传感器用于获取所述目标组织的荧光场景图像,并以第二视频流的形式输出;
    在第二模式下,所述照明模块用于提供可见光和近红外光,以照明目标组织,所述可见光图像传感器用于获取所述目标组织的可见光场景图像,并以第一视频流的形式输出,所述近红外光图像传感器用于获取所述目标组织的近红外光场景图像,并以第二视频流的形式输出;
    所述内窥镜驱动模块包括第一驱动单元和第二驱动单元,所述第一驱动单元用于在第二模式下根据第一曝光时间和第一增益驱动所述可见光图像传感器获取所述可见光场景图像,所述第二驱动单元用于在第二模式下根据第二曝光时间和第二增益驱动所述近红外光图像传感器获取所述近红外光场景图像;
    所述场景融合模块用于当所述第一视频流的当前亮度在预设的第一目标亮度范围内和/或所述第二视频流的当前亮度在预设的第二目标亮度范围内时,基于所述第一视频流的当前帧图像的亮度信息与所述第二视频流的当前帧图像的亮度信息,将所述第一视频流的当前帧图像与所述第二视频流的当前帧图像进行融合,得到亮度融合图像,并基于所述第一视频流的当前帧图像的色度信息,将所述亮度融合图像与所述第一视频流的当前帧图像进行融合,获得场景融合图像。
  2. 根据权利要求1所述的荧光内窥镜系统,其特征在于,所述荧光内窥 镜系统还包括内窥镜控制模块,所述内窥镜控制模块包括第一控制单元和/或第二控制单元,所述第一控制单元用于在第二模式下使所述第一视频流的当前亮度在预设的第一目标亮度范围内;所述第二控制单元用于在第二模式下使所述第二视频流的当前亮度在预设的第二目标亮度范围内。
  3. 根据权利要求2所述的荧光内窥镜系统,其特征在于,
    所述第一控制单元包括:
    第一亮度获取部,用于获取所述第一视频流的当前亮度;以及
    第一曝光控制部,用于判断所述第一视频流的当前亮度是否在所述第一目标亮度范围内,如果所述第一视频流的当前亮度不在所述第一目标亮度范围内,则调整所述可见光图像传感器的第一曝光量,以使所述第一视频流的当前亮度在所述第一目标亮度范围内;
    所述第二控制单元包括:
    第二亮度获取部,用于获取所述第二视频流的当前亮度;以及
    第二曝光控制部,用于判断所述第二视频流的当前亮度是否在所述第二目标亮度范围内,如果所述第二视频流的当前亮度不在所述第二目标亮度范围内,则调整所述近红外光图像传感器的第二曝光量,以使所述第二视频流的当前亮度在所述第二目标亮度范围内。
  4. 根据权利要求3所述的荧光内窥镜系统,其特征在于,当所述第一视频流的当前亮度低于所述第一目标亮度范围的下限值时,所述第一曝光控制部通过增大所述可见光图像传感器的第一曝光量以调节所述第一视频流的当前亮度;
    当所述第二视频流的当前亮度低于所述第二目标亮度范围的下限值时,所述第二曝光控制部通过增大所述近红外光图像传感器的第二曝光量以调节所述第二视频流的当前亮度。
  5. 根据权利要求4所述的荧光内窥镜系统,其特征在于,
    所述第一曝光控制部被配置为判断最大第一曝光时间和最小第一增益是否能够满足所述第一目标亮度范围内曝光量需求;
    如果满足需求,则以最小第一增益为基础,调整所述第一曝光时间以使 所述第一视频流的当前亮度处于所述第一目标亮度范围内;
    如果不满足需求,则判断最大第一曝光时间和最大第一增益是否能够满足所述第一目标亮度范围内曝光量需求,如果满足,则以所述最大第一曝光时间为基础,调整所述第一增益,以使所述第一视频流的当前亮度处于所述第一目标亮度范围内;
    所述第二曝光控制部被配置为判断最大第二曝光时间和最小第二增益是否能够满足所述第二目标亮度范围内曝光量需求;
    如果满足需求,则以所述最小第二增益为基础,调整所述第二曝光时间以使所述第二视频流的当前亮度处于所述第二目标亮度范围内;
    如果不满足需求,则判断最大第二曝光时间和最大第二增益是否能够满足所述第二目标亮度范围内曝光量需求,如果满足,则以所述最大第二曝光时间为基础,调整所述第二增益,以使所述第二视频流的当前亮度处于所述第二目标亮度范围内。
  6. 根据权利要求3所述的荧光内窥镜系统,其特征在于,当所述第一视频流的当前亮度高于所述第一目标亮度范围的上限值时,所述第一曝光控制部通过减小所述可见光图像传感器的第一曝光量以调节所述第一视频流的当前亮度;
    当所述第二视频流的当前亮度高于所述第二目标亮度范围的上限值时,所述第二曝光控制部通过减小所述近红外光图像传感器的第二曝光量以调节所述第二视频流的当前亮度。
  7. 根据权利要求6所述的荧光内窥镜系统,其特征在于,
    所述第一曝光控制部被配置为判断最大第一曝光时间和最小第一增益是否能够满足所述第一目标亮度范围内曝光量需求;
    如果满足需求,则以所述最大第一曝光时间为基础,调整所述第一增益,以使所述第一视频流的当前亮度处于所述第一目标亮度范围内;
    如果不满足需求,则判断最小第一曝光时间和最小第一增益是否满足所述第一目标亮度范围内曝光量需求,如果满足,则以所述最小第一增益为基础,调整所述第一曝光时间,以使所述第一视频流的当前亮度处于所述第一 目标亮度范围内;
    所述第二曝光控制部被配置为判断最大第二曝光时间和最小第二增益是否能够满足所述第二目标亮度范围内曝光量需求;
    如果满足需求,则以所述最大第二曝光时间为基础,调整所述第二增益,以使所述第二视频流的当前亮度处于所述第二目标亮度范围内;
    如果不满足需求,则判断最小第二曝光时间和最小第二增益是否满足所述第二目标亮度范围内曝光量需求,如果满足,则以所述最小第二增益为基础,调整所述第二曝光时间,以使所述第二视频流的当前亮度处于所述第二目标亮度范围内。
  8. 根据权利要求3所述的荧光内窥镜系统,其特征在于,所述照明模块包括用于提供所述可见光的第一光源模组和用于提供近红外光的第三光源模组;所述第一控制单元还包括第一照明调节部,所述第一照明调节部用于当通过调整所述第一增益和所述第一曝光时间不能使所述第一视频流的当前亮度在所述第一目标亮度范围内时,控制所述照明模块调整所述第一光源模组的输出功率,以使所述第一视频流的当前亮度在所述第一目标亮度范围内;
    所述第二控制单元还包括第二照明调节部,所述第二照明调节部用于当通过调整所述第二增益和所述第二曝光时间不能使所述第二视频流的当前亮度在所述第二目标亮度范围内时,控制所述照明模块调整所述第三光源模组的输出功率,以使所述第二视频流的当前亮度在所述第二目标亮度范围内。
  9. 根据权利要求3所述的荧光内窥镜系统,其特征在于,
    若所述第一视频流为YUV编码或YC bC r编码,所述第一亮度获取部用于取所述第一视频流当前帧图像的每个或者部分像素点的Y值的均值或者加权值作为当前亮度;
    若所述第一视频流为RAW编码或RGB编码,所述第一亮度获取部用于根据第一视频流当前帧图像的像素点的RGB值获取像素点的亮度,然后取第一视频流当前帧图像的每个或者部分像素点的亮度的均值或者加权值作为当前亮度。
  10. 根据权利要求1所述的荧光内窥镜系统,其特征在于,所述内窥镜 还包括分光棱镜组,所述分光棱镜组用于在第二模式下,将照射至目标组织的可见光的反射光和近红外光的反射光进行分离,在第一模式下将照射至目标组织的可见光的反射光和受激励产生的荧光进行分离,以使得所述可见光的反射光能被所述可见光图像传感器的感光面所捕获,所述近红外光的反射光或所述荧光能被所述近红外光图像传感器的感光面所捕获。
  11. 根据权利要求10所述的荧光内窥镜系统,其特征在于,所述分光棱镜组包括第一分光棱镜、第二分光棱镜、可见光带通滤光片和近红外带通滤光片,所述可见光带通滤光片用于允许可见光通过,并截止其他波长的光,所述近红外带通滤光片用于允许近红外光通过并截止其他波长的光,所述近红外带通滤光片设置在所述第一分光棱镜和第二分光棱镜之间,所述第一分光棱镜与第二分光棱镜相邻接的面设有半透半反膜,以使入射的光一部分被反射一部分被透射,所述可见光图像传感器的感光面邻近于所述第一分光棱镜的出射面,所述可见光带通滤光片设置在所述第一分光棱镜的出射面与所述可见光图像传感器的感光面之间,所述近红外光图像传感器的感光面邻近于所述第二分光棱镜的出射面。
  12. 根据权利要求1所述的荧光内窥镜系统,其特征在于,所述场景融合模块包括:
    图像融合单元,用于基于所述第一视频流的当前帧图像的亮度信息和所述第二视频流的当前帧图像的亮度信息,将所述第一视频流的当前帧图像的像素点和对应的所述第二视频流的当前帧图像的像素点进行图像融合,以获得亮度融合图像,并基于所述第一视频流的当前帧图像的色度信息,将所述亮度融合图像和所述第一视频流的当前帧图像进行图像融合,以获得场景融合图像。
  13. 根据权利要求12所述的荧光内窥镜系统,其特征在于,所述图像融合单元用于根据预设的关于亮度值与权重的正态分布、所述第一视频流的当前帧图像的像素点的亮度以及对应的所述第二视频流的当前帧图像的像素点的亮度,分别获取所述第一视频流的当前帧图像中的像素点的亮度的权重以及对应的所述第二视频流的当前帧图像的像素点的亮度的权重;
    根据获取的所述第一视频流的当前帧图像中的像素点的亮度的权重以及对应的所述第二视频流的当前帧图像中的像素点的亮度的权重,分别对所述第一视频流的当前帧图像的像素点的亮度与对应的所述第二视频流的当前帧图像的像素点的亮度进行加权,获得亮度融合图像的像素点的亮度。
  14. 根据权利要求11所述的荧光内窥镜系统,其特征在于,所述场景融合模块还包括图像模式转换单元,所述图像模式转换单元用于当所述第一视频流的输出格式为RAW格式或者RGB格式时,将所述第一视频流的当前帧图像的像素点转换至YUV空间或者YC bC r空间内,取所述第一视频流的当前帧图像的像素点的Y值作为所述像素点的亮度,并取所述第一视频流的当前帧图像的像素点的U值、V值或者C b值、C r值作为所述第一视频流的当前帧图像的像素点的色度。
  15. 根据权利要求1所述的荧光内窥镜系统,其特征在于,所述内窥镜为三维内窥镜;
    所述可见光图像传感器包括第一可见光图像传感器和第二可见光图像传感器;
    所述近红外光图像传感器包括第一近红外光图像传感器和第二近红外光图像传感器;
    所述第一视频流包括第一可见光视频流和第二可见光视频流;
    所述第二视频流包括第一近红外光视频流和第二近红外光视频流;
    在第二模式下,所述第一可见光图像传感器用于获取所述目标组织的第一可见光场景图像,并以第一可见光视频流的形式输出,所述第二可见光图像传感器用于获取所述目标组织的第二可见光场景图像,并以第二可见光视频流的形式输出,所述第一近红外光图像传感器用于获取所述目标组织的第一近红外光场景图像,并以第一近红外光视频流的形式输出,所述第二近红外光图像传感器用于获取所述目标组织的第二近红外光场景图像,并以第二近红外光视频流的形式输出;
    所述场景融合模块用于基于所述第一可见光视频流的当前帧图像的亮度信息与所述第一近红外光视频流的当前帧图像的亮度信息,将所述第一可见 光视频流的当前帧图像与所述第一近红外光视频流的当前帧图像进行融合,得到第一亮度融合图像,并基于所述第一可见光视频流的当前帧图像的色度信息,将所述第一亮度融合图像与所述第一可见光视频流的当前帧图像进行融合,获得第一场景融合图像,以及基于所述第二可见光视频流的当前帧图像的亮度信息与所述第二近红外光视频流的当前帧图像的亮度信息,将所述第二可见光视频流的当前帧图像与所述第二近红外光视频流的当前帧图像进行融合,得到第二亮度融合图像,并基于所述第二可见光视频流的当前帧图像的色度信息,将所述第二亮度融合图像与所述第二可见光视频流的当前帧图像进行融合,获得第二场景融合图像。
  16. 根据权利要求2所述的荧光内窥镜系统,其特征在于,所述荧光内窥镜系统还包括中央控制器,所述中央控制器接受第一模式指令后,控制所述第一控制单元、所述第二控制单元根据预设于所述内窥镜控制模块的所述第一目标亮度范围、所述第二目标亮度范围调整所述第一视频流、所述第二视频流的当前亮度,或者,所述中央控制器接受第一模式指令后,将所述第一目标亮度范围、所述第二目标亮度范围分别发送至所述第一控制单元、所述第二控制单元,以使所述第一控制单元、所述第二控制单元根据预设于所述内窥镜控制模块的所述第一目标亮度范围、所述第二目标亮度范围调整所述第一视频流、所述第二视频流的当前亮度。
  17. 根据权利要求15所述的荧光内窥镜系统,其特征在于,所述荧光内窥镜系统还包括中央控制器,所述中央控制器包括视频叠加单元,所述视频叠加单元用于将所述场景融合模块输出的场景融合图像进行叠加并将生成的三维图像传递至显示器予以显示。
  18. 根据权利要求1所述的荧光内窥镜系统,其特征在于,所述第一驱动单元还用于在第一模式下根据第三曝光时间和第三增益驱动所述可见光图像传感器获取所述可见光场景图像;所述第二驱动单元还用于在第一模式下根据第四曝光时间和第四增益驱动所述近红外光图像传感器获取所述荧光场景图像。
  19. 根据权利要求2所述的荧光内窥镜系统,其特征在于,所述第一控 制单元还用于在第一模式下使所述第一视频流的当前亮度在预设的第三目标亮度范围内;和/或,所述第二控制单元还用于在第一模式下使所述第二视频流的当前亮度在预设的第四目标亮度范围内。
  20. 一种荧光内窥镜系统的控制方法,其特征在于,所述荧光内窥镜系统的工作模式包括第一模式和第二模式,所述控制方法包括:
    在第二模式下,提供可见光和近红外光,以照明目标组织;
    获取目标组织的可见光场景图像和近红外光场景图像,并分别以第一视频流和第二视频流的形式输出;
    判断所述第一视频流的当前亮度是否在预设的第一目标亮度范围内和/或所述第二视频流的当前亮度是否在预设的第二目标亮度范围内;
    若所述第一视频流的当前亮度在预设的第一目标亮度范围内和/或所述第二视频流的当前亮度在预设的第二目标亮度范围内,则
    基于所述第一视频流的当前帧图像的亮度信息与所述第二视频流的当前帧图像的亮度信息,将所述第一视频流的当前帧图像与所述第二视频流的当前帧图像进行融合,获得亮度融合图像;以及
    基于所述第一视频流的当前帧图像的色度信息,将所述第一视频流的当前帧图像与所述亮度融合图像进行图像融合,获得场景融合图像。
  21. 根据权利要求20所述的荧光内窥镜系统的控制方法,其特征在于,如果所述第一视频流的当前亮度不在所述第一目标亮度范围内,则调整第一曝光量以使所述第一视频流的当前亮度在所述第一目标亮度范围内,其中所述第一曝光量为第一曝光时间和第一增益的乘积;
    如果所述第二视频流的当前亮度不在所述第二目标亮度范围内,则调整第二曝光量,以使所述第二视频流的当前亮度在所述第二目标亮度范围内,其中所述第二曝光量为第二曝光时间和第二增益的乘积。
  22. 根据权利要求21所述的荧光内窥镜系统的控制方法,其特征在于,当所述第一视频流的当前亮度低于所述第一目标亮度范围的下限值时,增大所述第一曝光量以调节所述第一视频流的当前亮度;
    当所述第二视频流的当前亮度低于所述第二目标亮度范围的下限值时, 增大所述第二曝光量以调节所述第二视频流的当前亮度。
  23. 根据权利要求22所述的荧光内窥镜系统的控制方法,其特征在于,所述增大所述第一曝光量以调节所述第一视频流的当前亮度,包括:
    判断最大第一曝光时间和最小第一增益是否能够满足所述第一目标亮度范围内曝光量需求;
    如果满足需求,则以最小第一增益为基础,调整所述第一曝光时间以使所述第一视频流的当前亮度处于所述第一目标亮度范围内;
    如果不满足需求,则判断最大第一曝光时间和最大第一增益是否能够满足所述第一目标亮度范围内曝光量需求,如果满足,则以所述最大第一曝光时间为基础,调整所述第一增益,以使所述第一视频流的当前亮度处于所述第一目标亮度范围内;
    所述增大所述第二曝光量以调节所述第二视频流的当前亮度,包括:
    判断最大第二曝光时间和最小第二增益是否能够满足所述第二目标亮度范围内曝光量需求;
    如果满足需求,则以所述最小第二增益为基础,调整所述第二曝光时间以使所述第二视频流的当前亮度处于所述第二目标亮度范围内;
    如果不满足需求,则判断最大第二曝光时间和最大第二增益是否能够满足所述第二目标亮度范围内曝光量需求,如果满足,则以所述最大第二曝光时间为基础,调整所述第二增益,以使所述第二视频流的当前亮度处于所述第二目标亮度范围内。
  24. 根据权利要求21所述的荧光内窥镜系统的控制方法,其特征在于,当所述第一视频流的当前亮度高于所述第一目标亮度范围的上限值时,减小所述第一曝光量以调节所述第一视频流的当前亮度;
    当所述第二视频流的当前亮度高于所述第二目标亮度范围的上限值时,减小所述第二曝光量以调节所述第二视频流的当前亮度。
  25. 根据权利要求24所述的荧光内窥镜系统的控制方法,其特征在于,所述减小所述第一曝光量以调节所述第一视频流的当前亮度,包括:
    判断最大第一曝光时间和最小第一增益是否能够满足所述第一目标亮度 范围内曝光量需求;
    如果满足需求,则以所述最大第一曝光时间为基础,调整所述第一增益,以使所述第一视频流的当前亮度处于所述第一目标亮度范围内;
    如果不满足需求,则判断最小第一曝光时间和最小第一增益是否满足所述第一目标亮度范围内曝光量需求,如果满足,则以所述最小第一增益为基础,调整所述第一曝光时间,以使所述第一视频流的当前亮度处于所述第一目标亮度范围内;
    所述减小所述第二曝光量以调节所述第二视频流的当前亮度,包括:
    判断最大第二曝光时间和最小第二增益是否能够满足所述第二目标亮度范围内曝光量需求;
    如果满足需求,则以所述最大第二曝光时间为基础,调整所述第二增益,以使所述第二视频流的当前亮度处于所述第二目标亮度范围内;
    如果不满足需求,则判断最小第二曝光时间和最小第二增益是否满足所述第二目标亮度范围内曝光量需求,如果满足,则以所述最小第二增益为基础,调整所述第二曝光时间,以使所述第二视频流的当前亮度处于所述第二目标亮度范围内。
  26. 根据权利要求21所述的荧光内窥镜系统的控制方法,其特征在于,如果通过调整所述第一增益和所述第一曝光时间,不能使所述第一视频流的当前亮度在所述第一目标亮度范围内时,则调整用于提供所述可见光的第一光源模组的输出功率,以使所述第一视频流的当前亮度在所述第一目标亮度范围内;
    如果通过调整所述第二增益和所述第二曝光时间,不能使所述第二视频流的当前亮度在所述第二目标亮度范围内时,则调整用于提供所述近红外光的第三光源模组的输出功率,以使所述第二视频流的当前亮度在所述第二目标亮度范围内。
  27. 根据权利要求20所述的荧光内窥镜系统的控制方法,其特征在于,所述基于所述第一视频流的当前帧图像的亮度信息与所述第二视频流的当前帧图像的亮度信息,将所述第一视频流的当前帧图像与所述第二视频流的当 前帧图像进行融合,获得亮度融合图像,包括:
    基于所述第一视频流的当前帧图像的亮度信息与所述第二视频流的当前帧图像的亮度信息,对所述第一视频流的当前帧图像的像素点和对应的所述第二视频流的当前帧图像的像素点进行图像融合,获得亮度融合图像。
  28. 根据权利要求20所述的荧光内窥镜系统的控制方法,其特征在于,所述基于所述第一视频流的当前帧图像的色度信息,将所述第一视频流的当前帧图像与所述亮度融合图像进行图像融合,获得场景融合图像,包括:
    将所述第一视频流的当前帧图像的像素点的色度信息,赋予给所述亮度融合图像对应的像素点作为对应像素点的色度。
  29. 根据权利要求20所述的荧光内窥镜系统的控制方法,其特征在于,若所述第一视频流的输出格式为RAW格式或者RGB格式,则在将所述第一视频流的当前帧图像与所述第二视频流的当前帧图像进行融合之前,所述控制方法还包括:
    将所述第一视频流的当前帧图像转换至YUV空间或者YC bC r空间内。
  30. 根据权利要求27所述的荧光内窥镜系统的控制方法,其特征在于,所述基于所述第一视频流的当前帧图像的亮度信息与所述第二视频流的当前帧图像的亮度信息,对所述第一视频流的当前帧图像的像素点和对应的所述第二视频流的当前帧图像的像素点进行图像融合,包括:
    根据预设的关于亮度值与权重的正态分布、所述第一视频流的当前帧图像的像素点的亮度以及对应的所述第二视频流的当前帧图像的像素点的亮度,分别获取所述第一视频流的当前帧图像中的像素点的亮度的权重以及对应的所述第二视频流的当前帧图像中的像素点的亮度的权重;
    根据获取的所述第一视频流的当前帧图像中的像素点的亮度的权重以及对应的所述第二视频流的当前帧图像中的像素点的亮度的权重,分别对所述第一视频流的当前帧图像的像素点的亮度与对应的所述第二视频流的当前帧图像的像素点的亮度进行加权,获得亮度融合图像的像素点的亮度。
  31. 根据权利要求20所述的荧光内窥镜系统的控制方法,所述内窥镜为三维内窥镜;
    所述获取目标组织的可见光场景图像和近红外光场景图像,并分别以第一视频流和第二视频流的形式输出,包括:
    获取目标组织的第一可见光场景图像、第二可见光场景图像、第一近红外光场景图像和第二近红外光场景图像,并分别以第一可见光视频流、第二可见光视频流、第一近红外光视频流和第二近红外光视频流的形式输出;
    所述判断所述第一视频流的当前亮度是否在预设的第一目标亮度范围内,包括:
    判断所述第一可见光视频流和/或所述第二可见光视频流的当前亮度是否在预设的第一目标亮度范围内;
    所述判断所述第二视频流的当前亮度是否在预设的第二目标亮度范围内,包括:
    判断所述第一近红外光视频流和/或所述第二近红外光视频流的当前亮度是否在预设的第二目标亮度范围内;
    所述基于所述第一视频流的当前帧图像的亮度信息与所述第二视频流的当前帧图像的亮度信息,将所述第一视频流的当前帧图像与所述第二视频流的当前帧图像进行融合,获得亮度融合图像,包括:
    基于所述第一可见光视频流的当前帧图像的亮度信息与所述第一近红外光视频流的当前帧图像的亮度信息,将所述第一可见光视频流的当前帧图像与所述第一近红外光视频流的当前帧图像进行融合,获得第一亮度融合图像,基于所述第二可见光视频流的当前帧图像的亮度信息与所述第二近红外光视频流的当前帧图像的亮度信息,将所述第二可见光视频流的当前帧图像与所述第二近红外光视频流的当前帧图像进行融合,获得第二亮度融合图像;
    所述基于所述第一视频流的当前帧图像的色度信息,将所述第一视频流的当前帧图像与所述亮度融合图像进行图像融合,获得场景融合图像,包括:
    基于所述第一可见光视频流的当前帧图像的色度信息,将所述第一可见光视频流的当前帧图像与所述第一亮度融合图像进行图像融合,获得第一场景融合图像,基于所述第二可见光视频流的当前帧图像的色度信息,将所述第二可见光视频流的当前帧图像与所述第二亮度融合图像进行图像融合,获 得第二场景融合图像。
  32. 一种存储介质,其特征在于:所述存储介质内存储有计算机程序,所述计算机程序在被处理器执行时实现如权利要求20至31中任一项所述的控制方法。
PCT/CN2021/131950 2020-11-20 2021-11-19 荧光内窥镜系统、控制方法和存储介质 WO2022105902A1 (zh)

Priority Applications (1)

Application Number Priority Date Filing Date Title
EP21894049.2A 2020-11-20 2021-11-19 FLUORESCENCE ENDOSCOPIC SYSTEM, CONTROL METHODS AND STORAGE MEDIUM

Applications Claiming Priority (2)

Application Number Priority Date Filing Date Title
CN202011309665.2 2020-11-20
CN202011309665.2A CN114027765B (zh) 2020-11-20 2020-11-20 荧光内窥镜系统、控制方法和存储介质

Publications (1)

Publication Number Publication Date
WO2022105902A1 true WO2022105902A1 (zh) 2022-05-27

Family

ID=80134135

Family Applications (1)

Application Number Title Priority Date Filing Date
PCT/CN2021/131950 WO2022105902A1 (zh) 2020-11-20 2021-11-19 荧光内窥镜系统、控制方法和存储介质

Country Status (3)

Country Link
EP (1) EP4248835A4 (zh)
CN (1) CN114027765B (zh)
WO (1) WO2022105902A1 (zh)

Cited By (1)

* Cited by examiner, † Cited by third party
Publication number Priority date Publication date Assignee Title
CN115100147A (zh) * 2022-06-24 2022-09-23 华中科技大学协和深圳医院 一种智能切换的脊柱内镜系统、装置和计算机可读介质

Citations (12)

* Cited by examiner, † Cited by third party
Publication number Priority date Publication date Assignee Title
US20120268573A1 (en) * 2009-06-10 2012-10-25 W.O.M. World Of Medicine Ag Imaging system and method for the fluorescence-optical visualization of an object
CN103300812A (zh) * 2013-06-27 2013-09-18 中国科学院自动化研究所 基于内窥镜的多光谱视频导航系统和方法
KR20180006668A (ko) * 2016-07-11 2018-01-19 을지대학교 산학협력단 복강경 수술용 융합영상장치
CN109618099A (zh) * 2019-01-10 2019-04-12 深圳英飞拓科技股份有限公司 双光谱摄像机图像融合方法及装置
CN109924938A (zh) * 2019-03-26 2019-06-25 华中科技大学苏州脑空间信息研究院 外置式双光源阴道镜成像系统
WO2019191497A1 (en) * 2018-03-30 2019-10-03 Blaze Bioscience, Inc. Systems and methods for simultaneous near-infrared light and visible light imaging
CN110811498A (zh) * 2019-12-19 2020-02-21 中国科学院长春光学精密机械与物理研究所 可见光和近红外荧光3d融合图像内窥镜系统
CN110893095A (zh) * 2018-09-12 2020-03-20 上海逸思医学影像设备有限公司 一种用于可见光和激发荧光实时成像的系统和方法
WO2020198315A1 (en) * 2019-03-26 2020-10-01 East Carolina University Near-infrared fluorescence imaging for blood flow and perfusion visualization and related systems and computer program products
CN111803013A (zh) * 2020-07-21 2020-10-23 深圳市博盛医疗科技有限公司 一种内窥镜成像方法和内窥镜成像系统
CN111818707A (zh) * 2020-07-20 2020-10-23 浙江华诺康科技有限公司 荧光内窥镜曝光参数调整的方法、设备和荧光内窥镜
CN111948798A (zh) * 2020-08-21 2020-11-17 微创(上海)医疗机器人有限公司 内窥镜系统及用于检测内窥镜的末端与组织接触的方法

Family Cites Families (5)

* Cited by examiner, † Cited by third party
Publication number Priority date Publication date Assignee Title
US8498695B2 (en) * 2006-12-22 2013-07-30 Novadaq Technologies Inc. Imaging system with a single color image sensor for simultaneous fluorescence and color video endoscopy
CN107744382A (zh) * 2017-11-20 2018-03-02 北京数字精准医疗科技有限公司 光学分子影像导航系统
CN108040243B (zh) * 2017-12-04 2019-06-14 南京航空航天大学 多光谱立体视觉内窥镜装置及图像融合方法
CN108198161A (zh) * 2017-12-29 2018-06-22 深圳开立生物医疗科技股份有限公司 一种双摄像头图像的融合方法、装置及设备
CN108185974A (zh) * 2018-02-08 2018-06-22 北京数字精准医疗科技有限公司 一种内窥式荧光超声融合造影导航系统

Cited By (2)

* Cited by examiner, † Cited by third party
Publication number Priority date Publication date Assignee Title
CN115100147A (zh) * 2022-06-24 2022-09-23 华中科技大学协和深圳医院 一种智能切换的脊柱内镜系统、装置和计算机可读介质
CN115100147B (zh) * 2022-06-24 2023-10-24 华中科技大学协和深圳医院 一种智能切换的脊柱内镜系统、装置和计算机可读介质

Also Published As

Publication number Publication date
EP4248835A4 (en) 2024-04-17
CN114027765B (zh) 2023-03-24
EP4248835A1 (en) 2023-09-27
CN114027765A (zh) 2022-02-11

Similar Documents

Publication Publication Date Title
EP3459428B1 (en) Endoscope and endoscopic system
EP2481342B1 (en) Electronic endoscope system
JP7140464B2 (ja) 画像処理システム、蛍光内視鏡照明撮像装置及び撮像方法
CN107518867B (zh) 光源装置及内窥镜系统
JP5808031B2 (ja) 内視鏡システム
JP5757891B2 (ja) 電子内視鏡システム、画像処理装置、画像処理装置の作動方法及び画像処理プログラム
JP6072374B2 (ja) 観察装置
WO2018159083A1 (ja) 内視鏡システム、プロセッサ装置、及び、内視鏡システムの作動方法
JP5892985B2 (ja) 内視鏡システム及びプロセッサ装置並びに作動方法
JP6840737B2 (ja) 内視鏡システム、プロセッサ装置、及び、内視鏡システムの作動方法
WO2017183324A1 (ja) 内視鏡システム、プロセッサ装置、及び、内視鏡システムの作動方法
WO2016056157A1 (ja) 色分解プリズム及び撮像装置
CN109381154B (zh) 内窥镜系统
WO2022105902A1 (zh) 荧光内窥镜系统、控制方法和存储介质
CN110893096A (zh) 一种基于图像曝光的多光谱成像系统和方法
CN110974133B (zh) 内窥镜系统
JP6293392B1 (ja) 生体観察システム
JP6905038B2 (ja) 光源装置及び内視鏡システム
JP2007097710A (ja) 電子内視鏡装置
JP2022027195A (ja) 3板式カメラ
JP6669539B2 (ja) 画像処理装置、画像処理装置の作動方法、および画像処理プログラム
WO2023112916A1 (ja) 映像信号処理装置、映像信号処理方法および映像信号処理システム
JP2005152130A (ja) 内視鏡撮像システム

Legal Events

Date Code Title Description
121 Ep: the epo has been informed by wipo that ep was designated in this application

Ref document number: 21894049

Country of ref document: EP

Kind code of ref document: A1

NENP Non-entry into the national phase

Ref country code: DE

ENP Entry into the national phase

Ref document number: 2021894049

Country of ref document: EP

Effective date: 20230620