WO2022257946A1 - Multispectral imaging system, imaging method and storage medium - Google Patents

Multispectral imaging system, imaging method and storage medium

Info

Publication number
WO2022257946A1
Authority
WO
WIPO (PCT)
Prior art keywords
image
visible light
fluorescence
module
target tissue
Application number
PCT/CN2022/097521
Other languages
English (en)
French (fr)
Inventor
张葵阳
何超
曹伦
Original Assignee
上海微觅医疗器械有限公司
Application filed by 上海微觅医疗器械有限公司
Publication of WO2022257946A1

Classifications

    • A - HUMAN NECESSITIES
    • A61 - MEDICAL OR VETERINARY SCIENCE; HYGIENE
    • A61B - DIAGNOSIS; SURGERY; IDENTIFICATION
    • A61B 5/00 - Measuring for diagnostic purposes; Identification of persons
    • A61B 5/0059 - Measuring for diagnostic purposes; Identification of persons using light, e.g. diagnosis by transillumination, diascopy, fluorescence
    • A61B 5/0075 - Measuring for diagnostic purposes; Identification of persons using light, e.g. diagnosis by transillumination, diascopy, fluorescence, by spectroscopy, i.e. measuring spectra, e.g. Raman spectroscopy, infrared absorption spectroscopy
    • A - HUMAN NECESSITIES
    • A61 - MEDICAL OR VETERINARY SCIENCE; HYGIENE
    • A61B - DIAGNOSIS; SURGERY; IDENTIFICATION
    • A61B 1/00 - Instruments for performing medical examinations of the interior of cavities or tubes of the body by visual or photographical inspection, e.g. endoscopes; Illuminating arrangements therefor
    • A61B 1/00002 - Operational features of endoscopes
    • A61B 1/00004 - Operational features of endoscopes characterised by electronic signal processing
    • A61B 1/00009 - Operational features of endoscopes characterised by electronic signal processing of image signals during a use of endoscope
    • A - HUMAN NECESSITIES
    • A61 - MEDICAL OR VETERINARY SCIENCE; HYGIENE
    • A61B - DIAGNOSIS; SURGERY; IDENTIFICATION
    • A61B 1/00 - Instruments for performing medical examinations of the interior of cavities or tubes of the body by visual or photographical inspection, e.g. endoscopes; Illuminating arrangements therefor
    • A61B 1/06 - Instruments for performing medical examinations of the interior of cavities or tubes of the body, with illuminating arrangements
    • A61B 1/0646 - Instruments with illuminating arrangements, with illumination filters
    • A - HUMAN NECESSITIES
    • A61 - MEDICAL OR VETERINARY SCIENCE; HYGIENE
    • A61B - DIAGNOSIS; SURGERY; IDENTIFICATION
    • A61B 1/00 - Instruments for performing medical examinations of the interior of cavities or tubes of the body by visual or photographical inspection, e.g. endoscopes; Illuminating arrangements therefor
    • A61B 1/06 - Instruments for performing medical examinations of the interior of cavities or tubes of the body, with illuminating arrangements
    • A61B 1/07 - Instruments with illuminating arrangements using light-conductive means, e.g. optical fibres
    • A - HUMAN NECESSITIES
    • A61 - MEDICAL OR VETERINARY SCIENCE; HYGIENE
    • A61B - DIAGNOSIS; SURGERY; IDENTIFICATION
    • A61B 5/00 - Measuring for diagnostic purposes; Identification of persons
    • A61B 5/0059 - Measuring for diagnostic purposes; Identification of persons using light, e.g. diagnosis by transillumination, diascopy, fluorescence
    • A61B 5/0071 - Measuring for diagnostic purposes; Identification of persons using light, by measuring fluorescence emission

Definitions

  • The present application relates to the technical field of optical imaging, and in particular to a multispectral imaging system, imaging method and storage medium.
  • Endoscopes, as detection instruments integrating traditional optics, ergonomics, precision machinery, modern electronics, mathematics and software, have become more and more widely used.
  • An endoscope can enter the body of the subject (for example, the esophagus) to obtain images of the site to be examined, which can then be used to determine whether a lesion is present.
  • An endoscope can reveal lesions that X-rays cannot, making it very useful to doctors.
  • However, the endoscopes most widely used in minimally invasive surgical robots image only in the visible-light band, which in some operations makes it impossible to effectively distinguish lesions from normal tissue and thus fails to provide the surgeon with adequate guidance and prompts.
  • The purpose of this application is to provide a multispectral imaging system, imaging method and storage medium that can obtain clear multi-band fused images.
  • The present application provides a multispectral imaging system, including an illumination module, a lens module, an image acquisition module and an image processing module;
  • the illumination module is used to emit visible light and excitation light to the target tissue in a time-division manner, so that the target tissue reflects the visible light and, when excited by the excitation light, emits fluorescence;
  • the lens module is used to collect, in a time-division manner, the visible light reflected by the target tissue and the fluorescence emitted by the target tissue when excited;
  • the image acquisition module is used to receive, in a time-division manner, the visible light and fluorescence collected by the lens module, so as to obtain visible light images and fluorescence images;
  • the image processing module is used to process the visible light image and the fluorescence image to obtain a fused image.
  • The image acquisition module is configured to output the visible light images and the fluorescence images in the form of a visible light video stream and a fluorescence video stream, respectively;
  • the image processing module is configured to perform image signal processing on each frame of visible light image in the visible light video stream and each frame of fluorescence image in the fluorescence video stream, to fuse the processed visible light image and fluorescence image of corresponding frames to obtain a fused image, and to output the fused images in the form of a video stream.
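  • As a rough illustration of this per-frame pairing, the following Python sketch pairs corresponding frames of the two video streams and fuses them; the function bodies are hypothetical placeholders, not interfaces defined by this application:

```python
import numpy as np

# Hypothetical stand-ins for the first/second image signal processing
# chains; this application does not define these interfaces.
def isp_visible(raw: np.ndarray) -> np.ndarray:
    return raw  # placeholder: dark current, demosaic, gamma, etc. would go here

def isp_fluorescence(raw: np.ndarray) -> np.ndarray:
    return raw  # placeholder: dark current, gamma, denoise, etc.

def fuse(rgb: np.ndarray, fluo: np.ndarray, thresh: int = 128) -> np.ndarray:
    # Mark pixels with a strong fluorescence signal (assumed threshold).
    out = rgb.copy()
    out[fluo > thresh] = (0, 255, 0)
    return out

def fuse_streams(visible_stream, fluorescence_stream):
    # Pair corresponding frames of the two video streams and fuse them.
    for vis_frame, fluo_frame in zip(visible_stream, fluorescence_stream):
        yield fuse(isp_visible(vis_frame), isp_fluorescence(fluo_frame))
```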
  • The multispectral imaging system further includes a band-pass filter module arranged between the lens module and the image acquisition module; the band-pass filter module is used to allow the visible light and the fluorescence to pass through, and to block light of all other wavebands;
  • the image acquisition module is used to receive, in a time-division manner, the visible light and the fluorescence passing through the band-pass filter module, so as to acquire the visible light image and the fluorescence image.
  • The multispectral imaging system further includes a prism module, and the band-pass filter module is located between the lens module and the prism module.
  • The image acquisition module includes an RGBNIR image sensor, which is used to receive, in a time-division manner, the visible light and fluorescence collected by the lens module, so as to acquire visible light images and fluorescence images.
  • The illumination module includes a light source unit and an illumination controller; under the control of the illumination controller, the light source unit can emit visible light and excitation light to the target tissue in a time-division manner.
  • The light source unit includes a first light source module and a second light source module; the first light source module is used to emit visible light to the target tissue, and the second light source module is used to emit excitation light to the target tissue.
  • The illumination controller includes a first control unit, a second control unit and a third control unit;
  • the first control unit is used to control the output energy intensity of the first light source module and the second light source module;
  • the second control unit is used to control the switching on and off of the first light source module and the second light source module;
  • the third control unit is used to control the turn-on frequency of the first light source module and the second light source module.
  • The image processing module includes a visible light image processing unit, a fluorescence image processing unit, a binarization processing unit and an image fusion unit;
  • the visible light image processing unit is configured to perform first image signal processing on the visible light image;
  • the fluorescence image processing unit is configured to perform second image signal processing on the fluorescence image;
  • the binarization processing unit is configured to binarize the fluorescence image after the second image signal processing to obtain a corresponding mask;
  • the image fusion unit is configured to fuse the mask with the visible light image after the first image signal processing to obtain a fused image.
  • Specifically, the image fusion unit is configured to color and mark the pixels in the processed visible light image that correspond to pixels whose value in the mask is not 0, to obtain the fused image.
  • The lens module includes a first lens and a second lens;
  • the first lens is used to collect, in a time-division manner along a first optical path, the visible light reflected by the target tissue and the fluorescence emitted by the target tissue;
  • the second lens is used to collect, in a time-division manner along a second optical path, the visible light reflected by the target tissue and the fluorescence emitted by the target tissue;
  • the image acquisition module includes a first image acquisition unit and a second image acquisition unit;
  • the first image acquisition unit is used to receive, in a time-division manner, the visible light and fluorescence collected by the first lens to acquire a first visible light image and a first fluorescence image;
  • the second image acquisition unit is used to receive, in a time-division manner, the visible light and fluorescence collected by the second lens to acquire a second visible light image and a second fluorescence image;
  • the image processing module includes a first image processing unit, a second image processing unit, and a superposition unit; the first image processing unit is configured to process the first visible light image and the first fluorescence image to obtain a first fused image;
  • the second image processing unit is configured to process the second visible light image and the second fluorescence image to obtain a second fused image;
  • the superposition unit is configured to register the first fused image with the second fused image, and to superimpose the registered first fused image and second fused image to generate and output a three-dimensional image.
  • The present application also provides a multispectral imaging method, comprising:
  • processing the visible light image and the fluorescence image to obtain a fused image.
  • The processing of the visible light image and the fluorescence image to obtain a fused image includes:
  • fusing the mask with the visible light image after the first image signal processing to obtain the fused image.
  • The fusing of the mask with the visible light image after the first image signal processing to obtain the fused image includes:
  • The receiving, in a time-division manner, of the visible light reflected by the target tissue and the fluorescence emitted by the target tissue upon excitation, so as to obtain a visible light image and a fluorescence image, includes:
  • receiving, in a time-division manner and along a first optical path and a second optical path respectively, the visible light reflected by the target tissue and the fluorescence emitted by the target tissue, so as to obtain a first visible light image, a second visible light image, a first fluorescence image and a second fluorescence image;
  • The processing of the visible light image and the fluorescence image to obtain a fused image includes:
  • The multispectral imaging method further includes:
  • The present application also provides a readable storage medium storing a computer program which, when executed by a processor, implements the multispectral imaging method described above.
  • The multispectral imaging system, imaging method and storage medium provided by the present application have the following advantages:
  • This application emits visible light and excitation light to the target tissue in a time-division manner to obtain visible light images and fluorescence images in a time-division manner, and then processes the visible light images and the fluorescence images to obtain multi-band fused images.
  • The obtained multi-band fused image can distinguish different tissue states, enabling doctors to observe tissue information that cannot be observed under single-band conditions.
  • In this way, the doctor can clearly see the difference between the lesion and normal tissue, and can see the details of that difference more clearly, so that tissue can be cut more accurately and safely.
  • This application uses a band-pass filter module to block stray light so that only visible light and fluorescence pass through, thereby effectively improving the signal-to-noise ratio of the input signal and further improving the image quality of the resulting multi-band fused image.
  • This application adopts an RGBNIR image sensor with a high quantum efficiency (QE) in the near-infrared band to collect visible light images and fluorescence images in a time-division manner, so that high-quality visible light and fluorescence images can be obtained, further improving the image quality of the resulting multi-band fused image.
  • This application can also enable doctors to see the three-dimensional information of the target tissue in the surgical field of view, providing more realistic and clearer visual effects; this is more conducive to the doctor's surgical judgment and accurate control of the instruments, greatly improving operating efficiency and intraoperative safety.
  • The time-division control adopted in this application improves the flexibility of control on the system software side; on the system hardware side, complexity is greatly reduced, making the whole system more compact and easier to integrate into endoscopes and minimally invasive surgical robots.
  • FIG. 1 is a schematic block diagram of a multispectral imaging system in an embodiment of the present application
  • FIG. 2 is a schematic cross-sectional view of the proximal end of the endoscope tube in an embodiment of the present application
  • FIG. 3 is a schematic perspective view of the proximal end of the endoscope tube in an embodiment of the present application
  • FIG. 4 is a schematic cross-sectional view of the proximal end of the endoscope tube in an embodiment of the present application
  • FIG. 5 is a schematic diagram of the first imaging optical path in an embodiment of the present application.
  • FIG. 6 is a schematic diagram of a second imaging optical path in an embodiment of the present application.
  • FIG. 7 is a schematic diagram of the quantum efficiency of the RGBNIR image sensor for light of different wavelength bands in an embodiment of the present application.
  • FIG. 8 is a schematic diagram of the connection between the illumination module and the image acquisition module in an embodiment of the present application.
  • FIG. 9 is a spectrogram of a bandpass filter module in an embodiment of the present application.
  • FIG. 10 is a schematic block diagram of an image processing module in an embodiment of the present application.
  • FIG. 11 is a schematic diagram of the workflow of the visible light image processing unit in an embodiment of the present application.
  • FIG. 12 is a schematic diagram of the workflow of the fluorescence image processing unit in an embodiment of the present application.
  • FIG. 13 is a schematic block diagram of a first image processing unit in an embodiment of the present application.
  • FIG. 14 is a schematic block diagram of a second image processing unit in an embodiment of the present application.
  • FIG. 15 is a schematic diagram of the superimposition of the first video stream and the second video stream in an embodiment of the present application.
  • FIG. 16 is a schematic diagram of the superimposition of the first video stream and the second video stream in another embodiment of the present application.
  • FIG. 17 is a schematic flowchart of a multispectral imaging method in an embodiment of the present application.
  • FIG. 18 is a schematic diagram of an image fusion process in an embodiment of the present application.
  • Lens module-200: first lens-210; second lens-220; first lens elements-211; second lens elements-221;
  • Image acquisition module-300: first image acquisition unit-310; second image acquisition unit-320; photosensitive surfaces-311, 321;
  • Image processing module-400: visible light image processing unit-410; fluorescence image processing unit-420; binarization processing unit-430; image fusion unit-440; first image processing unit-450; second image processing unit-460; superposition unit-470; first visible light image processing unit-410a; first fluorescence image processing unit-420a; first binarization processing unit-430a; first image fusion unit-440a; second visible light image processing unit-410b; second fluorescence image processing unit-420b; second binarization processing unit-430b; second image fusion unit-440b;
  • Band-pass filter module-600: first band-pass filter-610; second band-pass filter-620;
  • Prism module-700: first prism-710; second prism-720;
  • First video stream-810; second video stream-820.
  • The main purpose of this application is to provide a multispectral imaging system, imaging method and storage medium that can obtain clear multi-band fused images. It should be noted that although the present application takes an endoscope as an example for illustration, as those skilled in the art can understand, the multispectral imaging system, imaging method and storage medium can also be applied to other devices with imaging functions, such as security inspection equipment, which is not limited in this application.
  • The multispectral imaging method in the embodiments of the present application can be applied to the multispectral imaging system in the embodiments of the present application.
  • The proximal end referred to in this application is the end close to the patient, and the distal end is the end close to the operator.
  • The present application provides a multispectral imaging system; please refer to FIG. 1, which schematically shows a block diagram of the multispectral imaging system provided by an embodiment of the present application.
  • The multispectral imaging system includes an illumination module 100, a lens module 200, an image acquisition module 300 and an image processing module 400.
  • The illumination module 100 is used to emit visible light and excitation light to the target tissue in a time-division manner, so that the target tissue reflects the visible light and, when excited by the excitation light, emits fluorescence.
  • The wavelength of the visible light may be 400-690 nm, the wavelength of the excitation light may be 780-820 nm, and the wavelength of the fluorescence may be 820-860 nm.
  • The present application places no particular limitation on the specific position of the illumination module 100.
  • The visible light and excitation light provided by the illumination module 100 can be delivered to the end of the endoscope through a connector accommodated in the illumination channel 500 of the endoscope, and thus reach the target tissue.
  • The connector is, for example, an optical fiber; transmitting the visible light and excitation light through an optical fiber helps form a uniform light field and improves imaging quality.
  • Please refer to FIG. 2 and FIG. 3, wherein FIG. 2 schematically shows a cross-sectional view of the proximal end of the endoscope tube provided by an embodiment of the present application, and FIG. 3 schematically shows a perspective view of the same.
  • The illumination channel 500 includes two connectors (e.g., optical fibers) symmetrically distributed on both sides of the lens module 200.
  • The illumination module 100 includes a light source unit 110 and an illumination controller 120.
  • Under the control of the illumination controller 120, the light source unit 110 can emit visible light and excitation light to the target tissue in a time-division manner.
  • That is, the operator can operate the illumination controller 120 so that the light source unit 110 emits visible light and excitation light to the target tissue in a time-division manner.
  • The light source unit 110 includes a first light source module 111 and a second light source module 112; the first light source module 111 is used to emit visible light to the target tissue, and the second light source module 112 is used to emit excitation light to the target tissue.
  • When the first light source module 111 is turned on and the second light source module 112 is turned off, visible light is emitted to the target tissue through the first light source module 111; when the second light source module 112 is turned on and the first light source module 111 is turned off, excitation light is emitted to the target tissue through the second light source module 112.
  • In this way, the light source unit 110 emits visible light and excitation light to the target tissue in a time-division manner.
  • The illumination controller 120 includes a first control unit 121, a second control unit 122 and a third control unit 123. The first control unit 121 is used to control the output energy intensity of the first light source module 111 and the second light source module 112; the second control unit 122 is used to control the switching on and off of the first light source module 111 and the second light source module 112; and the third control unit 123 is used to control the turn-on frequency of the first light source module 111 and the second light source module 112. Thus, the energy intensity of the visible light output by the first light source module 111 and of the excitation light output by the second light source module 112 can be adjusted by the first control unit 121 according to actual needs, further improving image quality.
  • The working state of the first light source module 111 and the second light source module 112 can be controlled by the second control unit 122, so that the light source unit 110 emits visible light and excitation light to the target tissue at different times.
  • The turn-on frequency of the first light source module 111 and the second light source module 112 can be controlled by the third control unit 123, so that the light source unit 110 emits visible light and excitation light to the target tissue at a set frequency in a time-division manner, as sketched below.
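  • A minimal sketch of this time-division control strategy follows; the light-source and camera objects are hypothetical placeholders, and the 30 Hz alternation rate is an assumption, not a value from this application:

```python
import time

class TimeDivisionController:
    """Alternate visible and excitation light at a fixed frequency and
    trigger the camera in sync, mirroring the three control units above."""

    def __init__(self, visible_source, excitation_source, camera, hz: float = 30.0):
        self.sources = [visible_source, excitation_source]
        self.camera = camera
        self.period = 1.0 / hz  # third control unit: turn-on frequency

    def run_cycle(self):
        # One visible frame followed by one fluorescence frame.
        for active, idle in (self.sources, self.sources[::-1]):
            idle.off()             # second control unit: switch off the other source
            active.on()            # (intensity would be set via the first control unit)
            self.camera.trigger()  # synchronized capture of one frame
            time.sleep(self.period)
```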
  • The lens module 200 is used to collect, in a time-division manner, the visible light reflected by the target tissue and the fluorescence emitted by the target tissue.
  • When the illumination module 100 emits visible light to the target tissue, the visible light reflected by the target tissue is collected through the lens module 200; when the illumination module 100 emits excitation light to the target tissue, the fluorescence generated by the target tissue is collected through the lens module 200.
  • FIG. 4 schematically shows a cross-sectional view of a proximal end of an endoscope tube provided by an embodiment of the present application.
  • When the endoscope is a three-dimensional endoscope, the lens module 200 includes a first lens 210 and a second lens 220; the first lens 210 is used to collect, in a time-division manner along a first optical path, the visible light reflected by the target tissue and the fluorescence emitted by the target tissue when excited, and the second lens 220 is used to collect them, in a time-division manner, along a second optical path.
  • When the illumination module 100 emits visible light to the target tissue, the first lens 210 collects the visible light reflected by the target tissue along the first optical path, and the second lens 220 collects it along the second optical path; when the illumination module 100 emits excitation light to the target tissue, the first lens 210 collects the fluorescence generated by the excited target tissue along the first optical path, and the second lens 220 collects it along the second optical path.
  • FIG. 5 schematically shows the first imaging optical path provided by an embodiment of the present application, and FIG. 6 schematically shows the second imaging optical path provided by an embodiment of the present application.
  • The first lens 210 includes a plurality of first lens elements 211 sequentially arranged along the first optical path, and the second lens 220 includes a plurality of second lens elements 221 sequentially arranged along the second optical path.
  • The image acquisition module 300 is used to receive, in a time-division manner, the visible light and fluorescence collected by the lens module 200, so as to acquire the visible light image and the fluorescence image.
  • The acquired visible light images may be output in the form of a visible light video stream, and the acquired fluorescence images in the form of a fluorescence video stream.
  • The fluorescence emitted by the target tissue under excitation reaches the image acquisition module 300 through the lens module 200 and is captured by the image acquisition module 300 to obtain a fluorescence image, which is output in the form of a fluorescence video stream. Since the image acquisition module 300 acquires the visible light image and the fluorescence image in a time-division manner, mutual interference between different wavebands is effectively avoided and imaging noise is suppressed, making the acquired visible light and fluorescence images clearer and laying a good foundation for obtaining clear multi-band fused images.
  • Because the multispectral imaging system provided by the present application adopts time-division controlled imaging, control flexibility is greatly improved on the system software side, and hardware complexity is greatly reduced on the system hardware side, making the whole system more compact and easier to integrate into endoscopes and minimally invasive surgical robots. It should be noted that, in some other implementations, the acquired visible light images and fluorescence images need not be output in the form of video streams, which is not limited in this application.
  • The image acquisition module 300 includes an RGBNIR image sensor, i.e., an image sensor that can capture visible light to obtain a visible light image and can capture near-infrared light (such as fluorescence) to obtain a near-infrared image (such as a fluorescence image).
  • The RGBNIR image sensor is used to receive, in a time-division manner, the visible light and fluorescence collected by the lens module 200 to obtain visible light images and fluorescence images, and can output them in the form of a visible light video stream and a fluorescence video stream, respectively.
  • FIG. 7 schematically shows the quantum efficiency of the RGBNIR image sensor provided by an embodiment of the present application for light of different wavebands. As shown in FIG. 7, the RGBNIR image sensor has high quantum efficiency not only for visible light but also for fluorescence in the near-infrared band. Therefore, by using the RGBNIR image sensor to obtain the visible light image and the fluorescence image, the present application can greatly improve the image quality of both, providing a good basis for a clear multi-band fused image. In addition, by using the same image sensor to acquire the visible light image and the fluorescence image in a time-division manner, the present application further simplifies the overall structure of the endoscope and reduces cost.
  • When the endoscope is a three-dimensional endoscope, the image acquisition module 300 includes a first image acquisition unit 310 and a second image acquisition unit 320.
  • The first image acquisition unit 310 is used to receive, in a time-division manner, the visible light and fluorescence collected by the first lens 210, so as to acquire a first visible light image and a first fluorescence image, and can output them in the form of a first visible light video stream and a first fluorescence video stream, respectively.
  • The second image acquisition unit 320 is used to receive, in a time-division manner, the visible light and fluorescence collected by the second lens 220, so as to acquire a second visible light image and a second fluorescence image, and can output them in the form of a second visible light video stream and a second fluorescence video stream, respectively.
  • The visible light collected by the second lens 220 and reflected by the target tissue is captured by the second image acquisition unit 320 to obtain a second visible light image, which is output in the form of a second visible light video stream.
  • The fluorescence of the excited target tissue collected by the first lens 210 reaches the first image acquisition unit 310 and is captured to obtain a first fluorescence image, which is output in the form of a first fluorescence video stream; the fluorescence of the excited target tissue collected by the second lens 220 reaches the second image acquisition unit 320 and is captured to obtain a second fluorescence image, which is output in the form of a second fluorescence video stream.
  • It should be noted that the terms "first" and "second" in this embodiment do not denote any sequential relationship between the components.
  • For example, the first visible light image may be the visible light scene image on the left side of the endoscope, or it may be the visible light scene image on the right side of the endoscope.
  • FIG. 8 schematically shows the connection between the illumination module and the image acquisition module provided by an embodiment of the present application.
  • The time-division control strategy of the illumination module 100 can be transmitted to the image acquisition module 300, so that the image acquisition module 300 synchronizes its data collection.
  • The first image acquisition unit 310 is connected to the illumination controller 120 in the illumination module 100 through one signal transmission line, and the second image acquisition unit 320 is connected to the illumination controller 120 through another signal transmission line, so that the first image acquisition unit 310 and the second image acquisition unit 320 collect data synchronously (that is, their data acquisition follows the time-division control strategy of the illumination module 100).
  • Both the first image acquisition unit 310 and the second image acquisition unit 320 are RGBNIR image sensors.
  • In this way, a clear first visible light image and first fluorescence image can be acquired, in a time-division manner, by the first image acquisition unit 310, and a clear second visible light image and second fluorescence image by the second image acquisition unit 320.
  • FIG. 9 schematically shows a spectrum diagram of the band-pass filter module provided by an embodiment of the present application.
  • The multispectral imaging system further includes a band-pass filter module 600 arranged between the lens module 200 and the image acquisition module 300; the band-pass filter module 600 is used to allow the visible light and the fluorescence to pass through while blocking light of all other wavebands.
  • The image acquisition module 300 is configured to receive, in a time-division manner, the visible light and fluorescence passing through the band-pass filter module 600 to acquire the visible light image and the fluorescence image, and to output them in the form of a visible light video stream and a fluorescence video stream, respectively.
  • By arranging the band-pass filter module 600 between the lens module 200 and the image acquisition module 300, when the illumination module 100 emits visible light to the target tissue, only the visible light reflected by the target tissue passes sequentially through the lens module 200 and the band-pass filter module 600 to reach the image acquisition module 300, while stray light of other wavebands is blocked; this effectively improves the signal-to-noise ratio of the optical signal input to the image acquisition module 300 and thus the image quality of the acquired visible light image. Likewise, when the illumination module 100 emits excitation light to the target tissue, the band-pass filter module 600 only allows the fluorescence generated by the target tissue to pass sequentially through the lens module 200 and the band-pass filter module 600 to reach the image acquisition module 300, while blocking stray light of other wavebands, thereby effectively improving the signal-to-noise ratio of the optical signal input to the image acquisition module 300 and the image quality of the acquired fluorescence image.
  • The multispectral imaging system further includes a prism module 700.
  • The band-pass filter module 600 is disposed between the lens module 200 and the prism module 700.
  • The photosensitive surface of the image acquisition module 300 is adjacent to the light exit surface of the prism module 700.
  • The visible light and fluorescence passing through the band-pass filter module 600 in a time-division manner reach the photosensitive surface of the image acquisition module 300 through the light incident surface, reflective surface and light exit surface of the prism module 700 in sequence.
  • The band-pass filter module 600 can be directly glued onto the light incident surface of the prism module 700 or the light exit surface of the lens module 200.
  • Alternatively, the band-pass filter module 600 may be formed by directly coating a film on the light exit surface of the lens module 200.
  • The band-pass filter module 600 includes a first band-pass filter 610 and a second band-pass filter 620, and the prism module 700 includes a first prism 710 and a second prism 720.
  • The first band-pass filter 610 is arranged between the first lens 210 and the first prism 710, and the first prism 710 is located between the first band-pass filter 610 and the first image acquisition unit 310.
  • The second band-pass filter 620 is located between the second lens 220 and the second prism 720, and the second prism 720 is located between the second band-pass filter 620 and the second image acquisition unit 320.
  • When the illumination module 100 emits visible light to the target tissue, the visible light collected by the first lens 210 and reflected by the target tissue passes sequentially through the first band-pass filter 610 and the light incident surface 711, reflective surface 712 and light exit surface 713 of the first prism 710, reaching the photosensitive surface 311 of the first image acquisition unit 310 to be captured by the first image acquisition unit 310; the visible light collected by the second lens 220 and reflected by the target tissue passes sequentially through the second band-pass filter 620 and the light incident surface 721, reflective surface 722 and light exit surface 723 of the second prism 720, reaching the photosensitive surface 321 of the second image acquisition unit 320 to be captured by the second image acquisition unit 320.
  • When the illumination module 100 emits excitation light to the target tissue, the fluorescence of the excited target tissue collected by the first lens 210 passes sequentially through the first band-pass filter 610 and the light incident surface 711, reflective surface 712 and light exit surface 713 of the first prism 710, reaching the photosensitive surface 311 of the first image acquisition unit 310 to be captured by the first image acquisition unit 310; the fluorescence of the excited target tissue collected by the second lens 220 passes sequentially through the second band-pass filter 620 and the light incident surface 721, reflective surface 722 and light exit surface 723 of the second prism 720, reaching the photosensitive surface 321 of the second image acquisition unit 320 to be captured by the second image acquisition unit 320.
  • The first band-pass filter 610 and the second band-pass filter 620 may each be a single band-pass filter or a set of two or more band-pass filters.
  • The first band-pass filter 610 can be directly glued onto the light incident surface 711 of the first prism 710 or the light exit surface 212 of the first lens 210, and the second band-pass filter 620 can be directly glued onto the light incident surface 721 of the second prism 720 or the light exit surface 222 of the second lens 220.
  • Alternatively, the first band-pass filter 610 can be formed by directly coating a film on the light exit surface 212 of the first lens 210, and the second band-pass filter 620 by directly coating a film on the light exit surface 222 of the second lens 220.
  • The image processing module 400 is used to process the visible light image and the fluorescence image to obtain a fused image. Further, the image processing module 400 is configured to perform image signal processing on the visible light image and the fluorescence image, and to fuse the processed visible light image and fluorescence image to obtain a fused image. Specifically, the image processing module 400 performs image signal processing on each frame of visible light image in the visible light video stream and each frame of fluorescence image in the fluorescence video stream, fuses the processed visible light image and fluorescence image of corresponding frames to obtain a fused image, and outputs the fused images in the form of a video stream.
  • Since the visible light image and the fluorescence image are acquired in a time-division manner, mutual interference between wavebands is effectively eliminated and imaging noise is small, so the acquired visible light and fluorescence images are relatively clear.
  • By performing image signal processing on the visible light image and the fluorescence image, they are not only converted into a format visible to the human eye, but their clarity is also further improved, thereby greatly improving the clarity of the final fused image.
  • Image signal processing refers to converting the original Bayer-format image output by the image sensor into a YUV (or RGB) image through a series of processing steps, so as to turn the sensor output into an image that can be viewed by the human eye.
  • The image processing module 400 includes a visible light image processing unit 410, a fluorescence image processing unit 420, a binarization processing unit 430 and an image fusion unit 440.
  • The visible light image processing unit 410 is configured to perform first image signal processing on the visible light image; specifically, on each frame of visible light image in the visible light video stream.
  • FIG. 11 schematically shows the workflow of the visible light image processing unit provided by an embodiment of the present application.
  • As shown in FIG. 11, the visible light image processing unit 410 is specifically configured to sequentially perform dark current processing (black level correction), dead pixel processing, lens correction/gain, color interpolation, color restoration, gamma correction, and noise reduction/sharpening on the visible light image.
  • The first few rows of the pixel area in the visible light image can serve as a non-photosensitive region for automatic black level correction: the average level of these rows is taken as the correction value, which is then subtracted from the level values of the pixels in the remaining area to correct the black level.
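  • A minimal sketch of this automatic black level correction, assuming the first few rows are optically shielded and the frame is a single-channel raw image (the row count of 4 is an assumption):

```python
import numpy as np

def black_level_correct(raw: np.ndarray, dark_rows: int = 4) -> np.ndarray:
    # Average level of the non-photosensitive rows serves as the correction value.
    correction = int(round(raw[:dark_rows].mean()))
    # Subtract it from the remaining pixels, clamping at zero.
    out = raw.astype(np.int32) - correction
    return np.clip(out, 0, None).astype(raw.dtype)
```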
  • A bad pixel is a pixel in the pixel array whose response differs significantly from that of the surrounding pixels.
  • Bad pixels generally fall into three categories: dead pixels, which always show the darkest value; bright (stuck) pixels, which always show the brightest value; and pixels whose response pattern differs significantly from that of the surrounding pixels.
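  • One common way to implement such dead pixel processing is to compare each pixel with the median of its neighbourhood and replace strong outliers; a sketch (the deviation threshold is an illustrative assumption, not a value from this application):

```python
import numpy as np
from scipy.ndimage import median_filter

def fix_bad_pixels(raw: np.ndarray, deviation: int = 64) -> np.ndarray:
    med = median_filter(raw, size=3)  # 3x3 neighbourhood median
    # Flag dead, stuck-bright, or otherwise anomalous pixels.
    bad = np.abs(raw.astype(np.int32) - med.astype(np.int32)) > deviation
    out = raw.copy()
    out[bad] = med[bad]  # replace flagged pixels with the neighbourhood median
    return out
```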
  • Lens correction can be implemented as follows: first, determine an area of relatively uniform brightness in the middle of the visible light image, whose pixels need no correction; then, taking this area as the center, calculate how quickly the image darkens due to attenuation at each point, from which the compensation factors (i.e., gains) of the corresponding R, G and B channels can be calculated.
  • Visible light mainly contains three kinds of color information, namely R, G and B. Since a pixel can only sense the brightness of light, not its color, and in order to reduce hardware and resource consumption, a filter layer is used so that each pixel senses only one color of light; it is therefore necessary to restore the information of the other two channels of each pixel.
  • The process of finding the values of the other two channels of a pixel is color interpolation. Since the image varies continuously, the R, G and B values of a pixel are correlated with those of the surrounding pixels, so the values of the other two channels at a point can be obtained from the values of the surrounding pixels. In this embodiment, the average value of the surrounding pixels can be used as the interpolated value at the point.
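  • OpenCV ships standard demosaicing routines that perform exactly this neighbour-based interpolation; a sketch (the Bayer pattern of the actual sensor is an assumption):

```python
import cv2
import numpy as np

# Stand-in single-channel Bayer frame; a real frame would come from the sensor.
raw = np.zeros((480, 640), dtype=np.uint8)

# Bilinear demosaicing: each missing channel value is interpolated from
# the surrounding pixels, as described above.
rgb = cv2.cvtColor(raw, cv2.COLOR_BayerBG2BGR)
```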
  • Color restoration mainly corrects the color errors caused by color crosstalk between the color blocks of the filter layer, so as to obtain an image closest to the true color of the object (the target tissue).
  • Specifically, the color correction matrix of the image acquisition module 300 (calculated by comparing an image captured by the image acquisition module 300 with a standard image) can be used to perform color correction on the visible light image.
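  • Applying such a color correction matrix is a per-pixel 3x3 multiplication; a sketch (the identity matrix below is only a placeholder for the calibrated matrix, which this application obtains by comparison with a standard image):

```python
import numpy as np

def apply_ccm(rgb: np.ndarray, ccm: np.ndarray) -> np.ndarray:
    flat = rgb.reshape(-1, 3).astype(np.float32)
    corrected = flat @ ccm.T  # correct each pixel's R, G, B triple
    return np.clip(corrected, 0, 255).reshape(rgb.shape).astype(np.uint8)

ccm = np.eye(3, dtype=np.float32)  # placeholder for the calibrated matrix
```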
  • Gamma correction edits the gamma curve of the image to detect the dark and bright parts of the image signal and increase their ratio, thereby improving image contrast and applying nonlinear tone editing to the image.
  • Gamma correction can be realized with a look-up table: first, according to a chosen gamma value, the ideal output values for the different brightness levels are written into the table; each input brightness then simply looks up its ideal output value.
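  • A sketch of the look-up-table approach for 8-bit images (the gamma value of 2.2 is an illustrative assumption):

```python
import cv2
import numpy as np

def gamma_lut(gamma: float = 2.2) -> np.ndarray:
    # Ideal output value for every possible 8-bit input brightness.
    x = np.arange(256, dtype=np.float32) / 255.0
    return np.clip(255.0 * np.power(x, 1.0 / gamma), 0, 255).astype(np.uint8)

# corrected = cv2.LUT(image, gamma_lut())  # one table lookup per pixel
```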
  • A filter may be used to perform filtering on the visible light image so as to eliminate noise. Since denoising also removes some image detail and leaves the image insufficiently sharp, this embodiment compensates for the detail lost during denoising by subsequently sharpening the image.
  • That is, the noise-reduced visible light image is sharpened to restore the relevant details of the image.
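  • A sketch of one possible denoise-then-sharpen combination (the choice of an edge-preserving bilateral filter and the unsharp-mask weights are illustrative assumptions, not taken from this application):

```python
import cv2
import numpy as np

def denoise_and_sharpen(img: np.ndarray) -> np.ndarray:
    # Edge-preserving noise reduction.
    smooth = cv2.bilateralFilter(img, d=5, sigmaColor=50, sigmaSpace=50)
    # Unsharp masking to restore detail lost to filtering.
    blur = cv2.GaussianBlur(smooth, (0, 0), sigmaX=2.0)
    return cv2.addWeighted(smooth, 1.5, blur, -0.5, 0)
```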
  • The fluorescence image processing unit 420 is used to perform second image signal processing on the fluorescence image; specifically, on each frame of fluorescence image in the fluorescence video stream. Please refer to FIG. 12, which schematically shows the workflow of the fluorescence image processing unit provided by an embodiment of the present application. As shown in FIG. 12, the fluorescence image processing unit 420 is specifically configured to sequentially perform dark current processing, dead pixel processing, lens correction, gamma correction, and noise reduction/sharpening on the fluorescence image.
  • Since the fluorescence image is a grayscale image, the fluorescence image processing unit 420 does not need to perform color interpolation or color restoration. By performing the second image signal processing on the fluorescence image, a fluorescence image with clearer details can be obtained, laying a good foundation for the subsequent acquisition of a multi-band fused image with clear details.
  • The binarization processing unit 430 is configured to binarize the fluorescence image after the second image signal processing to obtain a corresponding mask; specifically, each frame of the processed fluorescence images is binarized to obtain its corresponding mask. Segmentation methods such as the maximum between-class variance method (Otsu), the iterative threshold method, the P-quantile method, the global threshold method based on minimum error, local threshold methods, or a combination of global and local thresholds can be used for this binarization, as sketched below.
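  • A sketch using the first of the listed methods, Otsu's maximum between-class variance criterion, via OpenCV (assuming an 8-bit grayscale input); any of the other listed threshold strategies could be substituted:

```python
import cv2
import numpy as np

def fluorescence_mask(fluo_gray: np.ndarray) -> np.ndarray:
    # Otsu picks the threshold automatically; non-zero mask pixels mark
    # regions with a significant fluorescence signal.
    _, mask = cv2.threshold(fluo_gray, 0, 255,
                            cv2.THRESH_BINARY + cv2.THRESH_OTSU)
    return mask
```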
  • The image fusion unit 440 is configured to fuse the mask with the visible light image after the first image signal processing to obtain a fused image; specifically, each mask is fused with the processed visible light image of the corresponding frame, and the fused images are output in the form of a video stream. Since the mask clearly reflects the lesion tissue, fusing the mask with the visible light image yields a fused image through which doctors can accurately distinguish the lesion area from the normal tissue area.
  • Specifically, the image fusion unit 440 is configured to color and mark the pixels in the visible light image that correspond to pixels whose value in the mask is not 0, to obtain the fused image. Pixels with value 0 in the mask correspond to the normal tissue area of the target tissue; pixels with non-zero values (i.e., the white area) correspond to the lesion area. By coloring and marking the corresponding pixels in the visible light image, the lesion area is accurately identified in the visible light image, so that the normal tissue area and the lesion area can be clearly distinguished in the final fused image, as sketched below.
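  • A sketch of this color-marking step (the green marking color and the blend weight are illustrative assumptions):

```python
import cv2
import numpy as np

def fuse_with_mask(visible_bgr: np.ndarray, mask: np.ndarray,
                   color=(0, 255, 0), alpha: float = 0.5) -> np.ndarray:
    overlay = visible_bgr.copy()
    overlay[mask != 0] = color  # non-zero mask pixels correspond to lesion tissue
    # Blend so the underlying visible-light detail remains visible.
    return cv2.addWeighted(visible_bgr, 1.0 - alpha, overlay, alpha, 0)
```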
  • The image processing module 400 includes a first image processing unit 450, a second image processing unit 460 and a superposition unit 470. The first image processing unit 450 is configured to process the first visible light image and the first fluorescence image to obtain a first fused image; further, it performs image signal processing on the first visible light image and the first fluorescence image and fuses the processed first visible light image and first fluorescence image of corresponding frames to obtain the first fused image.
  • Specifically, the first image processing unit 450 is configured to perform image signal processing on each frame of the first visible light image in the first visible light video stream and each frame of the first fluorescence image in the first fluorescence video stream, to fuse the processed first visible light image and first fluorescence image of corresponding frames to obtain a first fused image, and to output the first fused images in the form of a first video stream.
  • The second image processing unit 460 is configured to process the second visible light image and the second fluorescence image to obtain a second fused image; further, it performs image signal processing on the second visible light image and the second fluorescence image and fuses the processed images of corresponding frames to obtain the second fused image.
  • Specifically, the second image processing unit 460 is configured to perform image signal processing on each frame of the second visible light image in the second visible light video stream and each frame of the second fluorescence image in the second fluorescence video stream, to fuse the processed second visible light image and second fluorescence image of corresponding frames to obtain a second fused image, and to output the second fused images in the form of a second video stream. The superposition unit 470 is configured to register the first fused image and the second fused image, and to superimpose the registered first and second fused images to generate and output a three-dimensional image.
  • Specifically, the superposition unit 470 is configured to register the first video stream and the second video stream, and to superimpose the registered streams to generate a three-dimensional video stream and output it.
  • That is, the first fused image and the second fused image are registered by the superposition unit 470, and the registered images are superimposed to generate a three-dimensional image, which is output to the monitor of the surgeon's console for display.
  • The three-dimensional image enables the doctor to see the three-dimensional information of the target tissue in the surgical field of view, providing more realistic and clearer visual effects; this is more conducive to the doctor's surgical judgment and accurate control of the instruments, greatly improving operating efficiency and intraoperative safety.
  • The first image processing unit 450 includes a first visible light image processing unit 410a, a first fluorescence image processing unit 420a, a first binarization processing unit 430a and a first image fusion unit 440a.
  • The first visible light image processing unit 410a is configured to perform first image signal processing on each frame of the first visible light image in the first visible light video stream; the first fluorescence image processing unit 420a is configured to perform second image signal processing on each frame of the first fluorescence image in the first fluorescence video stream; the first binarization processing unit 430a is configured to binarize each frame of the first fluorescence images after the second image signal processing, to obtain the first mask corresponding to each frame; and the first image fusion unit 440a is configured to fuse each first mask with the processed first visible light image of the corresponding frame to obtain a first fused image, and to output the first fused images in the form of the first video stream.
  • It should be noted that the first visible light image processing unit 410a and the second visible light image processing unit 410b described below constitute the visible light image processing unit 410 described above; the first fluorescence image processing unit 420a and the second fluorescence image processing unit 420b described below constitute the fluorescence image processing unit 420 described above; the first binarization processing unit 430a and the second binarization processing unit 430b described below constitute the binarization processing unit 430 described above; and the first image fusion unit 440a and the second image fusion unit 440b described below constitute the image fusion unit 440 described above.
  • Specifically, the first image fusion unit 440a is configured to color and mark the pixels in the first visible light image that correspond to pixels whose value in the first mask is not 0, to obtain the first fused image.
  • The second image processing unit 460 includes a second visible light image processing unit 410b, a second fluorescence image processing unit 420b, a second binarization processing unit 430b and a second image fusion unit 440b.
  • the second visible light image processing unit 410b is configured to perform second image signal processing on each frame of the second visible light image in the second visible light video stream;
  • the second fluorescent image processing unit 420b is configured to process the second visible light image Perform second image signal processing on each frame of the second fluorescence image in the fluorescence video stream;
  • the second binarization processing unit 430b is used to perform binarization processing on each frame of the second fluorescence image after the second image signal processing , to obtain the second mask corresponding to each frame of the second fluorescent image;
  • the second image fusion unit 440b is used to combine each second mask with the first image signal of the corresponding frame after processing the first image signal
  • the second visible light image is fused to obtain a second fused image, which is used to output the second fused image in the form of a second video stream.
  • the second image fusion unit 440b is configured to color and mark the pixels in the second visible light image that correspond to pixels in the second mask whose pixel values are not 0, so as to obtain the second fused image.
  • each frame of the first fused image in the first video stream can be registered with the corresponding frame of the second fused image in the second video stream, yielding first and second video streams that carry disparity information; superimposing these two disparity-bearing video streams then produces a three-dimensional video stream.
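The registration algorithm itself is not specified here; as one plausible sketch, corresponding fused frames could be aligned with standard feature matching, for example OpenCV's ORB keypoints plus a RANSAC-fitted homography. This is an illustrative recipe, not the method claimed by the patent; in a calibrated stereo endoscope, epipolar rectification derived from the camera calibration would be the more usual route.

```python
import cv2
import numpy as np

def register_pair(left_fused: np.ndarray, right_fused: np.ndarray) -> np.ndarray:
    """Estimate a homography mapping the right fused frame onto the left one."""
    gray_l = cv2.cvtColor(left_fused, cv2.COLOR_BGR2GRAY)
    gray_r = cv2.cvtColor(right_fused, cv2.COLOR_BGR2GRAY)
    orb = cv2.ORB_create(1000)
    kps_l, des_l = orb.detectAndCompute(gray_l, None)
    kps_r, des_r = orb.detectAndCompute(gray_r, None)
    matcher = cv2.BFMatcher(cv2.NORM_HAMMING, crossCheck=True)
    # Keep the strongest correspondences for a robust RANSAC fit.
    matches = sorted(matcher.match(des_l, des_r), key=lambda m: m.distance)[:200]
    src = np.float32([kps_r[m.trainIdx].pt for m in matches]).reshape(-1, 1, 2)
    dst = np.float32([kps_l[m.queryIdx].pt for m in matches]).reshape(-1, 1, 2)
    homography, _ = cv2.findHomography(src, dst, cv2.RANSAC, 5.0)
    return homography
```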
  • the superimposing unit 470 can configure the 3D video stream into different formats according to the 3D display requirements of the monitor in the surgeon's console.
  • FIG. 15 schematically shows a superposition diagram of a first video stream and a second video stream provided by an embodiment of the present application.
  • the superimposing unit 470 may superimpose the first video stream 810 and the second video stream 820 into a three-dimensional video stream output in a polarized interlaced-scan interleaved format.
  • FIG. 16 schematically shows a superposition diagram of the first video stream 810 and the second video stream 820 provided by another embodiment of the present application. As shown in FIG. 16, the superimposing unit 470 may superimpose the first video stream 810 and the second video stream 820 into a three-dimensional video stream output in a polarized progressive left-right interleaved format.
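Both packing formats amount to simple pixel rearrangements of two equally sized frames. The sketch below assumes plain row-alternation and column-alternation for the two interleaved formats; the exact layout expected by a given polarized display is an assumption here, not something the patent fixes.

```python
import numpy as np

def pack_row_interleaved(left: np.ndarray, right: np.ndarray) -> np.ndarray:
    """Interlaced packing (cf. FIG. 15): odd rows are taken from the right view."""
    packed = left.copy()
    packed[1::2] = right[1::2]
    return packed

def pack_column_interleaved(left: np.ndarray, right: np.ndarray) -> np.ndarray:
    """Progressive left-right packing (cf. FIG. 16): odd columns from the right view."""
    packed = left.copy()
    packed[:, 1::2] = right[:, 1::2]
    return packed
```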
  • FIG. 15 and FIG. 16 take the video stream in the left field of view as the first video stream 810 and the video stream in the right field of view as the second video stream 820 by way of example; however, as those skilled in the art can understand, in other implementations the first video stream 810 may be the video stream in the right field of view and the second video stream 820 the video stream in the left field of view, which is not limited by this application.
  • the present application also provides a multispectral imaging method.
  • the multispectral imaging method includes the steps of:
  • Step S100: emitting visible light and excitation light to the target tissue in a time-division manner, so that the target tissue reflects the visible light and, when excited by the excitation light, emits fluorescence;
  • Step S200: receiving, in a time-division manner, the visible light reflected by the target tissue and the fluorescence emitted by the excited target tissue, so as to obtain a visible light image and a fluorescence image;
  • Step S300: processing the visible light image and the fluorescence image to obtain a fused image.
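A time-division acquisition loop corresponding to steps S100 and S200 might look like the following simulation-level sketch. The light-source and sensor classes are hypothetical stand-ins for hardware drivers, added only to show how illumination switching and frame capture alternate.

```python
import numpy as np

class StubLightSource:
    """Hypothetical driver stand-in for the visible and excitation modules."""
    def set_mode(self, mode: str) -> None:
        assert mode in ("visible", "excitation")
        self.mode = mode  # in hardware: toggle the two source modules

class StubRGBNIRSensor:
    """Hypothetical RGBNIR sensor stand-in returning synthetic frames."""
    def grab(self, mode: str) -> np.ndarray:
        shape = (480, 640, 3) if mode == "visible" else (480, 640)
        return np.random.randint(0, 256, shape, dtype=np.uint8)

def acquire(n_pairs: int) -> list:
    light, sensor = StubLightSource(), StubRGBNIRSensor()
    pairs = []
    for _ in range(n_pairs):
        light.set_mode("visible")        # S100: visible illumination on
        vis = sensor.grab("visible")     # S200: capture reflected visible light
        light.set_mode("excitation")     # S100: excitation illumination on
        flu = sensor.grab("excitation")  # S200: capture emitted fluorescence
        pairs.append((vis, flu))
    return pairs
```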
  • the multispectral imaging method provided by the present application emits visible light and excitation light to the target tissue in a time-division manner so as to acquire visible light images and fluorescence images in time division, and processes the visible light images and the fluorescence images to obtain multi-band fused images, so that different tissue states can be distinguished through the obtained multi-band fused images; this enables doctors to observe tissue information that cannot be observed under single-band conditions.
  • the doctor can clearly see the difference between the lesion and the normal tissue, and can see the details of the difference more clearly, so that the tissue can be cut more accurately and safely.
  • FIG. 18 schematically shows a flow chart of image fusion provided by an embodiment of the present application.
  • the processing of the visible light image and the fluorescence image to obtain a fused image specifically includes the following process:
  • performing first image signal processing on the visible light image, and performing second image signal processing on the fluorescence image;
  • binarizing the fluorescence image after the second image signal processing, so as to obtain the corresponding mask;
  • fusing the mask with the visible light image after the first image signal processing, so as to obtain a fused image.
  • the lesion tissue can be clearly reflected by the mask.
  • the obtained fusion image can help doctors to accurately distinguish the lesion tissue area from the normal tissue area.
  • fusing the mask with the visible light image after the first image signal processing to obtain a fused image includes: coloring and marking the pixels in the visible light image that correspond to pixels in the mask whose pixel values are not 0. Pixels with a pixel value of 0 in the mask correspond to the normal tissue area of the target tissue; pixels with a pixel value other than 0 correspond to the lesion tissue area. Therefore, by coloring and marking the visible-light pixels that correspond to the non-zero mask pixels, the lesion tissue area can be accurately identified in the visible light image, so that the normal tissue area and the lesion tissue area can be clearly distinguished in the final fused image.
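As a concrete instance of this binarize-then-color scheme, the sketch below uses Otsu's global threshold, which the description elsewhere names as one of several candidate segmentation methods; the synthetic frames and the green marker color are assumptions for illustration.

```python
import cv2
import numpy as np

# Synthetic stand-ins for one ISP-processed frame pair.
fluorescence = np.zeros((480, 640), dtype=np.uint8)
cv2.circle(fluorescence, (320, 240), 60, 200, -1)       # bright "lesion" blob
visible = np.full((480, 640, 3), 120, dtype=np.uint8)   # flat gray scene

# Binarize the fluorescence frame: 0 marks normal tissue,
# 255 marks fluorescing (lesion) tissue.
_, mask = cv2.threshold(fluorescence, 0, 255,
                        cv2.THRESH_BINARY + cv2.THRESH_OTSU)

# Color-mark the visible-light pixels wherever the mask is non-zero.
fused = visible.copy()
fused[mask != 0] = (0, 255, 0)
```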
  • receiving, in a time-division manner, the visible light reflected by the target tissue and the fluorescence emitted by the target tissue upon excitation, so as to obtain a visible light image and a fluorescence image, includes:
  • receiving, along the first optical path and the second optical path respectively and in a time-division manner, the visible light reflected by the target tissue and the fluorescence emitted by the excited target tissue, so as to obtain the first visible light image, the second visible light image, the first fluorescence image and the second fluorescence image;
  • the processing of the visible light image and the fluorescence image to obtain a fused image includes:
  • processing the first visible light image and the first fluorescence image to obtain a first fused image, and processing the second visible light image and the second fluorescence image to obtain a second fused image;
  • after the first fused image and the second fused image are obtained, the multispectral imaging method further includes:
  • registering the first fused image with the second fused image, and superimposing the registered first fused image and second fused image, so as to generate a three-dimensional image and output the three-dimensional image.
  • a three-dimensional image may be generated by acquiring the first fused image and the second fused image under different viewing angles, and registering and superimposing the acquired first fused image and the second fused image.
  • the resulting three-dimensional image can be output to a monitor in the surgeon's console for display.
  • the doctor can see the three-dimensional information of the target tissue in the surgical field of view, which provides a more realistic and clearer visual effect, facilitates surgical judgment and accurate control of the instruments, and greatly improves operating efficiency and intraoperative safety.
  • the present application also provides a readable storage medium in which a computer program is stored; when the computer program is executed by a processor, it implements the endoscope-based multispectral imaging method described above. Since the stored computer program, when executed by a processor, realizes the above multispectral imaging method, the readable storage medium provided by this application inherits all the advantages of that method, which therefore need not be repeated one by one.
  • the readable storage medium in the embodiments of the present application may use any combination of one or more computer-readable media.
  • the readable medium may be a computer readable signal medium or a computer readable storage medium.
  • a computer-readable storage medium may be, for example, but not limited to, an electrical, magnetic, optical, electromagnetic, infrared, or semiconductor system, apparatus, or device, or any combination thereof. More specific examples (a non-exhaustive list) of computer-readable storage media include: an electrical connection with one or more wires, a portable computer hard disk, a hard disk, a random access memory (RAM), a read-only memory (ROM), an erasable programmable read-only memory (EPROM or flash memory), an optical fiber, a portable compact disc read-only memory (CD-ROM), an optical storage device, a magnetic storage device, or any suitable combination of the above.
  • a computer-readable storage medium may be any tangible medium that contains or stores a program that can be used by or in combination with an instruction execution system, apparatus, or device.
  • a computer readable signal medium may include a data signal carrying computer readable program code in baseband or as part of a carrier wave. Such propagated data signals may take many forms, including but not limited to electromagnetic signals, optical signals, or any suitable combination of the foregoing.
  • a computer-readable signal medium may also be any computer-readable medium other than a computer-readable storage medium, which can send, propagate, or transmit a program for use by or in conjunction with an instruction execution system, apparatus, or device.
  • Computer program code for carrying out the operations of the present application may be written in one or more programming languages, or combinations thereof, including object-oriented programming languages such as Java, Smalltalk and C++, as well as conventional procedural programming languages such as "C" or similar programming languages.
  • the program code may execute entirely on the user's computer, partly on the user's computer, as a stand-alone software package, partly on the user's computer and partly on a remote computer, or entirely on the remote computer or server.
  • the remote computer may be connected to the user's computer through any kind of network, including a local area network (LAN) or a wide area network (WAN), or may be connected to an external computer (for example, through the Internet using an Internet service provider).
  • the multispectral imaging system, imaging method and storage medium provided by the present application have the following advantages:
  • This application transmits visible light and excitation light to the target tissue in time division to obtain visible light images and fluorescence images in time division, and then processes the visible light images and the fluorescence images to obtain multi-band fusion images.
  • the obtained multi-band fusion image can distinguish different tissue states, enabling doctors to observe tissue information that cannot be observed under a single band condition.
  • the doctor can clearly see the difference between the lesion and the normal tissue, and can see the details of the difference more clearly, so that the tissue can be cut more accurately and safely.
  • This application uses a band-pass filter module to block (filter out) stray light so that only visible light and fluorescence pass through, thereby effectively improving the signal-to-noise ratio of the input signal and further improving the image quality of the obtained multi-band fused image.
  • This application adopts an RGBNIR image sensor with a high quantum efficiency (QE) in the near-infrared band to collect visible light images and fluorescence images in a time-division manner, so that high-quality visible light images and fluorescence images can be obtained, further improving the image quality of the acquired multi-band fused images.
  • this application, by acquiring three-dimensional images, can enable doctors to see the three-dimensional information of the target tissue in the surgical field of view, thereby providing doctors with more realistic and clearer visual effects, which is more conducive to doctors' surgical judgment and accurate control of instruments, and greatly improves operating efficiency and intraoperative safety.
  • the time-division control imaging scheme adopted in this application improves control flexibility on the system software side; on the system hardware side, it greatly reduces hardware complexity, making the whole system more compact and easing the system integration of the endoscope with a minimally invasive surgical robot.
  • each block in a flowchart or block diagram may represent a module, a program segment, or a portion of code, which contains one or more executable instructions for implementing the specified logical function.
  • the functions noted in the block may occur out of the order noted in the figures.
  • each block of the block diagrams and/or flowchart illustrations, and combinations of blocks in the block diagrams and/or flowchart illustrations, can be implemented by special-purpose hardware-based systems that perform the specified functions or actions, or by a combination of special-purpose hardware and computer instructions.
  • the functional modules in the various embodiments herein can be integrated together to form an independent part, or each module can exist independently, or two or more modules can be integrated to form an independent part.

Abstract

A multispectral imaging system, an imaging method and a storage medium. The imaging system includes a lens module (200), an illumination module (100), an image acquisition module (300) and an image processing module (400). The illumination module (100) emits visible light and excitation light to the target tissue in a time-division manner, so that the target tissue reflects the visible light and, when excited by the excitation light, emits fluorescence. The lens module (200) collects, in a time-division manner, the visible light reflected by the target tissue and the fluorescence emitted by the excited tissue. The image acquisition module (300) receives, in a time-division manner, the visible light and fluorescence collected by the lens module (200), so as to obtain a visible light image and a fluorescence image. The image processing module (400) processes the visible light image and the fluorescence image to obtain a fused image. Through the multi-band fused image, the difference between a lesion and normal tissue, and in particular the specific details of that difference, can be clearly displayed, allowing tissue to be cut more accurately and safely.

Description

多光谱成像系统、成像方法和存储介质 技术领域
本申请涉及光学成像技术领域,特别涉及一种多光谱成像系统、成像方法和存储介质。
背景技术
随着医疗技术的不断发展,内窥镜作为集传统光学、人体工程学、精密机械、现代电子、数学、软件等于一体的检测仪器,应用范围越来越广。内窥镜可以进入待检测者的体内(例如食道),以获取待检测部位的图像,进而确定待检测部位是否存在病变。内窥镜可以看到X射线不能显示的病变,因此它对医生非常有用。
当前最广泛地应用于微创手术机器人中的内窥镜只采用可见光波段成像技术,这使得在一些手术中无法有效区分病灶和正常组织形态,不能给主刀医生提供更好的引导提示辅助。
生物实验表明,当患者组织被注射特定化学物质,并照射特定波段光源时,病变组织会受激发射特定波段的辐射光,而正常组织不会发射这样的受激辐射。利用这种特性,可以区分病灶组织和正常组织。这使得将多波段成像技术引入到微创手术机器人中的内窥镜上成为可能。
现有技术中,在采用多波段成像时,由于不同的波段之间会产生干扰,从而大大影响了最终的多波段融合图像的质量。
需要说明的是,公开于该发明背景技术部分的信息仅仅旨在加深对本申请一般背景技术的理解,而不应当被视为承认或以任何形式暗示该信息构成已为本领域技术人员所公知的现有技术。
发明内容
本申请的目的在于提供一种多光谱成像系统、成像方法和存储介质,可以获得清晰的多波段融合图像。
为达到上述目的,本申请提供一种多光谱成像系统,包括:照明模块、镜头模块、图像获取模块和图像处理模块;
所述照明模块用于分时向目标组织发射可见光和激励光,以使所述目标组织将所述可见光反射以及使所述目标组织受所述激励光激发而发射出荧光;
所述镜头模块用于分时采集经所述目标组织反射的可见光以及所述目标组织受激而发射的荧光;
所述图像获取模块用于分时接收所述镜头模块采集的可见光和荧光,以获取可见光图像和荧光图像;
所述图像处理模块用于对所述可见光图像以及所述荧光图像进行处理,以获取融合图像。
可选的,所述图像获取模块用于将所述可见光图像和所述荧光图像分别以可见光视频流和荧光视频流的形式输出;
所述图像处理模块用于对所述可见光视频流中的各帧可见光图像以及所述荧光视频流中的各帧荧光图像进行图像信号处理,并将经所述图像信号处理后的对应帧的所述可见光图像、所述荧光图像进行融合,以获取融合图像,并将所述融合图像以视频流的形式输出。
可选的,所述多光谱成像系统还包括带通滤波模块,所述带通滤波模块设于所述镜头模块和所述图像获取模块之间,所述带通滤波模块用于允许所述可见光和所述荧光通过,并阻止除可见光波段和荧光波段以外的其它波段的光通过;
所述图像获取模块用于分时接收通过所述带通滤波模块的可见光和荧光,以获取可见光图像和荧光图像。
可选的,所述多光谱成像系统还包括棱镜模块,所述带通滤波模块位于所述镜头模块和所述棱镜模块之间。
可选的,所述图像获取模块包括RGBNIR图像传感器,所述RGBNIR图像传感器用于分时接收所述镜头模块采集的可见光和荧光,以获取可见光图像和荧光图像。
可选的,所述照明模块包括光源单元和照明控制器,其中,在所述照明控制器的控制下,所述光源单元能够分时向所述目标组织发射可见光和激励光。
可选的,所述光源单元包括第一光源模组和第二光源模组,所述第一光源模组用于向所述目标组织发射可见光,所述第二光源模组用于向所述目标组织发射激励光。
可选的,所述照明控制器包括第一控制单元、第二控制单元和第三控制单元;
所述第一控制单元用于控制所述第一光源模组和所述第二光源模组的输出能量强度;
所述第二控制单元用于控制所述第一光源模组和所述第二光源模组的开启与关闭;
所述第三控制单元用于控制所述第一光源模组和所述第二光源模组的开启频率。
可选的,所述图像处理模块包括可见光图像处理单元、荧光图像处理单元、二值化处理单元和图像融合单元;
所述可见光图像处理单元用于对所述可见光图像进行第一图像信号处理;
所述荧光图像处理单元用于对所述荧光图像进行第二图像信号处理;
所述二值化处理单元用于对经所述第二图像信号处理后的所述荧光图像进行二值化处理,以获取对应的掩膜;
所述图像融合单元用于将所述掩膜与经所述第一图像信号处理后的所述可见光图像进行融合,以获取融合图像。
可选的,所述图像融合单元用于对经所述第一图像信号处理后的所述可见光图像中的与所述掩膜中的像素值不为0的像素点对应的像素点进行着色标识,以获取融合图像。
可选的,所述镜头模块包括第一镜头和第二镜头,所述第一镜头用于分时沿第一光路采集经所述目标组织反射的可见光以及所述目标组织受激而发射的荧光,所述第二镜头用于分时沿第二光路采集经所述目标组织反射的可见光以及所述目标组织受激而发射的荧光;
所述图像获取模块包括第一图像获取单元和第二图像获取单元,所述第一图像获取单元用于分时接收所述第一镜头采集的可见光和荧光,以获取第一可见光图像和第一荧光图像,所述第二图像获取单元用于分时接收所述第二镜头采集的可见光和荧光,以获取第二可见光图像和第二荧光图像;
所述图像处理模块包括第一图像处理单元、第二图像处理单元和叠加单元;所述第一图像处理单元用于对所述第一可见光图像以及所述第一荧光图像进行处理,以获取第一融合图像;所述第二图像处理单元用于对所述第二可见光图像以及所述第二荧光图像进行处理,以获取第二融合图像;所述叠加单元用于对所述第一融合图像和所述第二融合图像进行配准,并将配准后的所述第一融合图像和所述第二融合图像进行叠加,以生成三维图像并输出所述三维图像。
为达到上述目的,本申请还提供一种多光谱成像方法,所述多光谱成像方法包括:
分时向目标组织发射可见光和激励光,以使所述目标组织将所述可见光反射以及使所述目标组织受所述激励光激发而发射出荧光;
分时接收经所述目标组织反射的可见光以及所述目标组织受激而发射的荧光,以获取可见光图像和荧光图像;
对所述可见光图像以及所述荧光图像进行处理,以获取融合图像。
可选的,所述对所述可见光图像以及所述荧光图像进行处理,以获取融合图像,包括:
对所述可见光图像进行第一图像信号处理以及对所述荧光图像进行第二图像信号处理;
对经所述第二图像信号处理后的所述荧光图像进行二值化处理,以获取对应的掩膜;
将所述掩膜与经所述第一图像信号处理后的所述可见光图像进行融合,以获取融合图 像。
可选的,所述将所述掩膜与经所述第一图像信号处理后的所述可见光图像进行融合,以获取融合图像,包括:
对经所述第一图像信号处理后的所述可见光图像中的与所述掩膜中的像素值不为0的像素点对应的像素点进行着色标识,以获取融合图像。
可选的,所述分时接收经所述目标组织反射的可见光以及所述目标组织受激而发射的荧光,以获取可见光图像和荧光图像,包括:
分别沿第一光路和第二光路分时接收经所述目标组织反射的可见光以及所述目标组织受激而发射的荧光,以获取第一可见光图像、第二可见光图像、第一荧光图像和第二荧光图像;
所述对所述可见光图像以及所述荧光图像进行处理,以获取融合图像,包括:
对所述第一可见光图像和所述第一荧光图像进行处理,以获取第一融合图像,以及对所述第二可见光图像和所述第二荧光图像进行处理,以获第二融合图像;
在获取所述第一融合图像和第二融合图像后,所述多光谱成像方法还包括:
对所述第一融合图像和所述第二融合图像进行配准,并将配准后的所述第一融合图像和所述第二融合图像进行叠加,以生成三维图像并输出所述三维图像。
为达到上述目的,本申请还提供一种可读存储介质,所述可读存储介质内存储有计算机程序,所述计算机程序被处理器执行时,实现上文所述的多光谱成像方法。
与现有技术相比,本申请提供的多光谱成像系统、成像方法和存储介质具有以下优点:
(1)本申请通过分时向目标组织发射可见光和激励光,以分时获取可见光图像和荧光图像,再对所述可见光图像和所述荧光图像进行处理,以获取多波段融合图像。由此,所获取的多波段融合图像可以区分不同组织状态,使得医生能够观察到单一波段条件下无法观察到的组织信息。通过所述多波段融合图像,医生可以清晰地看到病灶与正常组织的区别,且可以更加清晰地看到该区别的细节,使得可以更加准确、更加安全地切割组织。
(2)本申请采用带通滤波模块进行杂波阻止(滤除),使仅可见光和荧光通过,从而可以有效提高输入信号的信噪比,进一步提高了所获取的多波段融合图像的图像质量。
(3)本申请通过采用近红外波段QE(量子效率)较高的RGBNIR图像传感器,分时采集可见光图像和荧光图像,可以获取高质量的可见光图像和荧光图像,进一步提高了所获取的多波段融合图像的图像质量。
(4)本申请通过获取三维图像,可以使得医生能够看到手术视野中目标组织的三维立体信息,从而提供医生更真实、更清晰的视觉效果,更加有利于医生进行手术判断、器械的准确控制,极大地提高了手术效率和手术过程中的安全性。
(5)本申请采用的分时控制成像系统,对于系统软件部分,可以提高控制的灵活性;对于系统硬件部分,极大地降低了系统硬件的复杂性,使得整个系统更加灵巧,更加便于内窥镜与微创手术机器人的系统整合。
附图说明
图1为本申请一实施方式中的多光谱成像系统的方框结构示意图;
图2为本申请一实施方式中的内窥镜镜管的近端的截面示意图;
图3为本申请一实施方式中的内窥镜镜管的近端的立体示意图;
图4为本申请一实施方式中的内窥镜镜管的近端的剖面示意图;
图5为本申请一实施方式中的第一成像光路示意图;
图6为本申请一实施方式中的第二成像光路示意图;
图7为本申请一实施方式中的RGBNIR图像传感器对不同波段的光的量子效率示意图;
图8为本申请一实施方式中的照明模块与图像获取模块之间的连接关系结构示意图;
图9为本申请一实施方式中的带通滤波模块的光谱图;
图10为本申请一实施方式中的图像处理模块的方框结构示意图;
图11为本申请一实施方式中的可见光图像处理单元的工作流程示意图;
图12为本申请一实施方式中的荧光图像处理单元的工作流程示意图;
图13为本申请一实施方式中的第一图像处理单元的方框结构示意图;
图14为本申请一实施方式中的第二图像处理单元的方框结构示意图;
图15为本申请一实施方式中的第一视频流与第二视频流的叠加示意图;
图16为本申请另一实施方式中的第一视频流与第二视频流的叠加示意图;
图17为本申请一实施方式中的多光谱成像方法的流程示意图;
图18为本申请一实施方式中的图像融合流程示意图。
其中,附图标记如下:
照明模块-100;光源单元-110;照明控制器-120;第一光源模组-111;第二光源模组-112;第一控制单元-121;第二控制单元-122;第三控制单元-123;
镜头模块-200;第一镜头-210;第二镜头-220;第一透镜-211;第二透镜-221;出光面-212、222;
图像获取模块-300;第一图像获取单元-310;第二图像获取单元-320;感光面-311、321;
图像处理模块-400;可见光图像处理单元-410;荧光图像处理单元-420;二值化处理单元-430;图像融合单元-440;第一图像处理单元-450;第二图像处理单元-460;叠加单元-470;第一可见光图像处理单元-410a;第一荧光图像处理单元-420a;第一二值化处理单元-430a;第一图像融合单元-440a;第二可见光图像处理单元-410b;第二荧光图像处理单元-420b;第二二值化处理单元-430b;第二图像融合单元-440b;
照明通道-500;
带通滤波模块-600;第一带通滤波器-610;第二带通滤波器-620;
棱镜模块-700;第一棱镜-710;第二棱镜-720;
入光面-711、721;反射面-712、722;出光面-212、222、713、723;
第一视频流-810;第二视频流-820。
具体实施方式
以下结合附图1至18和具体实施方式对本申请提出的多光谱成像系统、成像方法作进一步详细说明。根据下面说明,本申请的优点和特征将更清楚。需要说明的是,附图采用非常简化的形式且均使用非精准的比例,仅用以方便、明晰地辅助说明本申请实施方式的目的。为了使本申请的目的、特征和优点能够更加明显易懂,请参阅附图。须知,本说明书所附图式所绘示的结构、比例、大小等,均仅用以配合说明书所揭示的内容,以供熟悉此技术的人士了解与阅读,并非用以限定本申请实施的限定条件。任何结构的修饰、比例关系的改变或大小的调整,在与本申请所能产生的功效及所能达成的目的相同或近似的情况下,均应仍落在本申请所揭示的技术内容能涵盖的范围内。本文所公开的本申请的具体设计特征包括例如具体尺寸、方向、位置和外形将部分地由具体所要应用和使用的环境来确定。在以下说明的实施方式中,有时在不同的附图之间共同使用同一附图标记来表示相同部分或具有相同功能的部分,而省略其重复说明。在本说明书中,使用相似的标号和字母表示类似项,因此,一旦某一项在一个附图中被定义,则在随后的附图中不需要对其进行进一步讨论。另外,如果本文所述的方法包括一系列步骤,且本文所呈现的这些步骤的顺序并非必须是可执行这些步骤的唯一顺序,且一些所述的步骤可被省略和/或一些本文未描述的其他步骤可被添加到该方法。
需要说明的是,在本文中,诸如第一和第二等之类的关系术语仅仅用来将一个实体或者操作与另一个实体或操作区分开来,而不一定要求或者暗示这些实体或操作之间存在任何这种实际的关系或者顺序。而且,术语“包括”、“包含”或者其任何其他变体意在涵盖非排他性的包含,从而使得包括一系列要素的过程、方法、物品或者设备不仅包括那些要素,而且还包括没有明确列出的其他要素,或者是还包括为这种过程、方法、物品或者设备所固有的要素。在没有更多限制的情况下,由语句“包括一个……”限定的要素,并不 排除在包括所述要素的过程、方法、物品或者设备中还存在另外的相同要素。
本申请的主要目的在于提供一种多光谱成像系统、成像方法和存储介质,可以获得清晰的多波段融合图像。需要说明的是,虽然本申请是以将该多光谱成像系统、成像方法和存储介质应用于内窥镜上为例进行说明,但是如本领域技术人员所能理解的,所述多光谱成像系统、成像方法和存储介质还可以应用于其它具有成像功能的装置上,例如安检设备上,本申请对此并不进行限定。
需要说明的是,本申请实施方式的多光谱成像方法可应用于本申请实施方式的多光谱成像系统中。本申请中所称的近端是指靠近患者的一端,远端是指靠近操作者的一端。
为实现上述目的,本申请提供一种多光谱成像系统,请参考图1,其示意性地给出了本申请一实施方式提供的多光谱成像系统的方框结构示意图。如图1所示,所述多光谱成像系统包括照明模块100、镜头模块200、图像获取模块300和图像处理模块400。
其中,所述照明模块100用于分时向目标组织发射可见光和激励光,以使所述目标组织将所述可见光反射以及使所述目标组织受所述激励光激发而发射出荧光。具体地,所述可见光的波长可以为400-690nm,所述激励光的波长可以为780-820nm,所述荧光的波长可以为820-860nm。
需要说明的是,本申请对照明模块100的具体位置没有特别的限制。例如,当所述多光谱成像系统应用于内窥镜上时,所述照明模块100提供的可见光和激励光可以通过容纳于内窥镜的照明通道500中的连接器输送至内窥镜的末端并到达目标组织。所述连接器例如为光纤。由此,通过采用光纤进行可见光和激励光的传输,能够有助于形成均匀光场,提高成像质量。进一步地,请参考图2和图3,其中,图2示意性地给出了本申请一实施方式提供的内窥镜镜管的近端的截面示意图,图3示意性地给出了本申请一实施方式提供的内窥镜镜管的近端的立体示意图。如图2和图3所示,为了能够进一步形成均匀光场,所述照明通道500包括对称分布在所述镜头模块200两侧的两路连接器(例如,光纤)。
具体地,如图1所示,所述照明模块100包括光源单元110和照明控制器120。在所述照明控制器120的控制下,所述光源单元110能够分时向所述目标组织发射可见光和激励光。由此,操作者可以通过控制所述照明控制器120,使得所述光源单元110能够分时向所述目标组织发射可见光和激励光。
进一步地,如图1所示,所述光源单元110包括第一光源模组111和第二光源模组112,所述第一光源模组111用于向所述目标组织发射可见光,所述第二光源模组112用于向所述目标组织发射激励光。由此,当所述第一光源模组111开启,所述第二光源模组112关闭时,通过所述第一光源模组111能够向所述目标组织发射可见光;当所述第二光源模组112开启,所述第一光源模组111关闭时,通过所述第二光源模组112能够向所述目标组织发射激励光。进一步地,通过让所述第一光源模组111与所述第二光源模组112分时开启,即可通过所述光源单元110分时向所述目标组织发射可见光和激励光。
更进一步地,如图1所示,所述照明控制器120包括第一控制单元121、第二控制单元122和第三控制单元123;所述第一控制单元121用于控制所述第一光源模组111和所述第二光源模组112的输出能量强度;所述第二控制单元122用于控制所述第一光源模组111和所述第二光源模组112的开启与关闭;所述第三控制单元123用于控制所述第一光源模组111和所述第二光源模组112的开启频率。由此,通过所述第一控制单元121可以根据实际需要控制所述第一光源模组111输出的可见光的能量强度以及所述第二光源模组112输出的激励光的能量强度,以进一步提高成像质量。通过所述第二控制单元122可以控制所述第一光源模组111和所述第二光源模组112的工作状态,以使得所述光源单元110能够分时向所述目标组织发射可见光和激励光。通过所述第三控制单元123可以控制所述第一光源模组111和所述第二光源模组112的开启频率,以使得所述光源单元110能够以一定的频率分时向所述目标组织发射可见光和激励光。
所述镜头模块200用于分时采集经所述目标组织反射的可见光以及所述目标组织受激而发射的荧光。由此,当所述照明模块100向所述目标组织发射可见光时,通过所述镜 头模块200可以采集经所述目标组织反射的可见光;当所述照明模块100向所述目标组织发射激励光时,通过所述镜头模块200可以采集所述目标组织受激发所产生的荧光。
请参考图1至图4,其中,图4示意性地给出了本申请一实施方式提供的内窥镜镜管的近端的剖面示意图。如图1至图4所示,当所述内窥镜为三维内窥镜时,所述镜头模块200包括第一镜头210和第二镜头220,所述第一镜头210用于分时沿第一光路采集经所述目标组织反射的可见光以及所述目标组织受激而发射的荧光,所述第二镜头220用于分时沿第二光路采集经所述目标组织反射的可见光以及所述目标组织受激而发射的荧光。由此,当所述照明模块100向所述目标组织发射可见光时,通过所述第一镜头210可以沿第一光路采集经所述目标组织反射的可见光,通过所述第二镜头220可以沿第二光路采集经所述目标组织反射的可见光;当所述照明模块100向所述目标组织发射激励光时,通过所述第一镜头210可以沿第一光路采集所述目标组织受激发所产生的荧光,通过所述第二镜头220可以沿第二光路采集所述目标组织受激发所产生的荧光。
请参考图5和图6,其中,图5示意性地给出了本申请一实施方式提供的第一成像光路示意图;图6示意性地给出了本申请一实施方式提供的第二成像光路示意图。如图5所示,所述第一镜头210包括多个沿第一光路依次设置的第一透镜211。如图6所示,所述第二镜头220包括多个沿第二光路依次设置的第二透镜221。
所述图像获取模块300用于分时接收经所述镜头模块200采集的可见光和荧光,以获取可见光图像和荧光图像。具体地,所获取的多幅可见光图像可以可见光视频流的形式输出,所获取的多幅荧光图像可以荧光视频流的形式输出。由此,当所述照明模块100向所述目标组织发射可见光时,经所述目标组织反射的可见光可通过所述镜头模块200到达所述图像获取模块300处而被所述图像获取模块300所捕获,以获得可见光图像,所获得的可见光图像以可见光视频流的形式输出。当所述照明模块100向所述目标组织发射激励光时,所述目标组织受所述激励光激发发射出的荧光可通过所述镜头模块200到达所述图像获取模块300处而被所述图像获取模块300所捕获以获得荧光图像,所获得的荧光图像以荧光视频流的形式输出。由于所述图像获取模块300是分时获取可见光图像和荧光图像的,因此可以有效避免不同波段之间的相互干扰,有效消除成像噪声,使得所获取的可见光图像和荧光图像更加清晰,从而为后续获得清晰的多波段融合图像奠定良好的基础。此外,由于本申请提供的多光谱成像系统是采用分时控制成像的,因此,对于系统软件部分,可以大大提高控制的灵活性,对于系统硬件部分,极大地降低了系统硬件的复杂性,使得整个系统更加灵巧,更加便于内窥镜与微创手术机器人的系统整合。需要说明的是,在其它一些实施方式中,所获取的多幅可见光图像、多幅荧光图像也可不以视频流的形式输出,本申请对此并不进行限定。
具体地,所述图像获取模块300包括RGBNIR图像传感器(是指既能捕获可见光,以获取可见光图像,又能捕获近红外光(例如荧光),获取近红外光图像(例如荧光图像)的图像传感器)。所述RGBNIR图像传感器用于分时接收经所述镜头模块200采集的可见光和荧光以获取可见光图像和荧光图像,并可用于分别以可见光视频流和荧光视频流的形式输出所述可见光图像和荧光图像。请参考图7,其示意性地给出了本申请一实施方式提供的RGBNIR图像传感器对不同波段的光的量子效率示意图。如图7所示,所述RGBNIR图像传感器不仅对可见光具有较高的量子效率,对近红外波段的荧光也具有较高的量子效率。由此,本申请通过采用RGBNIR图像传感器获取可见光图像和荧光图像,可以大大提高所获取的可见光图像和荧光图像的图像质量,为获得清晰的多波段融合图像提供良好的基础。此外,本申请通过采用同一图像传感器分时获取可见光图像和荧光图像,可以进一步简化内窥镜的整体结构,降低成本。
进一步地,如图1和图4所示,当所述内窥镜为三维内窥镜时,所述图像获取模块300包括第一图像获取单元310和第二图像获取单元320。其中,所述第一图像获取单元310用于分时接收经所述第一镜头210采集的可见光和荧光,以获取第一可见光图像和第一荧光图像,并且可用于分别以第一可见光视频流和第一荧光视频流的形式输出所获取的 第一可见光图像和第一荧光图像。所述第二图像获取单元320用于分时接收经所述第二镜头采集的可见光和荧光,以获取第二可见光图像和第二荧光图像,并可用于分别以第二可见光视频流和第二荧光视频流的形式输出所获取的第二可见光图像和第二荧光图像。由此,当所述照明模块100向所述目标组织发射可见光时,经所述第一镜头210采集的目标组织反射的可见光可到达所述第一图像获取单元310处而被所述第一图像获取单元310所捕获,以获得第一可见光图像,并且所获得的第一可见光图像以第一可见光视频流的形式输出;经所述第二镜头220采集的目标组织反射的可见光可到达所述第二图像获取单元320处而被所述第二图像获取单元320所捕获,以获得第二可见光图像,并且所获得的第二可见光图像以第二可见光视频流的形式输出。当所述照明模块100向所述目标组织发射激励光时,经所述第一镜头210采集的所述目标组织受所述激励光激发产生的荧光可到达所述第一图像获取单元310处而被所述第一图像获取单元310所捕获,以获得第一荧光图像,并且所获得的第一荧光图像以第一荧光视频流的形式输出;经所述第二镜头220采集的所述目标组织受所述激励光激发产生的荧光可到达所述第二图像获取单元320处而被所述第二图像获取单元320所捕获,以获得第二荧光图像,并且所获得的第二荧光图像以第二荧光视频流的形式输出。
需强调的是,在本实施方式中以“第一”和“第二”命名的部件,不代表部件之间的先后顺序关系。例如,第一可见光场景图像,可能是内窥镜左侧的可见光场景图像,可能是内窥镜右侧的可见光场景图像。
请参考图8,其示意性地给出了本申请一实施方式提供的照明模块与图像获取模块之间的连接关系结构示意图。如图8所示,通过将所述照明模块100与所述图像获取模块300相连,可以将所述照明模块100的分时控制策略传输给所述图像获取模块300,以使得所述图像获取模块300能够同步进行数据的采集。具体地,所述第一图像获取单元310通过一信号传输线路与所述照明模块100中的照明控制器120相连,所述第二图像获取单元320通过另一信号传输线路与所述照明控制器120相连,由此,所述照明控制器120能够将分时控制策略同时传递给所述第一图像获取单元310和所述第二图像获取单元320,以使得所述第一图像获取单元310和所述第二图像获取单元320能够同步进行数据的采集(即,所述第一图像获取单元310和所述第二图像获取单元320的数据采集遵循照明模块100的分时控制策略)。
优选地,所述第一图像获取单元310和所述第二图像获取单元320均为RGBNIR图像传感器。由此,通过所述第一图像获取单元310可以分时获取清晰的第一可见光图像和第一荧光图像;通过所述第二图像获取单元320可以分时获取清晰的第二可见光图像和第二荧光图像。
请参考图1、图4、图5、图6和图9,其中,图9示意性地给出了本申请一实施方式提供的带通滤波模块的光谱图。如1、图4、图5、图6和图9所示,所述多光谱成像系统还包括带通滤波模块600,所述带通滤波模块600设于所述镜头模块200和所述图像获取模块300之间,所述带通滤波模块600用于允许所述可见光和所述荧光通过,并阻止除可见光波段和荧光波段以外的其它波段的光通过。对应的,所述图像获取模块300用于分时接收通过所述带通滤波模块600的可见光和荧光,以获取可见光图像和荧光图像,并用于分别以可见光视频流和荧光视频流的形式输出所获取的可见光图像和荧光图像。由此,本申请通过在所述镜头模块200和所述图像获取模块300之间设置带通滤波模块600,可以使得,当所述照明模块100向所述目标组织发射可见光时,只允许经所述目标组织反射的可见光依次通过所述镜头模块200、所述带通滤波模块600到达所述图像获取模块300处,而阻止其它波段的杂光到达所述图像获取模块300处,从而有效提高了输入至所述图像获取模块300的光信号的信噪比,进而提高了所述图像获取模块300所获取的可见光图像的图像质量;同理,当所述照明模块100向所述目标组织发射激励光时,所述带通滤波模块600只允许所述目标组织受激发产生的荧光依次通过所述镜头模块200、所述带通滤波模块600到达所述图像获取模块300处,而阻止其它波段的杂光到达所述图像获取模块300 处,从而有效提高了输入至所述图像获取模块300的光信号的信噪比,进而提高了所述图像获取模块300所获取的荧光图像的图像质量。
进一步地,如图1所示,所述多光谱成像系统还包括棱镜模块700。所述带通滤波模块600设于所述镜头模块200和所述棱镜模块700之间。所述图像获取模块300的感光面邻近于所述棱镜模块700的出光面。由此,分时通过所述带通滤波模块600的可见光和荧光可以依次经所述棱镜模块700的入光面、反射面和出光面到达所述图像获取模块300的感光面。具体地,在一些实施方式中,所述带通滤波模块600可直接采用胶水粘贴在所述棱镜模块700的入光面上或者所述镜头模块200的出光面上。如本领域技术人员所能理解的,在其它一些实施方式中,也可以通过直接在所述镜头模块200的出光面上镀膜以形成所述带通滤波模块600。
如图1、图4至图6所示,当所述内窥镜为三维内窥镜时,所述带通滤波模块600包括第一带通滤波器610和第二带通滤波器620,所述棱镜模块700包括第一棱镜710和第二棱镜720,所述第一带通滤波器610设于所述第一镜头210和所述第一棱镜710之间,所述第一棱镜710位于所述第一带通滤波器610和所述第一图像获取单元310之间,所述第二带通滤波器620位于所述第二镜头220和所述第二棱镜720之间,所述第二棱镜720位于所述第二带通滤波器620和所述第二图像获取单元320之间。由此,当所述照明模块100向所述目标组织发射可见光时,经所述第一镜头210采集的目标组织反射的可见光依次经所述第一带通滤波器610、所述第一棱镜710的入光面711、反射面712和出光面713,到达所述第一图像获取单元310的感光面311,以被所述第一图像获取单元310所捕获;经所述第二镜头220采集的目标组织反射的可见光依次经所述第二带通滤波器620、所述第二棱镜720的入光面721、反射面722和出光面723,到达所述第二图像获取单元320的感光面321,以被所述第二图像获取单元320所捕获。当所述照明模块100向所述目标组织发射激励光时,经所述第一镜头210采集的所述目标组织受所述激励光激发产生的荧光依次经所述第一带通滤波器610、所述第一棱镜710的入光面711、反射面712和出光面713,到达所述第一图像获取单元310的感光面311,以被所述第一图像获取单元310所捕获;经所述第二镜头220采集的所述目标组织受所述激励光激发产生的荧光依次经所述第二带通滤波器620、所述第二棱镜720的入光面721、反射面722和出光面723,到达所述第二图像获取单元320的感光面321,以被所述第二图像获取单元320所捕获。
需要说明的是,如本领域技术人员所能理解的,在一些实施方式中,所述第一带通滤波器610、所述第二带通滤波器620可均为单片的带通滤光片或多片(包括两片)的带通滤光片组。此时,所述第一带通滤波器610可直接采用胶水粘贴在所述第一棱镜710的入光面711上或者所述第一镜头210的出光面212上;所述第二带通滤波器620可以直接采用胶水粘贴在所述第二棱镜720的入光面721上或者所述第二镜头220的出光面222上。在其它一些实施方式中,可以通过直接在所述第一镜头210的出光面212上镀膜以形成所述第一带通滤波器610,以及直接在所述第二镜头220的出光面222上镀膜以形成所述第二带通滤波器620。
所述图像处理模块400用于对所述可见光图像以及所述荧光图像进行处理,以获取融合图像。进一步地,所述图像处理模块400用于对所述可见光图像以及所述荧光图像进行图像信号处理,并将经图像信号处理后的所述可见光图像、所述荧光图像进行融合,以获取融合图像。具体地,所述图像处理模块400用于对所述可见光视频流中的各帧可见光图像以及所述荧光视频流中的各帧荧光图像进行图像信号处理,并将经图像信号处理后的对应帧的所述可见光图像、所述荧光图像进行融合,以获取融合图像,并以视频流的形式输出所获取的融合图像。由于所述可见光图像和所述荧光图像是分时获取的,有效消除了多波段之间的相互干扰,成像噪声较小,由此所获取的可见光图像和荧光图像均较清晰。而且,在将对应帧的所述可见光图像和所述荧光图像进行融合之前,先对所述可见光图像和所述荧光图像进行图像信号处理,不仅可以将所述可见光图像和所述荧光图像转换成人眼可见的格式,还可以进一步提高所述可见光图像和所述荧光图像的清晰度,从而大大提高 了最终获得的融合图像的清晰度。由此,所述融合图像可以使得医生能够区分不同组织状态,清晰地看到病灶与正常组织的区别,且本申请提供的多光谱成像系统所获得的融合图像的细节更清晰,从而在切割组织时更加准确、更加安全。图像信号处理是指将图像传感器输出的Bayer格式的原始图像经过一系列的处理过程转换成YUV(或者RGB)格式的图像,以将图像传感器输出的图像转换成人眼可看的图像。
具体地,请参考图10,其示意性地给出了本申请一实施方式提供的图像处理模块的方框结构示意图。如图10所示,所述图像处理模块400包括可见光图像处理单元410、荧光图像处理单元420、二值化处理单元430和图像融合单元440。
其中,所述可见光图像处理单元410用于对所述可见光图像进行第一图像信号处理。具体地,所述可见光图像处理单元410用于对所述可见光视频流中的各帧可见光图像进行第一图像信号处理。请参考图11,其示意性地给出了本申请一实施方式提供的可见光图像处理单元的工作流程示意图。如图11所示,所述可见光图像处理单元410具体用于对所述可见光图像依次进行暗电流处理(黑电平校正)、坏点处理、镜头校正/增益、颜色插补、色差还原、伽马校正、降噪/锐化等处理。由此,通过所述可见光图像处理单元410对所述可见光图像进行的第一图像信号处理,可以获得细节清晰的可见光图像,以为后续获取细节清晰的多波段融合图像奠定良好的基础。
具体地,在进行暗电流处理时,可以将可见光图像中的像素区的头几行作为不感光区,用于自动黑电平校正,即将像素区的头几行的电平平均值作为校正值,然后将后续区域的像素的电平值都减去此校正值,便可以将黑电平校正过来了。
坏点,是指像素阵列中与周围像素点的变化表现出明显不同的像素。坏点一般分为三类:第一类是死点,即一直表现为最暗值的点;第二类是亮点,即一直表现为最亮值的点;第三类是漂移点,就是变化规律与周围像素明显不同的像素点。通过在颜色插补之前进行坏点处理,可以有效防止坏点随着颜色插补的过程往外扩散。
由于相机在成像距离较远时,随着视场角慢慢增大,能够通过相机镜头的倾斜光束慢慢减少,从而使得获得的图像中间比较亮,边缘比较暗。这个现象就是光学系统中的渐晕。由此,本申请通过对所述可见光图像进行镜头校正,可以有效消除渐晕带来的图像亮度不均对后续处理的影响。镜头校正的具体实现方法是:首先确定可见光图像中间亮度比较均匀的区域,该区域的像素不需要做矫正;以这个区域为中心,计算出各点由于衰减带来的图像变暗的速度,这样就可以计算出相应R、G、B通道的补偿因子(即增益)。
可见光中主要包含三种颜色信息,即R、G、B。由于像素只能感应光的亮度,不能感应光的强度,为了减小硬件和资源的消耗,需要使用一个滤光层,以使得每个像素点只能感应到一种颜色的光。者使得需要复原该像素点的其它两个通道的信息。寻找该像素点的另外两个通道的值的过程就是颜色插补的过程。由于图像是连续变化的,因此一个像素点的R、G、B的值应该是与周围的像素点相联系的,因此可以利用其周围像素点的值来获得该点的其它两个通道的值。在本实施方式中,可以利用该像素点周围像素的平均值来计算该点的插补值。
色彩还原主要是为了校正在滤光板各颜色块之间的颜色渗透带来的颜色误差,以获得最接近于物体(目标组织)真实颜色的图像。在本实施方式中,可以利用所述图像获取模块300的颜色校正矩阵(通过将图像获取模块300拍摄到的图像与标准图像进行比较,以计算得到颜色校正矩阵)来对所述可见光图像进行颜色校正。
伽马校正是对图像的伽马曲线进行编辑,以检出图像信号中的深色部分和浅色部分,并使两者的比例增大,从而提高图像对比度,以对图像进行非线性色调编辑。当用于校正的伽马值大于1时,图像较亮的部分被压缩,较暗的部分被扩展;而伽马值小于1时,情况则刚好相反。在本实施方式中,可以采用查表法来实现伽马校正,即首先根据一个伽马值,将不同亮度范围的理想输出值在查表中设定好,在处理可见光图像时,只需要根据输入的亮度,即可以得到其理想的输出值。
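A lookup-table implementation of the gamma correction described in this paragraph can be written in a few lines of Python (a minimal sketch; the gamma value is an assumed example, and a real pipeline would derive it from display characteristics or tuning data):

```python
import numpy as np

# Build the 256-entry table once from the chosen gamma value,
# then correct each frame by direct table lookup.
gamma = 2.2  # assumed example value
lut = np.array([255.0 * (i / 255.0) ** (1.0 / gamma) for i in range(256)],
               dtype=np.uint8)

def gamma_correct(frame_u8: np.ndarray) -> np.ndarray:
    """Map every pixel of a uint8 image through the precomputed curve."""
    return lut[frame_u8]
```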
通过对所述可见光图像进行降噪处理,可以有效消除所述可见光图像中的各种噪声。 具体地,在本实施方式中可以采用滤波器对所述可见光图像进行滤波处理,以消除所述可见光图像中的噪声。由于在降噪的同时,会把一些图像细节给消除,导致图像不够清晰,因此,为了消除降噪过程中对图像细节的损失,本实施方式在对可见光图像进行降噪处理后,还会对经降噪处理后的所述可见光图像进行锐化处理,以还原图像的相关细节。
所述荧光图像处理单元420用于对所述荧光图像进行第二图像信号处理。具体地,所述荧光图像处理单元420用于对所述荧光视频流中的各帧荧光图像进行第二图像信号处理。请参考图12,其示意性地给出了本申请一实施方式提供的荧光图像处理单元的工作流程示意图。如图12所示,所述荧光图像处理单元420具体用于对所述荧光图像依次进行暗电流处理、坏点处理、镜头校正、伽马校正、降噪/锐化等处理。由于荧光图像为灰度图像,因此与所述可见光图像处理单元410对所述可见光图像所做的图像信号处理过程相比,所述荧光图像处理单元420不需要对所述荧光图像进行颜色插值、色彩还原等处理。由此,通过对所述荧光图像进行第二图像信号处理,可以获得细节更加清晰的荧光图像,以为后续获取细节清晰的多波段融合图像奠定良好的基础。
所述二值化处理单元430用于对经第二图像信号处理后的所述荧光图像进行二值化处理,以获取对应的掩膜。具体地,所述二值化处理单元430用于对经第二图像信号处理后的各帧荧光图像进行二值化处理,以获取对应的掩膜。具体地,在本实施方式中可以采用最大类间方法(OSTU)、迭代阈值法、P分位法、基于最小误差的全局阈值法、局部阈值法、全局阈值与局部阈值相结合的方法等分割方法对经第二图像信号处理后的所述荧光图像进行二值化处理,以获取对应的掩膜。
所述图像融合单元440用于将所述掩膜与经第一图像信号处理后的所述可见光图像进行融合,以获取融合图像。具体地,所述图像融合单元440用于将各所述掩膜与对应帧的经第一图像信号处理后的所述可见光图像进行融合,以获取融合图像,并用于以视频流的形式输出所获取的融合图像。由于所述掩膜能够清楚地反映出病灶组织,因此,将所述掩膜与所述可见光图像进行融合以获取融合图像。通过所述融合图像,可以有利于医生准确区分病灶组织区域与正常组织区域。
具体地,所述图像融合单元440用于对所述可见光图像中的与所述掩膜中的像素值不为0的像素点对应的像素点进行着色标识,以获取融合图像。所述掩膜中的像素值为0的像素点对应目标组织的正常组织区域;所述掩膜中的像素值不为0(即白色区域)的像素点对应目标组织的病灶组织区域。通过对所述可见光图像中的与所述掩膜中的像素值不为0的像素点对应的像素点进行着色标识,可以准确地在所述可见光图像中标识出病灶组织区域,从而使得最终所获得的融合图像中能够明显区分出正常组织区域与病灶组织区域。
如图1所示,当所述内窥镜为三维内窥镜时,所述图像处理模块400包括第一图像处理单元450、第二图像处理单元460和叠加单元470;所述第一图像处理单元450用于对所述第一可见光图像以及所述第一荧光图像进行处理,以获取第一融合图像。进一步地,所述第一图像处理单元450用于对所述第一可见光图像以及所述第一荧光图像进行图像信号处理,并将经图像信号处理后的对应帧的所述第一可见光图像、所述第一荧光图像进行融合,以获取第一融合图像。具体地,所述第一图像处理单元450用于对所述第一可见光视频流中的各帧第一可见光图像以及所述第一荧光视频流中的各帧第一荧光图像进行图像信号处理;将经图像信号处理后的对应帧的所述第一可见光图像、所述第一荧光图像进行融合,以获取第一融合图像;以及以第一视频流的形式输出该第一融合图像。所述第二图像处理单元460用于对所述第二可见光图像以及所述第二荧光图像进行处理,以获取第二融合图像。进一步地,所述第二图像处理单元460用于对所述第二可见光图像以及所述第二荧光图像进行图像信号处理,并将经图像信号处理后的所述第二可见光图像、所述第二荧光图像进行融合,以获取第二融合图像。具体地,所述第二图像处理单元460用于对所述第二可见光视频流中的各帧第二可见光图像以及所述第二荧光视频流中的各帧第二荧光图像进行图像信号处理;将经图像信号处理后的对应帧的所述第二可见光图像、所述第二荧光图像进行融合,以获取第二融合图像;以及以第二视频流的形式输出该第二融合 图像;所述叠加单元470用于对所述第一融合图像和所述第二融合图像进行配准,并将配准后的所述第一融合图像和所述第二融合图像进行叠加,以生成三维图像并输出该三维图像。具体地,所述叠加单元470用于对所述第一视频流和所述第二视频流进行配准,并将配准后的所述第一视频流和所述第二视频流进行叠加以生成三维视频流并输出该三维图像。由此,通过所述叠加单元470对所述第一融合图像和所述第二融合图像进行配准,并将配准后的所述第一融合图像和所述第二融合图像进行叠加,以生成三维图像,并将该三维图像输出至外科医生的控制台中的显示器上予以显示,所述三维图像可以使得医生能够看到手术视野中的目标组织的三维立体信息,从而为医生提供更真实、更清晰的视觉效果。这更加有利于医生进行手术判断、对器械的准确控制,极大地提高了手术效率和手术过程中的安全性。
具体地,请参考图13,其示意性地给出了本申请一实施方式提供的第一图像处理单元的方框结构示意图。如图13所示,所述第一图像处理单元450包括第一可见光图像处理单元410a、第一荧光图像处理单元420a、第一二值化处理单元430a和第一图像融合单元440a。所述第一可见光图像处理单元410a用于对所述第一可见光视频流中的各帧第一可见光图像进行第一图像信号处理;所述第一荧光图像处理单元420a用于对所述第一荧光视频流中的各帧第一荧光图像进行第二图像信号处理;所述第一二值化处理单元430a用于对经第二图像信号处理后的各帧第一荧光图像进行二值化处理,以获取(与各帧第一荧光图像)对应的第一掩膜;所述第一图像融合单元440a用于将各所述第一掩膜与对应帧的经第一图像信号处理后的所述第一可见光图像进行融合,以获取第一融合图像,并用于以第一视频流的形式输出该第一融合图像。其中,所述第一可见光图像处理单元410a和下文所述的第二可见光图像处理单元410b构成上文所述的可见光处理单元410;所述第一荧光图像处理单元420a和下文所述的第二荧光图像处理单元420b构成上文所述的荧光处理单元420;所述第一二值化处理单元430a和下文所述的第二二值化处理单元430b构成上文所述的二值化处理单元430;所述第一图像融合单元440a和下文所述的第二图像融合单元440b构成上文所述的图像融合单元440。
进一步地,所述第一图像融合单元440a用于对所述第一可见光图像中的与所述第一掩膜中的像素值不为0的像素点对应的像素点进行着色标识,以获取第一融合图像。
请继续参考图14,其示意性地给出了本申请一实施方式提供的第二图像处理单元的方框结构示意图。如图14所示,所述第二图像处理单元460包括第二可见光图像处理单元410b、第二荧光图像处理单元420b、第二二值化处理单元430b和第二图像融合单元440b。所述第二可见光图像处理单元410b用于对所述第二可见光视频流中的各帧第二可见光图像进行第二图像信号处理;所述第二荧光图像处理单元420b用于对所述第二荧光视频流中的各帧第二荧光图像进行第二图像信号处理;所述第二二值化处理单元430b用于对经第二图像信号处理后的各帧第二荧光图像进行二值化处理,以获取(与各帧第二荧光图像)对应的第二掩膜;所述第二图像融合单元440b用于将各所述第二掩膜与对应帧的经第一图像信号处理后的所述第二可见光图像进行融合,以获取第二融合图像,并用于以第二视频流的形式输出该第二融合图像。
进一步地,所述第二图像融合单元440b用于对所述第二可见光图像中的与所述第二掩膜中的像素值不为0的像素点对应的像素点进行着色标识。
具体地,可以将所述第一视频流中的各帧第一融合图像与所述第二视频流中的对应帧的第二融合图像进行配准,以生成带有视差信息的第一视频流和第二视频流;通过将带有视差信息的第一视频流和第二视频流进行叠加,即可生成三维视频流。
所述叠加单元470根据外科医生的控制台中的显示器的三维显示需要,可以将所述三维视频流配置成不同格式。请参考图15,其示意性地给出了本申请一实施方式提供的第一视频流与第二视频流的叠加示意图。如图15所示,在本实施方式中,所述叠加单元470可以将所述第一视频流810和所述第二视频流820叠加成偏振式隔行扫描交错格式的三维视频流输出。请参考图16,其示意性地给出了本申请另一实施方式提供的第一视频流810 与第二视频流820的叠加示意图。如图16所示,在本实施方式中,所述叠加单元470可以将所述第一视频流810和所述第二视频流820叠加成偏振式逐行左右扫描交错格式的三维视频流输出。需要说明的是,虽然图15和图16中,是以左视场下的视频流为第一视频流810,右视场下的视频流为第二视频流820为例进行说明,但是,如本领域技术人员所能理解的,在其它一些实施方式中,所述第一视频流810也可为右视场下的视频流,所述第二视频流820也可为左视场下的视频流,本申请对此并不进行限制。
与上述的多光谱成像系统相对应,本申请还提供一种多光谱成像方法,请参考图17,示意性地给出了本申请一实施方式的多光谱成像方法的流程示意图,如图17所示,所述多光谱成像方法包括如下步骤:
步骤S100、分时向目标组织发射可见光和激励光,以使所述目标组织将所述可见光反射以及使所述目标组织受所述激励光激发而发射出荧光;
步骤S200、分时接收经所述目标组织反射的可见光以及所述目标组织受激而发射的荧光,以获取可见光图像和荧光图像;
步骤S300、对所述可见光图像以及所述荧光图像进行处理,以获取融合图像。
可见,本申请提供的多光谱成像方法,通过分时向目标组织发射可见光和激励光,以分时获取可见光图像和荧光图像,并对所述可见光图像和所述荧光图像进行处理,以获取多波段融合图像,从而能够通过所获取的多波段融合图像区分不同组织状态,这使得医生能够观察到单一波段条件下无法观察到的组织信息。通过所述多波段融合图像,医生可以清晰地看到病灶与正常组织的区别,且可以更加清晰地看到该区别的细节,使得可以更加准确、更加安全地切割组织。
请继续参考图18,其示意性地给出了本申请一实施方式提供的图像融合流程示意图。如图18所示,所述对所述可见光图像以及所述荧光图像进行处理,以获取融合图像,具体包括如下过程:
对所述可见光图像进行第一图像信号处理以及对所述荧光图像进行第二图像信号处理;
对经第二图像信号处理后的所述荧光图像进行二值化处理,以获取对应的掩膜;
将所述掩膜与经第一图像信号处理后的所述可见光图像进行融合,以获取融合图像。
通过所述掩膜能够清楚地反映出病灶组织。通过将所述掩膜与所述可见光图像融合以获取融合图像,所获取的融合图像可以有利于医生准确区分病灶组织区域与正常组织区域。
具体地,所述将所述掩膜与经第一图像信号处理后的所述可见光图像进行融合,以获取融合图像,包括:对所述可见光图像中的与所述掩膜中的像素值不为0的像素点对应的像素点进行着色标识,以获取融合图像。所述掩膜中的像素值为0的像素点对应目标组织的正常组织区域;所述掩膜中的像素值不为0的像素点对应目标组织的病灶组织区域。因此,通过对所述可见光图像中的与所述掩膜中的像素值不为0的像素点对应的像素点进行着色标识,可以准确地在所述可见光图像中标识出病灶组织区域,这使得能够在最终所获得的融合图像中明显区分出正常组织区域与病灶组织区域。
进一步地,所述分时接收经所述目标组织反射的可见光以及所述目标组织受激而发射的荧光,以获取可见光图像和荧光图像,包括:
分别沿第一光路和第二光路分时接收经所述目标组织反射的可见光以及所述目标组织受激而发射的荧光,以获取第一可见光图像、第二可见光图像、第一荧光图像和第二荧光图像;
所述对所述可见光图像以及所述荧光图像进行处理,以获取融合图像,包括:
对所述第一可见光图像和所述第一荧光图像进行处理,以获取第一融合图像,以及对所述第二可见光图像和所述第二荧光图像进行处理,以获第二融合图像;
在获取第一融合图像和第二融合图像后,所述多光谱成像方法还包括:
对所述第一融合图像和所述第二融合图像进行配准,并将配准后的所述第一融合图像 和所述第二融合图像进行叠加,以生成三维图像并输出所述三维图像。
通过获取不同视角下的第一融合图像和所述第二融合图像，并对所获取的所述第一融合图像和所述第二融合图像进行配准与叠加，可以生成三维图像。生成的三维图像可输出至外科医生的控制台中的显示器上予以显示。通过所述三维图像，可以使得医生能够看到手术视野中目标组织的三维立体信息，为医生提供更真实、更清晰的视觉效果，更加有利于医生进行手术判断、对器械的准确控制，极大地提高了手术效率和手术过程中的安全性。
基于同一发明构思,本申请还提供了一种可读存储介质,所述可读存储介质内存储有计算机程序,所述计算机程序被处理器执行时可以实现上文所述的基于内窥镜的多光谱成像方法。由于本申请提供的可读存储介质内存储的计算机程序被处理器执行时能够实现上文所述的多光谱成像方法,因此本申请提供的可读存储介质具有上文所述的基于内窥镜的多光谱成像方法的所有优点,故对此不再进行一一赘述。
本申请实施方式的可读存储介质,可以采用一个或多个计算机可读的介质的任意组合。可读介质可以是计算机可读信号介质或者计算机可读存储介质。计算机可读存储介质例如可以是但不限于电、磁、光、电磁、红外线或半导体的系统、装置或器件,或者任意以上的组合。计算机可读存储介质的更具体的例子(非穷举的列表)包括:具有一个或多个导线的电连接、便携式计算机硬盘、硬盘、随机存取存储器(RAM)、只读存储器(ROM)、可擦式可编程只读存储器(EPROM或闪存)、光纤、便携式紧凑磁盘只读存储器(CD-ROM)、光存储器件、磁存储器件、或者上述的任意合适的组合。在本文中,计算机可读存储介质可以是任何包含或存储程序的有形介质,该程序可以被指令执行系统、装置或者器件使用或者与其组合使用。
计算机可读的信号介质可以包括在基带中或者作为载波一部分传播的数据信号,其中承载了计算机可读的程序代码。这种传播的数据信号可以采用多种形式,包括但不限于电磁信号、光信号或上述的任意合适的组合。计算机可读的信号介质还可以是计算机可读存储介质以外的任何计算机可读介质,该计算机可读介质可以发送、传播或者传输用于由指令执行系统、装置或者器件使用或者与其结合使用的程序。
可以以一种或多种程序设计语言或其组合来编写用于执行本申请操作的计算机程序代码,所述程序设计语言包括面向对象的程序设计语言-诸如Java、Smalltalk、C++,还包括常规的过程式程序设计语言-诸如“C”语言或类似的程序设计语言。程序代码可以完全地在用户计算机上执行、部分地在用户计算机上执行、作为一个独立的软件包执行、部分在用户计算机上部分在远程计算机上执行、或者完全在远程计算机或服务器上执行。在涉及远程计算机的情形中,远程计算机可以通过任意种类的网络——包括局域网(LAN)或广域网(WAN)连接到用户计算机,或者可以连接到外部计算机(例如利用因特网服务提供商来通过因特网连接)。
综上所述,与现有技术相比,本申请提供的多光谱成像系统、成像方法和存储介质具有以下优点:
(1)本申请通过分时向目标组织发射可见光和激励光,以分时获取可见光图像和荧光图像,再对所述可见光图像和所述荧光图像进行处理,以获取多波段融合图像。由此,所获取的多波段融合图像可以区分不同组织状态,使得医生能够观察到单一波段条件下无法观察到的组织信息。通过所述多波段融合图像,医生可以清晰地看到病灶与正常组织的区别,且可以更加清晰地看到该区别的细节,使得可以更加准确、更加安全地切割组织。
(2)本申请采用带通滤波模块进行杂波阻止(滤除),使仅可见光和荧光通过,从而可以有效提高输入信号的信噪比,进一步提高了所获取的多波段融合图像的图像质量。
(3)本申请通过采用近红外波段QE(量子效率)较高的RGBNIR图像传感器,分时采集可见光图像和荧光图像,可以获取高质量的可见光图像和荧光图像,进一步提高了所获取的多波段融合图像的图像质量。
(4)本申请通过获取三维图像,可以使得医生能够看到手术视野中目标组织的三维 立体信息,从而提供医生更真实、更清晰的视觉效果,更加有利于医生进行手术判断、器械的准确控制,极大地提高了手术效率和手术过程中的安全性。
(5)本申请采用的分时控制成像系统,对于系统软件部分,可以提高控制的灵活性;对于系统硬件部分,极大地降低了系统硬件的复杂性,使得整个系统更加灵巧,更加便于内窥镜与微创手术机器人的系统整合。
应当注意的是，在本文的实施方式中所揭露的装置和方法，也可以通过其他的方式实现。以上所描述的装置实施方式仅仅是示意性的，例如，附图中的流程图和框图显示了根据本文的多个实施方式的装置、方法和计算机程序产品可能实现的体系架构、功能和操作。在这点上，流程图或框图中的每个方框可以代表一个模块、程序或代码的一部分，所述模块、程序段或代码的一部分包含一个或多个用于实现规定的逻辑功能的可执行指令。也应当注意，在有些作为替换的实现方式中，方框中所标注的功能也可以以不同于附图中所标注的顺序发生。例如，两个连续的方框实际上可以基本并行地执行，它们有时也可以按相反的顺序执行，这依所涉及的功能而定。也要注意的是，框图和/或流程图中的每个方框、以及框图和/或流程图中的方框的组合，可以用于执行规定的功能或动作的专用的基于硬件的系统来实现，或者可以用专用硬件与计算机指令的组合来实现。
另外,在本文各个实施方式中的各功能模块可以集成在一起形成一个独立的部分,也可以是各个模块单独存在,也可以两个或两个以上模块集成形成一个独立的部分。
此外,在本说明书的描述中,参考术语“一个实施方式”、“一些实施方式”、“示例”、“具体示例”、或“一些示例”等的描述意指结合该实施方式或示例描述的具体特征、结构、材料或者特点包含于本申请的至少一个实施方式或示例中。在本说明书中,对上述术语的示意性表述不必须针对的是相同的实施方式或示例。而且,描述的具体特征、结构、材料或者特点可以在任一个或多个实施方式或示例中以合适的方式结合。此外,在不相互矛盾的情况下,本领域的技术人员可以将本说明书中描述的不同实施方式或示例以及不同实施方式或示例的特征进行结合和组合。
上述描述仅是对本申请较佳实施方式的描述,并非对本申请范围的任何限定,本申请领域的普通技术人员根据上述揭示内容做的任何变更、修饰,均属于本申请的保护范围。显然,本领域的技术人员可以对本申请进行各种改动和变型而不脱离本申请的精神和范围。这样,倘若这些修改和变型属于本申请及其等同技术的范围之内,则本申请也意图包括这些改动和变型在内。

Claims (16)

  1. 一种多光谱成像系统,其特征在于,包括:照明模块、镜头模块、图像获取模块和图像处理模块;
    所述照明模块用于分时向目标组织发射可见光和激励光,以使所述目标组织将所述可见光反射以及使所述目标组织受所述激励光激发而发射出荧光;
    所述镜头模块用于分时采集经所述目标组织反射的可见光以及所述目标组织受激而发射的荧光;
    所述图像获取模块用于分时接收所述镜头模块采集的可见光和荧光,以获取可见光图像和荧光图像;
    所述图像处理模块用于对所述可见光图像以及所述荧光图像进行处理,以获取融合图像。
  2. 根据权利要求1所述的多光谱成像系统,其特征在于,所述图像获取模块用于将所述可见光图像和所述荧光图像分别以可见光视频流和荧光视频流的形式输出;
    所述图像处理模块用于对所述可见光视频流中的各帧可见光图像以及所述荧光视频流中的各帧荧光图像进行图像信号处理,并将经所述图像信号处理后的对应帧的所述可见光图像、所述荧光图像进行融合,以获取融合图像,并将所述融合图像以视频流的形式输出。
  3. 根据权利要求1所述的多光谱成像系统,其特征在于,所述多光谱成像系统包括带通滤波模块,所述带通滤波模块设于所述镜头模块和所述图像获取模块之间,所述带通滤波模块用于允许所述可见光和所述荧光通过,并阻止除可见光波段和荧光波段以外的其它波段的光通过;
    所述图像获取模块用于分时接收通过所述带通滤波模块的可见光和荧光,以获取可见光图像和荧光图像。
  4. 根据权利要求3所述的多光谱成像系统,其特征在于,所述多光谱成像系统包括棱镜模块,所述带通滤波模块位于所述镜头模块和所述棱镜模块之间。
  5. 根据权利要求1所述的多光谱成像系统,其特征在于,所述图像获取模块包括RGBNIR图像传感器,所述RGBNIR图像传感器用于分时接收所述镜头模块采集的可见光和荧光,以获取可见光图像和荧光图像。
  6. 根据权利要求1所述的多光谱成像系统,其特征在于,所述照明模块包括光源单元和照明控制器,其中,在所述照明控制器的控制下,所述光源单元能够分时向所述目标组织发射可见光和激励光。
  7. 根据权利要求6所述的多光谱成像系统,其特征在于,所述光源单元包括第一光源模组和第二光源模组,所述第一光源模组用于向所述目标组织发射可见光,所述第二光源模组用于向所述目标组织发射激励光。
  8. 根据权利要求7所述的多光谱成像系统,其特征在于,所述照明控制器包括第一控制单元、第二控制单元和第三控制单元;
    所述第一控制单元用于控制所述第一光源模组和所述第二光源模组的输出能量强度;
    所述第二控制单元用于控制所述第一光源模组和所述第二光源模组的开启与关闭;
    所述第三控制单元用于控制所述第一光源模组和所述第二光源模组的开启频率。
  9. 根据权利要求1所述的多光谱成像系统,其特征在于,所述图像处理模块包括可见光图像处理单元、荧光图像处理单元、二值化处理单元和图像融合单元;
    所述可见光图像处理单元用于对所述可见光图像进行第一图像信号处理;
    所述荧光图像处理单元用于对所述荧光图像进行第二图像信号处理;
    所述二值化处理单元用于对经所述第二图像信号处理后的所述荧光图像进行二值化处理,以获取对应的掩膜;
    所述图像融合单元用于将所述掩膜与经所述第一图像信号处理后的所述可见光图像进行融合,以获取融合图像。
  10. 根据权利要求9所述的多光谱成像系统,其特征在于,所述图像融合单元用于对经所述第一图像信号处理后的所述可见光图像中的与所述掩膜中的像素值不为0的像素点对应的像素点进行着色标识,以获取融合图像。
  11. 根据权利要求1所述的多光谱成像系统,其特征在于,所述镜头模块包括第一镜头和第二镜头,所述第一镜头用于分时沿第一光路采集经所述目标组织反射的可见光以及所述目标组织受激而发射的荧光,所述第二镜头用于分时沿第二光路采集经所述目标组织反射的可见光以及所述目标组织受激而发射的荧光;
    所述图像获取模块包括第一图像获取单元和第二图像获取单元,所述第一图像获取单元用于分时接收所述第一镜头采集的可见光和荧光,以获取第一可见光图像和第一荧光图像,所述第二图像获取单元用于分时接收所述第二镜头采集的可见光和荧光,以获取第二可见光图像和第二荧光图像;
    所述图像处理模块包括第一图像处理单元、第二图像处理单元和叠加单元;所述第一图像处理单元用于对所述第一可见光图像以及所述第一荧光图像进行处理,以获取第一融合图像;所述第二图像处理单元用于对所述第二可见光图像以及所述第二荧光图像进行处理,以获取第二融合图像;所述叠加单元用于对所述第一融合图像和所述第二融合图像进行配准,并将配准后的所述第一融合图像和所述第二融合图像进行叠加,以生成三维图像并输出所述三维图像。
  12. 一种多光谱成像方法,其特征在于,包括:
    分时向目标组织发射可见光和激励光,以使所述目标组织将所述可见光反射以及使所述目标组织受所述激励光激发而发射出荧光;
    分时接收经所述目标组织反射的可见光以及所述目标组织受激而发射的荧光,以获取可见光图像和荧光图像;
    对所述可见光图像以及所述荧光图像进行处理,以获取融合图像。
  13. 根据权利要求12所述的多光谱成像方法,其特征在于,所述对所述可见光图像以及所述荧光图像进行处理,以获取融合图像,包括:
    对所述可见光图像进行第一图像信号处理以及对所述荧光图像进行第二图像信号处理;
    对经所述第二图像信号处理后的所述荧光图像进行二值化处理,以获取对应的掩膜;
    将所述掩膜与经所述第一图像信号处理后的所述可见光图像进行融合,以获取融合图像。
  14. 根据权利要求13所述的多光谱成像方法,其特征在于,所述将所述掩膜与经所述第一图像信号处理后的所述可见光图像进行融合,以获取融合图像,包括:
    对经所述第一图像信号处理后的所述可见光图像中的与所述掩膜中的像素值不为0的像素点对应的像素点进行着色标识,以获取融合图像。
  15. 根据权利要求14所述的多光谱成像方法,其特征在于,所述分时接收经所述目标组织反射的可见光以及所述目标组织受激而发射的荧光,以获取可见光图像和荧光图像,包括:
    分别沿第一光路和第二光路分时接收经所述目标组织反射的可见光以及所述目标组 织受激而发射的荧光,以获取第一可见光图像、第二可见光图像、第一荧光图像和第二荧光图像;
    所述对所述可见光图像以及所述荧光图像进行处理,以获取融合图像,包括:
    对所述第一可见光图像和所述第一荧光图像进行处理,以获取第一融合图像,以及对所述第二可见光图像和所述第二荧光图像进行处理,以获第二融合图像;
    在获取所述第一融合图像和所述第二融合图像后,所述多光谱成像方法还包括:
    对所述第一融合图像和所述第二融合图像进行配准,并将配准后的所述第一融合图像和所述第二融合图像进行叠加,以生成三维图像并输出所述三维图像。
  16. 一种可读存储介质,其特征在于,所述可读存储介质内存储有计算机程序,所述计算机程序被处理器执行时用于实现如权利要求12至15中的任一项所述的多光谱成像方法。
PCT/CN2022/097521 2021-06-07 2022-06-08 多光谱成像系统、成像方法和存储介质 WO2022257946A1 (zh)

Applications Claiming Priority (2)

Application Number Priority Date Filing Date Title
CN202110633622.8A CN113208567A (zh) 2021-06-07 2021-06-07 多光谱成像系统、成像方法和存储介质
CN202110633622.8 2021-06-07

Publications (1)

Publication Number Publication Date
WO2022257946A1 true WO2022257946A1 (zh) 2022-12-15

Family

ID=77083337

Family Applications (1)

Application Number Title Priority Date Filing Date
PCT/CN2022/097521 WO2022257946A1 (zh) 2021-06-07 2022-06-08 多光谱成像系统、成像方法和存储介质

Country Status (2)

Country Link
CN (1) CN113208567A (zh)
WO (1) WO2022257946A1 (zh)

Cited By (1)

* Cited by examiner, † Cited by third party
Publication number Priority date Publication date Assignee Title
CN117152362A (zh) * 2023-10-27 2023-12-01 深圳市中安视达科技有限公司 内窥镜多光谱的多路成像方法、装置、设备及存储介质

Families Citing this family (5)

* Cited by examiner, † Cited by third party
Publication number Priority date Publication date Assignee Title
CN113208567A (zh) * 2021-06-07 2021-08-06 上海微创医疗机器人(集团)股份有限公司 多光谱成像系统、成像方法和存储介质
CN113610823B (zh) * 2021-08-13 2023-08-22 南京诺源医疗器械有限公司 图像处理方法、装置、电子设备及存储介质
CN113693724B (zh) * 2021-08-19 2022-10-14 南京诺源医疗器械有限公司 适用于荧光影像导航手术的照射方法、装置及存储介质
CN114298980A (zh) * 2021-12-09 2022-04-08 杭州海康慧影科技有限公司 一种图像处理方法、装置及设备
CN115719415B (zh) * 2022-03-28 2023-11-10 南京诺源医疗器械有限公司 一种视野可调双视频融合成像方法及系统

Citations (10)

* Cited by examiner, † Cited by third party
Publication number Priority date Publication date Assignee Title
JP2009226072A (ja) * 2008-03-24 2009-10-08 Fujifilm Corp 手術支援方法及び装置
CN102076259A (zh) * 2008-04-26 2011-05-25 直观外科手术操作公司 用于手术机器人的放大立体可视化
CN102370462A (zh) * 2010-07-13 2012-03-14 索尼公司 成像装置、成像系统、手术导航系统和成像方法
CN107105977A (zh) * 2015-01-21 2017-08-29 奥林巴斯株式会社 内窥镜装置
US20180153408A1 (en) * 2016-05-10 2018-06-07 Ze Shan YAO Multispectral synchronized imaging
CN110840386A (zh) * 2019-12-19 2020-02-28 中国科学院长春光学精密机械与物理研究所 基于单探测器的可见光和近红外荧光3d共成像内窥镜系统
WO2020144901A1 (ja) * 2019-01-09 2020-07-16 パナソニックi-PROセンシングソリューションズ株式会社 内視鏡
US20200397266A1 (en) * 2017-03-10 2020-12-24 Transenterix Surgical, Inc. Apparatus and method for enhanced tissue visualization
CN112243091A (zh) * 2020-10-16 2021-01-19 微创(上海)医疗机器人有限公司 三维内窥镜系统、控制方法和存储介质
CN113208567A (zh) * 2021-06-07 2021-08-06 上海微创医疗机器人(集团)股份有限公司 多光谱成像系统、成像方法和存储介质

Family Cites Families (4)

* Cited by examiner, † Cited by third party
Publication number Priority date Publication date Assignee Title
WO2016061754A1 (zh) * 2014-10-22 2016-04-28 中国科学院自动化研究所 一种手持式分子影像导航系统
CN110893095A (zh) * 2018-09-12 2020-03-20 上海逸思医学影像设备有限公司 一种用于可见光和激发荧光实时成像的系统和方法
CN110811498A (zh) * 2019-12-19 2020-02-21 中国科学院长春光学精密机械与物理研究所 可见光和近红外荧光3d融合图像内窥镜系统
CN112734914A (zh) * 2021-01-14 2021-04-30 温州大学 一种增强现实视觉的图像立体重建方法及装置


Also Published As

Publication number Publication date
CN113208567A (zh) 2021-08-06

Similar Documents

Publication Publication Date Title
WO2022257946A1 (zh) 多光谱成像系统、成像方法和存储介质
JP6581730B2 (ja) 電子内視鏡用プロセッサ及び電子内視鏡システム
US20190005641A1 (en) Vascular information acquisition device, endoscope system, and vascular information acquisition method
US10659703B2 (en) Imaging device and imaging method for capturing a visible image and a near-infrared image
US20040019253A1 (en) Endoscope apparatus
US20110237895A1 (en) Image capturing method and apparatus
CN102300498A (zh) 用于解剖结构的红外显示设备及其信号处理方法
US20100324366A1 (en) Endoscope system, endoscope, and method for measuring distance and illumination angle
US10473911B2 (en) Simultaneous visible and fluorescence endoscopic imaging
JPH11332820A (ja) 蛍光内視鏡
CN111803013A (zh) 一种内窥镜成像方法和内窥镜成像系统
WO2018159083A1 (ja) 内視鏡システム、プロセッサ装置、及び、内視鏡システムの作動方法
JP2001157658A (ja) 蛍光画像表示装置
US10447906B2 (en) Dual path endoscope
JP2003036436A (ja) 規格化画像生成方法および装置
WO2019220801A1 (ja) 内視鏡画像処理装置、内視鏡画像処理方法、及びプログラム
JP2021035549A (ja) 内視鏡システム
WO2018159082A1 (ja) 内視鏡システム、プロセッサ装置、及び、内視鏡システムの作動方法
CN212326346U (zh) 一种内窥镜成像系统
WO2018043551A1 (ja) 電子内視鏡用プロセッサ及び電子内視鏡システム
CN211324858U (zh) 内窥镜系统、混合光源、视频采集装置及图像处理器
US20220007925A1 (en) Medical imaging systems and methods
KR102190398B1 (ko) 단일 컬러 카메라를 이용하고 가시광선 및 근적외선 영상 동시 획득이 가능한 가시광선 및 근적외선 영상 제공 시스템 및 방법
EP4248835A1 (en) Fluorescence endoscope system, control method and storage medium
JP2003000528A (ja) 蛍光診断画像生成方法および装置

Legal Events

Date Code Title Description
121 Ep: the epo has been informed by wipo that ep was designated in this application

Ref document number: 22819543

Country of ref document: EP

Kind code of ref document: A1

NENP Non-entry into the national phase

Ref country code: DE