WO2016029403A1 - Head-mounted molecular image navigation system - Google Patents

Head-mounted molecular image navigation system

Info

Publication number
WO2016029403A1
WO2016029403A1 (PCT/CN2014/085396)
Authority
WO
WIPO (PCT)
Prior art keywords
image
visible light
infrared
light source
module
Prior art date
Application number
PCT/CN2014/085396
Other languages
French (fr)
Chinese (zh)
Inventor
田捷
迟崇魏
杨鑫
Original Assignee
中国科学院自动化研究所
Priority date (The priority date is an assumption and is not a legal conclusion. Google has not performed a legal analysis and makes no representation as to the accuracy of the date listed.)
Filing date
Publication date
Application filed by 中国科学院自动化研究所 filed Critical 中国科学院自动化研究所
Priority to PCT/CN2014/085396 priority Critical patent/WO2016029403A1/en
Publication of WO2016029403A1 publication Critical patent/WO2016029403A1/en

Links

Classifications

    • G PHYSICS
    • G01 MEASURING; TESTING
    • G01J MEASUREMENT OF INTENSITY, VELOCITY, SPECTRAL CONTENT, POLARISATION, PHASE OR PULSE CHARACTERISTICS OF INFRARED, VISIBLE OR ULTRAVIOLET LIGHT; COLORIMETRY; RADIATION PYROMETRY
    • G01J3/00 Spectrometry; Spectrophotometry; Monochromators; Measuring colours
    • G01J3/28 Investigating the spectrum

Definitions


Abstract

A head-mounted molecular image navigation system comprises: a multi-spectral light source module for emitting visible light and near-infrared light toward a detection region; a signal acquisition module for acquiring a near-infrared fluorescence image and a visible light image of an imaged object; a head-mounted system support module for carrying the multi-spectral light source module and the signal acquisition module so as to adjust the illumination of the detection region by the multi-spectral light source module; and an image processing module for fusing the acquired near-infrared light image and visible light image and outputting the fused image. According to embodiments of the present invention, the device can be used flexibly in imaging applications, and the application space of optical molecular image navigation is expanded.

Description

Head-mounted molecular image navigation system
Technical field
The invention relates to an imaging system, and in particular to a head-mounted molecular image navigation system.

Background
As a new method and means of non-invasive visualization imaging, molecular imaging essentially reflects the changes in physiological molecular levels and in the overall function of an organism caused by changes in molecular regulation. Studying the life activities of genes, biological macromolecules and cells in vivo at the molecular level is therefore an important technology, and basic research on in vivo bio-optical imaging based on molecular techniques, tomographic techniques, optical imaging techniques and simulation methodology has become one of the hot and difficult topics in the field of molecular imaging.
Molecular imaging equipment combines traditional medical imaging technology with modern molecular biology and can observe physiological or pathological changes at the cellular or molecular level, offering non-invasive, real-time, in vivo imaging with high specificity, high sensitivity and high resolution. On the one hand, molecular imaging technology can greatly accelerate drug development, shorten pre-clinical research time, provide more accurate diagnoses, match treatment plans to a patient's genetic profile and help pharmaceutical companies develop personalized therapies; on the other hand, it can be applied in biomedicine to achieve in vivo quantitative analysis, image navigation and molecular typing. However, systems built along these lines remain relatively complicated, and their ease of operation and comfort of use need further improvement.
The present invention therefore proposes a head-mounted molecular image navigation system that detects in vivo targets in molecular images by multi-spectral excitation and broadens the range of applications.

Summary of the invention
The invention provides a head-mounted molecular image navigation system, comprising:
a multi-spectral light source module for illuminating a detection area with visible light and near-infrared light;
a signal acquisition module for acquiring a near-infrared fluorescence image and a visible light image of the imaged object; a head-mounted system support module for carrying the multi-spectral light source module and the signal acquisition module so as to adjust the illumination of the detection area by the multi-spectral light source module; and
an image processing module for performing image fusion on the acquired near-infrared light image and visible light image and outputting the fused image.

Embodiments of the present invention have the following technical effects:
1. Molecular image navigation and molecular imaging are realized in a head-mounted form, which improves convenience while preserving functionality.
2. Projection imaging guides the operator in pre-judging the imaging range, adding a human-computer interaction function.
3. Voice recognition lets the operator keep both hands free while using the system, allowing more precise control of the head-mounted molecular image navigation system.
4. The threshold-decomposition feature extraction method markedly improves the signal-to-background ratio, helping the operator carry out precise, real-time, image-guided operation.

Brief description of the drawings
FIG. 1 is a schematic structural view of a head-mounted system support module according to an embodiment of the present invention;
FIG. 2 is a block diagram of a head-mounted molecular image navigation system according to an embodiment of the present invention;
FIG. 3 is a flow chart of an image processing method of a head-mounted molecular image navigation system according to an embodiment of the present invention.

Detailed description
In order to make the objectives, technical solutions and advantages of the present invention clearer, the present invention is described in further detail below with reference to specific embodiments and the accompanying drawings.
Embodiments of the present invention provide a head-mounted molecular image navigation system based on excitation fluorescence imaging in molecular imaging. FIG. 1 is a schematic structural view of the head-mounted system support module according to an embodiment of the present invention. FIG. 2 is a block diagram of the head-mounted molecular image navigation system according to an embodiment of the present invention. As shown in FIG. 2, the head-mounted molecular image navigation system may include a multi-spectral light source module 110 for providing light in a plurality of spectral bands to illuminate the subject; an optical signal acquisition module 120 for acquiring, in real time, the fluorescence excitation image and the visible light image of the subject; a head-mounted system support module 130 for adjusting wearing comfort for the operator and ensuring safe and effective imaging; and an image processing module 140 for performing image segmentation, feature extraction, image registration and similar processing, fusing the visible light image with the fluorescence image and outputting the fused image.
Next, the operation of the multi-spectral light source module 110, the optical signal acquisition module 120, the head-mounted system support module 130 and the image processing module 140 will be described in turn. The multi-spectral light source module 110 may include a cold light source 111, a near-infrared laser 112 and a light source coupler 113. The cold light source 111 is used to emit visible light toward the subject and may be fitted with a first band-pass filter that transmits visible light with wavelengths of 400-650 nm. The near-infrared laser 112 is configured to emit near-infrared light with a center wavelength of, for example, 785 nm. The excitation light can be led out through an optical fiber. As is known to those skilled in the art, the embodiments of the present invention are not limited to this arrangement, and visible light and near-infrared light may also be produced by other means known in the art. When the detection area is excited, light from the cold light source 111 and the near-infrared laser 112 is delivered simultaneously through a single optical fiber on the basis of spectral separation. Specifically, the light emitted by the visible light source and the near-infrared light source is coupled at the light exit port, where the light source coupler 113 is arranged. The light source coupler 113 may be a diverging lens that converts the output from an essentially point-like source into a cone beam, enlarging the illuminated area so that the excitation light covers the detection area uniformly. For example, an optical lens may be placed at the exit of the near-infrared laser 112 and reverse-coupled to the laser output to obtain a large divergence angle. One end of the optical fiber may be mechanically fixed to the optical lens, and the other end connected to the head-mounted system support module 130.
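To make the effect of the diverging lens concrete, the short sketch below estimates the size of the illuminated spot produced by a cone beam at a given working distance. It is only an illustration: the 60-degree divergence angle and 300 mm working distance are assumed values and do not come from the patent.

```python
import math

def spot_diameter(divergence_full_angle_deg: float, working_distance_mm: float) -> float:
    """Approximate diameter of the spot illuminated by a diverging (cone-beam)
    source at the given working distance: 2 * d * tan(theta / 2)."""
    half_angle = math.radians(divergence_full_angle_deg / 2.0)
    return 2.0 * working_distance_mm * math.tan(half_angle)

# Illustrative numbers only (not from the patent): a 60-degree cone at 300 mm
print(f"{spot_diameter(60, 300):.0f} mm illuminated spot")   # roughly 346 mm across
```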
The optical signal acquisition module 120 may include a camera 121, a lens 122 and a coordinate projector 123. The camera 121 is configured to acquire both the near-infrared fluorescence signal and the visible light signal; during acquisition the cold light source illuminates the background. For example, the reference parameters for near-infrared signal acquisition may be set as follows: at 800 nm the quantum efficiency is higher than 30%, the frame rate is greater than 30 fps, and the pixel size (i.e., the smallest photosensitive element of the camera 121) is larger than 5 μm. Preferably, a second band-pass filter that transmits near-infrared light with wavelengths of 810-870 nm is placed between the camera 121 and the lens 122. While the camera 121 is operating, the coordinate projector 123 can project a circular outline onto the detection area (not shown) marking the maximum extent of the field of view, so that the operator can see the detection area of the system and, at the same time, the excitation range of the multi-spectral light source module 110. As shown in FIG. 1, the head-mounted system support module 130 may include a head-mounted system bracket 131 that carries the light source module 110 and the signal acquisition module 120. Preferably, the head-mounted system support module 130 may further include a voice recognition and control module 132, which may comprise a microphone, a voice recognition unit and a control unit (not shown), so that the operator can control the multi-spectral light source module 110, the coordinate projector 123 and other modules by voice. The voice recognition and control module 132 can be implemented with speech recognition techniques well known in the art. The visible light image and the near-infrared fluorescence image of the subject from the optical signal acquisition module 120 are input to the image processing module 140. The image processing module 140 is implemented by back-end computer processing; acquisition and light source control can also be operated manually from the back end.
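For readers who want to capture these acquisition requirements in software, the following sketch wraps them in a small configuration object. The class and field names are invented for illustration; only the numeric thresholds (quantum efficiency above 30% at 800 nm, more than 30 fps, pixel size above 5 μm, and the 810-870 nm and 400-650 nm band-pass windows) come from the description above.

```python
from dataclasses import dataclass

@dataclass
class AcquisitionConfig:
    """Illustrative container for the acquisition parameters quoted above."""
    quantum_efficiency_800nm: float   # as a fraction, e.g. 0.35
    frame_rate_fps: float
    pixel_pitch_um: float
    nir_bandpass_nm: tuple            # second band-pass filter, (low, high)
    visible_bandpass_nm: tuple        # first band-pass filter, (low, high)

    def meets_reference_spec(self) -> bool:
        """Check the reference values given in the description."""
        return (self.quantum_efficiency_800nm > 0.30
                and self.frame_rate_fps > 30
                and self.pixel_pitch_um > 5)

cfg = AcquisitionConfig(0.35, 60, 6.5, (810, 870), (400, 650))
assert cfg.meets_reference_spec()
```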
The image processing module 140 first pre-processes the input near-infrared fluorescence image to obtain the characteristic distribution of the fluorescence image based on its fluorescence specificity. Pre-processing may include noise removal, feature extraction and dead-pixel compensation; pre-processing well known in the art may of course also be applied to the visible light image. Feature extraction can be performed on the input near-infrared fluorescence image using threshold segmentation. For example, for a pixel whose ratio of image gray value G to background-noise gray value Gn is higher than 1.5, the gray value of that pixel is multiplied by 2; for a pixel with G/Gn lower than 1.5, the gray value is divided by 2. This threshold segmentation strengthens the feature points. Regions of interest whose gray values exceed a preset threshold can be converted into pseudo-color images with a gray-to-pseudo-color conversion algorithm known in the art, further marking the positions of the feature points and feature regions so that the operator can be guided by the image. The image produced by the image processing module 140 is a fused image; a general-purpose computer provides display and projection interfaces so that the operator can view the output. The video signal can also be fed back to the head-mounted system, and the fused image visualized on a screen placed in front of the eyes.
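A minimal NumPy sketch of the threshold-based feature enhancement and pseudo-color marking described above is given below. The function names, the way the background-noise level Gn is supplied, and the choice of colormap are assumptions rather than part of the patent; only the multiply-by-2 / divide-by-2 rule and the idea of a pseudo-color overlay come from the text.

```python
import numpy as np
import cv2  # used only for the colormap and blending

def enhance_fluorescence(nir: np.ndarray, g_n: float, ratio: float = 1.5) -> np.ndarray:
    """Threshold-based feature enhancement: pixels whose gray value G satisfies
    G / G_n > ratio are doubled, the rest are halved.  `g_n` is the background
    noise gray value, assumed to be estimated beforehand (e.g. from a dark
    region of the frame).  Input is assumed to be an 8-bit image."""
    nir = nir.astype(np.float32)
    boosted = np.where(nir / g_n > ratio, nir * 2.0, nir / 2.0)
    return np.clip(boosted, 0, 255).astype(np.uint8)

def pseudo_color_overlay(visible_bgr: np.ndarray, nir: np.ndarray, thresh: int) -> np.ndarray:
    """Mark the region of interest (gray value above `thresh`) in pseudo color
    on top of the visible-light frame; the JET colormap and the blending
    weights are arbitrary choices."""
    colored = cv2.applyColorMap(nir, cv2.COLORMAP_JET)
    blended = cv2.addWeighted(visible_bgr, 0.4, colored, 0.6, 0)
    mask = (nir > thresh)[..., None]
    return np.where(mask, blended, visible_bgr)
```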
Then, using the obtained optical characteristic distribution of the fluorescence image, the fluorescence image and the input visible light image are fused to produce the fusion result for output. Specifically, fusing the fluorescence image with the visible light image includes registering the fluorescence image to the visible light image using the optical characteristic distribution of the fluorescence image. This registration is described in detail below.

The optical characteristic distribution of the fluorescence image carries fluorescence specificity, whereas the visible light image is a high-resolution structural image; the image registration according to embodiments of the present invention exploits both properties. During registration, morphological theory can be used to correct the minimized energy function of the fluorescence-image optical characteristic distribution so that its shape approaches the imaged structure. Registration can be performed with the following formula (1):

E(U) = ||ΔU||^2 + Σ_{i=1..n} |U_i - d_i|^2    (1)

In formula (1), Δ is the discrete Laplace operator and U is the position vector. n surface points are selected as the main marker points, p_i and q_i are the corresponding marker points on the imaged surface, and d_i = p_i - q_i is the displacement vector. The vector U_p obtained by minimizing E(U) gives the deformed surface position ∂Ω_p = ∂Ω + U_p.

In order to obtain an accurate, high-resolution fused image, the image coincidence degree given by formula (2) (reproduced only as an image in the original publication) is used as the criterion for evaluating the registration, where A is the normalized gray-value matrix of the visible light image and B is the normalized gray-value matrix of the fluorescence image; the closer the result is to 1, the better the registration.

FIG. 3 shows a flow chart of an image processing method according to an embodiment of the present invention. As shown in FIG. 3, in step 301, spatial motion detection is applied to the pre-processed visible light image sequence and fluorescence image sequence to filter out mismatched, slightly displaced frames, yielding a visible light image sequence M1 and a fluorescence image sequence M2.
Optionally, in step 303, an image pyramid P1 is built from the high-resolution visible light image sequence M1 obtained in step 301 to reduce the amount of data and improve the real-time performance of the processing. Specifically, the images are down-sampled with a Gaussian pyramid, generating layer i+1 from layer i: layer i is first convolved with a Gaussian kernel and all even-numbered rows and columns are then removed, so each new image is one quarter the size of the previous one. For the corresponding up-sampling, the image is first enlarged to twice its size in each dimension, with the new (even) rows filled with zeros, and then convolved with the specified filter (in effect a filter scaled up by a factor of two in each dimension) to estimate the values of the "missing" pixels. Repeating this procedure over the input image produces the whole pyramid. In step 305, gradient edge detection, for example with the Roberts operator, is applied to the image pyramid P1 and to the fluorescence image sequence M2, giving image edges E1 and E2 respectively. Of course, when sufficient image processing capability is available, step 303 may be skipped and edge detection applied directly to the visible light image sequence M1 and the fluorescence image sequence M2.
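The sketch below illustrates steps 303 and 305 under the assumptions stated here: OpenCV's pyrDown is used for the Gaussian blur-and-decimate operation described above, and the Roberts cross operator is written by hand with filter2D (OpenCV provides Sobel and Scharr but no dedicated Roberts function). The number of pyramid levels and the normalization of the edge map are arbitrary choices.

```python
import cv2
import numpy as np

def gaussian_pyramid(img: np.ndarray, levels: int = 3) -> list:
    """Step 303: each level is the previous one convolved with a Gaussian
    kernel and decimated by dropping every other row and column
    (cv2.pyrDown performs exactly this blur-and-subsample step)."""
    pyramid = [img]
    for _ in range(levels):
        pyramid.append(cv2.pyrDown(pyramid[-1]))
    return pyramid

def roberts_edges(gray: np.ndarray) -> np.ndarray:
    """Step 305: gradient edge detection with the Roberts cross operator,
    applied to a single-channel image."""
    gray = gray.astype(np.float32)
    kx = np.array([[1, 0], [0, -1]], dtype=np.float32)
    ky = np.array([[0, 1], [-1, 0]], dtype=np.float32)
    gx = cv2.filter2D(gray, -1, kx)
    gy = cv2.filter2D(gray, -1, ky)
    mag = np.sqrt(gx ** 2 + gy ** 2)
    return (255 * mag / (mag.max() + 1e-12)).astype(np.uint8)
```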
In step 307, saliency-based sparse sampling is applied to the image edges E1 and E2, using the same method for both; here a compressed-sensing sparse sampling technique is used to sample the coefficients of E1 and E2, giving sampled outputs S1 and S2 respectively.
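Step 307 invokes compressed-sensing sparse sampling without detailing it. The sketch below substitutes a much simpler saliency criterion, keeping only the k strongest edge responses as the sparse sample; it is an illustrative stand-in for the sampling stage, not the patented scheme, and the value of k is arbitrary.

```python
import numpy as np

def sparse_sample_edges(edge_map: np.ndarray, k: int = 500) -> np.ndarray:
    """Keep the k strongest edge responses as a sparse point set.
    Returns one row per sample: (x, y, magnitude)."""
    flat = edge_map.ravel().astype(np.float64)
    k = min(k, flat.size)
    top = np.argpartition(flat, -k)[-k:]              # indices of the k largest responses
    rows, cols = np.unravel_index(top, edge_map.shape)
    return np.stack([cols, rows, flat[top]], axis=1)

# Illustrative use on placeholder edge maps for E1 (visible) and E2 (fluorescence)
S1 = sparse_sample_edges(np.random.rand(480, 640))
S2 = sparse_sample_edges(np.random.rand(480, 640))
```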
In step 308, registration is performed on the sampled outputs S1 and S2 obtained in step 307. Besides the registration based on formulas (1) and (2), point cloud registration can be used to further refine the result; for details of point cloud registration, see "Xue Yaohong et al., Point Cloud Data Registration and Surface Subdivision Technology Research, National Defense Industry Press, 2011", which is not repeated here.
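The following sketch illustrates two ingredients of step 308 under stated assumptions. Because formula (2) is published only as an image, the coincidence score below uses a normalized correlation, which matches the stated behaviour (values approach 1 for well-aligned images) but is not necessarily the patented expression. The point cloud refinement is shown as a minimal 2-D iterative-closest-point (ICP) routine assuming a rigid (rotation plus translation) model; none of it is taken from the cited reference.

```python
import numpy as np

def coincidence_degree(A: np.ndarray, B: np.ndarray) -> float:
    """Registration-quality score between the normalized gray-value matrices
    A (visible) and B (fluorescence): a normalized correlation standing in
    for formula (2)."""
    a = A.astype(np.float64).ravel()
    b = B.astype(np.float64).ravel()
    return float(np.dot(a, b) / (np.linalg.norm(a) * np.linalg.norm(b) + 1e-12))

def icp_2d(src: np.ndarray, dst: np.ndarray, iters: int = 30) -> tuple:
    """Minimal ICP alignment of two 2-D point sets (n x 2 and m x 2).
    Returns (R, t) such that src @ R.T + t approximately matches dst."""
    R, t = np.eye(2), np.zeros(2)
    cur = src.copy()
    for _ in range(iters):
        # nearest neighbour in dst for every point of the current estimate
        d2 = ((cur[:, None, :] - dst[None, :, :]) ** 2).sum(-1)
        matched = dst[d2.argmin(axis=1)]
        # best rigid transform between cur and matched (Kabsch / SVD)
        mu_c, mu_m = cur.mean(0), matched.mean(0)
        H = (cur - mu_c).T @ (matched - mu_m)
        U, _, Vt = np.linalg.svd(H)
        R_step = Vt.T @ U.T
        if np.linalg.det(R_step) < 0:          # guard against reflections
            Vt[-1] *= -1
            R_step = Vt.T @ U.T
        t_step = mu_m - R_step @ mu_c
        cur = cur @ R_step.T + t_step
        R, t = R_step @ R, R_step @ t + t_step
    return R, t
```

With sparse samples like those of step 307, a call such as icp_2d(S2[:, :2], S1[:, :2]) would map the fluorescence edge points onto the visible-light edge points before the fused overlay is generated.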
Preferably, the image processing method according to the present invention may further include step 309, in which algorithm convergence verification is performed on the result of the point cloud registration to ensure that the computation is stable and reliable.
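A convergence check in the spirit of step 309 can be as simple as monitoring the registration residual between iterations and stopping when it no longer decreases meaningfully; the tolerances below are placeholders and the function name is invented for illustration.

```python
def has_converged(prev_err: float, err: float,
                  rel_tol: float = 1e-3, abs_tol: float = 1e-6) -> bool:
    """Declare convergence when the registration error changes by less than
    the given relative/absolute tolerances between iterations."""
    return abs(prev_err - err) <= max(abs_tol, rel_tol * max(prev_err, 1e-12))

# Typical use inside an iterative registration loop:
errors = [12.0, 4.1, 1.3, 1.25, 1.249]
for prev, cur in zip(errors, errors[1:]):
    if has_converged(prev, cur):
        print("registration converged at error", cur)
        break
```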
Preferably, steps 301, 303, 305 and 309 can be executed on a compact image GPU or FPGA, while the registration of step 308 is executed on a more powerful central processing unit (CPU), further improving system performance while reducing the required hardware size.
The specific embodiments described above further explain the objectives, technical solutions and beneficial effects of the present invention. It should be understood that they are merely specific embodiments of the present invention and are not intended to limit it; any modification, equivalent replacement or improvement made within the spirit and principles of the present invention shall fall within the scope of protection of the present invention.

Claims

Claims

1. A head-mounted molecular image navigation system, comprising:
a multi-spectral light source module for illuminating a detection area with visible light and near-infrared light;
a signal acquisition module for acquiring a near-infrared fluorescence image and a visible light image of an imaged object;
a head-mounted system support module for carrying the multi-spectral light source module and the signal acquisition module so as to adjust the illumination range of the multi-spectral light source module over the detection area; and
an image processing module for performing image fusion on the acquired near-infrared light image and visible light image and outputting the fused image.
2. The system according to claim 1, wherein the multi-spectral light source module comprises:
a visible light source for emitting visible light toward the subject;
a near-infrared laser for emitting near-infrared light toward the subject; and
a light source coupler;
wherein the light source coupler couples the visible light and the near-infrared light, and the coupled light is delivered to the head-mounted system support module through a single optical fiber.
3. The system according to claim 2, wherein the head-mounted system support module comprises:
a head-mounted system bracket for carrying the multi-spectral light source module and the signal acquisition module; and
a voice control module for controlling the operation of the multi-spectral light source module so as to form a detection area of the desired range.
4. The system according to claim 1, wherein the image processing module performs feature extraction on the acquired near-infrared fluorescence image, comprising:
for a pixel whose ratio of image gray value G to background-noise gray value Gn is higher than 1.5, multiplying the gray value of the pixel by 2; and for a pixel whose G/Gn is lower than 1.5, dividing the gray value of the pixel by 2.
5. The system according to claim 4, wherein the image processing module performs image fusion on the acquired near-infrared fluorescence image and visible light image, comprising obtaining the optical characteristic distribution of the near-infrared fluorescence image through the following minimized energy function:
E(U) = ||ΔU||^2 + Σ_{i=1..n} |U_i - d_i|^2    (1)
wherein Δ is the discrete Laplace operator, U is the position vector, n surface points are selected as the main marker points, p_i and q_i are the corresponding marker points on the imaged surface, d_i = p_i - q_i is the displacement vector, the vector U_p is obtained by minimizing E(U), and ∂Ω_p = ∂Ω + U_p is the position of the surface after deformation.
6. The system according to claim 1, wherein the image processing module performs image fusion on the acquired near-infrared fluorescence image and visible light image, comprising using the image coincidence degree given by formula (2) (reproduced only as an image in the original publication) as the criterion for evaluating the registration, wherein A is the normalized gray-value matrix of the visible light image and B is the normalized gray-value matrix of the fluorescence image.
7. The system according to claim 5, wherein the image processing module further fuses the near-infrared fluorescence image and the visible light image by point cloud registration.
8. An image processing method applied to the head-mounted molecular image navigation system of claim 1, comprising:
performing spatial motion detection on a visible light image sequence and a near-infrared fluorescence image sequence to filter out mismatched, slightly displaced frames (301);
down-sampling the visible light image sequence that has undergone spatial motion detection to obtain an image pyramid (303);
performing edge detection on the obtained image pyramid and the near-infrared fluorescence image sequence with a gradient edge detection method to obtain image edges (305);
performing saliency-based sparse sampling on the obtained image edges to obtain respective sampled outputs (307); and
performing registration on the obtained sampled outputs for image fusion (308).
9. The method according to claim 8, wherein the registration comprises obtaining the optical characteristic distribution of the near-infrared fluorescence image through the following minimized energy function:
E(U) = ||ΔU||^2 + Σ_{i=1..n} |U_i - d_i|^2    (1)
wherein Δ is the discrete Laplace operator, U is the position vector, n surface points are selected as the main marker points, p_i and q_i are the corresponding marker points on the imaged surface, d_i = p_i - q_i is the displacement vector, the vector U_p is obtained by minimizing E(U), and ∂Ω_p = ∂Ω + U_p is the position of the surface after deformation.
10. The method according to claim 8, wherein the registration comprises using the image coincidence degree given by formula (2) (reproduced only as an image in the original publication) as the criterion for evaluating the registration, wherein A is the normalized gray-value matrix of the visible light image and B is the normalized gray-value matrix of the near-infrared fluorescence image.
11. The method according to claim 9, further comprising using point cloud registration to further register the near-infrared fluorescence image and the visible light image.
12. The method according to claim 11, further comprising performing algorithm convergence verification on the result of the point cloud registration.
PCT/CN2014/085396 2014-08-28 2014-08-28 Head-mounted molecular image navigation system WO2016029403A1 (en)

Priority Applications (1)

Application Number Priority Date Filing Date Title
PCT/CN2014/085396 WO2016029403A1 (en) 2014-08-28 2014-08-28 Head-mounted molecular image navigation system

Applications Claiming Priority (1)

Application Number Priority Date Filing Date Title
PCT/CN2014/085396 WO2016029403A1 (en) 2014-08-28 2014-08-28 Head-mounted molecular image navigation system

Publications (1)

Publication Number Publication Date
WO2016029403A1 true WO2016029403A1 (en) 2016-03-03

Family

ID=55398617

Family Applications (1)

Application Number Title Priority Date Filing Date
PCT/CN2014/085396 WO2016029403A1 (en) 2014-08-28 2014-08-28 Head-mounted molecular image navigation system

Country Status (1)

Country Link
WO (1) WO2016029403A1 (en)

Cited By (1)

* Cited by examiner, † Cited by third party
Publication number Priority date Publication date Assignee Title
CN106361281A (en) * 2016-08-31 2017-02-01 北京数字精准医疗科技有限公司 Fluorescent real-time imaging and fusing method and device

Citations (4)

* Cited by examiner, † Cited by third party
Publication number Priority date Publication date Assignee Title
US20060001881A1 (en) * 2004-06-30 2006-01-05 Maier John S Method and apparatus for peak compensation in an optical filter
CN102721469A (en) * 2012-06-14 2012-10-10 中国科学院自动化研究所 Multispectral imaging system and method based on two cameras
CN102809429A (en) * 2012-07-26 2012-12-05 中国科学院自动化研究所 Multi-spectral imaging system and multi-spectral imaging method based on double cameras
CN103390281A (en) * 2013-07-29 2013-11-13 西安科技大学 Double-spectrum night vision instrument vehicle-mounted system and double-spectrum fusion design method

Patent Citations (4)

* Cited by examiner, † Cited by third party
Publication number Priority date Publication date Assignee Title
US20060001881A1 (en) * 2004-06-30 2006-01-05 Maier John S Method and apparatus for peak compensation in an optical filter
CN102721469A (en) * 2012-06-14 2012-10-10 中国科学院自动化研究所 Multispectral imaging system and method based on two cameras
CN102809429A (en) * 2012-07-26 2012-12-05 中国科学院自动化研究所 Multi-spectral imaging system and multi-spectral imaging method based on double cameras
CN103390281A (en) * 2013-07-29 2013-11-13 西安科技大学 Double-spectrum night vision instrument vehicle-mounted system and double-spectrum fusion design method

Cited By (2)

* Cited by examiner, † Cited by third party
Publication number Priority date Publication date Assignee Title
CN106361281A (en) * 2016-08-31 2017-02-01 北京数字精准医疗科技有限公司 Fluorescent real-time imaging and fusing method and device
CN106361281B (en) * 2016-08-31 2018-06-19 北京数字精准医疗科技有限公司 Fluorescence real time imagery, fusion method and device

Similar Documents

Publication Publication Date Title
US10890439B2 (en) Generation of one or more edges of luminosity to form three-dimensional models of objects
US10512395B2 (en) Montaging of wide-field fundus images
WO2016011611A1 (en) Endoscopic optical molecular image navigation system and multi-spectral imaging method
US9984277B2 (en) Systems, apparatus, and methods for analyzing blood cell dynamics
WO2020251938A1 (en) Hair analysis methods and apparatuses
CN104305957B (en) Wear-type molecular image navigation system
Wisotzky et al. Interactive and multimodal-based augmented reality for remote assistance using a digital surgical microscope
Thong et al. Toward real-time virtual biopsy of oral lesions using confocal laser endomicroscopy interfaced with embedded computing
CN109752377B (en) Spectroscopic bimodal projection tomography tissue blood vessel imaging device and method
JP2021529053A (en) Methods and systems for dye-free visualization of blood flow and tissue perfusion in laparoscopy
JP2020010735A (en) Inspection support device, method, and program
CN105662354B (en) A kind of wide viewing angle optical molecular tomographic navigation system and method
WO2016061754A1 (en) Handheld molecular imaging navigation system
JP2014228851A5 (en)
WO2016029403A1 (en) Head-mounted molecular image navigation system
WO2020179586A1 (en) Information processing device and microscope system
EP3079567A1 (en) Medical imaging
CN204072055U (en) Wear-type molecular image navigation system
WO2016041155A1 (en) Three-dimensional optical molecular image navigation system and method
US20240115344A1 (en) Multi-modal microscopic imaging
CN205795645U (en) A kind of wide viewing angle optical molecular tomographic navigation system
JP2020010734A (en) Inspection support device, method, and program
JP7164090B2 (en) Imaging and analysis method of skin capillaries
WO2018186363A1 (en) Measurement instrument mounting assist device and measurement instrument mounting assist method
TWI581750B (en) Endoscope imaging system and method

Legal Events

Date Code Title Description
121 Ep: the epo has been informed by wipo that ep was designated in this application

Ref document number: 14900717

Country of ref document: EP

Kind code of ref document: A1

NENP Non-entry into the national phase

Ref country code: DE

122 Ep: pct application non-entry in european phase

Ref document number: 14900717

Country of ref document: EP

Kind code of ref document: A1