WO2020135040A1 - 图像三维信息提取方法、对象成像方法、装置及系统 - Google Patents

图像三维信息提取方法、对象成像方法、装置及系统 Download PDF

Info

Publication number
WO2020135040A1
WO2020135040A1 · PCT/CN2019/124508 · CN2019124508W
Authority
WO
WIPO (PCT)
Prior art keywords
image
images
intensity
dimensional information
overlapping area
Prior art date
Application number
PCT/CN2019/124508
Other languages
English (en)
French (fr)
Inventor
高玉峰
郑炜
Original Assignee
中国科学院深圳先进技术研究院
Priority date (The priority date is an assumption and is not a legal conclusion. Google has not performed a legal analysis and makes no representation as to the accuracy of the date listed.)
Filing date
Publication date
Application filed by 中国科学院深圳先进技术研究院 filed Critical 中国科学院深圳先进技术研究院
Publication of WO2020135040A1 publication Critical patent/WO2020135040A1/zh

Links

Images

Classifications

    • GPHYSICS
    • G02OPTICS
    • G02BOPTICAL ELEMENTS, SYSTEMS OR APPARATUS
    • G02B21/00Microscopes
    • G02B21/36Microscopes arranged for photographic purposes or projection purposes or digital imaging or video purposes including associated control and data processing arrangements
    • G02B21/365Control or image processing arrangements for digital or video microscopes
    • G02B21/367Control or image processing arrangements for digital or video microscopes providing an output produced by processing a plurality of individual source images, e.g. image tiling, montage, composite images, depth sectioning, image comparison
    • GPHYSICS
    • G02OPTICS
    • G02BOPTICAL ELEMENTS, SYSTEMS OR APPARATUS
    • G02B21/00Microscopes
    • GPHYSICS
    • G02OPTICS
    • G02BOPTICAL ELEMENTS, SYSTEMS OR APPARATUS
    • G02B21/00Microscopes
    • G02B21/0004Microscopes specially adapted for specific applications
    • G02B21/002Scanning microscopes
    • G02B21/0024Confocal scanning microscopes (CSOMs) or confocal "macroscopes"; Accessories which are not restricted to use with CSOMs, e.g. sample holders
    • G02B21/0052Optical details of the image generation
    • G02B21/0076Optical details of the image generation arrangements using fluorescence or luminescence
    • GPHYSICS
    • G02OPTICS
    • G02BOPTICAL ELEMENTS, SYSTEMS OR APPARATUS
    • G02B21/00Microscopes
    • G02B21/36Microscopes arranged for photographic purposes or projection purposes or digital imaging or video purposes including associated control and data processing arrangements
    • GPHYSICS
    • G06COMPUTING; CALCULATING OR COUNTING
    • G06TIMAGE DATA PROCESSING OR GENERATION, IN GENERAL
    • G06T7/00Image analysis
    • G06T7/30Determination of transform parameters for the alignment of images, i.e. image registration
    • G06T7/32Determination of transform parameters for the alignment of images, i.e. image registration using correlation-based methods

Definitions

  • the present application relates to the field of optical imaging technology, and in particular, to an image three-dimensional information extraction method, object imaging method, device, and system.
  • the existing two-photon fluorescence microscope mainly provides optical sectioning by exciting the fluorescence signal, via a nonlinear effect, at the focal point where the energy is highest, so it can image a sample at a certain depth, as shown in (a) of FIG. 1. To achieve three-dimensional large-volume imaging, a z-axis stepper motor or a zoom lens is required to move the focal point axially, so the volume imaging speed of this solution is very slow.
  • in technology one, as shown in (b) of FIG. 1, the focal point is elongated by a Bessel beam, so a single exposure can detect fluorescent signals over a large volume range.
  • embodiments of the present application provide an image three-dimensional information extraction method, object imaging method, device, and system, which can quickly extract three-dimensional information in an image based on the gradient change of the axial light intensity, thereby improving the object imaging speed.
  • a method for extracting three-dimensional image information includes: acquiring two target images from which three-dimensional information is to be extracted, where the two target images are acquired by a microscope whose beam intensity varies in a gradient and whose axial step is half of the beam length; preprocessing the two target images, where the preprocessing includes a background subtraction operation and an average filtering operation; performing intensity-to-position processing on the two preprocessed target images; extracting the common area of the two intensity-to-position-processed target images to obtain an overlapping area image; and computing the corresponding position image from the intensity image of the overlapping area, finally obtaining the three-dimensional information map of the overlapping area image corresponding to the two target images.
  • the step of performing intensity-to-position processing on the two preprocessed target images includes converting the two preprocessed target intensity images into images proportional to the axial position by a formula (rendered only as an image in the source), where y represents the beam intensity, L represents the beam length, and x represents the axial position.
  • the step of extracting the common area of the two intensity-to-position-processed target images to obtain an overlapping area image includes: converting the two processed target images into two binary images; taking the intersection of the two binary images to obtain the common area of the two processed target images; and using the image corresponding to that common area as the overlapping area image of the two target images.
  • the step of computing the corresponding position image from the intensity image of the overlapping area and finally obtaining the three-dimensional information map of the overlapping area image corresponding to the two target images includes: obtaining the intensity image of the overlapping area image; performing an intensity-to-position calculation on the intensity image to obtain the position image of the overlapping area image; and encoding the intensity image and the position image of the overlapping area image to obtain the three-dimensional information map of the overlapping area image corresponding to the two target images.
  • the step of performing the intensity-to-position calculation on the intensity image to obtain the position image of the overlapping area image includes normalizing the position information of the intensity image by a formula (rendered only as an image in the source), where x_position is the normalized position information and I m3-1 and I m3-2 represent the position information of the two images after intensity-to-position processing; the product of x_position and half of the beam length gives the position image of the overlapping area image.
  • an object imaging method includes: acquiring image source data of a target object, where the image source data consists of at least two images of the target object collected by a two-photon microscope whose beam intensity varies in a gradient and whose axial step of the light spot is half of the beam length; dividing the multiple images in the image source data into multiple groups of images, taking any two adjacent images as a group in the order of axial image acquisition; inputting each group of images into a preset three-dimensional information extraction model to obtain a three-dimensional information map of the overlapping area image corresponding to each group, where the three-dimensional information extraction model pre-stores an algorithm corresponding to any one of the methods described above; and cascading the three-dimensional information maps of the overlapping area images corresponding to the multiple groups of images to obtain the three-dimensional imaging map of the target object.
  • the microscope is a two-photon microscope and the light beam is a Bessel beam.
  • an apparatus for extracting three-dimensional information of an image includes: an image acquisition module, for acquiring two target images from which three-dimensional information is to be extracted, where the two target images are acquired by the microscope when the beam intensity varies in a gradient and the axial step of the beam is half of the beam length; a preprocessing module, for preprocessing the two target images, where the preprocessing includes a background subtraction operation and an average filtering operation; an intensity-to-position module, for performing intensity-to-position processing on the two preprocessed target images; an area extraction module, for extracting the common area of the two intensity-to-position-processed target images to obtain an overlapping area image; and a three-dimensional information map generation module, for obtaining, based on the intensity image and the position image of the overlapping area image, the three-dimensional information map of the overlapping area image corresponding to the two target images.
  • an object imaging device includes: a data acquisition module, for acquiring image source data of a target object, where the image source data consists of at least two images of the target object collected when the beam intensity of the microscope varies in a gradient and the axial step of the beam spot is half of the beam length; a grouping module, for dividing the multiple images in the image source data into multiple groups of images, taking any two adjacent images as a group in the order of axial image acquisition; a three-dimensional information extraction module, for inputting each group of images into a preset three-dimensional information extraction model to obtain the three-dimensional information map of the overlapping area image corresponding to each group, where the model pre-stores the algorithm corresponding to the above apparatus; and an image cascading module, for cascading the three-dimensional information maps of the overlapping area images corresponding to the multiple groups of images to obtain a three-dimensional imaging map of the target object.
  • an object imaging system includes: a mirror, a cone lens, a convex lens, a ring mask, a microscope, and a controller; a laser is provided in the microscope; the cone lens is disposed on the front focal plane of the convex lens; the ring mask is disposed on the back focal plane of the convex lens; the laser light emitted by the laser reaches the cone lens after being reflected by the mirror and passes through the convex lens and the ring mask to generate a light beam; and the microscope acquires multiple images of a target object when the beam intensity varies in a gradient and the axial step of the beam is half of the beam length.
  • the object imaging device described in the above aspect is installed on the controller; the controller receives the multiple images of the target object sent by the microscope and, through the object imaging device, obtains a three-dimensional imaging map of the target object.
  • two target images from which three-dimensional information is to be extracted are first obtained, where the two target images are collected by the microscope when the beam intensity varies in a gradient and the axial step of the beam is half of the beam length; the two target images are preprocessed, where the preprocessing includes a background subtraction operation and an average filtering operation; intensity-to-position processing is performed on the two preprocessed target images; the common area of the two processed target images is extracted to obtain an overlapping area image; and, based on the intensity image and the position image of the overlapping area image, a three-dimensional information map of the overlapping area image corresponding to the two target images is obtained.
  • the present application can quickly extract the three-dimensional information in the image based on the gradient change of the axial light intensity, thereby improving the imaging speed of the object.
  • FIG. 1 shows three schematic diagrams of imaging in the prior art
  • FIG. 2 shows a flowchart of a method for extracting three-dimensional image information provided by an embodiment of the present application
  • FIG. 3 shows an image processing process diagram corresponding to an image three-dimensional information extraction method provided by an embodiment of the present application
  • FIG. 4 shows a schematic diagram of a light beam and a stepping amount in a method for extracting three-dimensional image information provided by an embodiment of the present application
  • FIG. 5 shows a schematic diagram of intensity-to-position relationship in an image three-dimensional information extraction method provided by an embodiment of the present application
  • FIG. 6 shows a flowchart of an object imaging method provided by an embodiment of the present application.
  • FIG. 7 shows a block diagram of an image three-dimensional information extraction device provided by an embodiment of the present application.
  • FIG. 8 shows a block diagram of an object imaging device provided by an embodiment of the present application.
  • FIG. 9 shows a schematic diagram of an image three-dimensional information extraction system provided by an embodiment of the present application.
  • FIG. 10 shows a schematic diagram of an electronic device provided by an embodiment of the present application.
  • an image three-dimensional information extraction method, object imaging method, device and system provided by the embodiments of the present application can quickly extract three-dimensional information in an image based on the gradient change of the axial light intensity, thereby improving the object imaging speed.
  • FIG. 2 shows a flowchart of a method for extracting three-dimensional image information provided by an embodiment of the present application, which is applied to a server, such as one in an object imaging system.
  • FIG. 3 shows the corresponding image processing process for this method.
  • the above image three-dimensional information extraction method specifically includes the following steps:
  • Step S202 Acquire two target images of the three-dimensional information to be extracted.
  • the two target images are acquired by the microscope when the beam intensity varies in a gradient and the axial step of the beam is half of the beam length; as shown in FIG. 4, the axial step of the beam is half of the beam length.
  • the microscope acquires the first image while the beam intensity varies in a gradient, and then the beam focus is moved by a preset step along the axial (z) direction using a stage or an electrically controlled focusing lens.
  • the step is half of the above beam length.
  • the second image is then collected through the microscope, and these two images are used as the two target images from which the three-dimensional information is to be extracted, shown in FIG. 3 as Im1-1 and Im1-2.
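The half-beam-length stepping described above can be sketched numerically. The beam length and frame count below are hypothetical values chosen for illustration, not figures taken from the patent:

```python
# Hypothetical parameters for illustration; the patent does not fix these values.
beam_length = 60.0          # axial beam extent, in micrometres (assumed)
n_images = 5                # number of frames acquired along z (assumed)
step = beam_length / 2.0    # the axial step is half of the beam length

# Axial start position of the beam for each acquired frame.
z_positions = [i * step for i in range(n_images)]

# Each pair of consecutive frames overlaps over exactly half a beam length:
# frame i spans [z_i, z_i + L]; frame i+1 spans [z_i + L/2, z_i + 3L/2].
overlaps = [(z_positions[i + 1], z_positions[i] + beam_length)
            for i in range(n_images - 1)]
```

Because the step equals L/2, every point in the volume (except the two ends) is seen by exactly two consecutive frames, which is what makes the later intensity-ratio position recovery possible.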
  • various types of microscope can be used, such as a two-photon fluorescence microscope, and any of several beams whose intensity varies in a gradient can be used.
  • the beam in this embodiment is a Bessel beam.
  • Step S204 pre-process the two target images.
  • the preprocessing includes background subtraction operation and average filtering operation.
  • the signal-to-noise ratio can be improved by the background subtraction operation and the average filtering operation.
  • Im2-1 and Im2-2 in FIG. 3 are obtained by filtering and denoising the target images Im1-1 and Im1-2.
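The preprocessing step (background subtraction followed by average filtering) can be sketched as follows. The kernel size and background estimate are illustrative assumptions, since the patent does not specify them:

```python
import numpy as np

def mean_filter(img, k=3):
    """k x k average filter with edge padding (a simple, slow reference version)."""
    pad = k // 2
    padded = np.pad(img, pad, mode="edge")
    out = np.empty_like(img, dtype=float)
    for i in range(img.shape[0]):
        for j in range(img.shape[1]):
            out[i, j] = padded[i:i + k, j:j + k].mean()
    return out

def preprocess(img, background=0.0, k=3):
    """Background subtraction (clipped at zero) followed by average filtering."""
    img = np.clip(np.asarray(img, dtype=float) - background, 0.0, None)
    return mean_filter(img, k)
```

In practice a vectorized filter (e.g. a uniform/box filter from an image-processing library) would replace the explicit loops; the loop form is kept here only to make the averaging explicit.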
  • Step S206 Perform intensity-to-position processing on the two pre-processed target images.
  • each intensity value y in Im2-1 and Im2-2 is converted to an axial position x by a formula (rendered only as an image in the source) to obtain Im3-1 and Im3-2 as shown in FIG. 3, where:
  • y represents the intensity of the beam
  • L represents the length of the beam
  • x represents the axial position
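The conversion formula itself survives only as an embedded image in the source, so the sketch below assumes the simplest form consistent with the symbol definitions above: a linear intensity gradient y = peak · x / L, inverted to x = L · y / peak. The `peak` normalization constant is a hypothetical parameter:

```python
import numpy as np

def intensity_to_position(intensity, beam_length, peak=1.0):
    """Map a gradient-intensity value y to an axial position x.

    Assumes a linear gradient y = peak * x / L, so x = L * y / peak.
    This linear form is an illustrative assumption; the patent's exact
    formula is not legible in the source text.
    """
    return beam_length * np.asarray(intensity, dtype=float) / peak
```

Under this assumption, an intensity halfway up the gradient maps to the midpoint of the beam.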
  • Step S208 Extract the same area of the two target images after the intensity-to-position processing to obtain an overlapping area image.
  • the two target images Im3-1 and Im3-2 after the intensity-to-position processing are converted into two binary images; the intersection of the two binary images is taken to obtain the common area of the two processed target images; and the image corresponding to that common area is used as the overlapping area image of the two target images, shown as Im3 in FIG. 3.
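The binarize-and-intersect step can be sketched directly; the threshold value is a hypothetical parameter:

```python
import numpy as np

def overlap_region(im3_1, im3_2, threshold=0.0):
    """Binarize both processed images and intersect the masks.

    Pixels above `threshold` in *both* images belong to the overlapping
    area shared by the two axial acquisitions.
    """
    mask1 = np.asarray(im3_1) > threshold
    mask2 = np.asarray(im3_2) > threshold
    return mask1 & mask2
```

The returned boolean mask plays the role of Im3: it selects exactly the pixels where both acquisitions carry signal.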
  • Step S210 based on the intensity image and the position image of the overlapping area image, a three-dimensional information map of the overlapping area image corresponding to the two target images is obtained.
  • the above step of performing intensity-to-position calculation on the intensity image to obtain the position image of the overlapping area image specifically includes:
  • the position information of the intensity image is normalized by a formula (rendered only as an image in the source), in which:
  • x_position is the normalized position information;
  • I m3-1 and I m3-2 represent the position information of the two images after the intensity-to-position processing; the product of x_position and half of the beam length gives the position image of the overlapping area image.
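The normalization formula is likewise lost to extraction; a natural form consistent with the surrounding text is a simple ratio of the two position images, scaled by half of the beam length. This ratio form is an assumption, not the patent's verbatim formula:

```python
import numpy as np

def overlap_position(i_m3_1, i_m3_2, beam_length):
    """Normalize the two position images over the overlap and scale by L/2.

    Assumes x_position = I_m3-1 / (I_m3-1 + I_m3-2), then multiplies by
    half of the beam length, as the text describes. The ratio form is an
    illustrative guess at the formula the source renders only as an image.
    """
    i1 = np.asarray(i_m3_1, dtype=float)
    i2 = np.asarray(i_m3_2, dtype=float)
    x_position = i1 / (i1 + i2)          # normalized to [0, 1] over the overlap
    return x_position * (beam_length / 2.0)
```

The appeal of a ratio is that the unknown local fluorophore brightness cancels: only the relative weighting of the two gradient beams, which encodes z, survives.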
  • the image three-dimensional information extraction method provided by the embodiment of the present application can quickly extract three-dimensional information in the image based on the gradient change of the axial light intensity, thereby improving the imaging speed of the object.
  • an embodiment of the present application also provides an object imaging method, which is also applied to the above-mentioned server.
  • the object imaging method specifically includes the following steps:
  • Step S602 Collect image source data of the target object.
  • the image source data consists of at least two images of the target object collected when the beam intensity of the two-photon microscope varies in a gradient and the axial step of the beam is half of the beam length.
  • the image acquisition process is the same as the previous embodiment, and will not be repeated here.
  • the above-mentioned beam is a Bessel beam.
  • Step S604 According to the axial image acquisition sequence, take any two adjacent images as a group to divide the multiple images in the image source data into multiple groups of images.
  • Step S606 Input each group of images into a preset three-dimensional information extraction model to obtain a three-dimensional information map of the overlapping region image corresponding to each group of images; wherein the three-dimensional information extraction model pre-stores the algorithm corresponding to the method described in the above embodiment .
  • Step S608 Cascade the three-dimensional information maps of the overlapping area images corresponding to the multiple groups of images to obtain the three-dimensional imaging map of the target object.
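Steps S604–S608 amount to pairing adjacent frames, mapping each pair through the extraction model, and concatenating the results. A minimal sketch, with `extract_3d` standing in for the preset model (which the patent does not specify in code form):

```python
def group_adjacent(frames):
    """Pair every two adjacent frames in axial acquisition order (S604)."""
    return [(frames[i], frames[i + 1]) for i in range(len(frames) - 1)]

def cascade(maps):
    """Concatenate the per-pair 3D maps in order (S608, illustrative)."""
    result = []
    for m in maps:
        result.extend(m)
    return result

def image_pipeline(frames, extract_3d):
    """S604 -> S606 -> S608: group, extract per pair, cascade."""
    return cascade(extract_3d(a, b) for a, b in group_adjacent(frames))
```

Note that each frame (except the first and last) participates in two pairs, matching the half-beam-length overlap between consecutive acquisitions.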
  • the object imaging speed can be improved: volumetric imaging more than 10 times faster than a traditional two-photon microscope can be achieved without causing excessive photobleaching or photodamage, which makes the method particularly suitable for imaging embryonic development and neural activity.
  • this method does not require major changes to the imaging system and is simple and easy to use.
  • FIG. 7 shows a block diagram of an image three-dimensional information extraction device provided by an embodiment of the present application.
  • the device may be applied to the above server.
  • the device includes: an image acquisition module 702, a preprocessing module 704, an intensity-to-position module 706, an area extraction module 708, and a three-dimensional information map generation module 710.
  • the image acquisition module 702 is used to acquire two target images from which three-dimensional information is to be extracted; the two target images are acquired by the microscope when the beam intensity varies in a gradient and the axial step of the beam is half of the beam length;
  • the preprocessing module 704 is used to preprocess two target images; where the preprocessing includes background subtraction operation and average filtering operation;
  • the intensity-to-position module 706 is used to perform intensity-to-position processing on the two preprocessed target images;
  • the area extraction module 708 is used to extract the same area of the two target images after intensity-to-position processing to obtain an overlapping area image;
  • the three-dimensional information map generation module 710 is used to obtain, based on the intensity image and the position image of the overlapping area image, the three-dimensional information map of the overlapping area image corresponding to the two target images.
  • FIG. 8 shows a block diagram of an object imaging device provided by an embodiment of the present application.
  • the device can be applied to the above server.
  • the device includes: a data acquisition module 802, a grouping module 804, a three-dimensional information extraction module 806, and an image cascading module 808.
  • the data collection module 802 is used to collect the image source data of the target object;
  • the image source data consists of at least two images of the target object collected when the beam intensity of the microscope varies in a gradient and the axial step of the light spot is half of the beam length.
  • the grouping module 804 is used to divide the multiple images in the image source data into multiple groups of images, taking any two adjacent images as a group in the axial image acquisition order.
  • the three-dimensional information extraction module 806 is used to input each group of images into a preset three-dimensional information extraction model to obtain a three-dimensional information map of the overlapping region image corresponding to each group of images, where the model pre-stores the algorithm corresponding to the device of the above embodiment; the image cascading module 808 is used to cascade the three-dimensional information maps of the overlapping area images corresponding to the multiple groups of images to obtain a three-dimensional imaging map of the target object.
  • the above-mentioned modules may be connected or communicate with each other via a wired connection or a wireless connection.
  • Wired connections may include metal cables, optical cables, hybrid cables, etc., or any combination thereof.
  • the wireless connection may include a connection via LAN, WAN, Bluetooth, ZigBee, or NFC, or any combination thereof. Two or more modules may be combined into a single module, and any one module may be divided into two or more units.
  • FIG. 9 shows an object imaging system provided by an embodiment of the present application.
  • the system includes: a mirror 2, a conical lens 3, a convex lens 4, a ring mask 5, a microscope 1, and a controller 6;
  • the microscope 1 is provided with a laser.
  • the cone lens 3 is disposed on the front focal plane of the convex lens 4; the ring mask 5 is disposed on the back focal plane of the convex lens 4; the laser light emitted by the laser is reflected by the mirror 2, reaches the cone lens 3, and passes through the convex lens 4 and the ring mask 5 to generate a light beam, such as a Bessel beam.
  • the microscope 1 collects multiple images of the target object when the beam intensity varies in a gradient and the axial step of the beam is half of the beam length.
  • the object imaging device described in the above embodiment is installed on the controller 6; the controller 6 receives the multiple images of the target object sent by the microscope 1 and, through the object imaging device, obtains a three-dimensional imaging map of the target object.
  • the system provided in this embodiment can generate a Bessel beam.
  • the core is the combination of a cone lens, a lens, and a ring mask.
  • the cone lens is placed on the front focal plane of the lens, so a ring-shaped beam forms at the back focal plane of the lens.
  • the transparent part of the ring mask is aligned with the ring beam, i.e., the two are concentric, and the mask must block about 50% of the light so that a symmetric Bessel beam can be formed.
  • the annular mask is conjugated to the back aperture of the objective lens via a 4f lens system.
  • after the Bessel beam is generated, the beam is measured using fluorescent microbeads; it should conform to the intensity distribution shown in (a) of FIG. 5. Images are then collected with the axial (z) step set to half of the beam length, so that the intensity distribution of the overlapping area can be used to calculate the z-axis position information of the overlapping area, that is, the three-dimensional information.
  • an appropriate conical lens 3 and convex lens 4 are selected to match the ring beam with the outermost circle of the objective lens.
  • the Bessel beam can also be realized by a combination of a spatial light modulator and a mask.
  • FIG. 10 shows a schematic diagram of exemplary hardware and software components of an electronic device 1000 that can implement the concepts of the present application according to some embodiments of the present application.
  • the processor 1020 may be used on the electronic device 1000 and used to perform the functions in this application.
  • the electronic device 1000 may be a general-purpose computer or a special-purpose computer, and both may be used to implement the image three-dimensional information extraction method or object imaging method of the present application.
  • this application only shows one computer, for convenience, the functions described in this application may be implemented in a distributed manner on multiple similar platforms to balance the processing load.
  • the electronic device 1000 may include a network port 1010 connected to a network, one or more processors 1020 for executing program instructions, a communication bus 1030, and different forms of storage media 1040, such as a magnetic disk, ROM, or RAM, or any combination thereof.
  • the computer platform may also include program instructions stored in ROM, RAM, or other types of non-transitory storage media, or any combination thereof. The method of the present application can be implemented according to these program instructions.
  • the electronic device 1000 further includes an input/output (I/O) interface 1050 between the computer and other input-output devices (eg, keyboard, display screen).
  • I/O input/output
  • the electronic device 1000 in the present application may further include multiple processors, so the steps described in the present application as performed by one processor may also be performed jointly or separately by multiple processors.
  • for example, steps A and B may be executed by two different processors, or together in one processor: the first processor performs step A and the second processor performs step B, or the first and second processors perform steps A and B together.
  • An embodiment of the present application further provides a computer-readable storage medium having a computer program stored on the computer-readable storage medium, which when executed by a processor executes the steps of any of the above image three-dimensional information extraction methods or object imaging methods .
  • the modules described as separate components may or may not be physically separated, and the components displayed as modules may or may not be physical units, that is, they may be located in one place or may be distributed on multiple network units. Some or all of the units may be selected according to actual needs to achieve the purpose of the solution of this embodiment.
  • each functional unit in each embodiment of the present application may be integrated into one processing unit, or each unit may exist alone physically, or two or more units may be integrated into one unit.
  • the functions are implemented in the form of software functional units and sold or used as independent products, they may be stored in a non-volatile computer-readable storage medium executable by a processor.
  • the technical solution of the present application essentially or part of the contribution to the existing technology or part of the technical solution can be embodied in the form of a software product, and the computer software product is stored in a storage medium, including Several instructions are used to enable a computer device (which may be a personal computer, server, or network device, etc.) to perform all or part of the steps of the methods described in the embodiments of the present application.
  • the foregoing storage media include various media that can store program codes, such as a U disk, a mobile hard disk, a ROM, a RAM, a magnetic disk, or an optical disk.
  • the term "connection" should be understood in a broad sense; for example, it may be a fixed connection, a detachable connection, or an integral connection; it may be a mechanical connection or an electrical connection; and it may be a direct connection, an indirect connection through an intermediary, or an internal connection between two components.

Landscapes

  • Physics & Mathematics (AREA)
  • Engineering & Computer Science (AREA)
  • General Physics & Mathematics (AREA)
  • Chemical & Material Sciences (AREA)
  • Analytical Chemistry (AREA)
  • Optics & Photonics (AREA)
  • Multimedia (AREA)
  • Computer Vision & Pattern Recognition (AREA)
  • Theoretical Computer Science (AREA)
  • Microscopes, Condensers (AREA)

Abstract

An image three-dimensional information extraction method, object imaging method, device, and system. The method includes: acquiring two target images from which three-dimensional information is to be extracted (S202), where the two target images are collected by a microscope (1) when the beam intensity varies in a gradient and the axial step of the beam is half of the beam length; preprocessing the two target images (S204), where the preprocessing includes a background subtraction operation and an average filtering operation; performing intensity-to-position processing on the two preprocessed target images (S206); extracting the common area of the two processed target images to obtain an overlapping area image (S208); and obtaining, based on the intensity image and the position image of the overlapping area image, a three-dimensional information map of the overlapping area image corresponding to the two target images (S210). The method quickly extracts three-dimensional information from images based on the gradient change of the axial light intensity, thereby improving object imaging speed.

Description

Image three-dimensional information extraction method, object imaging method, device, and system — Technical Field
The present application relates to the field of optical imaging technology, and in particular to an image three-dimensional information extraction method, object imaging method, device, and system.
Background Art
The existing two-photon fluorescence microscope mainly provides optical sectioning by exciting the fluorescence signal, via a nonlinear effect, at the focal point where the energy is highest, so it can image a sample at a certain depth, as shown in (a) of FIG. 1. To achieve three-dimensional large-volume imaging, a z-axis stepper motor or a zoom lens is required to move the focal point axially, so the volume imaging speed of this solution is very slow. There are currently two techniques for volumetric imaging. In technique one, shown in (b) of FIG. 1, the focal point is elongated by a Bessel beam, so a single exposure can detect fluorescent signals over a large volume. Ordinary two-photon imaging can only image a 500 µm × 500 µm × 1 µm region at a time, whereas this technique can image a 500 µm × 500 µm × 60 µm region; however, it lacks axial position information. In technique two, shown in (c) of FIG. 1, the incident focus is designed in a V shape, converting axial position information into lateral position information. The same fluorescence signal then has two corresponding positions in the image, and the spacing between these two positions is related to the axial position of the fluorescence signal, so the axial position can be located; however, extracting three-dimensional information in this way is very slow.
Summary of the Invention
In view of this, embodiments of the present application provide an image three-dimensional information extraction method, object imaging method, device, and system, which can quickly extract three-dimensional information from an image based on the gradient change of the axial light intensity, thereby improving object imaging speed.
According to one aspect of the present application, a method for extracting three-dimensional image information is provided. The method includes: acquiring two target images from which three-dimensional information is to be extracted, where the two target images are collected by a microscope when the beam intensity varies in a gradient and the axial step of the beam is half of the beam length; preprocessing the two target images, where the preprocessing includes a background subtraction operation and an average filtering operation; performing intensity-to-position processing on the two preprocessed target images; extracting the common area of the two processed target images to obtain an overlapping area image; and computing the corresponding position image from the intensity image of the overlapping area, finally obtaining the three-dimensional information map of the overlapping area image corresponding to the two target images.
In some embodiments, the step of performing intensity-to-position processing on the two preprocessed target images includes converting the two preprocessed target intensity images into images proportional to the axial position by the following formula:
Figure PCTCN2019124508-appb-000001
where y denotes the beam intensity, L denotes the beam length, and x denotes the axial position.
In some embodiments, the step of extracting the common region of the two intensity-to-position converted target images to obtain the overlapping area image comprises: converting the two converted target images into two binary images; taking the intersection of the two binary images to obtain the common region of the two converted target images; and using the image corresponding to the common region as the overlapping area image corresponding to the two target images.
In some embodiments, the step of computing the corresponding position image from the intensity image of the overlapping area and finally obtaining the three-dimensional information map of the overlapping area image corresponding to the two target images comprises: acquiring the intensity image of the overlapping area image; performing an intensity-to-position operation on the intensity image to obtain the position image of the overlapping area image; and encoding the intensity image and the position image of the overlapping area image to obtain the three-dimensional information map of the overlapping area image corresponding to the two target images.
In some embodiments, the step of performing the intensity-to-position operation on the intensity image to obtain the position image of the overlapping area image comprises: normalizing the position information of the intensity image by the following formula:
Figure PCTCN2019124508-appb-000002
where x_position is the normalized position information, and I_m3-1 and I_m3-2 denote the position information of the two intensity-to-position converted images; the product of x_position and half the beam length is then taken to obtain the position image of the overlapping area image.
According to another aspect of the present application, an object imaging method is provided. The method comprises: acquiring image source data of a target object, the image source data being at least two images of the target object captured by a two-photon microscope under the condition that the beam intensity varies in a gradient and the axial step of the focal spot is half the beam length; dividing the plurality of images in the image source data into multiple groups, each group consisting of two adjacent images in the axial acquisition order; feeding each group of images into a preset three-dimensional information extraction model to obtain the three-dimensional information map of the overlapping area image corresponding to each group, wherein the three-dimensional information extraction model stores an algorithm corresponding to any of the methods described above; and concatenating the three-dimensional information maps of the overlapping area images of the multiple groups to obtain a three-dimensional image of the target object.
In some embodiments, the microscope is a two-photon microscope and the beam is a Bessel beam.
According to another aspect of the present application, a device for extracting three-dimensional information from images is provided. The device comprises: an image acquisition module for acquiring two target images from which three-dimensional information is to be extracted, wherein the two target images are captured by a microscope under the condition that the beam intensity varies in a gradient and the axial step of the beam is half the beam length; a preprocessing module for preprocessing the two target images, wherein the preprocessing comprises background subtraction and mean filtering; an intensity-to-position module for performing intensity-to-position conversion on the two preprocessed target images; a region extraction module for extracting the common region of the two converted target images to obtain an overlapping area image; and a three-dimensional information map generation module for obtaining, from the intensity image and the position image of the overlapping area image, the three-dimensional information map of the overlapping area image corresponding to the two target images.
According to another aspect of the present application, an object imaging device is provided. The device comprises: a data acquisition module for acquiring image source data of a target object, the image source data being at least two images of the target object captured by a microscope under the condition that the beam intensity varies in a gradient and the axial step of the focal spot is half the beam length; a grouping module for dividing the plurality of images in the image source data into multiple groups, each group consisting of two adjacent images in the axial acquisition order; a three-dimensional information extraction module for feeding each group of images into a preset three-dimensional information extraction model to obtain the three-dimensional information map of the overlapping area image corresponding to each group, wherein the three-dimensional information extraction model stores an algorithm corresponding to the above device; and an image concatenation module for concatenating the three-dimensional information maps of the overlapping area images of the multiple groups to obtain the three-dimensional image of the target object.
According to another aspect of the present application, an object imaging system is provided. The system comprises: a reflector, an axicon, a convex lens, an annular mask, a microscope and a controller. A laser is arranged in the microscope; the axicon is placed at the front focal plane of the convex lens; the annular mask is placed at the back focal plane of the convex lens. Laser light emitted by the laser is reflected by the reflector onto the axicon and passes through the convex lens and the annular mask to generate the beam. The microscope captures multiple images of the target object under the condition that the beam intensity varies in a gradient and the axial step of the beam is half the beam length. The object imaging device described above is installed on the controller; the controller receives the multiple images of the target object sent by the microscope and obtains the three-dimensional image of the target object through the object imaging device.
In the method and device for extracting three-dimensional information from images provided by the present application, two target images from which three-dimensional information is to be extracted are first acquired, the two target images being captured by a microscope under the condition that the beam intensity varies in a gradient and the axial step of the beam is half the beam length; the two target images are preprocessed, the preprocessing comprising background subtraction and mean filtering; intensity-to-position conversion is performed on the two preprocessed target images; the common region of the two converted target images is extracted to obtain an overlapping area image; and from the intensity image and the position image of the overlapping area image, the three-dimensional information map of the overlapping area image corresponding to the two target images is obtained. The present application can rapidly extract three-dimensional information from images based on the gradient variation of the axial light intensity, thereby increasing the object imaging speed.
To make the above objects, features and advantages of the embodiments of the present application clearer and easier to understand, a detailed description is given below with reference to the embodiments and the accompanying drawings.
Brief Description of the Drawings
To illustrate the technical solutions of the embodiments of the present application more clearly, the drawings needed in the embodiments are briefly introduced below. It should be understood that the following drawings show only some embodiments of the present application and should therefore not be regarded as limiting its scope; a person of ordinary skill in the art can derive other related drawings from them without creative effort.
Fig. 1 shows three imaging schematics from the prior art;
Fig. 2 shows a flowchart of a method for extracting three-dimensional information from images provided by an embodiment of the present application;
Fig. 3 shows the image processing flow corresponding to the method for extracting three-dimensional information from images provided by an embodiment of the present application;
Fig. 4 shows a schematic of the beam and the step size in the method provided by an embodiment of the present application;
Fig. 5 shows a schematic of the intensity-to-position relationship in the method provided by an embodiment of the present application;
Fig. 6 shows a flowchart of an object imaging method provided by an embodiment of the present application;
Fig. 7 shows a block diagram of a device for extracting three-dimensional information from images provided by an embodiment of the present application;
Fig. 8 shows a block diagram of an object imaging device provided by an embodiment of the present application;
Fig. 9 shows a schematic of a system for extracting three-dimensional information from images provided by an embodiment of the present application;
Fig. 10 shows a schematic of an electronic device provided by an embodiment of the present application.
Detailed Description of the Embodiments
To make the objects, technical solutions and advantages of the embodiments of the present application clearer, the technical solutions of the present application are described below clearly and completely with reference to the drawings. Obviously, the described embodiments are only some, not all, of the embodiments of the present application. All other embodiments obtained by a person of ordinary skill in the art based on the embodiments of the present application without creative effort fall within the scope of protection of the present application.
In existing imaging techniques, whether imaging by elongating the focus with a Bessel beam so that a single exposure detects fluorescence signals within a large volume, or by shaping the incident focus into a V to convert axial position information into lateral position information, the speed is slow. On this basis, the embodiments of the present application provide a method for extracting three-dimensional information from images, an object imaging method, a device and a system that can rapidly extract three-dimensional information from images based on the gradient variation of the axial light intensity, thereby increasing the object imaging speed.
To facilitate understanding of the present embodiment, the method for extracting three-dimensional information from images disclosed in the embodiments of the present application is first described in detail.
Fig. 2 shows a flowchart of the method for extracting three-dimensional information from images provided by an embodiment of the present application. The method is applied to, for example, a server in an object imaging system. Fig. 3 shows the corresponding image processing flow. The method specifically comprises the following steps:
Step S202: acquire two target images from which three-dimensional information is to be extracted.
The two target images are captured by the microscope under the condition that the beam intensity varies in a gradient and the axial step of the beam is half the beam length, as shown in Fig. 4.
In a specific implementation, the microscope captures the first image under the gradient-intensity condition; the beam focus is then moved axially (along z) by a preset step, equal to half the beam length, using a translation stage or an electrically tunable lens, and the second image is captured through the microscope. These two images serve as the two target images from which three-dimensional information is to be extracted, shown as Im1-1 and Im1-2 in Fig. 3.
It should be noted that various types of microscope can be used, such as a two-photon fluorescence microscope, and various beams whose intensity varies in a gradient can be used; in a preferred embodiment, the beam is a Bessel beam.
Step S204: preprocess the two target images.
The preprocessing comprises background subtraction and mean filtering, which improve the signal-to-noise ratio. Filtering and denoising the target images Im1-1 and Im1-2 yields Im2-1 and Im2-2 in Fig. 3.
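The preprocessing step above can be sketched as follows. The background estimate and the filter kernel size are illustrative assumptions; the application does not specify concrete values:

```python
import numpy as np
from scipy.ndimage import uniform_filter

def preprocess(img, background=None, kernel=3):
    """Background subtraction followed by mean filtering (hedged sketch).

    `background` and `kernel` are assumed parameters, not values from
    the application.
    """
    img = np.asarray(img, dtype=float)
    if background is None:
        background = np.median(img)  # assumed background estimate
    img = np.clip(img - background, 0.0, None)  # subtract background, clamp negatives
    return uniform_filter(img, size=kernel)     # mean (average) filter
```

Applied to a raw frame, this produces the denoised images labeled Im2-1 and Im2-2 in Fig. 3.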
Step S206: perform intensity-to-position conversion on the two preprocessed target images.
Specifically, the Bessel beam intensity distribution is mapped to an intensity that is linearly related to the axial position, as shown in Fig. 5: (a) shows the intensity distribution of the Bessel beam, and (b) shows the mapped linear relationship between intensity and position. Each intensity y in the two preprocessed target images Im2-1 and Im2-2 is converted to an axial position x by the following formula, yielding Im3-1 and Im3-2 in Fig. 3:
Figure PCTCN2019124508-appb-000003
where y denotes the beam intensity, L denotes the beam length, and x denotes the axial position.
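The conversion formula itself survives only as an unrendered equation image, so the mapping below is an illustrative assumption: if the gradient intensity falls linearly from a peak y_max at x = 0 to zero at x = L, the inverse mapping would be x = L * (1 - y / y_max):

```python
import numpy as np

def intensity_to_position(y, L, y_max):
    """Map a gradient intensity y to an axial position x (hedged sketch).

    Assumption: intensity decreases linearly from y_max at x = 0 to 0 at
    x = L, so x = L * (1 - y / y_max). The application's actual formula
    is contained in its unrendered equation image.
    """
    y = np.asarray(y, dtype=float)
    return L * (1.0 - y / y_max)
```

Under this assumption the brightest pixels map to x = 0 and the dimmest to x = L, giving the position-proportional images Im3-1 and Im3-2.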
Step S208: extract the common region of the two intensity-to-position converted target images to obtain the overlapping area image.
Specifically, the two converted target images Im3-1 and Im3-2 are converted into two binary images; taking the intersection of the two binary images gives the common region of the two converted target images; the image corresponding to the common region serves as the overlapping area image corresponding to the two target images, Im4 in Fig. 3.
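The binarize-and-intersect extraction of the overlapping area can be sketched as follows; the binarization threshold is an assumed parameter:

```python
import numpy as np

def overlap_mask(im3_1, im3_2, threshold=0.0):
    """Binarize the two converted images and take their intersection."""
    b1 = np.asarray(im3_1) > threshold  # binary image of Im3-1
    b2 = np.asarray(im3_2) > threshold  # binary image of Im3-2
    return b1 & b2  # True only where both images contain signal

def overlap_image(im, mask):
    """Keep only the overlapping area of an image, zeroing the rest."""
    return np.where(mask, im, 0)
```

The mask picks out the half-beam-length region imaged in both frames, corresponding to Im4 in Fig. 3.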
Step S210: obtain, from the intensity image and the position image of the overlapping area image, the three-dimensional information map of the overlapping area image corresponding to the two target images.
Specifically, the intensity image of the overlapping area image is first acquired, shown as Im5-1 in Fig. 3; performing an intensity-to-position operation on the intensity image Im5-1 gives the position image of the overlapping area image, shown as Im5-2 in Fig. 3; encoding the intensity image Im5-1 and the position image Im5-2 of the overlapping area image gives the three-dimensional information map Im6 of the overlapping area image Im4 corresponding to the two target images.
The step of performing the intensity-to-position operation on the intensity image to obtain the position image of the overlapping area image specifically comprises:
normalizing the position information of the intensity image by the following formula:
Figure PCTCN2019124508-appb-000004
where x_position is the normalized position information, and I_m3-1 and I_m3-2 denote the position information of the two intensity-to-position converted images; the product of x_position and half the beam length is then taken to obtain the position image of the overlapping area image.
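The normalization formula appears only as an unrendered equation image; one plausible reading, assumed here purely for illustration, is the ratiometric normalization x_position = I_m3-1 / (I_m3-1 + I_m3-2), which is then scaled by half the beam length L/2:

```python
import numpy as np

def position_image(i_m3_1, i_m3_2, L):
    """Compute the position image of the overlapping area (hedged sketch).

    Assumption: x_position = I_m3-1 / (I_m3-1 + I_m3-2); the application's
    actual formula is contained in its unrendered equation image.
    """
    i1 = np.asarray(i_m3_1, dtype=float)
    i2 = np.asarray(i_m3_2, dtype=float)
    total = i1 + i2
    with np.errstate(invalid="ignore", divide="ignore"):
        x_position = np.where(total > 0, i1 / total, 0.0)
    return x_position * (L / 2.0)  # scale by half the beam length
```

Pixels with no signal in either frame are assigned position 0 here; how the application handles empty pixels is not specified.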
The method for extracting three-dimensional information from images provided by the embodiments of the present application can rapidly extract three-dimensional information from images based on the gradient variation of the axial light intensity, thereby increasing the object imaging speed.
Based on the above embodiment of the method for extracting three-dimensional information from images, an embodiment of the present application further provides an object imaging method, likewise applied to the above server. As shown in Fig. 6, the object imaging method specifically comprises the following steps:
Step S602: acquire image source data of the target object.
The image source data are at least two images of the target object captured by the two-photon microscope under the condition that the beam intensity varies in a gradient and the axial step of the beam is half the beam length. The image acquisition process is the same as in the previous embodiment and is not repeated here. Preferably, the beam is a Bessel beam.
Step S604: divide the plurality of images in the image source data into multiple groups, each group consisting of two adjacent images in the axial acquisition order.
Step S606: feed each group of images into a preset three-dimensional information extraction model to obtain the three-dimensional information map of the overlapping area image corresponding to each group, wherein the three-dimensional information extraction model stores an algorithm corresponding to the method of the above embodiment.
Step S608: concatenate the three-dimensional information maps of the overlapping area images of the multiple groups to obtain the three-dimensional image of the target object.
Using the method for extracting three-dimensional information from images of the previous embodiment, three-dimensional information is extracted from the two images in each group, giving the three-dimensional information maps of the overlapping area images corresponding to the multiple groups. Finally, these three-dimensional information maps are concatenated, i.e. stitched in order, to obtain the three-dimensional image of the target object.
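Steps S604-S608 — pairing axially adjacent frames and concatenating the per-pair results — can be sketched as follows; `extract_3d_info` is a hypothetical placeholder standing in for the extraction model of the previous embodiment:

```python
import numpy as np

def image_pairs(stack):
    """Group a z-ordered image stack into adjacent pairs: (0,1), (1,2), ..."""
    return [(stack[i], stack[i + 1]) for i in range(len(stack) - 1)]

def volume_from_stack(stack, extract_3d_info):
    """Run the extraction model on each adjacent pair and concatenate the
    resulting overlap-area 3D maps in axial acquisition order (sketch)."""
    maps = [extract_3d_info(a, b) for a, b in image_pairs(stack)]
    return np.stack(maps, axis=0)  # cascade along the axial direction
```

Because each half-beam-length step overlaps the previous frame, N acquired frames yield N-1 overlap maps covering the full volume.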
The object imaging method provided by this embodiment increases the object imaging speed, achieving volumetric imaging more than 10 times faster than a conventional two-photon microscope without causing excessive photobleaching or photodamage, making it particularly suitable for imaging embryonic development and neural activity. Moreover, the method requires no major changes to the imaging system and is simple to use.
Based on the above method embodiments, Fig. 7 shows a block diagram of a device for extracting three-dimensional information from images provided by an embodiment of the present application. The device can be applied to the above server and comprises: an image acquisition module 702, a preprocessing module 704, an intensity-to-position module 706, a region extraction module 708 and a three-dimensional information map generation module 710.
The image acquisition module 702 acquires two target images from which three-dimensional information is to be extracted, wherein the two target images are captured by the microscope under the condition that the beam intensity varies in a gradient and the axial step of the beam is half the beam length; the preprocessing module 704 preprocesses the two target images, wherein the preprocessing comprises background subtraction and mean filtering; the intensity-to-position module 706 performs intensity-to-position conversion on the two preprocessed target images; the region extraction module 708 extracts the common region of the two converted target images to obtain the overlapping area image; and the three-dimensional information map generation module 710 obtains, from the intensity image and the position image of the overlapping area image, the three-dimensional information map of the overlapping area image corresponding to the two target images.
Fig. 8 shows a block diagram of an object imaging device provided by an embodiment of the present application. The device can be applied to the above server and comprises: a data acquisition module 802, a grouping module 804, a three-dimensional information extraction module 806 and an image concatenation module 808.
The data acquisition module 802 acquires image source data of the target object, the image source data being at least two images of the target object captured by the microscope under the condition that the beam intensity varies in a gradient and the axial step of the focal spot is half the beam length; the grouping module 804 divides the plurality of images in the image source data into multiple groups, each group consisting of two adjacent images in the axial acquisition order; the three-dimensional information extraction module 806 feeds each group of images into a preset three-dimensional information extraction model to obtain the three-dimensional information map of the overlapping area image corresponding to each group, wherein the three-dimensional information extraction model stores an algorithm corresponding to the device of the above embodiment; and the image concatenation module 808 concatenates the three-dimensional information maps of the overlapping area images of the multiple groups to obtain the three-dimensional image of the target object.
The above modules may be connected to or communicate with each other via wired or wireless connections. Wired connections may include metal cables, optical cables, hybrid cables, etc., or any combination thereof. Wireless connections may include connections in the form of LAN, WAN, Bluetooth, ZigBee or NFC, or any combination thereof. Two or more modules may be combined into a single module, and any one module may be divided into two or more units.
Based on the above method and device, Fig. 9 shows an object imaging system provided by an embodiment of the present application. The system comprises: a reflector 2, an axicon 3, a convex lens 4, an annular mask 5, a microscope 1 and a controller 6; a laser is arranged in the microscope 1.
The axicon 3 is placed at the front focal plane of the convex lens 4; the annular mask 5 is placed at the back focal plane of the convex lens 4. Laser light emitted by the laser is reflected by the reflector 2 onto the axicon 3 and passes through the convex lens 4 and the annular mask 5 to generate the beam, e.g. a Bessel beam. The microscope 1 captures multiple images of the target object under the condition that the beam intensity varies in a gradient and the axial step of the beam is half the beam length. The object imaging device of the above embodiment is installed on the controller 6; the controller 6 receives the multiple images of the target object sent by the microscope 1 and obtains the three-dimensional image of the target object through the object imaging device.
The system provided by this embodiment can generate a Bessel beam. Its core is the combination of the axicon, the lens and the annular mask: the axicon is placed at the front focal plane of the lens, so the action of the lens forms an annular beam at its back focal plane; the transmitting part of the annular mask is aligned concentrically with the annular beam, and the mask must block about 50% of the light in order to form a symmetric Bessel beam. The annular mask is conjugated to the back aperture of the objective using a 4f lens system.
After the Bessel beam is generated, it is measured with fluorescent microbeads and should match the intensity distribution shown in (a) of Fig. 5. Images are then acquired with the axial (z-axis) step set to half the beam length, so that the intensity distribution of the overlapping area can be used to compute the z-axis position information of that area, i.e. the three-dimensional information.
It should be noted that, to achieve a higher effective numerical aperture, a suitable axicon 3 and convex lens 4 are chosen so that the annular beam matches the outermost ring of the objective. The Bessel beam can also be generated by combining a spatial light modulator with a mask.
For ease of understanding, Fig. 10 shows a schematic of exemplary hardware and software components of an electronic device 1000 that can implement the ideas of the present application according to some embodiments. For example, a processor 1020 may be used on the electronic device 1000 to perform the functions described in the present application.
The electronic device 1000 may be a general-purpose computer or a special-purpose computer; both can be used to implement the image three-dimensional information extraction method or the object imaging method of the present application. Although only one computer is shown, the functions described herein may, for convenience, be implemented in a distributed manner on multiple similar platforms to balance the processing load.
For example, the electronic device 1000 may include a network port 1010 connected to a network, one or more processors 1020 for executing program instructions, a communication bus 1030, and storage media 1040 in different forms, e.g. disk, ROM or RAM, or any combination thereof. Illustratively, the computer platform may also include program instructions stored in ROM, RAM or other types of non-transitory storage media, or any combination thereof, according to which the methods of the present application can be implemented. The electronic device 1000 also includes an input/output (I/O) interface 1050 between the computer and other input/output devices (e.g. keyboard, display).
For ease of description, only one processor is described in the electronic device 1000. It should be noted, however, that the electronic device 1000 in the present application may also include multiple processors, so the steps described as executed by one processor may also be executed jointly by multiple processors or executed separately. For example, if the processor of the electronic device 1000 executes steps A and B, it should be understood that steps A and B may also be executed jointly by two different processors or executed separately in one processor; e.g., a first processor executes step A and a second processor executes step B, or the first and second processors jointly execute steps A and B.
An embodiment of the present application further provides a computer-readable storage medium on which a computer program is stored; when the computer program is run by a processor, the steps of any of the above image three-dimensional information extraction methods or object imaging methods are executed.
Those skilled in the art can clearly understand that, for convenience and brevity of description, the specific working processes of the systems and devices described above may refer to the corresponding processes in the method embodiments and are not repeated here. In the several embodiments provided in the present application, it should be understood that the disclosed systems, devices and methods may be implemented in other ways. The device embodiments described above are merely illustrative; e.g., the division of the modules is only a logical functional division, and there may be other divisions in actual implementation; e.g., multiple modules or components may be combined or integrated into another system, or some features may be ignored or not executed. Furthermore, the mutual couplings, direct couplings or communication connections shown or discussed may be indirect couplings or communication connections through some communication interfaces, devices or modules, and may be electrical, mechanical or other forms.
The modules described as separate components may or may not be physically separate, and the components shown as modules may or may not be physical units; i.e., they may be located in one place or distributed over multiple network units. Some or all of the units may be selected according to actual needs to achieve the purpose of the solution of this embodiment.
In addition, the functional units in the embodiments of the present application may be integrated into one processing unit, or each unit may exist physically alone, or two or more units may be integrated into one unit.
If the functions are implemented in the form of software functional units and sold or used as independent products, they may be stored in a processor-executable non-volatile computer-readable storage medium. Based on this understanding, the technical solution of the present application, in essence, or the part contributing to the prior art, or a part of the technical solution, may be embodied in the form of a software product; the computer software product is stored in a storage medium and includes several instructions for causing a computer device (which may be a personal computer, a server, a network device, etc.) to execute all or part of the steps of the methods described in the embodiments of the present application. The aforementioned storage media include various media that can store program code, such as a USB flash drive, a removable hard disk, ROM, RAM, a magnetic disk or an optical disk.
In addition, in the description of the embodiments of the present application, unless otherwise expressly specified and limited, the terms "mounted", "connected" and "coupled" should be understood in a broad sense; e.g., a connection may be fixed, detachable or integral; mechanical or electrical; direct, or indirect through an intermediary, or an internal communication between two elements. For a person of ordinary skill in the art, the specific meanings of the above terms in the present application can be understood according to the specific circumstances.
In the description of the present application, it should be noted that the terms "first", "second" and "third" are used for descriptive purposes only and shall not be understood as indicating or implying relative importance.
Finally, it should be noted that the above embodiments are only specific implementations of the present application, used to illustrate its technical solutions rather than limit them, and the scope of protection of the present application is not limited thereto. Although the present application has been described in detail with reference to the foregoing embodiments, a person of ordinary skill in the art should understand that anyone familiar with the technical field can still, within the technical scope disclosed in the present application, modify the technical solutions recorded in the foregoing embodiments, easily conceive of changes, or make equivalent substitutions for some of the technical features; such modifications, changes or substitutions do not make the essence of the corresponding technical solutions depart from the spirit and scope of the technical solutions of the embodiments of the present application, and shall all be covered by the scope of protection of the present application. Therefore, the scope of protection of the present application shall be subject to the scope of protection of the claims.

Claims (10)

  1. A method for extracting three-dimensional information from images, characterized in that the method comprises:
    acquiring two target images from which three-dimensional information is to be extracted, wherein the two target images are captured by a microscope under the condition that the beam intensity varies in a gradient and the axial step of the beam is half the beam length;
    preprocessing the two target images, wherein the preprocessing comprises background subtraction and mean filtering;
    performing intensity-to-position conversion on the two preprocessed target images;
    extracting the common region of the two intensity-to-position converted target images to obtain an overlapping area image;
    obtaining, from the intensity image and the position image of the overlapping area image, a three-dimensional information map of the overlapping area image corresponding to the two target images.
  2. The method according to claim 1, characterized in that the step of performing intensity-to-position conversion on the two preprocessed target images comprises:
    converting the intensities in the two preprocessed target images into axial positions by the following formula:
    Figure PCTCN2019124508-appb-100001
    where y denotes the beam intensity, L denotes the beam length, and x denotes the axial position.
  3. The method according to claim 1, characterized in that the step of extracting the common region of the two intensity-to-position converted target images to obtain the overlapping area image comprises:
    converting the two intensity-to-position converted target images into two binary images;
    taking the intersection of the two binary images to obtain the common region of the two intensity-to-position converted target images;
    using the image corresponding to the common region as the overlapping area image corresponding to the two target images.
  4. The method according to claim 1, characterized in that the step of obtaining, from the intensity image and the position image of the overlapping area image, the three-dimensional information map of the overlapping area image corresponding to the two target images comprises:
    acquiring the intensity image of the overlapping area image;
    performing an intensity-to-position operation on the intensity image to obtain the position image of the overlapping area image;
    encoding the intensity image and the position image of the overlapping area image to obtain the three-dimensional information map of the overlapping area image corresponding to the two target images.
  5. The method according to claim 4, characterized in that the step of performing the intensity-to-position operation on the intensity image to obtain the position image of the overlapping area image comprises:
    normalizing the position information of the intensity image by the following formula:
    Figure PCTCN2019124508-appb-100002
    where x_position is the normalized position information, and I_m3-1 and I_m3-2 denote the position information of the two intensity-to-position converted images;
    taking the product of x_position and half the beam length to obtain the position image of the overlapping area image.
  6. An object imaging method, characterized in that the method comprises:
    acquiring image source data of a target object, the image source data being at least two images of the target object captured by a two-photon microscope under the condition that the beam intensity varies in a gradient and the axial step of the focal spot is half the beam length;
    dividing the plurality of images in the image source data into multiple groups, each group consisting of two adjacent images in the axial acquisition order;
    feeding each group of images into a preset three-dimensional information extraction model to obtain the three-dimensional information map of the overlapping area image corresponding to each group, wherein the three-dimensional information extraction model stores an algorithm corresponding to the method of any one of claims 1-5;
    concatenating the three-dimensional information maps of the overlapping area images of the multiple groups to obtain a three-dimensional image of the target object.
  7. The method according to claim 6, characterized in that the microscope is a two-photon microscope and the beam is a Bessel beam.
  8. A device for extracting three-dimensional information from images, characterized in that the device comprises:
    an image acquisition module for acquiring two target images from which three-dimensional information is to be extracted, wherein the two target images are captured by a microscope under the condition that the beam intensity varies in a gradient and the axial step of the beam is half the beam length;
    a preprocessing module for preprocessing the two target images, wherein the preprocessing comprises background subtraction and mean filtering;
    an intensity-to-position module for performing intensity-to-position conversion on the two preprocessed target images;
    a region extraction module for extracting the common region of the two intensity-to-position converted target images to obtain an overlapping area image;
    a three-dimensional information map generation module for obtaining, from the intensity image and the position image of the overlapping area image, the three-dimensional information map of the overlapping area image corresponding to the two target images.
  9. An object imaging device, characterized in that the device comprises:
    a data acquisition module for acquiring image source data of a target object, the image source data being at least two images of the target object captured by a microscope under the condition that the beam intensity varies in a gradient and the axial step of the focal spot is half the beam length;
    a grouping module for dividing the plurality of images in the image source data into multiple groups, each group consisting of two adjacent images in the axial acquisition order;
    a three-dimensional information extraction module for feeding each group of images into a preset three-dimensional information extraction model to obtain the three-dimensional information map of the overlapping area image corresponding to each group, wherein the three-dimensional information extraction model stores an algorithm corresponding to the method of any one of claims 1-5;
    an image concatenation module for concatenating the three-dimensional information maps of the overlapping area images of the multiple groups to obtain the three-dimensional image of the target object.
  10. An object imaging system, characterized in that the system comprises: a reflector, an axicon, a convex lens, an annular mask, a microscope and a controller;
    a laser is arranged in the microscope;
    the axicon is placed at the front focal plane of the convex lens;
    the annular mask is placed at the back focal plane of the convex lens;
    laser light emitted by the laser is reflected by the reflector onto the axicon and passes through the convex lens and the annular mask to generate a beam;
    the microscope captures multiple images of a target object under the condition that the beam intensity varies in a gradient and the axial step of the beam is half the beam length;
    the object imaging device according to claim 9 is installed on the controller;
    the controller receives the multiple images of the target object sent by the microscope and obtains the three-dimensional image of the target object through the object imaging device.
PCT/CN2019/124508 2018-12-29 2019-12-11 Method for extracting three-dimensional information from images, object imaging method, device and system WO2020135040A1 (zh)

Applications Claiming Priority (2)

Application Number Priority Date Filing Date Title
CN201811635501.1A CN111381357B (zh) 2018-12-29 2018-12-29 Method for extracting three-dimensional information from images, object imaging method, device and system
CN201811635501.1 2018-12-29

Publications (1)

Publication Number Publication Date
WO2020135040A1 true WO2020135040A1 (zh) 2020-07-02

Family

ID=71127260

Family Applications (1)

Application Number Title Priority Date Filing Date
PCT/CN2019/124508 WO2020135040A1 (zh) 2018-12-29 2019-12-11 图像三维信息提取方法、对象成像方法、装置及系统

Country Status (2)

Country Link
CN (1) CN111381357B (zh)
WO (1) WO2020135040A1 (zh)

Citations (8)

* Cited by examiner, † Cited by third party
Publication number Priority date Publication date Assignee Title
US20040026619A1 (en) * 2002-08-09 2004-02-12 Oh Chil Hwan Method and apparatus for extracting three-dimensional spacial data of object using electron microscope
CN102968792A (zh) * 2012-10-29 2013-03-13 中国科学院自动化研究所 显微视觉下多焦面物体成像的方法
CN103308452A (zh) * 2013-05-27 2013-09-18 中国科学院自动化研究所 一种基于景深融合的光学投影断层成像图像获取方法
TW201342303A (zh) * 2012-04-13 2013-10-16 Hon Hai Prec Ind Co Ltd 三維空間圖像的獲取系統及方法
CN103558193A (zh) * 2013-10-24 2014-02-05 深圳先进技术研究院 一种双光子显微镜
CN105321152A (zh) * 2015-11-11 2016-02-10 佛山轻子精密测控技术有限公司 一种图像拼接方法和系统
CN107392946A (zh) * 2017-07-18 2017-11-24 宁波永新光学股份有限公司 一种面向三维形状重建的显微多焦距图像序列处理方法
CN107680152A (zh) * 2017-08-31 2018-02-09 太原理工大学 基于图像处理的目标物表面形貌测量方法和装置

Family Cites Families (22)

* Cited by examiner, † Cited by third party
Publication number Priority date Publication date Assignee Title
EP1438697A2 (de) * 2001-10-22 2004-07-21 Leica Microsystems Wetzlar GmbH Verfahren und vorrichtung zur erzeugung lichtmikroskopischer, dreidimensionaler bilder
DE102007045897A1 (de) * 2007-09-26 2009-04-09 Carl Zeiss Microimaging Gmbh Verfahren zur mikroskopischen dreidimensionalen Abbildung einer Probe
EP2289045A1 (en) * 2008-05-16 2011-03-02 Artivision Technologies Ltd. Method and device for analyzing video signals generated by a moving camera
US8362409B2 (en) * 2009-10-29 2013-01-29 Applied Precision, Inc. System and method for continuous, asynchronous autofocus of optical instruments
EP2657747A1 (en) * 2012-04-24 2013-10-30 Deutsches Krebsforschungszentrum 4Pi STED fluorescence light microscope with high three-dimensional spatial resolution
CN203502664U (zh) * 2012-11-09 2014-03-26 蒋礼阳 一种透射光学显微镜的样品梯度照明装置
CN105939673A (zh) * 2013-12-26 2016-09-14 诺森有限公司 超声波探头或光声探头、使用其的超声波诊断系统、超声波治疗系统、超声波诊断治疗系统以及超声波系统或光声系统
DE102014202860B4 (de) * 2014-02-17 2016-12-29 Leica Microsystems Cms Gmbh Bereitstellen von Probeninformationen mit einem Lasermikrodissektionssystem
CN104021522A (zh) * 2014-04-28 2014-09-03 中国科学院上海光学精密机械研究所 基于强度关联成像的目标图像分离装置及分离方法
CN104966282B (zh) * 2014-12-24 2017-12-08 广西师范大学 一种用于单个血红细胞检测的图像采集方法及系统
JP6503221B2 (ja) * 2015-05-13 2019-04-17 オリンパス株式会社 3次元情報取得装置、及び、3次元情報取得方法
CN105023270A (zh) * 2015-05-29 2015-11-04 汤一平 用于地下基础设施结构监测的主动式3d立体全景视觉传感器
CN106199941A (zh) * 2016-08-30 2016-12-07 浙江大学 一种移频光场显微镜以及三维超分辨微观显示方法
CN106548485B (zh) * 2017-01-18 2023-11-21 上海朗研光电科技有限公司 纳米颗粒荧光空间编码防伪标识方法
CN106983492B (zh) * 2017-02-22 2020-06-16 中国科学院深圳先进技术研究院 一种光声成像系统
CN206893310U (zh) * 2017-03-30 2018-01-16 鲁东大学 一种三维位置可控的阵列光镊装置
CN108693624B (zh) * 2017-04-10 2021-09-03 深圳市真迈生物科技有限公司 成像方法、装置及系统
US10646288B2 (en) * 2017-04-12 2020-05-12 Bio-Medical Engineering (HK) Limited Automated steering systems and methods for a robotic endoscope
EP3649231A4 (en) * 2017-05-19 2021-07-28 Thrive Bioscience, Inc. SYSTEMS AND PROCEDURES FOR CELL DISSOCIATION
CN108227233B (zh) * 2017-12-27 2020-02-21 清华大学 基于光片结构光的显微层析超分辨率成像方法及系统
CN108680544B (zh) * 2018-04-23 2021-04-06 浙江大学 一种结构化照明的光切片荧光显微成像方法和装置
CN108957719B (zh) * 2018-09-07 2020-04-10 苏州国科医疗科技发展有限公司 一种双光子受激发射损耗复合显微镜

Patent Citations (8)

* Cited by examiner, † Cited by third party
Publication number Priority date Publication date Assignee Title
US20040026619A1 (en) * 2002-08-09 2004-02-12 Oh Chil Hwan Method and apparatus for extracting three-dimensional spacial data of object using electron microscope
TW201342303A (zh) * 2012-04-13 2013-10-16 Hon Hai Prec Ind Co Ltd 三維空間圖像的獲取系統及方法
CN102968792A (zh) * 2012-10-29 2013-03-13 中国科学院自动化研究所 显微视觉下多焦面物体成像的方法
CN103308452A (zh) * 2013-05-27 2013-09-18 中国科学院自动化研究所 一种基于景深融合的光学投影断层成像图像获取方法
CN103558193A (zh) * 2013-10-24 2014-02-05 深圳先进技术研究院 一种双光子显微镜
CN105321152A (zh) * 2015-11-11 2016-02-10 佛山轻子精密测控技术有限公司 一种图像拼接方法和系统
CN107392946A (zh) * 2017-07-18 2017-11-24 宁波永新光学股份有限公司 一种面向三维形状重建的显微多焦距图像序列处理方法
CN107680152A (zh) * 2017-08-31 2018-02-09 太原理工大学 基于图像处理的目标物表面形貌测量方法和装置

Non-Patent Citations (1)

* Cited by examiner, † Cited by third party
Title
XU-WEN ZHAO, LU JING-QI: "Fast axial scanning system based on a few lenses in moving objective lens", JIGUANG YU HONGWAI - LASER & INFRARED, JIGUANG YU HONGWAI BIANJIBU, BEIJING, CN, vol. 46, no. 11, 1 November 2016 (2016-11-01), CN, pages 1379 - 1383, XP055716790, ISSN: 1001-5078 *

Also Published As

Publication number Publication date
CN111381357B (zh) 2021-07-20
CN111381357A (zh) 2020-07-07

Similar Documents

Publication Publication Date Title
Orth et al. Optical fiber bundles: Ultra-slim light field imaging probes
US9881373B2 (en) Image generating apparatus and image generating method
JP5824113B2 (ja) 光学ディフューザを用いて画像を記録するためのシステム、方法、及びメディア
Isobe et al. Enhancement of lateral resolution and optical sectioning capability of two-photon fluorescence microscopy by combining temporal-focusing with structured illumination
US20150279033A1 (en) Image data generating apparatus and image data generating method
Patwary et al. Image restoration for three-dimensional fluorescence microscopy using an orthonormal basis for efficient representation of depth-variant point-spread functions
CN109716434B (zh) 基于非再入型二次扭曲(nrqd)光栅和棱栅的四维多平面宽带成像系统
Yang et al. Single-shot smartphone-based quantitative phase imaging using a distorted grating
US11334743B2 (en) System and method for image analysis of multi-dimensional data
Feng et al. Digital micromirror device camera with per-pixel coded exposure for high dynamic range imaging
WO2020192235A1 (zh) 一种双光子荧光成像方法、系统及图像处理设备
Liu et al. Dark-field illuminated reflectance fiber bundle endoscopic microscope
WO2017127345A1 (en) System and method for segmentation of three-dimensional microscope images
CN107518879A (zh) 一种荧光成像装置及方法
Pfeil et al. Examination of blood samples using deep learning and mobile microscopy
Obara et al. A novel method for quantified, superresolved, three-dimensional colocalisation of isotropic, fluorescent particles
Hoffmann et al. Blazed oblique plane microscopy reveals scale-invariant inference of brain-wide population activity
WO2020135040A1 (zh) 图像三维信息提取方法、对象成像方法、装置及系统
KR102561360B1 (ko) 보정을 사용하지 않고 파이버스코프 이미지를 처리하는 방법 및 이를 수행하는 파이버스코프 시스템
CN112132772A (zh) 一种病理切片实时判读方法、装置及系统
CN117455786A (zh) 一种多聚焦图像融合方法、装置、计算机设备及存储介质
CN107193118A (zh) 多光谱显微成像系统及显微镜
Motamedi et al. Analysis and characterization of high-resolution and high-aspect-ratio imaging fiber bundles
CN111886491A (zh) 散射辅助超定位显微术方法及相关装置
EP4030750A1 (en) Artificial intelligence-based image processing method, apparatus, device, and medium

Legal Events

Date Code Title Description
121 Ep: the epo has been informed by wipo that ep was designated in this application

Ref document number: 19901603

Country of ref document: EP

Kind code of ref document: A1

NENP Non-entry into the national phase

Ref country code: DE

122 Ep: pct application non-entry in european phase

Ref document number: 19901603

Country of ref document: EP

Kind code of ref document: A1