WO2019076265A1 - Fiber bundle image processing method and apparatus - Google Patents

Fiber bundle image processing method and apparatus

Info

Publication number
WO2019076265A1
Authority
WO
WIPO (PCT)
Prior art keywords
pixel
image
reconstructed
pixel value
weight
Prior art date
Application number
PCT/CN2018/110250
Other languages
English (en)
French (fr)
Inventor
邵金华
段后利
孙锦
Original Assignee
苏州微景医学科技有限公司
南京亘瑞医疗科技有限公司
Priority date
Filing date
Publication date
Application filed by 苏州微景医学科技有限公司 and 南京亘瑞医疗科技有限公司
Priority to JP2020541844A (patent JP7001295B2/ja)
Priority to MX2020003986A (patent MX2020003986A/es)
Priority to KR1020207013622A (patent KR102342575B1/ko)
Priority to RU2020115459A (patent RU2747946C1/ru)
Priority to BR112020007623-6A (patent BR112020007623B1/pt)
Priority to AU2018353915A (patent AU2018353915A1/en)
Priority to CA3079123A (patent CA3079123C/en)
Priority to EP18868761.0A (patent EP3699667A4/en)
Publication of WO2019076265A1 (patent WO2019076265A1/zh)
Priority to US16/850,048 (patent US11525996B2/en)

Classifications

    • GPHYSICS
    • G02OPTICS
    • G02BOPTICAL ELEMENTS, SYSTEMS OR APPARATUS
    • G02B21/00Microscopes
    • G02B21/36Microscopes arranged for photographic purposes or projection purposes or digital imaging or video purposes including associated control and data processing arrangements
    • G02B21/365Control or image processing arrangements for digital or video microscopes
    • GPHYSICS
    • G02OPTICS
    • G02BOPTICAL ELEMENTS, SYSTEMS OR APPARATUS
    • G02B21/00Microscopes
    • G02B21/36Microscopes arranged for photographic purposes or projection purposes or digital imaging or video purposes including associated control and data processing arrangements
    • G02B21/361Optical details, e.g. image relay to the camera or image sensor
    • GPHYSICS
    • G02OPTICS
    • G02BOPTICAL ELEMENTS, SYSTEMS OR APPARATUS
    • G02B21/00Microscopes
    • G02B21/36Microscopes arranged for photographic purposes or projection purposes or digital imaging or video purposes including associated control and data processing arrangements
    • G02B21/365Control or image processing arrangements for digital or video microscopes
    • G02B21/367Control or image processing arrangements for digital or video microscopes providing an output produced by processing a plurality of individual source images, e.g. image tiling, montage, composite images, depth sectioning, image comparison
    • GPHYSICS
    • G02OPTICS
    • G02BOPTICAL ELEMENTS, SYSTEMS OR APPARATUS
    • G02B23/00Telescopes, e.g. binoculars; Periscopes; Instruments for viewing the inside of hollow bodies; Viewfinders; Optical aiming or sighting devices
    • G02B23/24Instruments or systems for viewing the inside of hollow bodies, e.g. fibrescopes
    • G02B23/26Instruments or systems for viewing the inside of hollow bodies, e.g. fibrescopes using light guides
    • GPHYSICS
    • G06COMPUTING; CALCULATING OR COUNTING
    • G06TIMAGE DATA PROCESSING OR GENERATION, IN GENERAL
    • G06T17/00Three dimensional [3D] modelling, e.g. data description of 3D objects
    • G06T17/20Finite element generation, e.g. wire-frame surface description, tesselation
    • G06T17/205Re-meshing
    • GPHYSICS
    • G06COMPUTING; CALCULATING OR COUNTING
    • G06TIMAGE DATA PROCESSING OR GENERATION, IN GENERAL
    • G06T5/00Image enhancement or restoration
    • G06T5/90Dynamic range modification of images or parts thereof
    • G06T5/94Dynamic range modification of images or parts thereof based on local image properties, e.g. for local contrast enhancement
    • GPHYSICS
    • G06COMPUTING; CALCULATING OR COUNTING
    • G06TIMAGE DATA PROCESSING OR GENERATION, IN GENERAL
    • G06T2207/00Indexing scheme for image analysis or image enhancement
    • G06T2207/10Image acquisition modality
    • G06T2207/10056Microscopic image
    • GPHYSICS
    • G06COMPUTING; CALCULATING OR COUNTING
    • G06TIMAGE DATA PROCESSING OR GENERATION, IN GENERAL
    • G06T2207/00Indexing scheme for image analysis or image enhancement
    • G06T2207/20Special algorithmic details
    • G06T2207/20004Adaptive image processing
    • G06T2207/20012Locally adaptive

Definitions

  • the present invention relates to the field of medical image processing, and more particularly to a fiber bundle image processing method and apparatus.
  • fiber-optic microscopy enables tomographic examination of biological tissue; it can detect the tendency toward tumor lesions in biological tissue in advance while avoiding the pain that puncture procedures cause clinical patients.
  • Fiber optic microscopy has broad market prospects in clinical patient testing, screening, and medical and biological research.
  • Figure 1 exemplarily shows a portion of an existing fiber bundle image.
  • the existing fiber bundle image exhibits honeycomb-shaped noise and does not represent the target region of the biological tissue well.
  • as for existing reconstruction techniques for fiber bundle images, they are generally computationally intensive and time-consuming.
  • the present invention has been made in consideration of the above problems.
  • the invention provides a fiber bundle image processing method and device.
  • a fiber bundle image processing method comprising:
  • the sample image is reconstructed based on the corrected pixel information to obtain a reconstructed image.
  • the correcting the determined pixel information comprises calculating the corrected pixel value according to the following formula:
  • F = K × (I_s − I_b), where K = k / (I_c − I_b)
  • F represents the corrected pixel value;
  • I_s represents the determined pixel value;
  • I_b represents the pixel value of the corresponding pixel in the background image;
  • K represents the correction coefficient;
  • I_c represents the pixel value of the corresponding pixel in the reference image;
  • k represents a scale factor equal to the median of the differences between the pixel values of the pixels in the reference image and the pixel values of the corresponding pixels in the background image.
  • the method further includes:
  • the non-fluorescent sample is sampled to obtain the background image.
  • the reconstructing the sample image based on the corrected pixel information comprises: using an interpolation method to obtain a reconstructed pixel value of each pixel based on the weight of the pixel and the corrected pixel information.
  • the obtaining the reconstructed pixel value of the pixel comprises:
  • the reconstructed pixel value of the pixel is calculated by linear interpolation according to the weight of the pixel.
  • the determining the weight of the pixel comprises:
  • the weights of the pixels corresponding to the vertices of the triangle are set to be inversely proportional to the distance between the pixels and the vertices.
  • the reconstructed pixel value of the pixel is calculated according to the following formula:
  • Gx = Wa × Ga + Wb × Gb + Wc × Gc
  • Gx represents the reconstructed pixel value of pixel x;
  • Wa and Ga respectively represent the weight of pixel x for vertex a of the triangle in which it is located and the corrected pixel value of vertex a;
  • Wb and Gb respectively represent the weight of pixel x for vertex b of the triangle in which it is located and the corrected pixel value of vertex b;
  • Wc and Gc respectively represent the weight of pixel x for vertex c of the triangle in which it is located and the corrected pixel value of vertex c.
  • the method further includes performing registration processing on the reconstructed image and another image.
  • a fiber bundle image processing apparatus comprising:
  • the sample image is reconstructed based on the corrected pixel information to obtain a reconstructed image.
  • the above-mentioned fiber bundle image processing method and apparatus can not only obtain a more ideal processed image, but also have a small amount of calculation, and the entire calculation process takes a short time.
  • Figure 1 exemplarily shows a portion of an existing fiber bundle image
  • FIG. 2 is a schematic flow chart of a fiber bundle image processing method according to an embodiment of the present invention.
  • FIG. 3 is a partially enlarged schematic view showing a reconstructed image of the fiber bundle image shown in FIG. 1 according to an embodiment of the present invention
  • FIG. 4 is a schematic flow chart of a fiber bundle image analysis method according to an embodiment of the present invention.
  • Figure 5 illustrates a reference image acquired by sampling a uniform fluorescent sample in accordance with one embodiment of the present invention
  • Figure 6 shows a partial enlarged schematic view of a reference image in accordance with one embodiment of the present invention
  • FIG. 7 shows a partial enlarged schematic view of a reference image identifying pixels corresponding to a fiber center, in accordance with one embodiment of the present invention
  • FIG. 8 shows a schematic diagram of a fiber bundle image identifying pixels corresponding to a fiber center, in accordance with one embodiment of the present invention
  • Figure 9 shows a schematic flow chart of a reconstruction step in accordance with one embodiment of the present invention.
  • Figure 10 is a partially enlarged schematic view showing a sample image of a Delaunay triangulation according to an embodiment of the present invention
  • 11A and 11B illustrate a reconstructed image to be registered and another image, respectively, in accordance with one embodiment of the present invention
  • Figure 11C is a diagram showing the results of non-rigid registration of the two images shown in Figures 11A and 11B.
  • FIG. 2 illustrates a fiber bundle image processing method 200 in accordance with one embodiment of the present invention.
  • the bundle image processing method 200 can be used to reconstruct an original bundle image so as to better present a target area of biological tissue.
  • step S210 pixel information of a position corresponding to the center of the optical fiber in the sample image is determined.
  • the sample image is a fiber bundle image acquired using a fiber bundle.
  • the fiber bundle includes many fibers, for example, up to 30,000.
  • the arrangement of these fibers in the bundle is irregular.
  • Each fiber can act as an optical path that transmits information in the target area of the biological tissue to produce a bundle image in the imaging device.
  • bundle images obtained with the same fiber bundle have the same size, that is, the same resolution, width, and height.
  • the imaged area of the fiber bundle in the bundle image can be, for example, a circular area. Since the fiber bundle includes a plurality of optical fibers, honeycomb-shaped (cellular) noise in the bundle image is difficult to avoid, as shown in the bundle image of FIG. 1. Each of these cells roughly corresponds to one fiber. The cellular noise greatly hinders observation of the target area of the biological tissue through the fiber bundle image, seriously affecting the user experience.
  • a cell in a bundle image typically includes a plurality of pixels.
  • each fiber in the bundle can correspond to a plurality of pixels in the bundle image, for example about twenty.
  • the pixel value corresponding to the center of the optical fiber can more ideally reflect the true face of the target region of the biological tissue.
  • pixels corresponding to the center of each of the optical fibers in the sample image are determined and pixel information of the pixel is extracted.
  • the determined pixel information includes a location of the pixel and a pixel value.
  • the position of the pixel can be represented by the row and column values of the pixel in the sample image.
  • the position of these pixels can be represented by a one-dimensional array.
  • An element in an array is a value that represents the position of a pixel.
  • the position Px of the pixel x can be expressed by the following formula:
  • Px = (number of the row in which pixel x is located) × (width of the sample image) + (number of the column in which pixel x is located)
  • here, the position Px is obtained by counting pixels one by one starting from the pixel in the first row and first column of the bundle image, that pixel being the first pixel.
  • the pixel value of the pixel can be obtained by querying the sample image by the position of the pixel.
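As a concrete illustration of the row-major position formula above, the following sketch (the function names are illustrative, not taken from the patent) converts between a pixel's (row, column) coordinates and its one-dimensional position Px, and queries the sample image for the pixel value:

```python
import numpy as np

def position_of(row, col, width):
    """One-dimensional position Px of a pixel, in row-major order."""
    return row * width + col

def row_col_of(p, width):
    """Inverse mapping: recover (row, col) from a position Px."""
    return p // width, p % width

def pixel_value_at(image, p):
    """Query the sample image for the pixel value at position Px."""
    row, col = row_col_of(p, image.shape[1])
    return image[row, col]
```

Storing only these integer positions in a one-dimensional array is what keeps the per-image bookkeeping small: one number per fiber-center pixel.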
  • step S220 the pixel information determined in step S210 is corrected.
  • the determined pixel information may include the position and pixel value of the pixel corresponding to the center of the fiber.
  • step S220 only the pixel values of the partial pixels corresponding to the center position of the fiber in the sample image are corrected to reflect the target area more realistically.
  • step S230 the sample image is reconstructed based on the pixel information corrected in step S220 to obtain a reconstructed image.
  • in step S220, only the pixel values of those pixels in the sample image corresponding to the fiber centers were adjusted. Based on the adjustment result, the pixel values of the other pixels in the sample image, that is, the pixels not corresponding to fiber-center positions, are adjusted, thereby completing the reconstruction of the sample image.
  • 3 is a partially enlarged schematic view showing a reconstructed image of the fiber bundle image shown in FIG. 1 according to an embodiment of the present invention.
  • the image reconstructed by the above image processing operation eliminates the honeycomb-shaped noise of the original sample image; furthermore, the brightness of the entire reconstructed image is relatively uniform, avoiding the problem of dark edges and a bright center. From the perspective of image processing, the entire process involves a small amount of calculation and takes little time.
  • a fiber bundle image analysis method is provided.
  • the analysis method can more accurately determine the pixel information of the fiber-center positions in the fiber bundle image.
  • as long as the same fiber bundle is used, the correspondence between the pixels in the captured bundle image and the fibers in the bundle remains unchanged. Therefore, from this perspective, one fiber bundle image can be used to analyze all other fiber bundle images taken with the same fiber bundle.
  • FIG. 4 shows a schematic flow diagram of a fiber bundle image analysis method 400 in accordance with an embodiment of the present invention.
  • step S410 the fiber bundle image is acquired by the fiber bundle as a reference image.
  • the analysis of the reference image can be applied to all other fiber bundle images taken with the same fiber bundle.
  • the uniformly illuminated sample is sampled using a fiber bundle to obtain a reference image.
  • the reference image should be a bundle image with uniform pixel values and uniform brightness.
  • a uniformly illuminated sample images identically at every pixel, so the sample itself has no negative impact on the analysis method 400, ensuring more accurate determination of the pixel information of the corresponding fiber-center positions in the reference image.
  • the uniformly illuminated sample can be a homogeneous fluorescent sample.
  • the reference image is a fiber bundle image with a constant fluorescence rate.
  • Figure 5 illustrates a reference image acquired by sampling a uniform fluorescent sample in accordance with one embodiment of the present invention. In practical applications of fiber bundle images, imaging is typically performed on samples that fluoresce. Therefore, the reference image obtained by sampling the uniform fluorescent sample better guarantees the accuracy of the analysis method. It will be appreciated that a uniform fluorescent sample is merely an example, and not a limitation, and a reference image may also be acquired by sampling a sample that emits other visible light.
  • step S420 reference pixels in the reference image acquired in step S410 are determined.
  • the pixel value of the reference pixel is higher than the pixel values of its surrounding pixels, and each reference pixel corresponds to the center of exactly one fiber in the fiber bundle.
  • FIG. 6 shows a partial enlarged schematic view of a reference image in accordance with one embodiment of the present invention.
  • FIG. 7 is a schematic diagram showing a pixel corresponding to the center of an optical fiber in a partially enlarged schematic view of the reference image shown in FIG. 6 according to an embodiment of the present invention.
  • the pixel values of the reference pixels corresponding to the center of the fiber are identified by the "+" symbol.
  • a one-dimensional array may be employed to represent the location of the determined reference pixel.
  • step S430 the pixel position of the corresponding fiber center in the fiber bundle image acquired by the same fiber bundle is determined according to the pixel position of the reference pixel in the reference image.
  • the relative positions of the fibers in the bundle are fixed, the relative positions of the pixels corresponding to the center of the fiber in the bundle image acquired by the same bundle are also fixed.
  • the positions of the corresponding fiber centers in all the fiber bundle images acquired by the same fiber bundle can be determined, provided in particular that the fiber arrangement at the distal and proximal ends of the fiber bundle remains unchanged.
  • Figure 8 shows a schematic diagram of a bundle image in which the pixels corresponding to the fiber centers are identified, in accordance with one embodiment of the present invention.
  • the pixel value of the pixel corresponding to the center of the optical fiber is assigned a value of zero.
  • the reference image is used to determine pixel information of the position of the corresponding fiber center in the other bundle image.
  • the result of the above analysis method is not affected by the imaged object in the fiber bundle image, and the result is more accurate and easy to implement.
  • step S420 may specifically include step S421 and step S422.
  • step S421 image segmentation is performed on the reference image to determine a fiber bundle imaging region in the reference image.
  • the reference image includes a fiber bundle imaging area and an insignificant background area.
  • the fiber bundle imaging area is a circular area in the middle.
  • the background area is the black area around the circular area; it is meaningless for analysis of the image.
  • the image segmentation can be performed by image segmentation processing such as thresholding and region growing. The image segmentation operation can further reduce the amount of calculation of the entire image processing method.
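As a toy sketch of such a threshold-based segmentation (the threshold choice and the function name are illustrative, not prescribed by the patent):

```python
import numpy as np

def segment_bundle_region(reference):
    """Return a boolean mask of the bright circular fiber-bundle
    imaging region, using a simple global threshold as an example."""
    threshold = reference.mean()
    return reference > threshold
```

Subsequent steps can then restrict all processing to the masked pixels, which is what reduces the overall amount of calculation.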
  • step S422 reference pixels are determined in the fiber bundle imaging region.
  • the fiber bundle imaging region is first processed using a region maxima method. Then, it is determined that the pixel whose pixel value is the region maximum value is the reference pixel.
  • the region maximum value method is an image segmentation method. As previously mentioned, the pixel corresponding to the center of the fiber in the bundle is the brightest pixel of all pixels corresponding to the fiber, ie the brightest pixel in a cell.
  • the image of the reference image is analyzed by the region maximum value method, and the pixel that obtains the region maximum value is used as the reference pixel corresponding to the center of the fiber.
  • the reference pixel is determined by the region maximum value method.
  • the method effectively utilizes the following objective law: among all pixels corresponding to one fiber in the reference image, the pixel corresponding to the center of the fiber has the highest value. Therefore, the method can quickly and accurately determine the fiber-center pixels of the reference image, thereby ensuring fast and accurate analysis of the fiber bundle image.
  • region maxima method is merely an example and not a limitation of the present invention, and other methods may be used to determine the reference fiber center pixel value, such as the empirical threshold method.
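A minimal sketch of such a region-maxima search, assuming SciPy is available (the function and parameter names are illustrative):

```python
import numpy as np
from scipy.ndimage import maximum_filter

def find_reference_pixels(reference, mask=None, neighborhood=3):
    """Locate pixels whose value is the maximum of their local
    neighborhood; these are taken as the fiber-center reference pixels."""
    local_max = maximum_filter(reference, size=neighborhood)
    candidates = (reference == local_max) & (reference > 0)
    if mask is not None:  # optionally restrict to the bundle imaging region
        candidates &= mask
    return np.nonzero(candidates)
```

The neighborhood size would in practice be tied to the fiber spacing (roughly one fiber diameter), so that each cell contributes exactly one maximum.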
  • the aforementioned fiber bundle image analysis method can be included in the fiber bundle image processing method.
  • using the fiber bundle image analysis method, the information of the pixels corresponding to the fiber-center positions in a bundle image, including the sample image, can be determined. Thereby, more accurate information is obtained.
  • the corresponding position of the sample image can be queried according to the position of the pixel corresponding to the center of the fiber in the reference image determined in the fiber bundle image analysis method.
  • the number of the row and the number of the column in which the pixel y is located are determined based on the position Py of the reference pixel y and the width of the bundle image.
  • the pixel value of the pixel in the sample image is obtained, and the pixel information of the pixel in the sample image can be obtained.
  • the fiber bundle image analysis method provides an accurate analysis result for the fiber bundle image processing method, thereby ensuring that the fiber bundle image processing method has a small calculation amount and a good processing effect.
  • the pixel information determined in step S210 is corrected using the background image. Specifically, the pixel information determined in step S210 can be corrected according to the following formula:
  • F = K × (I_s − I_b)
  • F represents the corrected pixel value of the pixel in the sample image;
  • I_s represents the pixel value determined in step S210;
  • I_b represents the pixel value of the corresponding pixel in the background image;
  • K represents the correction coefficient.
  • the background image is an image generated by imaging a non-illuminated sample, such as a fiber bundle image having a zero fluorescence rate.
  • a non-fluorescent sample can be sampled to obtain the background image.
  • corresponding pixel means that the positions of the pixels in the respective images are the same, and substantially, the pixels correspond to the same position of the same optical fiber in the bundle (for example, the center of the optical fiber). Therefore, the corresponding position in the background image can be queried according to the position of the pixel corresponding to the center of the fiber determined in step S210 to obtain the pixel value of the corresponding pixel in the background image.
  • the corresponding position in the background image can be directly queried based on the position of the reference pixel in the reference image.
  • the pixel value of the corresponding pixel in the background image can be obtained by querying the background image according to the location of the pixel. It can be understood that the corresponding pixels in the background image also correspond to the center of the same optical fiber.
  • first, the difference between the pixel value I_s determined in step S210 and the pixel value I_b of the corresponding pixel in the background image is calculated as the first difference; then, the product of the first difference and the correction coefficient is calculated.
  • the correction coefficient K can be any real number between 0.5 and 1.5 and can be set empirically.
  • the reference image may be a reference image involved in the fiber bundle image analysis method described above.
  • for the pixels in the reference image, the difference between them and the corresponding pixels in the background image is calculated; this difference is referred to as the standard deviation.
  • the calculation amount is smaller while ensuring the calculation accuracy.
  • the difference between the pixel and the corresponding pixel in the background image may be separately calculated for each pixel in the reference image to obtain a standard deviation. Calculate the median k of all standard deviations.
  • the difference between the pixel value I_c of the corresponding pixel in the reference image and the pixel value I_b of the corresponding pixel in the background image is calculated; this is simply referred to as the second difference.
  • the correction coefficient K is determined as the quotient of the median k and the second difference.
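The correction just described can be sketched for the set of fiber-center pixels as follows; this is a sketch under the assumption of floating-point arrays and nonzero reference-minus-background differences (the array and function names are illustrative):

```python
import numpy as np

def correct_center_pixels(sample_vals, background_vals, reference_vals):
    """Apply F = K * (Is - Ib) with K = k / (Ic - Ib), where k is the
    median of (Ic - Ib) over all fiber-center pixels."""
    second_diff = reference_vals - background_vals  # Ic - Ib, per pixel
    k = np.median(second_diff)                      # scale factor k
    K = k / second_diff                             # correction coefficient
    return K * (sample_vals - background_vals)      # corrected values F
```

Dividing by (I_c − I_b) normalizes away per-fiber differences in transmission, which is why the brightness of the reconstructed image comes out uniform.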
  • the correction operation in this example obtains the desired correction effect without requiring complicated calculation, thereby obtaining the desired image processing result.
  • the foregoing step S230 includes: obtaining a reconstructed pixel value of the pixel by using an interpolation method based on the weight of the pixel and the pixel information corrected in step S220.
  • the corrected pixel information reflects the imaging target more realistically, but the above correction operation applies only to the pixels corresponding to fiber centers. Therefore, for each pixel in the bundle image, its weights can be determined based on the positions of the nearby pixels corresponding to fiber centers.
  • FIG. 9 shows a schematic flow chart of step S230 in accordance with one embodiment of the present invention. As shown in FIG. 9, step S230 may include:
  • Step S231 triangulating the sample image based on the pixel information determined in step S210.
  • the pixels corresponding to the fiber centers in the bundle form a finite set of points in the sample image. This set of points supplies all the triangle vertices used to split the sample image into multiple triangles, where any two triangles either do not intersect or intersect exactly at a common edge.
  • the above triangulation is implemented by using a Delaunay triangulation algorithm.
  • the arrangement of the fibers in the bundle is irregular, the distance between the centers of adjacent fibers is approximately uniform, approximately equal to the diameter of the fibers.
  • Figure 10 illustrates a portion of a sample image of a Delaunay triangulation, in accordance with one embodiment of the present invention.
  • the unique triangulation result can be obtained by the Delaunay triangulation algorithm, and it can be ensured that the vertices of other triangles do not appear in the circumscribed circle of any triangle.
  • the triangulation algorithm is more suitable for the image processing method according to the embodiment of the present invention, and a more desirable image processing result can be obtained.
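Assuming SciPy is available, the Delaunay triangulation of the fiber-center positions and the lookup of the triangle containing a given pixel could be sketched as (the point coordinates here are illustrative):

```python
import numpy as np
from scipy.spatial import Delaunay

# Fiber-center pixel positions (row, col); illustrative coordinates.
centers = np.array([[0.0, 0.0], [0.0, 4.0], [4.0, 0.0], [4.0, 4.0]])

tri = Delaunay(centers)                   # empty-circumcircle triangulation
simplex = tri.find_simplex([[1.0, 1.0]])  # triangle containing pixel (1, 1)
```

`find_simplex` returns −1 for pixels outside the convex hull of the centers, which corresponds to the background region outside the bundle imaging area.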
  • Step S232: determining the weight of the pixel based on the triangle, obtained by the triangulation, in which the pixel is located.
  • the weight can have multiple values, each corresponding to one nearby pixel at a fiber center.
  • the pixel corresponding to the weight may be referred to as a reference pixel.
  • each reference pixel is a vertex of a triangle obtained by the triangulation.
  • the final pixel value of the pixel can be determined based on the weight of the pixel and the pixel value of its corresponding reference pixel.
  • the farther a pixel is from a certain reference pixel, the smaller the weight of the pixel for that reference pixel; the closer, the larger.
  • the weight is determined based on the position of the three reference pixels.
  • a weight can be determined for each of the three reference pixels, thereby forming a weight lookup table.
  • Table 1 shows a weight lookup table in accordance with one embodiment of the present invention.
  • the first weight, the second weight, and the third weight respectively indicate the weights of the pixels to be determined for the first reference pixel, the second reference pixel, and the third reference pixel.
  • the first weight and the third weight are equal and relatively small, indicating that the pixel is equally far from the first and third reference pixels, and that this distance is relatively large;
  • the second weight is relatively large, indicating that the pixel is relatively close to the second reference pixel.
  • each pixel in the sample image has a unique triangle, either on the three sides of the triangle or inside the triangle.
  • the three vertices of the unique triangle can be used as the reference pixels of the pixel.
  • the weight of the pixel corresponding to each of the reference pixels can be determined.
  • the distance from the pixel to each vertex (ie, the reference pixel) of the triangle in which the pixel is located can be determined.
  • the pixels can be on or inside the triangle.
  • the weight of the pixel for each vertex is set to be inversely proportional to the distance between the pixel and that vertex. For example, a pixel located at the center of the triangle has a weight of 0.333 for each of its reference pixels.
  • the weight of each pixel in the fiber bundle image is obtained based on triangulation, and the calculation amount is smaller while ensuring the accuracy of the calculation result.
  • the Delaunay triangulation algorithm given above is only an example, and other methods can be used to obtain the weight of each pixel, such as the kriging method.
  • the above manner of determining the weight is only an example, not a limitation.
  • the weight of each pixel is determined in accordance with the position of the three reference pixels, this is merely an indication and not a limitation of the present invention.
  • the weight of each pixel can also be determined according to one reference pixel closest to the distance, and four or more reference pixels.
  • the weight of a pixel can be set empirically.
  • Step S233 calculating a reconstructed pixel value of the pixel by using a linear interpolation method according to the weight of the pixel.
  • the reconstructed pixel value Gx of the pixel x of the reconstructed image is calculated according to the following formula:
  • Gx = Wa × Ga + Wb × Gb + Wc × Gc
  • Wa and Ga respectively represent the weight of pixel x for vertex a of the triangle in which it is located and the corrected pixel value of vertex a;
  • Wb and Gb respectively represent the weight of pixel x for vertex b of the triangle in which it is located and the corrected pixel value of vertex b;
  • Wc and Gc respectively represent the weight of pixel x for vertex c of the triangle in which it is located and the corrected pixel value of vertex c.
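Putting steps S231 to S233 together, a compact sketch of the reconstruction could look as follows: triangulate the fiber-center positions, weight each image pixel by inverse distance to the three vertices of its triangle, and linearly interpolate Gx = Wa·Ga + Wb·Gb + Wc·Gc. SciPy is assumed, and the function and argument names are illustrative:

```python
import numpy as np
from scipy.spatial import Delaunay

def reconstruct(shape, centers, corrected, eps=1e-9):
    """Interpolate corrected fiber-center values over a full image.

    centers   -- (N, 2) array of fiber-center (row, col) positions
    corrected -- (N,) array of corrected pixel values at those centers
    """
    tri = Delaunay(centers)
    out = np.zeros(shape)
    rows, cols = np.mgrid[0:shape[0], 0:shape[1]]
    pts = np.column_stack([rows.ravel(), cols.ravel()]).astype(float)
    simplices = tri.find_simplex(pts)
    for p, s in zip(pts, simplices):
        if s < 0:                    # pixel outside the triangulation
            continue
        verts = tri.simplices[s]     # indices of the three vertices a, b, c
        d = np.linalg.norm(centers[verts] - p, axis=1)
        w = 1.0 / (d + eps)          # inverse-distance weights
        w /= w.sum()                 # normalize so Wa + Wb + Wc = 1
        out[int(p[0]), int(p[1])] = np.dot(w, corrected[verts])
    return out
```

A production version would vectorize the per-pixel loop, but the structure (locate triangle, weight vertices, interpolate) is the same.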
  • the bundle image processing method further comprises the step of registering the reconstructed image and the other image.
  • the other image may also be a reconstructed image.
  • Image registration is used to calculate the relative displacement of the two images. After image registration, the same content of the two images will spatially coincide.
  • the registration operation may employ a correlation coefficient method.
  • the correct displacement for registering the two images is determined by searching for the maximum of the correlation coefficients over all possible displacements.
  • with the correlation coefficient method, the registration operation time is short, which can meet real-time needs.
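A brute-force sketch of the correlation-coefficient search over integer displacements (the function name and the max_shift parameter are illustrative; a practical implementation would search more efficiently):

```python
import numpy as np

def register_translation(fixed, moving, max_shift=3):
    """Return the integer displacement (dy, dx) maximizing the
    correlation coefficient between the overlapping parts of the two
    images, i.e. comparing fixed[y, x] with moving[y + dy, x + dx]."""
    best_corr, best_shift = -2.0, (0, 0)
    h, w = fixed.shape
    for dy in range(-max_shift, max_shift + 1):
        for dx in range(-max_shift, max_shift + 1):
            f = fixed[max(0, -dy):h + min(0, -dy), max(0, -dx):w + min(0, -dx)]
            m = moving[max(0, dy):h + min(0, dy), max(0, dx):w + min(0, dx)]
            if f.size < 2:
                continue
            c = np.corrcoef(f.ravel(), m.ravel())[0, 1]
            if c > best_corr:
                best_corr, best_shift = c, (dy, dx)
    return best_shift
```

The quadratic cost in max_shift is why the document also mentions iterative methods for higher accuracy and restricting the computation to fiber-center pixels for speed.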
  • the registration operation may also employ an iterative registration method.
  • the iterative registration method although slower, can meet higher accuracy requirements.
  • the reconstructed image and the other image are iteratively registered directly based on the positions of the pixels corresponding to the fiber centers in the bundle, the corrected pixel information, and the other image.
  • the registration operation uses only the elements related to the fiber centers in the bundle and ignores other elements, such as the pixels in the reconstructed image that do not correspond to fiber centers. Therefore, while the computational accuracy of the iterative registration is ensured, its computation speed is effectively improved.
  • non-rigid body registration is required.
  • the non-rigid body registration may employ a free deformation method or a Demons registration algorithm.
  • Figures 11A, 11B and 11C illustrate the above-described process of non-rigid registration.
  • Figures 11A and 11B respectively show a reconstructed image to be registered and another image, in accordance with one embodiment of the present invention.
  • the dashed rectangular portion therein illustrates the overlapping part of the two images determined by the rigid registration. The other image is resampled for this overlapping part.
  • Figure 11C shows the result of non-rigid registration of the overlapping part of the resampled other image with the reconstructed image.
  • the image registration and splicing operations can be faster and more accurate.
  • a fiber bundle image processing apparatus includes a memory and a processor.
  • This memory is used to store programs.
  • the processor is for running the program.
  • when the program is run in the processor, it is configured to perform the following steps: Step S1, determining pixel information at positions in a sample image corresponding to fiber centers; Step S2, correcting the determined pixel information; and Step S3, reconstructing the sample image based on the corrected pixel information to obtain a reconstructed image.
  • the fiber bundle image processing apparatus further includes an image acquisition device, the image acquisition device is configured to sample the uniform fluorescent sample to obtain the reference image, and sample the non-fluorescent sample to obtain the background image.
  • when step S2 is performed, the corrected pixel value is calculated according to the formula F = (I_s − I_b) × K, where
  • F represents the corrected pixel value;
  • I_s represents the determined pixel value;
  • I_b represents the pixel value of the corresponding pixel in the background image; and
  • K represents the correction coefficient.
  • before step S2 is performed, the correction coefficient K is calculated from the reference image and the background image according to the formula K = k/(I_c − I_b), where
  • I_c represents the pixel value of the corresponding pixel in the reference image; and
  • k represents a scale factor equal to the median of the differences between the pixel values of pixels in the reference image and the pixel values of their corresponding pixels in the background image.
  • when step S3 is performed, the following is further performed: the reconstructed pixel value of a pixel is obtained by interpolation based on the weight of the pixel and the corrected pixel information.
  • when step S3 is performed, the following are further performed:
  • the reconstructed pixel value of the pixel is calculated by linear interpolation according to the weight of the pixel.
  • the weights of the pixels corresponding to the vertices of the triangle are set to be inversely proportional to the distance between the pixels and the vertices.
  • Gx represents the reconstructed pixel value of pixel x
  • Wa and Ga respectively denote the weight of pixel x with respect to vertex a of the triangle in which it is located and the corrected pixel value of vertex a;
  • Wb and Gb respectively denote the weight of pixel x with respect to vertex b of that triangle and the corrected pixel value of vertex b;
  • Wc and Gc respectively denote the weight of pixel x with respect to vertex c of that triangle and the corrected pixel value of vertex c.
  • when the program is run in the processor, it is further configured to perform registration of the reconstructed image with another image.
  • the construction and technical effects of the fiber bundle image processing apparatus can be understood by reading the detailed description of the fiber bundle image processing method and the fiber bundle image analysis method; for brevity, details are not repeated here.
  • a storage medium having stored thereon program instructions that, when executed by a computer or processor, cause the computer or processor to perform the corresponding steps of the fiber bundle image processing method according to an embodiment of the present invention, and to implement the corresponding modules or units in the fiber bundle image processing apparatus according to an embodiment of the present invention.
  • the storage medium may include, for example, a memory card of a smartphone, a storage component of a tablet computer, a hard disk of a personal computer, a read-only memory (ROM), an erasable programmable read-only memory (EPROM), a portable compact disc read-only memory (CD-ROM), a USB memory, or any combination of the above storage media.
  • the computer readable storage medium can be any combination of one or more computer readable storage media.
  • the disclosed apparatus and method may be implemented in other manners.
  • the device embodiments described above are merely illustrative.
  • the division of the units is only a logical functional division.
  • in actual implementation there may be other divisions; for example, multiple units or components may be combined or integrated into another device, or some features may be ignored or not executed.
  • the various component embodiments of the present invention may be implemented in hardware, or in a software module running on one or more processors, or in a combination thereof.
  • a microprocessor or a digital signal processor (DSP) may be used in practice to implement some or all of the functions of some modules of the fiber bundle image processing apparatus.
  • the invention can also be implemented as a device program (e.g., a computer program and a computer program product) for performing some or all of the methods described herein.
  • a program implementing the invention may be stored on a computer readable medium or may be in the form of one or more signals.
  • signals may be downloaded from an Internet website, provided on a carrier signal, or provided in any other form.
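The three processing steps summarized in the fragments above (Step S1: locate the fiber-center pixels, Step S2: correct them, Step S3: reconstruct the image) can be sketched end to end as follows. This is a minimal illustration, not the patent's implementation: all function names are invented, and S3 here uses a crude nearest-center fill rather than the triangulation-based interpolation the document describes.

```python
import numpy as np

# Hypothetical sketch of steps S1-S3; names and the nearest-center fill
# in S3 are illustrative assumptions, not the patented method.

def determine_center_pixels(sample, center_positions):
    """S1: read pixel values at the (row, col) fiber-center positions."""
    rows, cols = center_positions[:, 0], center_positions[:, 1]
    return sample[rows, cols]

def correct_pixels(i_s, i_b, K):
    """S2: F = (I_s - I_b) * K, applied to the fiber-center pixels only."""
    return (i_s - i_b) * K

def reconstruct(shape, center_positions, corrected):
    """S3: fill every pixel from its nearest corrected center value
    (a stand-in for the triangulation-based linear interpolation)."""
    yy, xx = np.mgrid[0:shape[0], 0:shape[1]]
    pts = np.stack([yy.ravel(), xx.ravel()], axis=1)
    d = np.linalg.norm(pts[:, None, :] - center_positions[None, :, :], axis=2)
    return corrected[np.argmin(d, axis=1)].reshape(shape)

sample = np.arange(16, dtype=float).reshape(4, 4)
centers = np.array([[0, 0], [3, 3]])               # toy fiber centers
i_s = determine_center_pixels(sample, centers)
F = correct_pixels(i_s, i_b=np.array([1.0, 1.0]), K=1.0)
recon = reconstruct(sample.shape, centers, F)
```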

Landscapes

  • Physics & Mathematics (AREA)
  • Engineering & Computer Science (AREA)
  • Multimedia (AREA)
  • General Physics & Mathematics (AREA)
  • Optics & Photonics (AREA)
  • Analytical Chemistry (AREA)
  • Chemical & Material Sciences (AREA)
  • Computer Vision & Pattern Recognition (AREA)
  • Theoretical Computer Science (AREA)
  • Astronomy & Astrophysics (AREA)
  • Computer Graphics (AREA)
  • Geometry (AREA)
  • Software Systems (AREA)
  • Image Processing (AREA)
  • Microscopes, Condensers (AREA)

Abstract

A fiber bundle image processing method (200) and apparatus. The method (200) comprises: determining pixel information at positions in a sample image corresponding to fiber centers; correcting the determined pixel information; and reconstructing the sample image based on the corrected pixel information to obtain a reconstructed image. The method (200) and apparatus not only produce a more desirable processed fiber bundle image, but also require less computation, so the whole computing process takes less time.

Description

Fiber Bundle Image Processing Method and Apparatus

Technical Field

The present invention relates to the field of medical image processing, and more particularly to a fiber bundle image processing method and apparatus.

Background Art

With the progress of society and the development of technology, more and more electronic imaging devices are being applied in the medical field. Accordingly, ever higher requirements are placed on the accuracy and speed of medical image post-processing.

For example, fiber-optic microscopes enable tomographic examination of biological tissue; they can not only detect a tendency toward tumor lesions in advance, but also spare clinical patients the pain of puncture procedures. Fiber-optic microscopes have broad market prospects in clinical examination and screening as well as in medical and biological research.

Figure 1 exemplarily shows a part of an existing fiber bundle image. As shown in Figure 1, the existing fiber bundle image exhibits honeycomb-shaped noise and fails to present the target region of the biological tissue well. Although some reconstruction techniques for fiber bundle images exist, they are generally computationally expensive and time-consuming.
Summary of the Invention

The present invention has been made in view of the above problems. The present invention provides a fiber bundle image processing method and apparatus.

According to one aspect of the present invention, a fiber bundle image processing method is provided, comprising:

determining pixel information at positions in a sample image corresponding to fiber centers;

correcting the determined pixel information; and

reconstructing the sample image based on the corrected pixel information to obtain a reconstructed image.

Exemplarily, correcting the determined pixel information comprises calculating the corrected pixel value according to the following formula:

F = (I_s − I_b) × K,

where F denotes the corrected pixel value, I_s denotes the determined pixel value, I_b denotes the pixel value of the corresponding pixel in a background image, and K denotes a correction coefficient.

Exemplarily, before the correcting step, the method further comprises calculating the correction coefficient K from a reference image and the background image according to the following formula:

K = k/(I_c − I_b),

where I_c denotes the pixel value of the corresponding pixel in the reference image, and k denotes a scale factor equal to the median of the differences between the pixel values of pixels in the reference image and the pixel values of their corresponding pixels in the background image.

Exemplarily, the method further comprises:

sampling a uniform fluorescent sample to obtain the reference image; and

sampling a non-fluorescent sample to obtain the background image.

Exemplarily, reconstructing the sample image based on the corrected pixel information comprises: obtaining the reconstructed pixel value of a pixel by interpolation based on the weight of the pixel and the corrected pixel information.

Exemplarily, obtaining the reconstructed pixel value of the pixel comprises:

triangulating the sample image based on the determined pixel information;

determining the weight of the pixel based on the triangle, obtained by the triangulation, in which the pixel is located; and

calculating the reconstructed pixel value of the pixel by linear interpolation according to the weight of the pixel.

Exemplarily, determining the weight of the pixel comprises:

determining the distances from the pixel to the vertices of the triangle in which it is located; and

setting the weights of the pixel with respect to the vertices of that triangle to be inversely proportional to the distances between the pixel and those vertices.

Exemplarily, the reconstructed pixel value of the pixel is calculated according to the following formula:

Gx = Wa*Ga + Wb*Gb + Wc*Gc, where

Gx denotes the reconstructed pixel value of pixel x;

Wa and Ga respectively denote the weight of pixel x with respect to vertex a of the triangle in which it is located and the corrected pixel value of vertex a;

Wb and Gb respectively denote the weight of pixel x with respect to vertex b of that triangle and the corrected pixel value of vertex b;

Wc and Gc respectively denote the weight of pixel x with respect to vertex c of that triangle and the corrected pixel value of vertex c.

Exemplarily, the method further comprises: registering the reconstructed image with another image.

According to another aspect of the present invention, a fiber bundle image processing apparatus is also provided, comprising:

a memory for storing a program; and

a processor for running the program;

wherein the program, when run in the processor, is configured to perform the following steps:

determining pixel information at positions in a sample image corresponding to fiber centers;

correcting the determined pixel information; and

reconstructing the sample image based on the corrected pixel information to obtain a reconstructed image.

The fiber bundle image processing method and apparatus described above not only produce a more desirable processed image, but also require less computation, so the whole computing process takes less time.
Brief Description of the Drawings

The above and other objects, features, and advantages of the present invention will become more apparent from the more detailed description of embodiments of the present invention taken in conjunction with the accompanying drawings. The drawings provide a further understanding of the embodiments of the present invention, constitute a part of the specification, serve together with the embodiments to explain the present invention, and do not limit the present invention. In the drawings, like reference numerals generally denote like components or steps.

Figure 1 exemplarily shows a part of an existing fiber bundle image;

Figure 2 shows a schematic flowchart of a fiber bundle image processing method according to one embodiment of the present invention;

Figure 3 shows a partially enlarged schematic view of a reconstructed image of the fiber bundle image shown in Figure 1, according to one embodiment of the present invention;

Figure 4 shows a schematic flowchart of a fiber bundle image analysis method according to one specific embodiment of the present invention;

Figure 5 shows a reference image obtained by sampling a uniform fluorescent sample according to one embodiment of the present invention;

Figure 6 shows a partially enlarged schematic view of a reference image according to one embodiment of the present invention;

Figure 7 shows a partially enlarged schematic view of a reference image, with the pixels corresponding to fiber centers marked, according to one embodiment of the present invention;

Figure 8 shows a schematic view of a fiber bundle image with the pixels corresponding to fiber centers marked, according to one embodiment of the present invention;

Figure 9 shows a schematic flowchart of the reconstruction step according to one embodiment of the present invention;

Figure 10 shows a partially enlarged schematic view of a Delaunay-triangulated sample image according to one embodiment of the present invention;

Figures 11A and 11B respectively show a reconstructed image to be registered and another image, according to one embodiment of the present invention; and

Figure 11C shows a schematic view of the result of non-rigid registration of the two images shown in Figures 11A and 11B.

Detailed Description

To make the objects, technical solutions, and advantages of the present invention more apparent, exemplary embodiments according to the present invention will be described in detail below with reference to the accompanying drawings. Obviously, the described embodiments are only some, not all, of the embodiments of the present invention, and it should be understood that the present invention is not limited by the exemplary embodiments described herein. All other embodiments obtained by those skilled in the art based on the embodiments of the present invention described herein without creative effort shall fall within the protection scope of the present invention.
Figure 2 shows a fiber bundle image processing method 200 according to one embodiment of the present invention. The fiber bundle image processing method 200 can be used to reconstruct an original fiber bundle image so as to present the target region of biological tissue more faithfully.

As shown in Figure 2, in step S210, pixel information at positions in a sample image corresponding to fiber centers is determined.

The sample image is a fiber bundle image acquired through a fiber bundle. A fiber bundle comprises many optical fibers, for example more than thirty thousand. The arrangement of these fibers within the bundle is irregular. Each fiber can serve as an optical path, and these paths transmit information from the target region of the biological tissue so as to generate a fiber bundle image in an imaging device. Fiber bundle images acquired with the same bundle have the same size, that is, the same resolution, width, and height. In some examples, the imaging area of the fiber bundle in the image may be, for example, a circular region. Because the bundle comprises many fibers, honeycomb noise inevitably appears in the fiber bundle image, as shown in Figure 1. Each honeycomb cell roughly corresponds to one fiber. The honeycomb noise greatly hampers users observing the target region of biological tissue through fiber bundle images and severely degrades the user experience.

One honeycomb cell in a fiber bundle image generally comprises multiple pixels. In other words, each fiber in the bundle may correspond to multiple pixels in the image, for example about twenty. Among these pixels there is one pixel that corresponds one-to-one with the center of that fiber. Assuming that neither the distal nor the proximal end of the fiber bundle changes while different fiber bundle images are captured with the same bundle, the correspondence between pixels in the images and fibers in the bundle remains unchanged. Thus the position, in the image, of the pixel corresponding to each fiber center remains fixed. Moreover, the pixel value corresponding to a fiber center reflects the true appearance of the target region of the biological tissue relatively well.

In step S210, the pixel in the sample image corresponding to the center of each fiber in the bundle is determined and its pixel information is extracted. Optionally, the determined pixel information includes the position and the pixel value of the pixel. The position of a pixel can be expressed by its row and column in the sample image. Specifically, the positions of these pixels can be represented by a one-dimensional array, in which each element is a value representing the position of one pixel. The position Px of pixel x can be expressed by the following formula:

Px = (row of pixel x) × (width of the sample image) + (column of pixel x).

That is, Px is the ordinal number of pixel x when counting pixels one by one starting from the pixel in the first row and first column of the fiber bundle image. By querying the sample image at the pixel's position, the pixel value of that pixel can be obtained.
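The linear-index formula Px = row × width + col and the reverse lookup it implies can be sketched as follows; this is an illustrative snippet, and the array names are not from the patent.

```python
import numpy as np

# Sketch of Px = row * image_width + col and recovering the pixel
# value from the 1-D index; names are illustrative.

def position_index(row, col, width):
    """Flatten a (row, col) position into the 1-D index Px."""
    return row * width + col

def pixel_value_at(image, p):
    """Recover (row, col) from the 1-D index p and read the pixel."""
    width = image.shape[1]
    return image[p // width, p % width]

image = np.arange(12).reshape(3, 4)   # 3 rows, width 4
p = position_index(2, 1, width=4)     # row 2, column 1
value = pixel_value_at(image, p)
```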
In step S220, the pixel information determined in step S210 is corrected.

As described for step S210, the determined pixel information may include the positions and pixel values of the pixels corresponding to fiber centers. In step S220, only the pixel values of this subset of pixels, those corresponding to fiber center positions in the sample image, are corrected, so that they reflect the target region more faithfully.

In step S230, the sample image is reconstructed based on the pixel information corrected in step S220 to obtain a reconstructed image. Step S220 adjusted only the pixel values of the pixels corresponding to fiber centers. Based on that result, the pixel values of the other pixels in the sample image, i.e., those not corresponding to fiber center positions, are adjusted, thereby completing the reconstruction of the sample image. Figure 3 shows a partially enlarged schematic view of a reconstructed image of the fiber bundle image shown in Figure 1, according to one embodiment of the present invention.

From the image point of view, as shown in Figure 3, the honeycomb-shaped noise of the original sample image is eliminated in the image reconstructed by the above processing; moreover, the brightness of the whole reconstructed image is relatively uniform, avoiding the problem of dark edges and a bright center. From the image-processing point of view, the whole procedure requires little computation and takes little time.
According to one embodiment of the present invention, a fiber bundle image analysis method is provided. With this analysis method, the pixel information at positions corresponding to fiber centers in a fiber bundle image can be determined more accurately. As noted above, when neither the distal nor the proximal end of the fiber bundle changes while different images are captured with the same bundle, the correspondence between pixels in the captured images and fibers in the bundle remains unchanged. Therefore, from this point of view, one fiber bundle image can be used to analyze all other fiber bundle images captured with the same bundle. Figure 4 shows a schematic flowchart of a fiber bundle image analysis method 400 according to one specific embodiment of the present invention.

In step S410, a fiber bundle image is acquired with the fiber bundle to serve as a reference image. The analysis results of this reference image can be applied to all other fiber bundle images captured with the same bundle.

Optionally, the reference image is obtained by sampling a uniformly luminous sample with the fiber bundle. In theory, the reference image should be a fiber bundle image with uniform pixel values and consistent brightness. The pixels of an image of a uniformly luminous sample are consistent, so the sample itself introduces no negative influence on the analysis method 400, ensuring that the pixel information at fiber center positions in the reference image is determined more accurately.

The uniformly luminous sample may be a uniform fluorescent sample, so that the reference image is a fiber bundle image with constant fluorescence. Figure 5 shows a reference image obtained by sampling a uniform fluorescent sample according to one embodiment of the present invention. In practical applications of fiber bundle imaging, fluorescent samples are typically imaged, so a reference image obtained from a uniform fluorescent sample better guarantees the accuracy of the analysis method. It will be appreciated that the uniform fluorescent sample is only an example and not a limitation; the reference image may also be obtained by sampling a sample emitting other visible light.

In step S420, reference pixels are determined in the reference image acquired in step S410. The pixel value of a reference pixel is higher than the pixel values of its surrounding pixels, and each reference pixel corresponds to the center of exactly one fiber in the bundle.

As described above and shown in Figure 1, a fiber bundle image contains honeycomb cells corresponding one-to-one with the fibers. The pixel value information of a cell can be used to determine the reference pixel corresponding to the center of a fiber in the bundle. Typically, the reference pixel corresponding to a fiber center is the brightest pixel, i.e., the pixel with the largest value, among all pixels corresponding to that fiber. In other words, the pixel value of the reference pixel corresponding to a fiber center is higher than those of its surrounding pixels (i.e., the other pixels corresponding to the same fiber). Figure 6 shows a partially enlarged schematic view of a reference image according to one embodiment of the present invention. Figure 7 shows, for the partially enlarged view of Figure 6, the pixels corresponding to fiber centers. For clarity, in the view of Figure 7, the reference pixels corresponding to fiber centers are marked with a "+" symbol.

Optionally, as described above, a one-dimensional array can be used to represent the positions of the determined reference pixels.

In step S430, based on the pixel positions of the reference pixels in the reference image, the pixel positions corresponding to fiber centers in fiber bundle images acquired with the same bundle are determined.

As described above, because the relative positions of the fibers in the bundle are fixed, the relative positions of the pixels corresponding to fiber centers in images acquired with the same bundle are also fixed. Therefore, from the pixel positions of the reference pixels in the reference image, the positions corresponding to fiber centers in all fiber bundle images acquired with the same bundle can be determined, in particular when the distal and proximal ends of the bundle remain unchanged.

Figure 8 shows a schematic view of a fiber bundle image with the pixels corresponding to fiber centers marked, according to one embodiment of the present invention. In the view of Figure 8, the pixels corresponding to fiber centers are assigned the value 0.

In the above fiber bundle image analysis method, a reference image is used to determine the pixel information at fiber center positions in other fiber bundle images. Compared with determining this information directly from the pixel values of the fiber bundle image itself, the above analysis method is unaffected by the imaged object, is more accurate, and is easy to implement.
Optionally, step S420 above may specifically include steps S421 and S422.

In step S421, image segmentation is performed on the reference image to determine the fiber bundle imaging area in it. As shown in Figure 5, the reference image comprises the fiber bundle imaging area and a meaningless background area. The imaging area is the circular region in the middle; the background area is the black region around the circle, which is meaningless for image analysis. The segmentation may employ threshold segmentation, region growing, or other segmentation processing. The segmentation operation can further reduce the computation of the whole image processing method.

In step S422, the reference pixels are determined within the fiber bundle imaging area.

In one example, the fiber bundle imaging area is first processed with the regional maxima method, and pixels whose values are regional maxima are then determined to be the reference pixels. The regional maxima method is an image segmentation method. As described above, the pixel corresponding to a fiber center is the brightest among all pixels corresponding to that fiber, i.e., the brightest pixel within one honeycomb cell. The reference image is analyzed with the regional maxima method, and the pixels attaining regional maxima are taken as the reference pixels corresponding to fiber centers.

In the above example, the regional maxima method determines the reference pixels by exploiting the following objective regularity: among all pixels corresponding to one fiber in the reference image, the pixel corresponding to the fiber center has the highest value. The method can therefore determine the reference fiber-center pixel values quickly and accurately, ensuring fast and accurate analysis of fiber bundle images.
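A regional-maxima pick of this kind can be sketched with a maximum filter; this is an illustrative example, not the patent's implementation, and the 3×3 neighbourhood size is an assumption.

```python
import numpy as np
from scipy import ndimage

# Sketch of picking regional maxima as fiber-center reference pixels.
# The 3x3 neighbourhood (size=3) is an assumption, not from the patent.

def regional_maxima(img, size=3):
    """Return (row, col) positions of pixels that are regional maxima."""
    local_max = ndimage.maximum_filter(img, size=size) == img
    local_max &= img > img.min()      # ignore the flat dark background
    return np.argwhere(local_max)

# Toy reference image: two bright "fiber centers" on a dark background.
ref = np.zeros((7, 7))
ref[2, 2] = 10.0
ref[5, 4] = 8.0
centers = regional_maxima(ref)
```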
Those of ordinary skill in the art will appreciate that the regional maxima method is only an example and not a limitation of the present invention; other methods, such as an empirical threshold method, may also be used to determine the reference fiber-center pixel values.

It will be appreciated that the foregoing fiber bundle image analysis method may be included in the fiber bundle image processing method. Through this analysis method, the information of the pixels at fiber center positions in fiber bundle images, including the sample image, can be determined, and more accurate information is thereby obtained. Specifically, the corresponding positions in the sample image can be looked up according to the positions, determined in the analysis method, of the pixels in the reference image corresponding to fiber centers. First, the row and column of pixel y are determined from the position Py of reference pixel y and the width of the fiber bundle image. Then, by querying the sample image at that row and column, the pixel value there, i.e., the pixel information of that pixel in the sample image, is obtained.

Thus, the above fiber bundle image analysis method provides accurate analysis results for the fiber bundle image processing method, ensuring that the processing method is computationally light and effective.
In one embodiment, the pixel information determined in step S210 is corrected using a background image. Specifically, the information determined in step S210 can be corrected according to the following formula:

F = (I_s − I_b) × K,

where F denotes the corrected pixel value of a pixel in the sample image, I_s denotes the pixel value determined in step S210, I_b denotes the pixel value of the corresponding pixel in the background image, and K denotes a correction coefficient.

Optionally, the background image is an image generated by imaging a non-luminous sample, for example a fiber bundle image with zero fluorescence. For instance, a non-fluorescent sample may be sampled to obtain the background image. As long as the proximal end of the fiber bundle does not change, the pixel values in the background image do not change. Here, "corresponding pixels" means pixels at the same position in their respective images; in essence, the pixels correspond to the same position (e.g., the fiber center) of the same fiber in the bundle. Therefore, the corresponding positions in the background image can be looked up according to the positions, determined in step S210, of the pixels corresponding to fiber centers, to obtain the pixel values of the corresponding pixels in the background image.

If the fiber-center pixel positions of step S210 are determined by the aforementioned fiber bundle image analysis method 400, the corresponding positions in the background image can be looked up directly from the positions of the reference pixels in the reference image. Querying the background image at those positions yields the pixel values of the corresponding pixels. It will be appreciated that the corresponding pixels in the background image also correspond to the centers of the same fibers.

In the above embodiment, for each pixel value I_s determined in step S210, first the difference between it and the pixel value I_b of the corresponding pixel in the background image, called the first difference for short, is computed; then the product of this difference and the correction coefficient is computed. The correction coefficient K may be any real number between 0.5 and 1.5 and can be set empirically.

Optionally, the correction coefficient K may also be calculated from the background image and a reference image according to the following formula: K = k/(I_c − I_b), where I_c denotes the pixel value of the corresponding pixel in the reference image, I_b denotes the pixel value of the corresponding pixel in the background image, and k denotes a scale factor equal to the median of the differences between the pixel values of pixels in the reference image and the pixel values of their corresponding pixels in the background image.

The reference image may be the reference image involved in the fiber bundle image analysis method described above. In one example, first, for each pixel at a fiber center position in the reference image, its difference from the corresponding pixel in the background image, called the standard difference for short, is computed. Computing the standard differences this way keeps the computation small while preserving accuracy. Alternatively, the standard difference may be computed for every pixel in the reference image against the corresponding pixel in the background image. The median k of all standard differences is computed. Then, for each pixel value I_s determined in step S210, the difference between the pixel value I_c of the corresponding pixel in the reference image and the pixel value I_b of the corresponding pixel in the background image, called the second difference for short, is computed. Finally, the correction coefficient K is determined as the quotient of the median k and this second difference.

The correction in this example achieves a fairly good correction effect without complex computation, thereby achieving the desired image processing result.
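The correction F = (I_s − I_b) × K with K = k/(I_c − I_b) can be sketched as follows; the toy arrays stand in for the fiber-center pixel values of the sample, reference, and background images, and their values are illustrative.

```python
import numpy as np

# Sketch of the correction F = (I_s - I_b) * K with K = k / (I_c - I_b),
# where k is the median of (reference - background) over the
# fiber-center pixels. The array values are illustrative.

i_s = np.array([50.0, 80.0, 60.0])    # sample values at fiber centers
i_c = np.array([100.0, 120.0, 90.0])  # reference image values
i_b = np.array([10.0, 20.0, 10.0])    # background image values

k = np.median(i_c - i_b)              # median of per-pixel differences
K = k / (i_c - i_b)                   # per-pixel correction coefficient
F = (i_s - i_b) * K                   # corrected fiber-center values
```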
Optionally, step S230 above comprises: obtaining the reconstructed pixel value of a pixel by interpolation based on the weight of the pixel and the pixel information corrected in step S220. The corrected pixel information reflects the imaged target relatively faithfully, but the above correction only concerns the pixels corresponding to fiber centers. Therefore, for each pixel in the fiber bundle image, its weight can be determined from the positions of the nearby pixels corresponding to fiber centers.

Figure 9 shows a schematic flowchart of step S230 according to one embodiment of the present invention. As shown in Figure 9, step S230 may comprise:

Step S231: triangulating the sample image based on the pixel information determined in step S210.

Specifically, the pixels corresponding to fiber centers form a finite point set in the sample image. This point set constitutes all the vertices of the triangles. The sample image is partitioned into multiple triangles, any two of which either do not intersect or intersect in exactly one common edge.

Optionally, the triangulation is implemented with the Delaunay triangulation algorithm. Although the arrangement of the fibers in the bundle is irregular, the distances between the centers of adjacent fibers are roughly uniform, approximately equal to the fiber diameter. Figure 10 shows a part of a Delaunay-triangulated sample image according to one embodiment of the present invention. As shown in Figure 10, the Delaunay triangulation algorithm yields a unique triangulation and guarantees that no vertex of another triangle lies inside the circumcircle of any triangle. This triangulation algorithm is better suited to the image processing method according to embodiments of the present invention and yields a more desirable image processing result.
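A Delaunay triangulation of the fiber-center positions, and the lookup of the triangle containing a given pixel, can be sketched with scipy; the four-point set is a toy example, not real fiber-center data.

```python
import numpy as np
from scipy.spatial import Delaunay

# Sketch: Delaunay-triangulate toy "fiber center" positions, then find
# which triangle a given pixel falls in.

centers = np.array([[0.0, 0.0], [4.0, 0.0], [0.0, 4.0], [5.0, 5.0]])
tri = Delaunay(centers)

# Each pixel lies in exactly one triangle (find_simplex returns -1 only
# for points outside the convex hull of the centers).
simplex = tri.find_simplex(np.array([[1.0, 1.0]]))[0]
vertices = tri.simplices[simplex]   # vertex indices of that triangle
```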
Step S232: determining the weight of a pixel based on the triangle, obtained by the triangulation, in which the pixel is located.

For any pixel in the fiber bundle image, its weight may take several values, each corresponding to one nearby pixel at a fiber center position. For brevity, the pixel corresponding to a weight may be called a base pixel. It will be appreciated that every base pixel is a vertex of a triangle obtained by the triangulation. The final pixel value of a pixel can be determined from its weights and the pixel values of the corresponding base pixels.

Optionally, for any pixel in the fiber bundle image, the farther it is from a base pixel, the smaller its weight with respect to that base pixel, and vice versa.

Exemplarily, for each pixel in the fiber bundle image, its weight is determined from the positions of three base pixels. One weight can be determined for each of the three base pixels, forming a weight lookup table. Table 1 shows a weight lookup table according to one embodiment of the present invention. In Table 1, the first, second, and third weights respectively denote the weights of the pixel in question with respect to the first, second, and third base pixels. As shown in Table 1, for pixel x1 the first and third weights are equal and relatively small, indicating that its distances to the first and third base pixels are equal and relatively large, while the second weight is relatively large, indicating that it is relatively close to the second base pixel.

Table 1. Weight lookup table

[Table 1 is provided as image PCTCN2018110250-appb-000001 in the original document.]

Based on the triangulation result, every pixel in the sample image lies in a unique triangle, either on one of its three edges or in its interior. The three vertices of this unique triangle can be taken as the pixel's base pixels. From the distances between the pixel and the three base pixels, the pixel's weight with respect to each base pixel can be determined.

For each pixel in the sample image, first the distances from the pixel to the vertices (i.e., base pixels) of its triangle can be determined; the pixel may lie on an edge or inside the triangle. Then, from these distances, the pixel's weights with respect to the three vertices are determined. Optionally, for each vertex of the triangle, the pixel's weight with respect to that vertex is set inversely proportional to the distance between the pixel and that vertex. For example, a pixel located at the circumcenter of the triangle has a weight of 0.333 with respect to each of its base pixels. A pixel located at a vertex of the triangle can be considered to have weight 1 with respect to that vertex and weight 0 with respect to the other two vertices. Determining the weight of each pixel in this way makes the reconstruction effect better and the procedure simple to implement.

Obtaining the weight of every pixel in the fiber bundle image based on the triangulation keeps the computation small while ensuring the accuracy of the result.

The Delaunay triangulation algorithm given above is only an example; other methods, such as the Kriging method, may also be used to obtain the weight of each pixel.

It will be appreciated that the above ways of determining weights are only examples and not limitations. For instance, although in the above example the weight of each pixel is determined from the positions of 3 base pixels, this is only illustrative and not a limitation of the present invention. The weight of each pixel may also be determined from the single nearest base pixel, or from 4 or even more base pixels. As another example, the weights of pixels may be set empirically.
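The inverse-distance weighting described above can be sketched as follows. Normalising the weights so they sum to 1 is an assumption made for a self-contained example; the document states only that the weights are inversely proportional to the distances (e.g., 0.333 each at the circumcenter, weight 1 at a vertex), which the normalised form reproduces.

```python
import numpy as np

# Sketch of the inverse-distance weights of a pixel with respect to the
# three vertices of its triangle. Normalisation to sum 1 is an assumption.

def vertex_weights(pixel, vertices):
    d = np.linalg.norm(vertices - pixel, axis=1)
    if np.any(d == 0):                 # pixel sits exactly on a vertex
        return (d == 0).astype(float)  # weight 1 there, 0 elsewhere
    w = 1.0 / d                        # inversely proportional to distance
    return w / w.sum()

verts = np.array([[0.0, 0.0], [4.0, 0.0], [0.0, 4.0]])
w_center = vertex_weights(np.array([2.0, 2.0]), verts)  # circumcenter
w_vertex = vertex_weights(np.array([0.0, 0.0]), verts)  # on vertex a
```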
Step S233: calculating the reconstructed pixel value of the pixel by linear interpolation according to the weight of the pixel.

Optionally, the reconstructed pixel value Gx of pixel x of the reconstructed image is calculated according to the following formula:

Gx = Wa*Ga + Wb*Gb + Wc*Gc, where

Wa and Ga respectively denote the weight of pixel x with respect to vertex a of the triangle in which it is located and the corrected pixel value of vertex a,

Wb and Gb respectively denote the weight of pixel x with respect to vertex b of that triangle and the corrected pixel value of vertex b, and

Wc and Gc respectively denote the weight of pixel x with respect to vertex c of that triangle and the corrected pixel value of vertex c.
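For one pixel, the linear interpolation Gx = Wa*Ga + Wb*Gb + Wc*Gc reduces to a dot product; the weight and value triples below are hypothetical numbers chosen for illustration.

```python
import numpy as np

# Sketch of step S233 for a single pixel: Gx = Wa*Ga + Wb*Gb + Wc*Gc.
# The weights and corrected vertex values are hypothetical.

W = np.array([0.5, 0.3, 0.2])      # Wa, Wb, Wc for pixel x
G = np.array([100.0, 80.0, 60.0])  # corrected values Ga, Gb, Gc

Gx = float(np.dot(W, G))           # reconstructed value of pixel x
```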
According to one embodiment of the present invention, the fiber bundle image processing method further comprises a step of registering the reconstructed image with another image, where the other image may itself also be a reconstructed image. Image registration is used to compute the relative displacement of two images; after registration, identical content of the two images coincides spatially.

Optionally, the registration operation may employ the correlation coefficient method: the correct displacement that can be used to register the two images is determined by searching for the maximum among the correlation coefficients over all possible displacements. With the correlation coefficient method, the registration takes little time and can meet real-time needs.
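The correlation-coefficient search over displacements can be sketched with an exhaustive search over integer shifts; this is an illustrative toy, and the small ±2-pixel search range (and the use of wrap-around shifts rather than a true overlap) are simplifying assumptions.

```python
import numpy as np

# Sketch of correlation-coefficient registration: try integer
# displacements and keep the one with the highest correlation.
# The +/-2-pixel range and wrap-around shifts are simplifications.

def best_shift(a, b, max_shift=2):
    best, best_r = (0, 0), -np.inf
    for dy in range(-max_shift, max_shift + 1):
        for dx in range(-max_shift, max_shift + 1):
            bs = np.roll(np.roll(b, dy, axis=0), dx, axis=1)
            r = np.corrcoef(a.ravel(), bs.ravel())[0, 1]
            if r > best_r:
                best_r, best = r, (dy, dx)
    return best

a = np.zeros((8, 8)); a[3, 3] = 1.0
b = np.roll(np.roll(a, -1, axis=0), -2, axis=1)  # a shifted by (-1, -2)
shift = best_shift(a, b)                         # displacement undoing it
```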
Although the correlation coefficient registration method is relatively fast, its registration accuracy is relatively low. Optionally, the registration operation may also employ an iterative registration method, which, although slower, can meet higher accuracy requirements.

According to one embodiment of the present invention, the reconstructed image and the other image are registered iteratively directly from the positions of the pixels corresponding to the fiber centers in the bundle, the corrected pixel information, and the other image. In this embodiment, the registration operation uses only the elements related to the fiber centers in the bundle and ignores other elements, such as the pixels in the reconstructed image that do not correspond to fiber centers. Thus, while the computational accuracy of the iterative registration is ensured, its computation speed is effectively improved.

In practical applications, besides rigid registration, non-rigid registration is sometimes needed. For example, the human tissue to be examined may undergo peristalsis during the period in which the sample images are acquired; or, during acquisition, changes in probe pressure may cause local deformation of the target tissue, and so on. Therefore, optionally, registering the reconstructed image with another image comprises the following operations: first, performing rigid registration of the reconstructed image and the other image; then, resampling the other image according to the rigid registration result; and finally, performing non-rigid registration of the overlapping part of the resampled other image with the reconstructed image. Optionally, the non-rigid registration may employ the free-form deformation method or the Demons registration algorithm. Figures 11A, 11B, and 11C illustrate the above non-rigid registration process. Figures 11A and 11B respectively show a reconstructed image to be registered and another image, according to one embodiment of the present invention. The dashed rectangle in them indicates the overlapping part of the two images determined by the rigid registration; the other image is resampled for this overlapping part. Figure 11C shows the result of non-rigid registration of the overlapping part of the resampled other image with the reconstructed image.

Since the preceding image processing operations obtain a desirable reconstructed image relatively quickly, the image registration and stitching operations can also be faster and more accurate.
According to another aspect of the present invention, a fiber bundle image processing apparatus is also provided. The apparatus comprises a memory and a processor; the memory is used to store a program, and the processor is used to run the program. When run in the processor, the program is configured to perform the following steps: step S1, determining pixel information at positions in a sample image corresponding to fiber centers; step S2, correcting the determined pixel information; and step S3, reconstructing the sample image based on the corrected pixel information to obtain a reconstructed image.

Optionally, the fiber bundle image processing apparatus further comprises an image acquisition device configured to sample a uniform fluorescent sample to obtain the reference image and to sample a non-fluorescent sample to obtain the background image.

Optionally, when step S2 is performed, the corrected pixel value is specifically calculated according to the following formula:

F = (I_s − I_b) × K,

where F denotes the corrected pixel value, I_s denotes the determined pixel value, I_b denotes the pixel value of the corresponding pixel in the background image, and K denotes the correction coefficient.

Optionally, before step S2 is performed, the correction coefficient K is further calculated from a reference image and the background image according to the following formula:

K = k/(I_c − I_b),

where I_c denotes the pixel value of the corresponding pixel in the reference image, and k denotes a scale factor equal to the median of the differences between the pixel values of pixels in the reference image and the pixel values of their corresponding pixels in the background image.

Optionally, when step S3 is performed, the following is further performed: obtaining the reconstructed pixel value of a pixel by interpolation based on the weight of the pixel and the corrected pixel information.

Optionally, when step S3 is performed, the following are further performed:

triangulating the sample image based on the determined pixel information;

determining the weight of the pixel based on the triangle, obtained by the triangulation, in which the pixel is located; and

calculating the reconstructed pixel value of the pixel by linear interpolation according to the weight of the pixel.

Optionally, when the weight of the pixel is determined, the following are further performed:

determining the distances from the pixel to the vertices of the triangle in which it is located; and

setting the weights of the pixel with respect to the vertices of that triangle to be inversely proportional to the distances between the pixel and those vertices.

Optionally, when the reconstructed pixel value of the pixel is calculated, the calculation is further performed according to the following formula: Gx = Wa*Ga + Wb*Gb + Wc*Gc, where

Gx denotes the reconstructed pixel value of pixel x;

Wa and Ga respectively denote the weight of pixel x with respect to vertex a of the triangle in which it is located and the corrected pixel value of vertex a;

Wb and Gb respectively denote the weight of pixel x with respect to vertex b of that triangle and the corrected pixel value of vertex b;

Wc and Gc respectively denote the weight of pixel x with respect to vertex c of that triangle and the corrected pixel value of vertex c.

Optionally, when run in the processor, the program is further configured to perform the following step: registering the reconstructed image with another image.

The construction and technical effects of the fiber bundle image processing apparatus can be understood by reading the above detailed description of the fiber bundle image processing method and the fiber bundle image analysis method; for brevity, they are not repeated here.

Furthermore, according to an embodiment of the present invention, a storage medium is also provided, on which program instructions are stored; when the program instructions are run by a computer or processor, the computer or processor is caused to perform the corresponding steps of the fiber bundle image processing method of an embodiment of the present invention and to implement the corresponding modules or units in the fiber bundle image processing apparatus according to an embodiment of the present invention. The storage medium may include, for example, a memory card of a smartphone, a storage component of a tablet computer, a hard disk of a personal computer, a read-only memory (ROM), an erasable programmable read-only memory (EPROM), a portable compact disc read-only memory (CD-ROM), a USB memory, or any combination of the above storage media. The computer-readable storage medium may be any combination of one or more computer-readable storage media.
Although exemplary embodiments have been described herein with reference to the accompanying drawings, it should be understood that the above exemplary embodiments are merely exemplary and are not intended to limit the scope of the present invention thereto. Those of ordinary skill in the art may make various changes and modifications therein without departing from the scope and spirit of the present invention. All such changes and modifications are intended to be included within the scope of the present invention as claimed in the appended claims.

Those of ordinary skill in the art will appreciate that the units and algorithm steps of the examples described in connection with the embodiments disclosed herein can be implemented in electronic hardware, or in a combination of computer software and electronic hardware. Whether these functions are performed in hardware or software depends on the specific application and design constraints of the technical solution. Skilled artisans may implement the described functions in different ways for each particular application, but such implementation should not be considered beyond the scope of the present invention.

In the several embodiments provided in the present invention, it should be understood that the disclosed apparatus and method may be implemented in other ways. For example, the apparatus embodiments described above are merely illustrative; for instance, the division of the units is only a logical functional division, and in actual implementation there may be other divisions; for example, multiple units or components may be combined or integrated into another device, or some features may be ignored or not executed.

Numerous specific details are set forth in the specification provided herein. However, it is understood that embodiments of the present invention may be practiced without these specific details. In some instances, well-known methods, structures, and techniques have not been shown in detail so as not to obscure the understanding of this specification.

Similarly, it should be appreciated that, in order to streamline the present invention and aid the understanding of one or more of the various inventive aspects, in the description of exemplary embodiments of the present invention, various features of the present invention are sometimes grouped together into a single embodiment, figure, or description thereof. However, the method of the present invention should not be interpreted as reflecting an intention that the claimed invention requires more features than are expressly recited in each claim. Rather, as the corresponding claims reflect, the inventive point lies in that the corresponding technical problem can be solved with fewer than all features of a single disclosed embodiment. Thus, the claims following the detailed description are hereby expressly incorporated into this detailed description, with each claim standing on its own as a separate embodiment of the present invention.

Those skilled in the art will understand that all features disclosed in this specification (including the accompanying claims, abstract, and drawings), and all processes or units of any method or apparatus so disclosed, may be combined in any combination, except where such features are mutually exclusive. Unless expressly stated otherwise, each feature disclosed in this specification (including the accompanying claims, abstract, and drawings) may be replaced by an alternative feature serving the same, equivalent, or similar purpose.

Furthermore, those skilled in the art will appreciate that, although some embodiments described herein include some features included in other embodiments rather than other features, combinations of features of different embodiments are meant to be within the scope of the present invention and to form different embodiments. For example, in the claims, any one of the claimed embodiments may be used in any combination.

The various component embodiments of the present invention may be implemented in hardware, or in software modules running on one or more processors, or in a combination thereof. Those skilled in the art should understand that a microprocessor or a digital signal processor (DSP) may be used in practice to implement some or all of the functions of some modules in the fiber bundle image processing apparatus according to embodiments of the present invention. The present invention may also be implemented as an apparatus program (e.g., a computer program and a computer program product) for performing part or all of the methods described herein. Such a program implementing the present invention may be stored on a computer-readable medium, or may take the form of one or more signals. Such signals may be downloaded from an Internet website, provided on a carrier signal, or provided in any other form.

It should be noted that the above embodiments illustrate rather than limit the present invention, and that those skilled in the art will be able to design alternative embodiments without departing from the scope of the appended claims. In the claims, any reference signs placed between parentheses shall not be construed as limiting the claim. The word "comprising" does not exclude the presence of elements or steps not listed in a claim. The word "a" or "an" preceding an element does not exclude the presence of a plurality of such elements. The present invention may be implemented by means of hardware comprising several distinct elements, and by means of a suitably programmed computer. In a unit claim enumerating several means, several of these means may be embodied by one and the same item of hardware. The use of the words first, second, third, etc. does not indicate any ordering; these words may be interpreted as names.

The above is only a description of specific embodiments of the present invention, and the protection scope of the present invention is not limited thereto. Any person skilled in the art can easily conceive of changes or substitutions within the technical scope disclosed by the present invention, and all such changes or substitutions shall be covered within the protection scope of the present invention. The protection scope of the present invention shall be subject to the protection scope of the claims.

Claims (10)

  1. A fiber bundle image processing method, comprising:
    determining pixel information at positions in a sample image corresponding to fiber centers;
    correcting the determined pixel information; and
    reconstructing the sample image based on the corrected pixel information to obtain a reconstructed image.
  2. The method of claim 1, wherein correcting the determined pixel information comprises calculating the corrected pixel value according to the following formula:
    F = (I_s − I_b) × K,
    where F denotes the corrected pixel value, I_s denotes the determined pixel value, I_b denotes the pixel value of the corresponding pixel in a background image, and K denotes a correction coefficient.
  3. The method of claim 2, further comprising, before the correcting step, calculating the correction coefficient K from a reference image and the background image according to the following formula:
    K = k/(I_c − I_b),
    where I_c denotes the pixel value of the corresponding pixel in the reference image, and k denotes a scale factor equal to the median of the differences between the pixel values of pixels in the reference image and the pixel values of their corresponding pixels in the background image.
  4. The method of claim 3, further comprising:
    sampling a uniform fluorescent sample to obtain the reference image; and
    sampling a non-fluorescent sample to obtain the background image.
  5. The method of claim 1, wherein reconstructing the sample image based on the corrected pixel information comprises:
    obtaining the reconstructed pixel value of a pixel by interpolation based on the weight of the pixel and the corrected pixel information.
  6. The method of claim 5, wherein obtaining the reconstructed pixel value of the pixel comprises:
    triangulating the sample image based on the determined pixel information;
    determining the weight of the pixel based on the triangle, obtained by the triangulation, in which the pixel is located; and
    calculating the reconstructed pixel value of the pixel by linear interpolation according to the weight of the pixel.
  7. The method of claim 6, wherein determining the weight of the pixel comprises:
    determining the distances from the pixel to the vertices of the triangle in which it is located; and
    setting the weights of the pixel with respect to the vertices of that triangle to be inversely proportional to the distances between the pixel and those vertices.
  8. The method of claim 6 or 7, wherein the reconstructed pixel value of the pixel is calculated according to the following formula:
    Gx = Wa*Ga + Wb*Gb + Wc*Gc, where
    Gx denotes the reconstructed pixel value of pixel x;
    Wa and Ga respectively denote the weight of pixel x with respect to vertex a of the triangle in which it is located and the corrected pixel value of vertex a;
    Wb and Gb respectively denote the weight of pixel x with respect to vertex b of that triangle and the corrected pixel value of vertex b;
    Wc and Gc respectively denote the weight of pixel x with respect to vertex c of that triangle and the corrected pixel value of vertex c.
  9. The method of claim 1, further comprising:
    registering the reconstructed image with another image.
  10. A fiber bundle image processing apparatus, comprising:
    a memory for storing a program; and
    a processor for running the program;
    wherein the program, when run in the processor, is configured to perform the following steps:
    determining pixel information at positions in a sample image corresponding to fiber centers;
    correcting the determined pixel information; and
    reconstructing the sample image based on the corrected pixel information to obtain a reconstructed image.
PCT/CN2018/110250 2017-10-16 2018-10-15 光纤束图像处理方法和装置 WO2019076265A1 (zh)

Priority Applications (9)

Application Number Priority Date Filing Date Title
JP2020541844A JP7001295B2 (ja) 2017-10-16 2018-10-15 光ファイバー束画像処理方法及び装置
MX2020003986A MX2020003986A (es) 2017-10-16 2018-10-15 Metodo y aparato para el procesamiento de imagenes de haz de fibras.
KR1020207013622A KR102342575B1 (ko) 2017-10-16 2018-10-15 광섬유 번들 이미지 처리 방법 및 장치
RU2020115459A RU2747946C1 (ru) 2017-10-16 2018-10-15 Способ и устройство для обработки изображения, полученного с помощью оптоволоконного жгута
BR112020007623-6A BR112020007623B1 (pt) 2017-10-16 2018-10-15 Método implementado por computador para processamento de imagem de feixe de fibra e aparelho de processamento de imagem de feixe de fibra
AU2018353915A AU2018353915A1 (en) 2017-10-16 2018-10-15 Optical fibre bundle image processing method and apparatus
CA3079123A CA3079123C (en) 2017-10-16 2018-10-15 Fiber bundle image processing method and apparatus
EP18868761.0A EP3699667A4 (en) 2017-10-16 2018-10-15 METHOD AND DEVICE FOR PROCESSING AN OPTICAL FIBER BUNDLE IMAGE
US16/850,048 US11525996B2 (en) 2017-10-16 2020-04-16 Fiber bundle image processing method and apparatus

Applications Claiming Priority (2)

Application Number Priority Date Filing Date Title
CN201710959003.1 2017-10-16
CN201710959003.1A CN107678153B (zh) 2017-10-16 2017-10-16 光纤束图像处理方法和装置

Related Child Applications (1)

Application Number Title Priority Date Filing Date
US16/850,048 Continuation US11525996B2 (en) 2017-10-16 2020-04-16 Fiber bundle image processing method and apparatus

Publications (1)

Publication Number Publication Date
WO2019076265A1 true WO2019076265A1 (zh) 2019-04-25

Family

ID=61140253

Family Applications (1)

Application Number Title Priority Date Filing Date
PCT/CN2018/110250 WO2019076265A1 (zh) 2017-10-16 2018-10-15 光纤束图像处理方法和装置

Country Status (10)

Country Link
US (1) US11525996B2 (zh)
EP (1) EP3699667A4 (zh)
JP (1) JP7001295B2 (zh)
KR (1) KR102342575B1 (zh)
CN (1) CN107678153B (zh)
AU (1) AU2018353915A1 (zh)
CA (1) CA3079123C (zh)
MX (1) MX2020003986A (zh)
RU (1) RU2747946C1 (zh)
WO (1) WO2019076265A1 (zh)

Families Citing this family (4)

* Cited by examiner, † Cited by third party
Publication number Priority date Publication date Assignee Title
CN107678153B (zh) * 2017-10-16 2020-08-11 苏州微景医学科技有限公司 光纤束图像处理方法和装置
CN107622491B (zh) * 2017-10-16 2022-03-11 苏州微景医学科技有限公司 光纤束图像分析方法和装置
KR102561360B1 (ko) * 2021-06-04 2023-08-01 한국과학기술연구원 보정을 사용하지 않고 파이버스코프 이미지를 처리하는 방법 및 이를 수행하는 파이버스코프 시스템
KR102553001B1 (ko) * 2021-10-05 2023-07-10 한국과학기술연구원 인공지능 기반의 이미지의 허니컴 아티팩트 제거 방법 및 장치

Citations (8)

Publication number Priority date Publication date Assignee Title
US4601537A (en) * 1984-01-06 1986-07-22 Ohio State University Research Foundation Apparatus and methods for forming images and for optical demultiplexing
CN103218991A (zh) * 2013-03-29 2013-07-24 中国电子科技集团公司第四十一研究所 基于fpga的光纤图像实时处理方法
CN105894450A (zh) * 2015-12-07 2016-08-24 乐视云计算有限公司 一种图像处理方法及装置
CN106204419A (zh) * 2016-06-27 2016-12-07 乐视控股(北京)有限公司 图像处理方法以及装置
CN106251339A (zh) * 2016-07-21 2016-12-21 深圳市大疆创新科技有限公司 图像处理方法及装置
CN106859579A (zh) * 2017-01-26 2017-06-20 浙江大学 一种基于亚像素的光纤束共聚焦荧光内窥成像方法及装置
CN107622491A (zh) * 2017-10-16 2018-01-23 南京亘瑞医疗科技有限公司 光纤束图像分析方法和装置
CN107678153A (zh) * 2017-10-16 2018-02-09 南京亘瑞医疗科技有限公司 光纤束图像处理方法和装置

Family Cites Families (18)

Publication number Priority date Publication date Assignee Title
JPS5810032A (ja) * 1981-07-09 1983-01-20 オリンパス光学工業株式会社 内視鏡
JPH08191440A (ja) * 1995-01-10 1996-07-23 Fukuda Denshi Co Ltd 内視画像補正方法及び装置
JPH1048689A (ja) * 1996-04-05 1998-02-20 Nikon Corp 顕微鏡写真撮影装置
JPH11355574A (ja) * 1998-06-11 1999-12-24 Toshiba Corp 画像処理装置と画像処理方法
FR2842628B1 (fr) * 2002-07-18 2004-09-24 Mauna Kea Technologies "procede de traitement d'une image acquise au moyen d'un guide compose d'une pluralite de fibres optiques"
CN100417225C (zh) * 2005-10-27 2008-09-03 中国科学院上海技术物理研究所 基于光纤耦合的焦平面阵列图像时空变换的方法
DE102006011707B4 (de) * 2006-03-14 2010-11-18 Fraunhofer-Gesellschaft zur Förderung der angewandten Forschung e.V. Verfahren und Vorrichtung zum Erzeugen einer strukturfreien fiberskopischen Aufnahme
WO2010076662A2 (en) * 2008-12-29 2010-07-08 Mauna Kea Technologies Image processing method and apparatus
CN101882326A (zh) * 2010-05-18 2010-11-10 广州市刑事科学技术研究所 基于中国人全面部结构形数据的三维颅面复原方法
CN104346778B (zh) * 2013-07-30 2017-08-22 比亚迪股份有限公司 图像的边缘增强方法和装置以及数码摄像设备
KR102144994B1 (ko) * 2013-09-30 2020-08-14 삼성전자주식회사 영상의 노이즈를 저감하는 방법 및 이를 이용한 영상 처리 장치
JP6118698B2 (ja) * 2013-09-30 2017-04-19 株式会社Ihi 画像解析装置及びプログラム
JP2016020896A (ja) * 2014-06-20 2016-02-04 キヤノン株式会社 位置ずれ量測定方法、補正テーブル生成装置、撮像装置、および投影装置
CN104933707B (zh) * 2015-07-13 2018-06-08 福建师范大学 一种基于多光子共焦显微细胞图像的超像素重构分割与重建方法
US10176567B2 (en) * 2015-12-21 2019-01-08 Canon Kabushiki Kaisha Physical registration of images acquired by Fourier Ptychography
US9894254B2 (en) * 2016-05-10 2018-02-13 Massachusetts Institute Of Technology Methods and apparatus for optical fiber imaging
CN106844289A (zh) * 2017-01-22 2017-06-13 苏州蜗牛数字科技股份有限公司 基于手机摄像头扫描环境进行建模的方法
CN106981090B (zh) * 2017-02-16 2020-04-28 南京邮电大学 一种管内步进单向光束扫描断层图像的三维重建方法

Patent Citations (8)

Publication number Priority date Publication date Assignee Title
US4601537A (en) * 1984-01-06 1986-07-22 Ohio State University Research Foundation Apparatus and methods for forming images and for optical demultiplexing
CN103218991A (zh) * 2013-03-29 2013-07-24 中国电子科技集团公司第四十一研究所 基于fpga的光纤图像实时处理方法
CN105894450A (zh) * 2015-12-07 2016-08-24 乐视云计算有限公司 一种图像处理方法及装置
CN106204419A (zh) * 2016-06-27 2016-12-07 乐视控股(北京)有限公司 图像处理方法以及装置
CN106251339A (zh) * 2016-07-21 2016-12-21 深圳市大疆创新科技有限公司 图像处理方法及装置
CN106859579A (zh) * 2017-01-26 2017-06-20 浙江大学 一种基于亚像素的光纤束共聚焦荧光内窥成像方法及装置
CN107622491A (zh) * 2017-10-16 2018-01-23 南京亘瑞医疗科技有限公司 光纤束图像分析方法和装置
CN107678153A (zh) * 2017-10-16 2018-02-09 南京亘瑞医疗科技有限公司 光纤束图像处理方法和装置

Non-Patent Citations (1)

Title
See also references of EP3699667A4 *

Also Published As

Publication number Publication date
KR20200070321A (ko) 2020-06-17
AU2018353915A1 (en) 2020-05-14
JP7001295B2 (ja) 2022-01-19
CA3079123C (en) 2023-07-11
MX2020003986A (es) 2020-08-13
CA3079123A1 (en) 2019-04-25
KR102342575B1 (ko) 2021-12-24
JP2020537275A (ja) 2020-12-17
BR112020007623A2 (pt) 2020-11-10
CN107678153A (zh) 2018-02-09
EP3699667A1 (en) 2020-08-26
US11525996B2 (en) 2022-12-13
CN107678153B (zh) 2020-08-11
EP3699667A4 (en) 2021-01-06
US20200241277A1 (en) 2020-07-30
RU2747946C1 (ru) 2021-05-17

Similar Documents

Publication Publication Date Title
WO2019076265A1 (zh) 光纤束图像处理方法和装置
Quinn et al. Rapid quantification of pixel-wise fiber orientation data in micrographs
US10861175B1 (en) Systems and methods for automatic detection and quantification of point cloud variance
US20130188878A1 (en) Image analysis systems having image sharpening capabilities and methods using same
WO2019076267A1 (zh) 光纤束图像分析方法和装置
WO2012064986A2 (en) System and method of ultrasound image processing
JP2007518461A (ja) 心臓関連の取得のための自動的な最適面の決定
JP2015536732A (ja) 画像処理装置および方法
JP6888041B2 (ja) 医用矢状面画像を取得する方法、医用矢状面画像を取得するニューラルネットワークのトレーニング方法及びコンピュータ装置
Kwak et al. Artificial intelligence‐based measurement outperforms current methods for colorectal polyp size measurement
Barbosa et al. Towards automatic quantification of the epicardial fat in non-contrasted CT images
JP2020523107A (ja) 骨に沿った人工ランドマークを使用した三次元画像の自動歪み補正および/または同時位置合わせのためのシステムおよび方法
Niri et al. A superpixel-wise fully convolutional neural network approach for diabetic foot ulcer tissue classification
Bayareh Mancilla et al. Anatomical 3D modeling using IR sensors and radiometric processing based on structure from motion: Towards a tool for the diabetic foot diagnosis
GB2461991A (en) Positron emission tomography data with time activity derived framing intervals.
JP2016142664A (ja) 核医学画像解析技術
CN113689397A (zh) 工件圆孔特征检测方法和工件圆孔特征检测装置
CN109754365B (zh) 一种图像处理方法及装置
CN116721240B (zh) 皮肤瘢痕测量及皮下硬块感知测试的ai系统、分析方法
BR112020007623B1 (pt) Método implementado por computador para processamento de imagem de feixe de fibra e aparelho de processamento de imagem de feixe de fibra
CN104224206A (zh) 一种x光机成像性能检测方法
Tosca et al. Development of a three-dimensional surface imaging system for melanocytic skin lesion evaluation
Tanveer et al. Cancer image quantification with and without, expensive whole slide imaging scanners
Haji Rassouliha A Toolbox for Precise and Robust Deformation Measurement
Mazzeo et al. Automatize skin prick test with a low cost Machine vision system

Legal Events

Date Code Title Description
121 Ep: the epo has been informed by wipo that ep was designated in this application

Ref document number: 18868761

Country of ref document: EP

Kind code of ref document: A1

ENP Entry into the national phase

Ref document number: 2020541844

Country of ref document: JP

Kind code of ref document: A

ENP Entry into the national phase

Ref document number: 3079123

Country of ref document: CA

NENP Non-entry into the national phase

Ref country code: DE

ENP Entry into the national phase

Ref document number: 20207013622

Country of ref document: KR

Kind code of ref document: A

ENP Entry into the national phase

Ref document number: 2018353915

Country of ref document: AU

Date of ref document: 20181015

Kind code of ref document: A

ENP Entry into the national phase

Ref document number: 2018868761

Country of ref document: EP

Effective date: 20200518

REG Reference to national code

Ref country code: BR

Ref legal event code: B01A

Ref document number: 112020007623

Country of ref document: BR

REG Reference to national code

Ref country code: BR

Ref legal event code: B01E

Ref document number: 112020007623

Country of ref document: BR

Free format text (translated from Portuguese): SUBMIT THE ASSIGNMENT DOCUMENT FOR PRIORITY CN 201710959003.1, SIGNED AND DATED BY NANJING GENRUI MEDICAL TECHNOLOGIES CO., LTD., CONTAINING AT LEAST THE NUMBER AND FILING DATE OF THE PATENT DOCUMENT BEING ASSIGNED, SINCE THE DOCUMENT PROVIDED IN PETITION 870200048080 SHOWS THAT THIS PARTY IS THE APPLICANT OF THE PRIORITY AND THAT THIS APPLICANT IS DISTINCT FROM THE ONE THAT FILED THE NATIONAL-PHASE REQUEST PETITION. FAILURE TO SUBMIT THIS DOCUMENT WILL RESULT IN LOSS OF THE PRIORITY.

REG Reference to national code

Ref country code: BR

Ref legal event code: B01Y

Ref document number: 112020007623

Country of ref document: BR

Kind code of ref document: A2

Free format text (translated from Portuguese): THE PUBLICATION UNDER CODE 1.5 IN RPI NO. 2595 OF 29/09/2020 IS ANNULLED, HAVING BEEN MADE IN ERROR.

ENP Entry into the national phase

Ref document number: 112020007623

Country of ref document: BR

Kind code of ref document: A2

Effective date: 20200416