WO2020029064A1 - Optical coherence tomography image processing method - Google Patents

Optical coherence tomography image processing method

Info

Publication number
WO2020029064A1
Authority
WO
WIPO (PCT)
Prior art keywords
anterior segment
image
iris
optical coherence
color
Prior art date
Application number
PCT/CN2018/099142
Other languages
English (en)
French (fr)
Inventor
赵云娥
黄锦海
于航
Original Assignee
温州医科大学
Priority date (The priority date is an assumption and is not a legal conclusion. Google has not performed a legal analysis and makes no representation as to the accuracy of the date listed.)
Filing date
Publication date
Application filed by 温州医科大学 filed Critical 温州医科大学
Priority to US16/613,379 priority Critical patent/US20210330182A1/en
Priority to CN201880059686.8A priority patent/CN111093525A/zh
Priority to PCT/CN2018/099142 priority patent/WO2020029064A1/zh
Publication of WO2020029064A1 publication Critical patent/WO2020029064A1/zh

Classifications

    • A HUMAN NECESSITIES
    • A61 MEDICAL OR VETERINARY SCIENCE; HYGIENE
    • A61B DIAGNOSIS; SURGERY; IDENTIFICATION
    • A61B3/00 Apparatus for testing the eyes; Instruments for examining the eyes
    • A61B3/10 Objective types, i.e. instruments for examining the eyes independent of the patients' perceptions or reactions
    • A61B3/102 Objective types, i.e. instruments for examining the eyes independent of the patients' perceptions or reactions for optical coherence tomography [OCT]
    • A HUMAN NECESSITIES
    • A61 MEDICAL OR VETERINARY SCIENCE; HYGIENE
    • A61B DIAGNOSIS; SURGERY; IDENTIFICATION
    • A61B3/00 Apparatus for testing the eyes; Instruments for examining the eyes
    • A61B3/0016 Operational features thereof
    • A61B3/0025 Operational features thereof characterised by electronic signal processing, e.g. eye models
    • A HUMAN NECESSITIES
    • A61 MEDICAL OR VETERINARY SCIENCE; HYGIENE
    • A61B DIAGNOSIS; SURGERY; IDENTIFICATION
    • A61B8/00 Diagnosis using ultrasonic, sonic or infrasonic waves
    • A61B8/13 Tomography
    • G PHYSICS
    • G06 COMPUTING; CALCULATING OR COUNTING
    • G06T IMAGE DATA PROCESSING OR GENERATION, IN GENERAL
    • G06T7/00 Image analysis
    • G06T7/0002 Inspection of images, e.g. flaw detection
    • G06T7/0012 Biomedical image inspection
    • G PHYSICS
    • G06 COMPUTING; CALCULATING OR COUNTING
    • G06T IMAGE DATA PROCESSING OR GENERATION, IN GENERAL
    • G06T7/00 Image analysis
    • G06T7/10 Segmentation; Edge detection
    • G06T7/11 Region-based segmentation
    • G PHYSICS
    • G06 COMPUTING; CALCULATING OR COUNTING
    • G06T IMAGE DATA PROCESSING OR GENERATION, IN GENERAL
    • G06T7/00 Image analysis
    • G06T7/10 Segmentation; Edge detection
    • G06T7/12 Edge-based segmentation
    • G PHYSICS
    • G06 COMPUTING; CALCULATING OR COUNTING
    • G06T IMAGE DATA PROCESSING OR GENERATION, IN GENERAL
    • G06T7/00 Image analysis
    • G06T7/10 Segmentation; Edge detection
    • G06T7/136 Segmentation; Edge detection involving thresholding
    • G PHYSICS
    • G06 COMPUTING; CALCULATING OR COUNTING
    • G06T IMAGE DATA PROCESSING OR GENERATION, IN GENERAL
    • G06T7/00 Image analysis
    • G06T7/10 Segmentation; Edge detection
    • G06T7/149 Segmentation; Edge detection involving deformable models, e.g. active contour models
    • G PHYSICS
    • G06 COMPUTING; CALCULATING OR COUNTING
    • G06T IMAGE DATA PROCESSING OR GENERATION, IN GENERAL
    • G06T2207/00 Indexing scheme for image analysis or image enhancement
    • G06T2207/10 Image acquisition modality
    • G06T2207/10024 Color image
    • G PHYSICS
    • G06 COMPUTING; CALCULATING OR COUNTING
    • G06T IMAGE DATA PROCESSING OR GENERATION, IN GENERAL
    • G06T2207/00 Indexing scheme for image analysis or image enhancement
    • G06T2207/10 Image acquisition modality
    • G06T2207/10072 Tomographic images
    • G06T2207/10101 Optical tomography; Optical coherence tomography [OCT]
    • G PHYSICS
    • G06 COMPUTING; CALCULATING OR COUNTING
    • G06T IMAGE DATA PROCESSING OR GENERATION, IN GENERAL
    • G06T2207/00 Indexing scheme for image analysis or image enhancement
    • G06T2207/30 Subject of image; Context of image processing
    • G06T2207/30004 Biomedical image processing
    • G06T2207/30041 Eye; Retina; Ophthalmic

Definitions

  • the invention relates to an ophthalmic medical image processing method, in particular to an optical coherence tomographic image processing method.
  • the anterior segment of the eye is a part of the eyeball and specifically includes: the entire cornea, the iris, the ciliary body, the anterior chamber, the posterior chamber, the lens suspensory ligament (zonules), the anterior chamber angle, part of the lens, the peripheral vitreous body, the retina and the attachment points of the extraocular muscles, and the conjunctiva.
  • Anterior segment image analysis and processing is very important for assessing eye diseases.
  • Medical image processing technology lies at the intersection of medicine, mathematics, computer science, and other disciplines, and has been continuously developed as a key component of computer-aided diagnosis. With the rapid development of ophthalmology, the requirements placed on ophthalmic image processing and analysis keep rising, so further research on medical image processing and analysis is of great significance.
  • an optical coherence tomography image processing method which includes the following steps. Step 1: acquire an anterior segment tomographic image using optical coherence tomography to obtain a black-and-white image of the anterior segment. Step 2: perform pseudo-color processing and color space conversion on the black-and-white image from step 1 so that the color distribution distance between the cornea, iris, and lens is maximized in the converted image; then apply superimposed-threshold binarization to the color-space-converted image to distinguish the potential corneal region, potential iris region, and potential lens region; and then use blob shape analysis on each to remove noise and interference regions, thereby obtaining the corneal region, iris region, and lens region. Step 3: use morphological (structural) operations to perform speckle filling and collapse processing on the corneal, iris, and lens regions obtained in step 2. Step 4: use the level set algorithm to track the boundaries of the image processed in step 3, accurately tracing the surface boundaries of each part of the anterior segment and thereby locating the anterior segment.
  • the color space conversion in step 2 uses the L*U*V* color space model. In the L*U*V* model a color is represented by three components: L* represents the brightness of the image, while U* and V* represent the chrominance. The distance between different colors is defined by the Euclidean distance, as shown in the following formula:
  Δd = √((L*a − L*b)² + (U*a − U*b)² + (V*a − V*b)²)
  • where a and b are two points in the image, each having the three components L*, U*, and V*, written L*a, U*a, V*a and L*b, U*b, V*b; Δd is the color distance between a and b.
  • the superimposed-threshold binarization in step 2 requires 3n threshold spaces, n ≥ 1, because three potential regions (the potential corneal region, the potential lens region, and the potential iris region) must first be separated.
  • the blob shape analysis of the potential corneal, iris, and lens regions in step 2 is performed n times for each region: after every multi-threshold binarization pass, one round of blob shape analysis is applied to the potential regions that pass separates. Since there are three potential regions (cornea, iris, and lens), the multi-threshold binarization uses 3n thresholds, n ≥ 1, and each of the three potential regions undergoes n rounds of blob shape analysis.
  • the surface boundaries of the parts of the anterior segment in step 4 refer to: the anterior surface of the cornea, the posterior surface of the cornea, the anterior surface of the iris, and the anterior surface of the lens.
  • the tomographic image of the anterior segment is in bmp or jpeg format.
  • the anterior segment tomographic images acquired in step 1 may have the same or different resolutions.
  • the purpose of the speckle filling and collapse processing is to make the level set algorithm faster.
  • the level set algorithm requires a closed contour; every hole is effectively one more contour and slows down the subsequent level set evolution.
  • the hole filling and dilation processing gives the level set algorithm an initial contour, which speeds it up; without this initial contour, running the level set algorithm over the full image would be too slow.
  • having first obtained a rough contour of the region of interest, the present invention runs the level set algorithm with that rough contour as the initial level set to finely track the image contours. This overcomes the speed bottleneck of applying the level set algorithm to the full image and achieves real-time extraction and analysis of anterior segment tomographic image features, providing reliable basic data for the subsequent calculation of clinical parameters of the anterior segment.
  • FIG. 1 is a flowchart of a preferred embodiment of an optical coherence tomographic image processing method according to the present invention.
  • Fig. 1 shows an embodiment of the invention.
  • the process of the optical coherence tomographic image processing method is as follows: first, an anterior segment tomographic image is acquired with optical coherence tomography, the acquired image is loaded into a computer and saved in a file format with a low compression ratio, such as bmp or jpeg, so that the image retains more local detail; these images may have the same or different resolutions. A black-and-white image of the anterior segment is obtained.
  • the black-and-white image of the anterior segment is then subjected to color space conversion so that, in the converted image, the color distribution distance between the cornea, iris, and lens is maximized. Superimposed-threshold binarization is then applied to distinguish the potential corneal, iris, and lens regions, and blob shape analysis is used to remove the noise and interference regions from each, thereby obtaining the corneal region, iris region, and lens region.
  • the color space conversion uses the L*U*V* color space model; in the L*U*V* model a color is represented by three components: L* represents the brightness of the image, while U* and V* represent the chrominance.
  • the distance between two colors can be defined by the Euclidean distance, as shown in the following formula:
  Δd = √((L*a − L*b)² + (U*a − U*b)² + (V*a − V*b)²)
  • where a and b are two points in the image, each having the three components L*, U*, and V*, written L*a, U*a, V*a and L*b, U*b, V*b; Δd is the color distance between a and b.
  • the multi-threshold binarization requires 3n thresholds, n ≥ 1, and the blob shape analysis of the potential corneal, iris, and lens regions likewise has to be performed n times for each region.
  • the corneal region, iris region, and lens region are then given speckle filling and collapse processing using morphological (structural) operations.
  • finally, the level set algorithm is used for boundary tracking to accurately trace the fine surface boundaries of each part of the anterior segment (including the anterior surface of the cornea, the posterior surface of the cornea, the anterior surface of the iris, and the anterior surface of the lens), thereby locating the anterior segment; the result is then output from the computer.

Landscapes

  • Engineering & Computer Science (AREA)
  • Health & Medical Sciences (AREA)
  • Life Sciences & Earth Sciences (AREA)
  • Physics & Mathematics (AREA)
  • Theoretical Computer Science (AREA)
  • General Physics & Mathematics (AREA)
  • Computer Vision & Pattern Recognition (AREA)
  • Medical Informatics (AREA)
  • General Health & Medical Sciences (AREA)
  • Radiology & Medical Imaging (AREA)
  • Nuclear Medicine, Radiotherapy & Molecular Imaging (AREA)
  • Veterinary Medicine (AREA)
  • Molecular Biology (AREA)
  • Animal Behavior & Ethology (AREA)
  • Biomedical Technology (AREA)
  • Public Health (AREA)
  • Heart & Thoracic Surgery (AREA)
  • Biophysics (AREA)
  • Surgery (AREA)
  • Ophthalmology & Optometry (AREA)
  • Quality & Reliability (AREA)
  • Pathology (AREA)
  • Software Systems (AREA)
  • Signal Processing (AREA)
  • Eye Examination Apparatus (AREA)

Abstract

The present invention discloses an optical coherence tomography image processing method, comprising the steps of: 1. acquiring an anterior segment tomographic image using optical coherence tomography to obtain a black-and-white image of the anterior segment; 2. performing pseudo-color processing and color space conversion on the black-and-white image of the anterior segment, applying superimposed-threshold binarization to distinguish the potential corneal, iris, and lens regions, and then using blob shape analysis on each to obtain the corneal, iris, and lens regions; 3. performing speckle filling and collapse processing on the corneal, iris, and lens regions obtained in step 2 by means of morphological (structural) operations; 4. applying the level set algorithm to the image for boundary tracking, accurately tracing the surface boundaries of each part of the anterior segment and locating the anterior segment. The invention overcomes the speed bottleneck of applying the level set algorithm to the full image, achieves real-time extraction and analysis of anterior segment tomographic image features, and provides reliable basic data for the subsequent calculation of clinical parameters of the anterior segment.

Description

Optical coherence tomography image processing method
Technical Field
The present invention relates to ophthalmic medical image processing methods, and in particular to an optical coherence tomography image processing method.
Background Art
With the proliferation of electronic products and the growing problem of population aging, eye problems are becoming more and more common. New ophthalmic medical imaging technologies have been continuously developed, allowing ophthalmologists to observe the eye more directly and greatly improving the diagnosis rate. Computer-aided diagnosis technology mainly studies how to process various kinds of ophthalmic medical image information effectively through image processing techniques, assisting ophthalmologists in diagnosis and even in surgical planning. Computer-aided diagnosis therefore has broad application prospects and significant social benefits.
The anterior segment is a part of the eyeball and specifically includes: the entire cornea, the iris, the ciliary body, the anterior chamber, the posterior chamber, the lens suspensory ligament (zonules), the anterior chamber angle, part of the lens, the peripheral vitreous body, the retina and the attachment points of the extraocular muscles, and the conjunctiva. Analysis and processing of anterior segment images is very important for assessing eye diseases. Medical image processing technology lies at the intersection of medicine, mathematics, computer science, and other disciplines, and has been continuously developed as a key component of computer-aided diagnosis. With the vigorous development of ophthalmology, the requirements placed on ophthalmic image processing and analysis keep rising, so further research on medical image processing and analysis is of great significance.
Therefore, those skilled in the art are committed to developing a machine-vision-based method for extracting features from anterior segment tomographic images, namely an optical coherence tomography image processing method. The prior art often applies the level set algorithm to the entire image and cannot overcome the resulting speed bottleneck.
Summary of the Invention
In view of the defects of the prior art, the technical problem to be solved by the present invention is to provide an optical coherence tomography image processing method comprising the following steps. Step 1: acquire an anterior segment tomographic image using optical coherence tomography to obtain a black-and-white image of the anterior segment. Step 2: perform pseudo-color processing and color space conversion on the black-and-white image obtained in step 1 so that, in the converted image, the color distribution distance between the cornea, iris, and lens is maximized; then apply superimposed-threshold binarization to the color-space-converted image to distinguish the potential corneal region, potential iris region, and potential lens region; and then use blob shape analysis on each to remove noise and interference regions, thereby obtaining the corneal region, iris region, and lens region. Step 3: use morphological (structural) operations to perform speckle filling and collapse processing on the corneal region, iris region, and lens region obtained in step 2. Step 4: apply the level set algorithm to the image processed in step 3 for boundary tracking, accurately tracing the surface boundaries of each part of the anterior segment and locating the anterior segment.
Further, the color space conversion in step 2 uses the L*U*V* color space model. In the L*U*V* color space model, a color is represented by three components: L* represents the brightness of the image, while U* and V* represent the chrominance. The distance between different colors is defined by the Euclidean distance, as shown in the following formula:
Δd = √((L*a − L*b)² + (U*a − U*b)² + (V*a − V*b)²)
where a and b are two points in the image, each having the three components L*, U*, and V*, written L*a, U*a, V*a and L*b, U*b, V*b; Δd is the color distance between a and b.
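As a purely illustrative sketch of the color space conversion and the Euclidean color distance described above (the patent does not prescribe any particular library, colormap, file name, or pixel coordinates; those are assumptions here), the computation could look as follows in Python with OpenCV and NumPy:

```python
import cv2
import numpy as np

# Hypothetical file name for one anterior segment B-scan (black-and-white image).
gray = cv2.imread("anterior_segment_bscan.bmp", cv2.IMREAD_GRAYSCALE)

# Pseudo-color processing: map grayscale intensities to colors. The JET colormap
# is an arbitrary choice for this sketch, not specified by the patent.
pseudo = cv2.applyColorMap(gray, cv2.COLORMAP_JET)

# Color space conversion to L*U*V* (OpenCV's 8-bit Luv representation).
luv = cv2.cvtColor(pseudo, cv2.COLOR_BGR2Luv).astype(np.float32)

def color_distance(a, b):
    """Euclidean color distance Δd between pixels a and b, given as (row, col)."""
    La, Ua, Va = luv[a]
    Lb, Ub, Vb = luv[b]
    return float(np.sqrt((La - Lb) ** 2 + (Ua - Ub) ** 2 + (Va - Vb) ** 2))

# Example: distance between two sample pixel positions (coordinates are illustrative).
print(color_distance((120, 200), (300, 450)))
```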
Further, the superimposed-threshold binarization in step 2 requires 3n threshold spaces, n ≥ 1, because the potential corneal region, the potential lens region, and the potential iris region must first be separated.
Further, the blob shape analysis for the potential corneal region, potential iris region, and potential lens region in step 2 needs to be performed n times for each region; that is, after every multi-threshold binarization pass, one round of blob shape analysis is applied to the potential regions separated by that pass. Since there are three potential regions (cornea, iris, and lens), the multi-threshold binarization requires 3n thresholds, n ≥ 1, and the blob shape analysis for each of the three potential regions likewise needs to be performed n times.
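A minimal sketch of one pass of the superimposed-threshold binarization followed by blob shape analysis is given below. The threshold bounds, the minimum blob area, and the elongation criterion are assumptions chosen only to illustrate the idea of removing noise and interference regions; in practice they would come from prior knowledge of anterior segment anatomy:

```python
import cv2
import numpy as np

def candidate_region(luv, lower, upper, min_area=500):
    """One of the 3n threshold passes: binarize the L*U*V* image against one
    threshold space [lower, upper], then keep only connected components (blobs)
    whose size and shape are plausible for a tissue cross-section."""
    mask = cv2.inRange(luv, np.array(lower, np.uint8), np.array(upper, np.uint8))
    n_labels, labels, stats, _ = cv2.connectedComponentsWithStats(mask, connectivity=8)
    cleaned = np.zeros_like(mask)
    for i in range(1, n_labels):                         # label 0 is the background
        area = stats[i, cv2.CC_STAT_AREA]
        w, h = stats[i, cv2.CC_STAT_WIDTH], stats[i, cv2.CC_STAT_HEIGHT]
        elongation = max(w, h) / max(1, min(w, h))
        # Keep large, elongated blobs; small or compact blobs are treated as noise.
        if area >= min_area and elongation > 2:
            cleaned[labels == i] = 255
    return cleaned

# Example with n = 1 (one threshold space per tissue, 3 in total); bounds are made up.
luv8 = cv2.cvtColor(pseudo, cv2.COLOR_BGR2Luv)           # 8-bit L*U*V* image from the sketch above
cornea_mask = candidate_region(luv8, (60, 90, 130), (255, 160, 200))
iris_mask   = candidate_region(luv8, (30, 80, 110), (180, 150, 190))
lens_mask   = candidate_region(luv8, (20, 85, 120), (150, 155, 195))
```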
Further, the surface boundaries of the parts of the anterior segment in step 4 refer to: the anterior surface of the cornea, the posterior surface of the cornea, the anterior surface of the iris, and the anterior surface of the lens.
Further, the anterior segment tomographic images are in bmp or jpeg format.
Further, the anterior segment tomographic images acquired in step 1 may have the same or different resolutions.
Technical Effects
The purpose of the speckle filling and collapse processing is to make the level set algorithm faster. The level set algorithm requires a closed contour; every hole is effectively one more contour and slows down the subsequent level set evolution. The hole filling and dilation processing gives the level set algorithm an initial contour, which speeds it up; without this initial contour, running the level set algorithm over the full image would be too slow.
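A minimal sketch of the speckle/hole filling and dilation step, assuming SciPy's binary morphology routines and an arbitrary number of dilation iterations (the patent does not specify structuring elements or iteration counts):

```python
import numpy as np
from scipy import ndimage

def fill_and_collapse(mask, dilation_iterations=3):
    """Fill internal holes and dilate the region so that it becomes one solid,
    closed blob; this is what hands the level set a clean initial contour."""
    filled = ndimage.binary_fill_holes(mask > 0)
    grown = ndimage.binary_dilation(filled, iterations=dilation_iterations)
    return grown.astype(np.uint8)

# Applied to the rough masks from the previous step (names carried over from the sketch above).
cornea_init = fill_and_collapse(cornea_mask)
iris_init   = fill_and_collapse(iris_mask)
lens_init   = fill_and_collapse(lens_mask)
```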
Having first obtained a rough contour of the region of interest, the present invention then runs the level set algorithm with that rough contour as the initial level set to finely track the image contours. This overcomes the speed bottleneck of applying the level set algorithm to the full image, achieves real-time extraction and analysis of anterior segment tomographic image features, and provides reliable basic data for the subsequent calculation of clinical parameters of the anterior segment.
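The patent does not name a specific level set formulation, so the following sketch uses scikit-image's morphological Chan-Vese variant purely as a stand-in; the point it illustrates is passing the rough, filled region as the initial level set instead of evolving a contour over the full image:

```python
import numpy as np
from skimage.segmentation import morphological_chan_vese

# `gray` is the black-and-white scan and `cornea_init` the filled/dilated rough
# cornea mask from the earlier sketches; 150 iterations and smoothing=2 are assumptions.
refined_cornea = morphological_chan_vese(
    gray.astype(float), 150, init_level_set=cornea_init.astype(np.int8), smoothing=2
)
# `refined_cornea` is a binary mask whose boundary closely follows the corneal surfaces.
```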
The concept, specific structure, and technical effects of the present invention are further described below in conjunction with the accompanying drawings, so that the purpose, features, and effects of the invention can be fully understood.
Brief Description of the Drawings
FIG. 1 is a flowchart of a preferred embodiment of the optical coherence tomography image processing method according to the present invention.
Detailed Description of the Embodiments
FIG. 1 shows an embodiment of the present invention. In this embodiment, the flow of the optical coherence tomography image processing method is as follows: first, an anterior segment tomographic image is acquired with optical coherence tomography, the acquired image is loaded into a computer and saved in a file format with a low compression ratio, such as bmp or jpeg, so that the image retains more local detail; these images may have the same or different resolutions. A black-and-white image of the anterior segment is obtained.
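For illustration, reading such a set of scans could be done as follows; the folder name and the file extension filter are assumptions, and no resizing is applied because the scans are allowed to differ in resolution:

```python
import cv2
from pathlib import Path

# Hypothetical folder holding the acquired B-scans in a low-compression format.
scan_paths = sorted(Path("anterior_segment_scans").glob("*.bmp"))
scans = [cv2.imread(str(p), cv2.IMREAD_GRAYSCALE) for p in scan_paths]
```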
The black-and-white image of the anterior segment is then subjected to color space conversion so that, in the converted image, the color distribution distance between the cornea, iris, and lens is maximized. Superimposed-threshold binarization is then applied to distinguish the potential corneal region, potential iris region, and potential lens region, and blob shape analysis is used to remove the noise and interference regions from each potential region, thereby obtaining the corneal region, iris region, and lens region.
The color space conversion uses the L*U*V* color space model; in the L*U*V* color space model, a color is represented by three components: L* represents the brightness of the image, while U* and V* represent the chrominance. The distance between different colors can be defined by the Euclidean distance, as shown in the following formula:
Δd = √((L*a − L*b)² + (U*a − U*b)² + (V*a − V*b)²)
where a and b are two points in the image, each having the three components L*, U*, and V*, written L*a, U*a, V*a and L*b, U*b, V*b; Δd is the color distance between a and b.
Therefore, in this color space, points that are close together differ little in color, while points far apart differ greatly. On this basis, multi-threshold image segmentation guided by prior knowledge can roughly separate the potential corneal region, the potential lens region, and the potential iris region.
In addition, because the potential corneal region, the potential lens region, and the potential iris region must first be separated, the multi-threshold binarization requires 3n thresholds, n ≥ 1. The blob shape analysis for the potential corneal region, potential iris region, and potential lens region then likewise needs to be performed n times for each region.
Next, morphological (structural) operations are used to perform speckle filling and collapse processing on the corneal region, iris region, and lens region. The level set algorithm is then applied for boundary tracking, accurately tracing the fine boundaries of the surfaces of each part of the anterior segment (including the anterior surface of the cornea, the posterior surface of the cornea, the anterior surface of the iris, and the anterior surface of the lens), locating the anterior segment; the result is then output from the computer.
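As a post-processing sketch (not part of the patent text itself), one simple way to turn the final level set mask into explicit surface boundary curves is to take, for every image column, the first and last foreground rows; the names below continue the earlier illustrative sketches:

```python
import numpy as np

def surface_curves(mask):
    """Return (top, bottom) row indices of the foreground in every column of a
    binary mask, i.e. the anterior and posterior surface curves of that tissue.
    Columns with no foreground are marked with -1."""
    fg = mask > 0
    has_fg = fg.any(axis=0)
    top = np.where(has_fg, np.argmax(fg, axis=0), -1)
    bottom = np.where(has_fg, mask.shape[0] - 1 - np.argmax(fg[::-1], axis=0), -1)
    return top, bottom

# Anterior and posterior corneal surface curves from the refined level set mask.
cornea_front, cornea_back = surface_curves(refined_cornea)
```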
The preferred embodiments of the present invention have been described in detail above. It should be understood that a person of ordinary skill in the art can make numerous modifications and variations according to the concept of the present invention without creative effort. Therefore, any technical solution that a person skilled in the art can obtain on the basis of the prior art through logical analysis, reasoning, or limited experimentation in accordance with the concept of the present invention shall fall within the scope of protection defined by the claims.

Claims (7)

  1. An optical coherence tomography image processing method, characterized by comprising the following steps:
    Step 1: acquiring an anterior segment tomographic image using optical coherence tomography to obtain a black-and-white image of the anterior segment;
    Step 2: performing pseudo-color processing and color space conversion on the black-and-white image of the anterior segment obtained in step 1 so that, in the converted image, the color distribution distance between the cornea, iris, and lens is maximized; then applying superimposed-threshold binarization to the color-space-converted image to distinguish the potential corneal region, potential iris region, and potential lens region; and then using blob shape analysis on each to remove noise and interference regions, thereby obtaining the corneal region, iris region, and lens region;
    Step 3: performing speckle filling and collapse processing, by means of morphological (structural) operations, on the corneal region, iris region, and lens region obtained in step 2;
    Step 4: applying the level set algorithm to the image processed in step 3 for boundary tracking, accurately tracing the surface boundaries of each part of the anterior segment and locating the anterior segment.
  2. The optical coherence tomography image processing method according to claim 1, wherein the color space conversion in step 2 uses the L*U*V* color space model; in the L*U*V* color space model, a color is represented by three components: L* represents the brightness of the image, while U* and V* represent the chrominance, and the distance between different colors is defined by the Euclidean distance, as shown in the following formula:
    Δd = √((L*a − L*b)² + (U*a − U*b)² + (V*a − V*b)²)
    where a and b are two points in the image, each having the three components L*, U*, and V*, written L*a, U*a, V*a and L*b, U*b, V*b; Δd is the color distance between a and b.
  3. The optical coherence tomography image processing method according to claim 1, wherein the superimposed-threshold binarization in step 2 requires 3n threshold spaces, n ≥ 1.
  4. The optical coherence tomography image processing method according to claim 3, wherein the blob shape analysis for the potential corneal region, potential iris region, and potential lens region in step 2 needs to be performed n times for each region.
  5. The optical coherence tomography image processing method according to claim 1, wherein the surface boundaries of the parts of the anterior segment in step 4 refer to: the anterior surface of the cornea, the posterior surface of the cornea, the anterior surface of the iris, and the anterior surface of the lens.
  6. The optical coherence tomography image processing method according to claim 1, wherein the anterior segment tomographic image is in bmp or jpeg format.
  7. The optical coherence tomography image processing method according to claim 1, wherein the anterior segment tomographic images acquired in step 1 may have the same or different resolutions.
PCT/CN2018/099142 2018-08-07 2018-08-07 Optical coherence tomography image processing method WO2020029064A1 (zh)

Priority Applications (3)

Application Number Priority Date Filing Date Title
US16/613,379 US20210330182A1 (en) 2018-08-07 2018-08-07 Optical coherence tomography image processing method
CN201880059686.8A CN111093525A (zh) 2018-08-07 2018-08-07 Optical coherence tomography image processing method
PCT/CN2018/099142 WO2020029064A1 (zh) 2018-08-07 2018-08-07 Optical coherence tomography image processing method

Applications Claiming Priority (1)

Application Number Priority Date Filing Date Title
PCT/CN2018/099142 WO2020029064A1 (zh) 2018-08-07 2018-08-07 Optical coherence tomography image processing method

Publications (1)

Publication Number Publication Date
WO2020029064A1 true WO2020029064A1 (zh) 2020-02-13

Family

ID=69415172

Family Applications (1)

Application Number Title Priority Date Filing Date
PCT/CN2018/099142 WO2020029064A1 (zh) 2018-08-07 2018-08-07 Optical coherence tomography image processing method

Country Status (3)

Country Link
US (1) US20210330182A1 (zh)
CN (1) CN111093525A (zh)
WO (1) WO2020029064A1 (zh)

Families Citing this family (2)

* Cited by examiner, † Cited by third party
Publication number Priority date Publication date Assignee Title
CN116309661B (zh) * 2023-05-23 2023-08-08 广东麦特维逊医学研究发展有限公司 Anterior segment OCT image contour extraction method
CN116777794B (zh) * 2023-08-17 2023-11-03 简阳市人民医院 Method and system for processing corneal foreign body images

Citations (16)

* Cited by examiner, † Cited by third party
Publication number Priority date Publication date Assignee Title
CN1900951A (zh) * 2006-06-02 2007-01-24 哈尔滨工业大学 Flexible partitioning method for iris images based on mathematical morphology
US20120218517A1 (en) * 2011-02-25 2012-08-30 Canon Kabushiki Kaisha Image processing apparatus and image processing system for displaying information about ocular blood flow
US8358819B2 (en) * 2005-06-24 2013-01-22 University Of Iowa Research Foundation System and methods for image segmentation in N-dimensional space
US20130201450A1 (en) * 2012-02-02 2013-08-08 The Ohio State University Detection and measurement of tissue images
CN104013384A (zh) * 2014-06-11 2014-09-03 温州眼视光发展有限公司 Anterior segment tomographic image feature extraction method
CN104050667A (zh) * 2014-06-11 2014-09-17 温州眼视光发展有限公司 Pupil tracking image processing method
CN104751474A (zh) * 2015-04-13 2015-07-01 上海理工大学 Cascaded fast image defect segmentation method
CN105894498A (zh) * 2016-03-25 2016-08-24 湖南省科学技术研究开发院 Retinal optical coherence image segmentation method
US20170039704A1 (en) * 2015-06-17 2017-02-09 Stoecker & Associates, LLC Detection of Borders of Benign and Malignant Lesions Including Melanoma and Basal Cell Carcinoma Using a Geodesic Active Contour (GAC) Technique
CN106447682A (zh) * 2016-08-29 2017-02-22 天津大学 Automatic segmentation method for breast MRI lesions based on inter-frame correlation
CN106530316A (zh) * 2016-10-20 2017-03-22 天津大学 Optic disc segmentation method combining edge and brightness information of fundus images
CN106846314A (zh) * 2017-02-04 2017-06-13 苏州大学 Image segmentation method based on postoperative corneal OCT image data
CN107016683A (zh) * 2017-04-07 2017-08-04 衢州学院 Level set hippocampus image segmentation method based on region-growing initialization
CN107169975A (zh) * 2017-03-27 2017-09-15 中国科学院深圳先进技术研究院 Ultrasound image analysis method and apparatus
CN107330897A (zh) * 2017-06-01 2017-11-07 福建师范大学 Image segmentation method and system
CN107909589A (zh) * 2017-11-01 2018-04-13 浙江工业大学 Tooth image segmentation method combining the C-V level set and the GrabCut algorithm

Family Cites Families (4)

* Cited by examiner, † Cited by third party
Publication number Priority date Publication date Assignee Title
CN101751603A (zh) * 2008-12-10 2010-06-23 东北大学 Apparatus and method for automatic online counting of bar and profile images
CN202350737U (zh) * 2011-10-09 2012-07-25 长安大学 MATLAB-based engine spray image optimization processing device
CN105761218B (zh) * 2016-02-02 2018-04-13 中国科学院上海光学精密机械研究所 Pseudo-color image processing method for optical coherence tomography
CN107133959B (zh) * 2017-06-12 2020-04-28 上海交通大学 Fast three-dimensional vessel boundary segmentation method and system

Patent Citations (16)

* Cited by examiner, † Cited by third party
Publication number Priority date Publication date Assignee Title
US8358819B2 (en) * 2005-06-24 2013-01-22 University Of Iowa Research Foundation System and methods for image segmentation in N-dimensional space
CN1900951A (zh) * 2006-06-02 2007-01-24 哈尔滨工业大学 Flexible partitioning method for iris images based on mathematical morphology
US20120218517A1 (en) * 2011-02-25 2012-08-30 Canon Kabushiki Kaisha Image processing apparatus and image processing system for displaying information about ocular blood flow
US20130201450A1 (en) * 2012-02-02 2013-08-08 The Ohio State University Detection and measurement of tissue images
CN104013384A (zh) * 2014-06-11 2014-09-03 温州眼视光发展有限公司 Anterior segment tomographic image feature extraction method
CN104050667A (zh) * 2014-06-11 2014-09-17 温州眼视光发展有限公司 Pupil tracking image processing method
CN104751474A (zh) * 2015-04-13 2015-07-01 上海理工大学 Cascaded fast image defect segmentation method
US20170039704A1 (en) * 2015-06-17 2017-02-09 Stoecker & Associates, LLC Detection of Borders of Benign and Malignant Lesions Including Melanoma and Basal Cell Carcinoma Using a Geodesic Active Contour (GAC) Technique
CN105894498A (zh) * 2016-03-25 2016-08-24 湖南省科学技术研究开发院 Retinal optical coherence image segmentation method
CN106447682A (zh) * 2016-08-29 2017-02-22 天津大学 Automatic segmentation method for breast MRI lesions based on inter-frame correlation
CN106530316A (zh) * 2016-10-20 2017-03-22 天津大学 Optic disc segmentation method combining edge and brightness information of fundus images
CN106846314A (zh) * 2017-02-04 2017-06-13 苏州大学 Image segmentation method based on postoperative corneal OCT image data
CN107169975A (zh) * 2017-03-27 2017-09-15 中国科学院深圳先进技术研究院 Ultrasound image analysis method and apparatus
CN107016683A (zh) * 2017-04-07 2017-08-04 衢州学院 Level set hippocampus image segmentation method based on region-growing initialization
CN107330897A (zh) * 2017-06-01 2017-11-07 福建师范大学 Image segmentation method and system
CN107909589A (zh) * 2017-11-01 2018-04-13 浙江工业大学 Tooth image segmentation method combining the C-V level set and the GrabCut algorithm

Also Published As

Publication number Publication date
US20210330182A1 (en) 2021-10-28
CN111093525A (zh) 2020-05-01

Similar Documents

Publication Publication Date Title
Zahoor et al. Fast optic disc segmentation in retina using polar transform
Yin et al. User-guided segmentation for volumetric retinal optical coherence tomography images
Tian et al. Automatic anterior chamber angle assessment for HD-OCT images
Hu et al. Multimodal retinal vessel segmentation from spectral-domain optical coherence tomography and fundus photography
CN108470348A Slit-lamp anterior segment tomographic image feature extraction method
Rabbani et al. Obtaining thickness maps of corneal layers using the optimal algorithm for intracorneal layer segmentation
Zhang et al. A novel technique for robust and fast segmentation of corneal layer interfaces based on spectral-domain optical coherence tomography imaging
CN112384127A Eyelid ptosis detection method and system
WO2020211173A1 Machine-vision-based image feature extraction method for anterior segment tomographic images
WO2020029064A1 Optical coherence tomography image processing method
Chen et al. Automatic segmentation of fluid-associated abnormalities and pigment epithelial detachment in retinal SD-OCT images
Ali et al. Cost-effective broad learning-based ultrasound biomicroscopy with 3D reconstruction for ocular anterior segmentation
Maqsood et al. Detection of macula and recognition of aged-related macular degeneration in retinal fundus images
Lee et al. Screening glaucoma with red-free fundus photography using deep learning classifier and polar transformation
Marin et al. Segmentation of anterior segment boundaries in swept source OCT images
Ramaswamy et al. A study and comparison of automated techniques for exudate detection using digital fundus images of human eye: a review for early identification of diabetic retinopathy
WO2023103609A1 Eye movement tracking method, apparatus, device, and storage medium for anterior segment OCTA
Septiarini et al. Peripapillary atrophy detection in fundus images based on sectors with scan lines approach
Aloudat et al. High intraocular pressure detection from frontal eye images: a machine learning based approach
Pham et al. Deep learning algorithms to isolate and quantify the structures of the anterior segment in optical coherence tomography images
Radha et al. Identification of retinal image features using bitplane separation and mathematical morphology
Syga et al. A fully automated 3D in-vivo delineation and shape parameterization of the human lamina cribrosa in optical coherence tomography
Lee et al. 3-D segmentation of the rim and cup in spectral-domain optical coherence tomography volumes of the optic nerve head
Tulasigeri et al. An advanced thresholding algorithm for diagnosis of glaucoma in fundus images
Patankar et al. Diagnosis of Ophthalmic Diseases in Fundus Image Using various Machine Learning Techniques

Legal Events

Date Code Title Description
121 Ep: the epo has been informed by wipo that ep was designated in this application

Ref document number: 18929734

Country of ref document: EP

Kind code of ref document: A1

NENP Non-entry into the national phase

Ref country code: DE

122 Ep: pct application non-entry in european phase

Ref document number: 18929734

Country of ref document: EP

Kind code of ref document: A1