WO2023103609A1 - Eye-tracking method, apparatus, device and storage medium for anterior segment OCTA - Google Patents

Eye-tracking method, apparatus, device and storage medium for anterior segment OCTA

Info

Publication number
WO2023103609A1
WO2023103609A1 PCT/CN2022/126616 CN2022126616W WO2023103609A1 WO 2023103609 A1 WO2023103609 A1 WO 2023103609A1 CN 2022126616 W CN2022126616 W CN 2022126616W WO 2023103609 A1 WO2023103609 A1 WO 2023103609A1
Authority
WO
WIPO (PCT)
Prior art keywords
pupil
contour
images
contours
octa
Prior art date
Application number
PCT/CN2022/126616
Other languages
English (en)
French (fr)
Inventor
尚学森
汪霄
Original Assignee
图湃(北京)医疗科技有限公司
Priority date
Filing date
Publication date
Application filed by 图湃(北京)医疗科技有限公司
Publication of WO2023103609A1

Links

Images

Classifications

    • GPHYSICS
    • G06COMPUTING; CALCULATING OR COUNTING
    • G06FELECTRIC DIGITAL DATA PROCESSING
    • G06F18/00Pattern recognition
    • G06F18/20Analysing
    • G06F18/22Matching criteria, e.g. proximity measures

Definitions

  • The present disclosure relates to the technical field of eye tracking, and in particular to an eye-tracking method, apparatus, device and storage medium for anterior segment optical coherence tomography angiography (OCTA).
  • OCTA was originally applied to the posterior segment of the eye, i.e. the fundus. It is a new, non-invasive fundus imaging technique that can identify retinal and choroidal blood-flow information with high resolution and image the retinal and choroidal microvascular circulation in living tissue. It offers unique advantages in assessing normal and pathological retinal and choroidal vascular changes, in follow-up during disease management, and in monitoring treatment effect.
  • OCTA is not suitable for all patients: when a patient has poor fixation, blinks frequently, or moves the eye, the resulting OCTA images are less accurate.
  • The present disclosure provides an eye-tracking method, apparatus, device and storage medium for anterior segment OCTA, which can mitigate the problem that OCTA is not suitable for all patients, in particular that OCTA images become inaccurate when the patient has poor fixation, blinks frequently, or moves the eye.
  • In a first aspect, an eye-tracking method for anterior segment OCTA is provided, comprising: acquiring two consecutive frames of pupil images; performing contour extraction on the two pupil images respectively to obtain two corresponding pupil contours; determining whether the two pupil contours are similar to a reference contour; and, in response to the two pupil contours being similar to the reference contour, calculating the offset of the center position of one of the two pupil contours relative to the other pupil contour.
  • In a second aspect, an eye-tracking apparatus for anterior segment OCTA is provided, comprising:
  • an acquisition module configured to acquire two consecutive frames of pupil images;
  • an extraction module configured to perform contour extraction on the two pupil images respectively to obtain two corresponding pupil contours;
  • a determination module configured to determine whether the two pupil contours are similar to a reference contour;
  • a calculation module configured to calculate, in response to the two pupil contours being similar to the reference contour, the offset of the center position of one of the two pupil contours relative to the other pupil contour.
  • In a third aspect, an electronic device is provided, comprising a memory and a processor, wherein a computer program is stored in the memory and the processor, when executing the computer program, implements the eye-tracking method for anterior segment OCTA described in the embodiments of the present disclosure.
  • In a fourth aspect, a computer-readable storage medium is provided, on which a computer program is stored; when the computer program is executed by a processor, the eye-tracking method for anterior segment OCTA described in the embodiments of the present disclosure is implemented.
  • Fig. 1 is a pupil image captured under normal conditions, provided by an embodiment of the present disclosure
  • Fig. 2 is a flowchart of an eye tracking method for anterior segment OCTA provided by an embodiment of the present disclosure
  • Fig. 3 is a schematic diagram of a pupil contour extraction process provided by an embodiment of the present disclosure
  • Fig. 4 is a schematic diagram of a pupil contour dissimilar to a reference contour provided by an embodiment of the present disclosure
  • Fig. 5 is a schematic diagram of a pupil contour similar to a reference contour provided by an embodiment of the present disclosure
  • Fig. 6 is a schematic structural diagram of an eye-tracking device for anterior segment OCTA provided by an embodiment of the present disclosure
  • Fig. 7 is a schematic structural diagram of an electronic device provided by an embodiment of the present disclosure.
  • the eye-tracking method for anterior segment OCTA provided by the embodiments of the present disclosure can be applied in the technical field of eye-tracking.
  • OCTA detects the movement of red blood cells in the vessel lumen by measuring the OCT signal changes obtained from multiple scans of the same cross-section and, after merging the information of consecutive cross-sectional (en face) OCT images, yields a complete three-dimensional vascular image of the retina and choroid.
  • En face OCT is a transverse tomographic imaging technique obtained by software processing on the basis of traditional high-density B-scan images.
  • OCTA is not suitable for all patients. Only when the patient has better visual fixation and clear refractive media can OCTA images with better blood flow continuity and higher scanning signal quality be obtained.
  • The time required for a single OCTA blood-flow imaging scan depends on the scan range and the frequency of the light source. When the scan range is large and the demands on the light-source frequency are high, the OCTA imaging time is longer, which makes poor fixation, frequent blinking, or eye movement more likely and in turn leads to weak OCTA scan signals and poor image quality.
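  • As a rough, purely illustrative calculation of why long acquisitions invite fixation loss, the sketch below estimates the scan time from an assumed A-scan rate, scan density and repeat count; none of these numbers come from the patent.

```python
# Illustrative only: a rough OCTA acquisition-time estimate with assumed numbers,
# not figures taken from the patent.
a_scans_per_b_scan = 500       # lateral samples in one cross-section
b_scan_positions = 500         # number of cross-sections covering the scan range
repeats_per_position = 4       # repeated B-scans needed to detect flow
a_scan_rate_hz = 100_000       # light-source / line rate

scan_time_s = a_scans_per_b_scan * b_scan_positions * repeats_per_position / a_scan_rate_hz
print(f"approximate scan time: {scan_time_s:.1f} s")  # 10.0 s with these assumptions
```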
  • Fig. 1 shows a pupil image captured under normal conditions, provided by an embodiment of the present disclosure.
  • Referring to Fig. 1, in the related art the usual pupil-identification strategy includes steps such as image filtering, image binarization, edge detection and ellipse fitting.
  • For blinking, i.e. when part or all of the pupil is covered by the eyelid or eyelashes, the covered part is generally excluded and only the valid region is used for ellipse fitting.
  • In the ellipse-fitting procedure, the region of interest is first extracted from the image with a mask; a threshold obtained from the histogram is then used to binarize the image; an edge-following algorithm extracts the contour from the binarized image; and finally the extracted contour is fitted to an ellipse. If the error between the fitted ellipse and the original contour exceeds a threshold, a random sample consensus algorithm is used to discard the outliers.
  • In most related schemes, the position of the pupil is obtained by fitting an ellipse, which has several disadvantages. For example, during an ophthalmic examination the patient's pupil may already be deformed, and fitting it with a preset shape (such as an ellipse) then introduces a large error. Moreover, ellipse fitting itself also introduces error.
  • embodiments of the present disclosure provide an eye tracking method for anterior segment OCTA.
  • the eye tracking method for anterior segment OCTA can be performed by an electronic device.
  • Fig. 2 is a flowchart of an eye tracking method for anterior segment OCTA provided by an embodiment of the present disclosure.
  • the eye tracking method for anterior segment OCTA in this embodiment includes:
  • Operation 201: acquire two consecutive frames of pupil images.
  • the two consecutive frames of pupil images are two consecutive frames of pupil images arbitrarily selected from the normally captured pupil images when performing OCTA image recognition on the patient.
  • The number of image frames acquired is determined by the resolution of the optical diagnostic imaging device; typically 30, 60 or 120 frames are acquired per second, so over a period of time the device acquires multiple frames of images.
  • Optical diagnostic imaging devices include OCT imaging devices and pupil-camera imaging devices.
  • For convenience of description, any pair of consecutive pupil images from the multi-frame sequence is selected as the two consecutive frames of pupil images discussed in the following embodiment.
  • two consecutive frames of pupil images will be used for subsequent contour extraction to improve the accuracy of judging whether the patient blinks or not.
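  • As a minimal illustration of how such consecutive pairs can be drawn from the camera stream, the helper below simply yields adjacent frames two at a time; it is a convenience sketch, not part of the patented method.

```python
def consecutive_pairs(frames):
    """Yield (frame_k, frame_k+1) pairs from a stream of pupil images,
    e.g. 30/60/120 frames per second from a pupil camera or OCT device."""
    it = iter(frames)
    prev = next(it)
    for cur in it:
        yield prev, cur
        prev = cur
```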
  • Operation 202: perform contour extraction on the two pupil images respectively to obtain two corresponding pupil contours.
  • the contours of the pupils in the two frames of pupil maps are respectively extracted through the polar coordinate transformation and the shortest path algorithm to obtain two pupil contours.
  • Fig. 3 is a schematic diagram of a pupil contour extraction process provided by an embodiment of the present disclosure.
  • Referring to Fig. 3, contour extraction for one of the two pupil images proceeds as follows: a polar-coordinate transformation is applied to the normally captured pupil image (a) to obtain the transformed pupil image (b); the pupil boundary is extracted from the transformed image (b) with a shortest-path algorithm, yielding the boundary image (c); an inverse polar-coordinate transformation is then applied to the boundary in (c) to obtain the pupil image (d) containing the pupil contour.
  • The pupil contour shown in image (d) is a closed loop within the pupil located at a certain distance from the image center.
  • Extracting the pupil contours from the two pupil images with a polar-coordinate transformation and a shortest-path algorithm avoids the image binarization used in the related art, where the binarization threshold is difficult to determine and the region identified in the binarized image is therefore not necessarily the pupil.
  • In some embodiments, contour extraction for one pupil image in operation 202 includes operations A1 to A3.
  • Operation A1: perform a polar-coordinate transformation on the pupil image to obtain the transformed pupil image.
  • Operation A2: extract the pupil boundary from the transformed pupil image based on a shortest-path algorithm.
  • Operation A3: apply the inverse polar-coordinate transformation to the boundary to obtain the pupil contour.
  • polar coordinate transformation is used to detect closed contours in an image.
  • A Cartesian coordinate system xoy is established in the pupil image, and the image center (x_0, y_0) is generally selected as the transformation center; any point (x, y) in the xoy plane is mapped, about (x_0, y_0), to the polar coordinates (θ, r).
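  • The Python sketch below illustrates one way to implement this polar-coordinate transformation and its inverse about the image center; the use of cv2.warpPolar and the choice of output size are implementation assumptions, since the patent only specifies the transformation itself.

```python
import cv2
import numpy as np

def to_polar(img: np.ndarray) -> np.ndarray:
    """Operation A1: unwrap a pupil image about its center (x_0, y_0) into (theta, r) space.
    Rows of the result correspond to angles, columns to radii."""
    h, w = img.shape[:2]
    center = (w / 2.0, h / 2.0)           # (x_0, y_0): the image center
    max_radius = np.hypot(w, h) / 2.0
    return cv2.warpPolar(img, (int(max_radius), 360), center, max_radius,
                         cv2.INTER_LINEAR + cv2.WARP_POLAR_LINEAR)

def from_polar_point(theta: float, r: float, center: tuple) -> tuple:
    """Operation A3 for one boundary point: x = x_0 + r*cos(theta), y = y_0 + r*sin(theta)."""
    x0, y0 = center
    return (x0 + r * np.cos(theta), y0 + r * np.sin(theta))
```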
  • Shortest-path algorithms include depth-first and breadth-first search, the Floyd algorithm, Dijkstra's algorithm, and the Bellman-Ford algorithm.
  • Dijkstra's algorithm is used to find the pupil boundary in polar coordinates.
  • the steps to find the pupil boundary in polar coordinates using Dijkstra's algorithm are as follows:
  • Operation 1: in the transformed pupil image, maintain an array N and two sets P and Q, where the array N stores the shortest distance from the start point to each vertex, the set P stores the points not yet visited, and the set Q stores the points already visited.
  • Operation 2: choose the start point from the set P, add it to the set Q and delete it from the set P; then add to the array N the distances to the points adjacent to the start point, with the distance to non-adjacent points represented by infinity.
  • Operation 3: select the point M closest to the set Q (i.e. among the unvisited points connected to the visited points, the one reached by the edge of smallest weight), add it to the set Q and delete it from the set P.
  • Operation 4: find a point C adjacent to M and compare the distance to C stored in the array N with the distance from the start point to C via M. If the distance via M is smaller, update the array N; otherwise continue with the next point adjacent to M. Repeat operation 4 until all points adjacent to M have been traversed. Operations 3 and 4 are repeated until the set P is empty; the array N then defines the pupil boundary in polar coordinates.
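  • A compact sketch of such a shortest-path search on the polar image is given below; expressing the search as a row-by-row Dijkstra pass and using an inverse-gradient cost are assumptions made for illustration, since the patent does not fix the graph structure or the edge weights.

```python
import heapq
import numpy as np

def pupil_boundary_polar(cost: np.ndarray) -> np.ndarray:
    """Dijkstra-style shortest path across the polar image, one angle row at a time.

    `cost` is an (n_theta, n_r) array where low values mark likely pupil-edge
    pixels (e.g. inverse gradient magnitude); this cost choice is an assumption.
    Returns, for every angle row, the radius index of the boundary.
    """
    n_theta, n_r = cost.shape
    dist = np.full((n_theta, n_r), np.inf)      # plays the role of the array N
    prev = np.full((n_theta, n_r), -1, dtype=np.int64)
    pq = []
    for r in range(n_r):                        # any radius may start the path at theta = 0
        dist[0, r] = cost[0, r]
        heapq.heappush(pq, (dist[0, r], 0, r))
    while pq:                                   # popped nodes correspond to the visited set Q
        d, t, r = heapq.heappop(pq)
        if d > dist[t, r] or t == n_theta - 1:
            continue
        for dr in (-1, 0, 1):                   # neighbours in the next angle row
            nr = r + dr
            if 0 <= nr < n_r:
                nd = d + cost[t + 1, nr]
                if nd < dist[t + 1, nr]:        # relax: shorter path via the current point
                    dist[t + 1, nr] = nd
                    prev[t + 1, nr] = r
                    heapq.heappush(pq, (nd, t + 1, nr))
    # Trace back from the cheapest endpoint in the last angle row.
    boundary = np.empty(n_theta, dtype=np.int64)
    boundary[-1] = int(np.argmin(dist[-1]))
    for t in range(n_theta - 1, 0, -1):
        boundary[t - 1] = prev[t, boundary[t]]
    return boundary
```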
  • The found pupil boundary is converted from polar coordinates back to Cartesian coordinates using x = x_0 + r·cosθ and y = y_0 + r·sinθ.
  • The pupil contour in the Cartesian coordinate system shown in image (d) is then represented as S = {(x_i, y_i)}, i = 0, 1, 2, …, n, where n is the number of points in the contour.
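  • Putting operations A1 to A3 together, a sketch of the full extraction of the Cartesian contour S from one pupil image might look as follows; it reuses the to_polar, pupil_boundary_polar and from_polar_point helpers sketched above, and the gradient-based cost image is an assumption rather than something specified in the patent.

```python
import cv2
import numpy as np

def extract_pupil_contour(img: np.ndarray) -> np.ndarray:
    """Operations A1-A3 end to end: polar transform, shortest-path boundary,
    inverse transform back to a Cartesian contour S = {(x_i, y_i)}."""
    if img.ndim == 3:
        img = cv2.cvtColor(img, cv2.COLOR_BGR2GRAY)
    h, w = img.shape
    center = (w / 2.0, h / 2.0)
    max_radius = np.hypot(w, h) / 2.0
    polar = to_polar(img).astype(np.float32)                    # A1
    grad = np.abs(np.diff(polar, axis=1, prepend=polar[:, :1]))
    cost = 1.0 / (grad + 1e-6)                                  # strong radial edges are cheap to follow
    boundary_cols = pupil_boundary_polar(cost)                  # A2
    n_theta, n_r = polar.shape
    thetas = np.linspace(0.0, 2.0 * np.pi, n_theta, endpoint=False)
    radii = boundary_cols * (max_radius / n_r)
    pts = [from_polar_point(t, r, center) for t, r in zip(thetas, radii)]
    return np.asarray(pts)                                      # A3: contour points (x_i, y_i)
```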
  • Operation 203: determine whether the two pupil contours are similar to a reference contour.
  • The reference contour is the pupil contour extracted from a pupil reference image. The reference image shows the same eye of the same patient as the two acquired consecutive pupil images, and is a normally captured image in which the pupil is not occluded.
  • By comparing the similarity of a pupil contour with the reference contour, the state of the eye can be judged, enabling better tracking.
  • The similarity comparison is not affected by the size of the contour; it depends only on its shape.
  • Consequently, fluctuations in pupil size caused by uneven incident light can be ignored.
  • In some embodiments, the reference contour in operation 203 may be obtained through operations B1 to B4.
  • Operation B1: obtain a pupil reference image, i.e. a reference image in which the pupil is not occluded.
  • Operation B2: perform a polar-coordinate transformation on the pupil reference image to obtain the transformed reference image.
  • Operation B3: extract the pupil boundary from the transformed reference image based on a shortest-path algorithm.
  • Operation B4: apply the inverse polar-coordinate transformation to the boundary to obtain the reference contour of the pupil.
  • Extraction of the reference contour from the pupil reference image follows the same process as the contour extraction for one pupil image in operation 202; see operations A1 to A3, which are not repeated here.
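  • In code, operations B1 to B4 amount to running the same extraction pipeline on an unoccluded reference image, as in the trivial sketch below; extract_contour stands for the extraction sketched above and is passed in as a placeholder.

```python
def build_reference_contour(reference_image, extract_contour):
    """Operations B1-B4: the reference contour is the pupil contour extracted,
    with the same polar-transform + shortest-path pipeline (operations A1-A3),
    from a reference image in which the pupil is not occluded."""
    return extract_contour(reference_image)
```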
  • Fig. 4 is a schematic diagram of a pupil profile dissimilar to a reference profile provided by an embodiment of the present disclosure.
  • Referring to Fig. 4, the reference contour in the pupil reference image (D) is similar to neither of the two pupil contours (d1) and (d2).
  • In other words, while the two consecutive pupil images were being acquired, the patient blinked or the pupil was covered by the eyelid or eyelashes, i.e. the pupil was occluded.
  • Fig. 5 is a schematic diagram of a pupil contour similar to a reference contour provided by an embodiment of the present disclosure.
  • Referring to Fig. 5, for one of the pupil images (d3), the reference contour in the pupil reference image (D) is similar to the pupil contour in image (d3).
  • In other words, while that pupil image was being acquired, the patient did not blink and the pupil was not covered by the eyelid or eyelashes, i.e. the pupil was not occluded.
  • If the distance between the reference contour in the reference image (D) and the pupil contour in a pupil image is within a preset range, the pupil contour is considered similar to the reference contour, and it can then be concluded that the patient's pupil was not occluded when that image was captured.
  • In some embodiments, operation 203 includes operations C1 and C2.
  • Operation C1: calculate the distance between the pupil contour and the reference contour.
  • Operation C2: determine, from a preset distance range and the calculated distance, whether the pupil contour is similar to the reference contour.
  • In the present disclosure, the distance between the pupil contour and the reference contour is computed from image moments (Hu moments).
  • Hu moments are image features that are invariant to translation, rotation and scale.
  • The Hu-moment calculation involves ordinary moments, central moments and normalized central moments.
  • In an embodiment, normalized central moments are used to calculate the distance between the pupil contour and the reference contour.
  • A normalized central moment is a linear combination of normalized moments and remains invariant under image rotation, translation, scaling and similar operations, so normalized central moments are often used to characterize image features.
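  • For illustration, the sketch below computes the log-transformed Hu moments of a contour with OpenCV; the log transform m_i = sign(h_i)·log10|h_i| is the usual scaling step, while the exact distance formula of the patent is not reproduced here.

```python
import cv2
import numpy as np

def log_hu_moments(contour: np.ndarray) -> np.ndarray:
    """Seven Hu moments of a contour (built on normalized central moments),
    log-transformed so their magnitudes become comparable."""
    pts = contour.astype(np.float32).reshape(-1, 1, 2)
    h = cv2.HuMoments(cv2.moments(pts)).flatten()
    return np.sign(h) * np.log10(np.abs(h) + 1e-30)
```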
  • In these calculations, A denotes the pupil contour, B denotes the reference contour, and D(A, B) denotes the distance between them, computed from the log-transformed Hu moments of contours A and B.
  • The distance between the pupil contour and the reference contour can also be calculated with an alternative formula over the same log-transformed Hu moments, with A, B and D(A, B) defined as above.
  • The preset distance range is a manually specified range of distance values and can be set according to the needs of the application.
  • When the calculated distance falls within the preset distance range, the pupil contour is similar to the reference contour.
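  • A minimal similarity check in this spirit can be built on cv2.matchShapes, which compares two contours through their log-transformed Hu moments; the particular matching mode and the threshold value below are illustrative assumptions, not the patent's own formula or figure.

```python
import cv2
import numpy as np

def contours_similar(pupil_contour: np.ndarray, reference_contour: np.ndarray,
                     max_distance: float = 0.1) -> bool:
    """Operations C1-C2: Hu-moment distance between the two contours, then a
    threshold test. Because Hu moments are translation-, rotation- and
    scale-invariant, the decision depends only on contour shape, not pupil size."""
    a = pupil_contour.astype(np.float32).reshape(-1, 1, 2)
    b = reference_contour.astype(np.float32).reshape(-1, 1, 2)
    d = cv2.matchShapes(a, b, cv2.CONTOURS_MATCH_I1, 0.0)
    return d <= max_distance
```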
  • Operation 204: in response to the two pupil contours being similar to the reference contour, calculate the offset of the center position of one of the two pupil contours relative to the other pupil contour.
  • When the two pupil contours of the two consecutive pupil images are both determined to be similar to the reference contour, that is, when both consecutive pupil images are normal, the offset of the center of one of the two pupil contours relative to the center of the other is calculated.
  • The center-position offset of one pupil contour relative to the other is calculated as Δx = x_B − x_A and Δy = y_B − y_A, where (x_A, y_A) and (x_B, y_B) denote the two pupil center positions and (Δx, Δy) denotes the offset of the center of one pupil contour relative to the other.
  • In some embodiments, operation 204 includes operations D1 and D2.
  • Operation D1: obtain the centroid of each pupil contour.
  • Operation D2: calculate the center-position offset from the two centroids.
  • The pupil center position is obtained by calculating the centroid of the contour.
  • The pupil center (i.e. the centroid of the pupil contour) can be computed from the points on the contour. Taking (x_A, y_A) as an example, the pupil center is calculated from the contour points, where (x_A, y_A) denotes the coordinates of the pupil center (the centroid of the pupil contour) and (x_i, y_i) denotes the coordinates of the i-th point on the contour.
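  • A sketch of operations D1 and D2 is given below; taking the centroid as the plain average of the contour points is an assumption consistent with the text, which computes the center directly from the contour rather than by ellipse fitting.

```python
import numpy as np

def pupil_center(contour: np.ndarray) -> tuple:
    """Operation D1: centroid (x_A, y_A) of a pupil contour given as an (n, 2)
    array of Cartesian points (x_i, y_i), taken here as the mean of the points."""
    return float(contour[:, 0].mean()), float(contour[:, 1].mean())

def center_offset(contour_a: np.ndarray, contour_b: np.ndarray) -> tuple:
    """Operation D2: offset (dx, dy) = (x_B - x_A, y_B - y_A) between the two centroids."""
    x_a, y_a = pupil_center(contour_a)
    x_b, y_b = pupil_center(contour_b)
    return x_b - x_a, y_b - y_a
```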
  • Computing the pupil center directly, instead of fitting it with an ellipse-fitting algorithm, avoids the error introduced by the ellipse-fitting algorithm itself, and avoids the large error that arises when a preset shape (such as an ellipse) is fitted to a pupil that may already be deformed; it thereby avoids inaccurate pupil identification.
  • Operations 201 to 204 can be summarized as follows. Pupil images are captured continuously, one image per shot. After the first image of a consecutive pair is captured and its contour extracted, that contour is compared with the reference contour. If it is similar, capture continues and the second image of the pair is acquired; its contour is likewise extracted and compared with the reference contour. If the contours of both images are similar to the reference contour, the two images are confirmed valid, and the offset between the centers of the two pupil contours can be calculated.
  • In some embodiments, the method further includes operation 205.
  • Operation 205: in response to the two pupil contours not being similar to the reference contour, reacquire two consecutive frames of pupil images, and repeat the operations of performing contour extraction on the two consecutive pupil images to obtain two corresponding pupil contours and of determining whether the two pupil contours are similar to the reference contour; the reacquired pair of consecutive pupil images is adjacent to the pair obtained in operation 201.
  • If the two pupil contours are not similar to the reference contour, the patient blinked or the pupil was covered by the eyelid or eyelashes, i.e. the patient's pupil was occluded. When the pupil is occluded, the results subsequently obtained with the OCTA algorithm are also inaccurate.
  • To obtain a more accurate OCTA image, two consecutive pupil images adjacent to those obtained in operation 201 are reacquired, and the operations of performing contour extraction on the reacquired pair and of determining whether the two contours are similar to the reference contour are repeated until a sufficiently accurate OCTA image is obtained.
  • Depending on the needs of the application, every subsequently captured image is processed with this method in order to improve the accuracy of the OCTA image.
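  • The sketch below ties operations 201 to 205 together as a simple driver loop; acquire_frame, extract_contour, similar and offset stand for the device capture and the helpers sketched above and are placeholders, not interfaces defined by the patent.

```python
def track_eye_motion(acquire_frame, extract_contour, similar, offset, reference_contour):
    """Keep acquiring consecutive frame pairs until both pupil contours match the
    reference contour (no blink / occlusion), then return the center offset
    between them for use by the downstream OCTA algorithm."""
    while True:
        contour_1 = extract_contour(acquire_frame())
        if not similar(contour_1, reference_contour):
            continue                      # pupil occluded: discard and retry (operation 205)
        contour_2 = extract_contour(acquire_frame())
        if not similar(contour_2, reference_contour):
            continue                      # second frame invalid: restart with a new pair
        return offset(contour_1, contour_2)
```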
  • In the eye-tracking method for anterior segment OCTA provided by the embodiments of the present disclosure, two consecutive frames of pupil images are acquired, and contour extraction is performed on the two images to obtain two corresponding pupil contours; the two contours are then compared with the reference contour. If both are similar to the reference contour, it is determined that the patient's pupil was not occluded when the two images were captured, and the offset of the center of one pupil contour relative to the other is calculated for use in the subsequent OCTA algorithm, so that a more accurate OCTA image can be obtained. This alleviates the problem in the related art that OCTA is not suitable for all patients, in particular that OCTA images become inaccurate when the patient has poor fixation, blinks frequently, or moves the eye.
  • Fig. 6 is a schematic structural diagram of an eye-tracking device for anterior segment OCTA provided by an embodiment of the present disclosure.
  • the eye tracking device for anterior segment OCTA includes an acquisition module 601 , an extraction module 602 , a determination module 603 and a calculation module 604 .
  • The acquisition module 601 is configured to acquire two consecutive frames of pupil images; the extraction module 602 is configured to perform contour extraction on the two pupil images respectively to obtain two corresponding pupil contours; the determination module 603 is configured to determine whether the two pupil contours are similar to a reference contour; and the calculation module 604 is configured to calculate, in response to the two pupil contours being similar to the reference contour, the offset of the center position of one pupil contour relative to the other.
  • the eye tracking device for anterior segment OCTA further includes:
  • The reacquisition module 605 is configured to reacquire two consecutive frames of pupil images in response to the two pupil contours not being similar to the reference contour, and to perform the operations of contour extraction on the reacquired pair of consecutive pupil images to obtain two corresponding pupil contours and of determining whether the two pupil contours are similar to the reference contour; the reacquired pair of consecutive pupil images is adjacent to the previously acquired pair.
  • Fig. 7 is a schematic structural diagram of an electronic device provided by an embodiment of the present disclosure.
  • The electronic device 700 shown in FIG. 7 includes a processor 701 and a memory 703, with the processor 701 connected to the memory 703.
  • In an embodiment, the electronic device 700 may further include a transceiver 704. In practical applications the number of transceivers 704 is not limited to one, and the structure of the electronic device 700 does not limit the embodiments of the present disclosure.
  • The processor 701 may be a central processing unit (CPU), a general-purpose processor, a digital signal processor (DSP), an application-specific integrated circuit (ASIC), a field-programmable gate array (FPGA) or another programmable logic device, a transistor logic device, a hardware component, or any combination thereof, and may implement or execute the various illustrative logical blocks, modules and circuits described in connection with the present disclosure.
  • the processor 701 may also be a combination that implements computing functions, such as a combination of one or more microprocessors, a combination of a DSP and a microprocessor, and the like.
  • Bus 702 may include a path for communicating information between the components described above.
  • the bus 702 may be a Peripheral Component Interconnect (PCI) bus or an Extended Industry Standard Architecture (Extended Industry Standard Architecture, EISA) bus or the like.
  • the bus 702 can be divided into address bus, data bus, control bus and so on. For ease of representation, only one thick line is used in FIG. 7 , but it does not mean that there is only one bus or one type of bus.
  • The memory 703 may be a read-only memory (ROM) or another type of static storage device capable of storing static information and instructions, a random access memory (RAM) or another type of dynamic storage device capable of storing information and instructions, an electrically erasable programmable read-only memory (EEPROM), a compact disc read-only memory (CD-ROM) or other optical disc storage (including compact discs, laser discs, digital versatile discs, Blu-ray discs, etc.), a magnetic disk storage medium or other magnetic storage device, or any other medium that can be used to carry or store desired program code in the form of instructions or data structures and that can be accessed by a computer, but is not limited thereto.
  • the memory 703 is configured to store application program codes for implementing the solutions of the present disclosure, and the execution is controlled by the processor 701 .
  • the processor 701 is configured to execute the application program code stored in the memory 703, so as to realize the contents shown in the foregoing method embodiments.
  • The electronic device 700 includes, but is not limited to, mobile terminals such as mobile phones, notebook computers, digital broadcast receivers, personal digital assistants (PDAs), tablet computers (PADs), portable multimedia players (PMPs) and vehicle-mounted terminals (e.g. vehicle-mounted navigation terminals), as well as fixed terminals such as digital televisions (TVs) and desktop computers.
  • the electronic device 700 shown in FIG. 7 is only an example, and should not limit the functions and application scope of the embodiments of the present disclosure.
  • Embodiments of the present disclosure provide a computer-readable storage medium, on which a computer program is stored, and when the computer program is run on a computer, the computer can execute the corresponding content in the foregoing method embodiments.
  • Two consecutive frames of pupil images are acquired, contour extraction is performed on the two images to obtain two corresponding pupil contours, and the two contours are compared with the reference contour. If both are similar to the reference contour, it is determined that the patient's pupil was not occluded when the two images were captured, and the offset of the center of one pupil contour relative to the other is calculated for use in the subsequent OCTA algorithm, so that a more accurate OCTA image can be obtained.
  • This alleviates the problem that OCTA is not suitable for all patients, in particular that OCTA images become inaccurate when the patient has poor fixation, blinks frequently, or moves the eye; it improves the adaptability of OCTA and excludes the use of pupil images captured while fixation was poor or the eye was blinking or moving, thereby improving the accuracy of the OCTA image.

Landscapes

  • Engineering & Computer Science (AREA)
  • Data Mining & Analysis (AREA)
  • Theoretical Computer Science (AREA)
  • Computer Vision & Pattern Recognition (AREA)
  • Bioinformatics & Cheminformatics (AREA)
  • Bioinformatics & Computational Biology (AREA)
  • Artificial Intelligence (AREA)
  • Evolutionary Biology (AREA)
  • Evolutionary Computation (AREA)
  • Physics & Mathematics (AREA)
  • General Engineering & Computer Science (AREA)
  • General Physics & Mathematics (AREA)
  • Life Sciences & Earth Sciences (AREA)
  • Eye Examination Apparatus (AREA)

Abstract

The present disclosure provides an eye-tracking method, apparatus, device and storage medium for anterior segment OCTA. The eye-tracking method for anterior segment OCTA includes: acquiring two consecutive frames of pupil images; performing contour extraction on the two pupil images respectively to obtain two corresponding pupil contours; determining whether the two pupil contours are similar to a reference contour; and, in response to the two pupil contours being similar to the reference contour, calculating the offset of the center position of one of the two pupil contours relative to the other pupil contour.

Description

Eye-tracking method, apparatus, device and storage medium for anterior segment OCTA
This application claims priority to the Chinese patent application No. 202111489187.2 filed with the China Patent Office on 7 December 2021, the entire contents of which are incorporated herein by reference.
Technical Field
The present disclosure relates to the technical field of eye tracking, and in particular to an eye-tracking method, apparatus, device and storage medium for anterior segment optical coherence tomography angiography (OCTA).
Background
OCTA was originally applied to the posterior segment of the eye, i.e. the fundus. It is a new, non-invasive fundus imaging technique that can identify retinal and choroidal blood-flow information with high resolution and image the retinal and choroidal microvascular circulation in living tissue. It offers unique advantages in assessing normal and pathological retinal and choroidal vascular changes, in follow-up during disease management, and in monitoring treatment effect. In the anterior segment, the blood-flow signal of the scanned region is obtained from the changes in the optical coherence tomography (OCT) signal over multiple scans of the same cross-section; by scanning multiple cross-sections consecutively, an OCTA image of the scanned anterior-segment region is obtained.
At present OCTA is not suitable for all patients: when the patient has poor fixation, blinks frequently, or moves the eye, the resulting OCTA images are less accurate.
Summary
The present disclosure provides an eye-tracking method, apparatus, device and storage medium for anterior segment OCTA, which can mitigate the problem that OCTA is not suitable for all patients, in particular that OCTA images become inaccurate when the patient has poor fixation, blinks frequently, or moves the eye.
In a first aspect of the present disclosure, an eye-tracking method for anterior segment OCTA is provided, comprising:
acquiring two consecutive frames of pupil images;
performing contour extraction on the two pupil images respectively to obtain two corresponding pupil contours;
determining whether the two pupil contours are similar to a reference contour;
in response to the two pupil contours being similar to the reference contour, calculating the offset of the center position of one of the two pupil contours relative to the other pupil contour.
In a second aspect of the present disclosure, an eye-tracking apparatus for anterior segment OCTA is provided, comprising:
an acquisition module configured to acquire two consecutive frames of pupil images;
an extraction module configured to perform contour extraction on the two pupil images respectively to obtain two corresponding pupil contours;
a determination module configured to determine whether the two pupil contours are similar to a reference contour;
a calculation module configured to calculate, in response to the two pupil contours being similar to the reference contour, the offset of the center position of one of the two pupil contours relative to the other pupil contour.
In a third aspect of the present disclosure, an electronic device is provided, comprising a memory and a processor, wherein a computer program is stored in the memory, and the processor, when executing the computer program, implements the eye-tracking method for anterior segment OCTA described in the embodiments of the present disclosure.
In a fourth aspect of the present disclosure, a computer-readable storage medium is provided, on which a computer program is stored; when executed by a processor, the computer program implements the eye-tracking method for anterior segment OCTA described in the embodiments of the present disclosure.
Brief Description of the Drawings
Fig. 1 is a pupil image captured under normal conditions, provided by an embodiment of the present disclosure;
Fig. 2 is a flowchart of an eye-tracking method for anterior segment OCTA provided by an embodiment of the present disclosure;
Fig. 3 is a schematic diagram of a pupil contour extraction process provided by an embodiment of the present disclosure;
Fig. 4 is a schematic diagram, provided by an embodiment of the present disclosure, of pupil contours that are not similar to the reference contour;
Fig. 5 is a schematic diagram, provided by an embodiment of the present disclosure, of a pupil contour that is similar to the reference contour;
Fig. 6 is a schematic structural diagram of an eye-tracking apparatus for anterior segment OCTA provided by an embodiment of the present disclosure;
Fig. 7 is a schematic structural diagram of an electronic device provided by an embodiment of the present disclosure.
Detailed Description
The technical solutions in the embodiments of the present disclosure are described below with reference to the accompanying drawings.
The eye-tracking method for anterior segment OCTA provided by the embodiments of the present disclosure can be applied in the technical field of eye tracking.
OCTA detects the movement of red blood cells in the vessel lumen by measuring the OCT signal changes obtained from multiple scans of the same cross-section and, after merging the information of consecutive cross-sectional (en face) OCT images, yields a complete three-dimensional vascular image of the retina and choroid. En face OCT is a transverse tomographic imaging technique obtained by software processing on the basis of traditional high-density B-scan images.
OCTA is not suitable for all patients; only when the patient has good fixation and clear refractive media can OCTA images with good blood-flow continuity and high scan-signal quality be obtained. The time required for a single OCTA blood-flow imaging scan depends on the scan range and the frequency of the light source. When the scan range is large and the demands on the light-source frequency are high, the OCTA imaging time is also longer, which makes poor fixation, frequent blinking or eye movement more likely and in turn leads to weak OCTA scan signals and poor image quality.
It is therefore necessary to introduce eye tracking into the scanning process. With eye tracking, the patient's pupil can be located and identified automatically, and blinking and eye movement can be excluded, so that the direction and magnitude of the movement between two consecutive high-quality pupil images can be obtained, making the subsequent OCTA imaging more accurate.
Fig. 1 is a pupil image captured under normal conditions, provided by an embodiment of the present disclosure. Referring to Fig. 1, in the related art the usual pupil-identification strategy includes steps such as image filtering, image binarization, edge detection and ellipse fitting. For blinking, i.e. when part or all of the pupil is covered by the eyelid or eyelashes, the covered part is generally excluded and only the valid region is used for ellipse fitting.
In the ellipse-fitting procedure, the region of interest is first extracted from the image with a mask; a threshold obtained from the histogram is then used to binarize the image; an edge-following algorithm extracts the contour from the binarized image; and finally the extracted contour is fitted to an ellipse. If the error between the fitted ellipse and the original contour exceeds a threshold, a random sample consensus algorithm is used to discard the outliers.
In the related art, most schemes require the image to be binarized, but the threshold required for binarization is difficult to determine. If the image quality is poor or the threshold is chosen badly, the region identified in the binarized image is not necessarily the pupil.
In addition, in most schemes the position of the pupil is obtained by fitting an ellipse, which has several disadvantages. For example, during an ophthalmic examination the patient's pupil may already be deformed, and fitting it with a preset shape (such as an ellipse) then introduces a large error. Moreover, ellipse fitting itself also introduces error.
There is as yet no eye-tracking scheme designed specifically for anterior segment OCTA. Such a customized scheme also needs to identify whether the patient blinks, which most existing schemes do not address.
To solve the above technical problems, the embodiments of the present disclosure provide an eye-tracking method for anterior segment OCTA. In some embodiments, the method may be performed by an electronic device.
Fig. 2 is a flowchart of an eye-tracking method for anterior segment OCTA provided by an embodiment of the present disclosure. Referring to Fig. 2, the eye-tracking method for anterior segment OCTA in this embodiment includes:
Operation 201: acquire two consecutive frames of pupil images.
In the embodiments of the present disclosure, the two consecutive frames of pupil images are any two consecutive frames selected from the normally captured pupil images acquired while performing OCTA image recognition on the patient. The two consecutive frames can be acquired with a device that performs diagnostic imaging based on optical principles.
In the embodiments of the present disclosure, the number of image frames acquired is determined by the resolution of the optical diagnostic imaging device; typically 30, 60 or 120 frames are acquired per second, so over a period of time the device acquires multiple frames of images.
In the embodiments of the present disclosure, optical diagnostic imaging devices include OCT imaging devices and pupil-camera imaging devices.
For convenience of describing the eye-tracking method for anterior segment OCTA, any pair of consecutive pupil images from the multi-frame sequence is selected as the two consecutive frames of pupil images discussed in the following embodiment.
In the embodiments of the present disclosure, the two consecutive frames of pupil images are used for subsequent contour extraction, improving the accuracy with which blinking and similar events are detected.
Operation 202: perform contour extraction on the two pupil images respectively to obtain two corresponding pupil contours.
In the embodiments of the present disclosure, the pupil contours in the two pupil images are extracted respectively by means of a polar-coordinate transformation and a shortest-path algorithm, yielding two pupil contours.
Fig. 3 is a schematic diagram of a pupil contour extraction process provided by an embodiment of the present disclosure. Referring to Fig. 3, contour extraction for one of the two pupil images proceeds as follows: a polar-coordinate transformation is applied to the normally captured pupil image (a) to obtain the transformed pupil image (b); the pupil boundary is extracted from the transformed image (b) with a shortest-path algorithm, yielding the boundary image (c); an inverse polar-coordinate transformation is then applied to the boundary in (c) to obtain the pupil image (d) containing the pupil contour.
The pupil contour shown in image (d) is a closed loop within the pupil located at a certain distance from the image center.
In the embodiments of the present disclosure, extracting the pupil contours in the two pupil images with a polar-coordinate transformation and a shortest-path algorithm avoids the image binarization used in the related art, where the binarization threshold is difficult to determine and the region identified in the binarized image is therefore not necessarily the pupil.
In some embodiments, contour extraction for one pupil image in operation 202 includes operations A1 to A3.
Operation A1: perform a polar-coordinate transformation on the pupil image to obtain the transformed pupil image.
Operation A2: extract the pupil boundary from the transformed pupil image based on a shortest-path algorithm.
Operation A3: apply the inverse polar-coordinate transformation to the boundary to obtain the pupil contour.
In the embodiments of the present disclosure, contour extraction for one pupil image is described as follows.
In the embodiments of the present disclosure, the polar-coordinate transformation is used to detect closed contours in an image. Referring to Fig. 3, a Cartesian coordinate system xoy is established in the pupil image, and the image center (x_0, y_0) is generally selected as the transformation center. Any point (x, y) in the xoy plane is mapped, about the center (x_0, y_0), to the polar coordinates (θ, r) by:
r = √((x − x_0)² + (y − y_0)²)
θ = arctan((y − y_0) / (x − x_0))
This corresponds to the transformation from the normally captured pupil image (a) to the transformed pupil image (b) in Fig. 3.
In the embodiments of the present disclosure, the pupil boundary in polar coordinates is found with a shortest-path algorithm. Shortest-path algorithms include depth-first and breadth-first search, the Floyd algorithm, Dijkstra's algorithm and the Bellman-Ford algorithm.
In one embodiment, Dijkstra's algorithm is used to find the pupil boundary in polar coordinates, as follows:
Operation 1: in the transformed pupil image, maintain an array N and two sets P and Q, where the array N stores the shortest distance from the start point to each vertex, the set P stores the points not yet visited, and the set Q stores the points already visited.
Operation 2: choose the start point from the set P, add it to the set Q and delete it from the set P; then add to the array N the distances to the points adjacent to the start point, with the distance to non-adjacent points represented by infinity.
Operation 3: select the point M closest to the set Q (i.e. among the unvisited points connected to the visited points, the one reached by the edge of smallest weight), add it to the set Q and delete it from the set P.
Operation 4: find a point C adjacent to M and compare the distance to C stored in the array N with the distance from the start point to C via M. If the distance via M is smaller, update the array N; otherwise continue with the next point adjacent to M. Repeat operation 4 until all points adjacent to M have been traversed.
Operations 3 and 4 are repeated until the set P is empty; the array N then defines the pupil boundary in polar coordinates.
This corresponds to the transformation from the transformed pupil image (b) to the boundary image (c) in Fig. 3.
In the embodiments of the present disclosure, the found pupil boundary is converted from polar coordinates to Cartesian coordinates by:
x = x_0 + r·cosθ
y = y_0 + r·sinθ
The pupil contour in the Cartesian coordinate system shown in image (d) of Fig. 3 is then expressed as:
S = {(x_i, y_i)}, i = 0, 1, 2, …, n
where n is the number of points in the contour.
Operation 203: determine whether the two pupil contours are similar to a reference contour.
In the embodiments of the present disclosure, the reference contour is the pupil contour extracted from a pupil reference image. The reference image shows the same eye of the same patient as the two acquired consecutive pupil images, and is a normally captured image of that patient's pupil in which the pupil is not occluded.
In the embodiments of the present disclosure, comparing the similarity between a pupil contour and the reference contour makes it possible to judge the state of the eye and thus achieve better tracking. The similarity comparison is not affected by the size of the contour and depends only on its shape, so pupil-size fluctuations caused by uneven incident light can be ignored.
In some embodiments, the reference contour in operation 203 may be obtained through operations B1 to B4.
Operation B1: obtain a pupil reference image, i.e. a reference image in which the pupil is not occluded.
Operation B2: perform a polar-coordinate transformation on the pupil reference image to obtain the transformed pupil reference image.
Operation B3: extract the pupil boundary from the transformed pupil reference image based on a shortest-path algorithm.
Operation B4: apply the inverse polar-coordinate transformation to the boundary to obtain the reference contour of the pupil.
In the embodiments of the present disclosure, extraction of the reference contour from the pupil reference image follows the same process as the contour extraction for one pupil image in operation 202; see operations A1 to A3, which are not repeated here.
Fig. 4 is a schematic diagram, provided by an embodiment of the present disclosure, of pupil contours that are not similar to the reference contour. Referring to Fig. 4, the reference contour in the pupil reference image (D) is similar to neither of the two pupil contours (d1) and (d2). In other words, while the two consecutive pupil images were being acquired, the patient blinked or the pupil was covered by the eyelid or eyelashes, i.e. the pupil was occluded.
Fig. 5 is a schematic diagram, provided by an embodiment of the present disclosure, of a pupil contour that is similar to the reference contour. Referring to Fig. 5, for one of the pupil images (d3), the reference contour in the pupil reference image (D) is similar to the pupil contour in image (d3). In other words, while that pupil image was being acquired, the patient did not blink and the pupil was not covered by the eyelid or eyelashes, i.e. the pupil was not occluded. If the distance between the reference contour in the reference image (D) and the pupil contour in a pupil image is within a preset range, the pupil contour is considered similar to the reference contour, and it can then be concluded that the patient's pupil was not occluded when that image was captured.
In some embodiments, operation 203 includes operations C1 and C2.
Operation C1: calculate the distance between the pupil contour and the reference contour.
Operation C2: determine, from a preset distance range and the distance calculated in operation C1, whether the pupil contour is similar to the reference contour.
In the present disclosure, the distance between the pupil contour and the reference contour is computed from image moments (Hu moments). Hu moments are image features invariant to translation, rotation and scale; their calculation involves ordinary moments, central moments and normalized central moments.
In one embodiment, normalized central moments are used to calculate the distance between the pupil contour and the reference contour. A normalized central moment is a linear combination of normalized moments and remains invariant under image rotation, translation, scaling and similar operations, so normalized central moments are often used to characterize image features.
The distance D(A, B) between the pupil contour and the reference contour is calculated from the log-transformed Hu moments of the two contours, where A denotes the pupil contour, B denotes the reference contour, D(A, B) denotes the distance between them, and the log-transformed Hu moments are derived from the Hu moments of contours A and B.
In the embodiments of the present disclosure, the distance between the pupil contour and the reference contour may also be calculated with an alternative formula over the same log-transformed Hu moments, with A, B and D(A, B) defined as above.
In the embodiments of the present disclosure, the preset distance range is a manually specified range of distance values and can be set according to the needs of the application. When the calculated distance is within the preset distance range, the pupil contour is similar to the reference contour.
Operation 204: in response to the two pupil contours being similar to the reference contour, calculate the offset of the center position of one of the two pupil contours relative to the other pupil contour.
In the embodiments of the present disclosure, when the two pupil contours of the two consecutive pupil images are both determined to be similar to the reference contour, that is, when both consecutive pupil images are normal, the offset of the center of one of the two pupil contours relative to the center of the other is calculated.
The center-position offset of one pupil contour relative to the other is calculated by:
Δx = x_B − x_A
Δy = y_B − y_A
where (x_A, y_A) and (x_B, y_B) denote the two pupil center positions and (Δx, Δy) denotes the offset of the center of one pupil contour relative to the other.
In some embodiments, operation 204 includes operations D1 and D2.
Operation D1: obtain the centroid of each pupil contour.
Operation D2: calculate the center-position offset from the two centroids.
In the embodiments of the present disclosure, the pupil center position is obtained by calculating the centroid of the contour.
In the embodiments of the present disclosure, the pupil center position (i.e. the centroid of the pupil contour) can be obtained from the points on the contour. Taking the calculation of (x_A, y_A) as an example, the pupil center is computed from the contour points, where (x_A, y_A) denotes the coordinates of the pupil center (the centroid of the pupil contour) and (x_i, y_i) denotes the coordinates of the i-th point on the contour.
In the embodiments of the present disclosure, computing the pupil center directly, instead of fitting it with an ellipse-fitting algorithm, avoids the error introduced by the ellipse-fitting algorithm itself and the large error that arises when a preset shape (such as an ellipse) is fitted to a pupil that may already be deformed, thereby avoiding inaccurate pupil identification.
In the embodiments of the present disclosure, operations 201 to 204 can be summarized as follows.
Pupil images are captured continuously, one image per shot. The first image of a consecutive pair is captured and its contour extracted, and that contour is compared with the reference contour. If it is similar, capture continues and the second image of the pair is acquired; after contour extraction, the contour of the second image is compared with the reference contour. If the contours of both images are similar to the reference contour, the two images are confirmed valid and the offset between the centers of the two pupil contours can be calculated.
In some embodiments, the method further includes operation 205.
Operation 205: in response to the two pupil contours not being similar to the reference contour, reacquire two consecutive frames of pupil images, and perform the operations of performing contour extraction on the two consecutive pupil images to obtain two corresponding pupil contours and of determining whether the two pupil contours are similar to the reference contour; the reacquired pair of consecutive pupil images is adjacent to the pair of consecutive pupil images obtained in operation 201.
In the embodiments of the present disclosure, if the two pupil contours are not similar to the reference contour, the patient blinked or the pupil was covered by the eyelid or eyelashes, i.e. the patient's pupil was occluded. When the patient's pupil is occluded, the results subsequently obtained with the OCTA algorithm are also inaccurate.
In the embodiments of the present disclosure, to obtain a more accurate OCTA image, two consecutive pupil images adjacent to those obtained in operation 201 are reacquired, and the operations of performing contour extraction on the reacquired pair and of determining whether the two contours are similar to the reference contour are repeated until a sufficiently accurate OCTA image is obtained.
In the embodiments of the present disclosure, depending on the needs of the application, every subsequently captured image is processed with this method in order to improve the accuracy of the OCTA image.
With the above technical solution, in the eye-tracking method for anterior segment OCTA provided by the embodiments of the present disclosure, two consecutive frames of pupil images are acquired and contour extraction is performed on the two images to obtain two corresponding pupil contours; the two contours are then compared with the reference contour. If both are similar to the reference contour, it is determined that the patient's pupil was not occluded when the two images were captured, and the offset of the center of one pupil contour relative to the other is calculated for use in the subsequent OCTA algorithm, so that a more accurate OCTA image can be obtained. This alleviates the problem in the related art that OCTA is not suitable for all patients, in particular that OCTA images become inaccurate when the patient has poor fixation, blinks frequently, or moves the eye; it improves the adaptability of OCTA and excludes the use of pupil images captured while the patient's fixation was poor or the eye was blinking or moving, thereby improving the accuracy of the OCTA image.
Although the foregoing method embodiments are described, for simplicity, as a series of combined actions, those skilled in the art should appreciate that the present disclosure is not limited by the described order of actions, since according to the present disclosure some operations may be performed in other orders or simultaneously. Those skilled in the art should also appreciate that the embodiments described in the specification are optional embodiments, and the actions and modules involved are not necessarily required by the present disclosure.
The above describes the method embodiments; the solution of the present disclosure is further described below through apparatus embodiments.
Fig. 6 is a schematic structural diagram of an eye-tracking apparatus for anterior segment OCTA provided by an embodiment of the present disclosure. Referring to Fig. 6, the eye-tracking apparatus for anterior segment OCTA includes an acquisition module 601, an extraction module 602, a determination module 603 and a calculation module 604.
The acquisition module 601 is configured to acquire two consecutive frames of pupil images; the extraction module 602 is configured to perform contour extraction on the two pupil images respectively to obtain two corresponding pupil contours; the determination module 603 is configured to determine whether the two pupil contours are similar to a reference contour; and the calculation module 604 is configured to calculate, in response to the two pupil contours being similar to the reference contour, the offset of the center position of one of the two pupil contours relative to the other pupil contour.
In some embodiments, the eye-tracking apparatus for anterior segment OCTA further includes:
a reacquisition module 605, configured to reacquire two consecutive frames of pupil images in response to the two pupil contours not being similar to the reference contour, and to perform the operations of contour extraction on the reacquired pair of consecutive pupil images to obtain two corresponding pupil contours and of determining whether the two pupil contours are similar to the reference contour; the reacquired pair of consecutive pupil images is adjacent to the previously acquired pair of consecutive pupil images.
For convenience and brevity of description, for the working process of the described modules reference may be made to the corresponding processes in the foregoing method embodiments, which are not repeated here.
Fig. 7 is a schematic structural diagram of an electronic device provided by an embodiment of the present disclosure. As shown in Fig. 7, the electronic device 700 includes a processor 701 and a memory 703, with the processor 701 connected to the memory 703. In one embodiment, the electronic device 700 may further include a transceiver 704. In practical applications the number of transceivers 704 is not limited to one, and the structure of the electronic device 700 does not limit the embodiments of the present disclosure.
The processor 701 may be a central processing unit (CPU), a general-purpose processor, a digital signal processor (DSP), an application-specific integrated circuit (ASIC), a field-programmable gate array (FPGA) or another programmable logic device, a transistor logic device, a hardware component, or any combination thereof, and may implement or execute the various illustrative logical blocks, modules and circuits described in connection with the present disclosure. The processor 701 may also be a combination implementing computing functions, for example a combination of one or more microprocessors, or a combination of a DSP and a microprocessor.
The bus 702 may include a path for transferring information between the above components. The bus 702 may be a Peripheral Component Interconnect (PCI) bus, an Extended Industry Standard Architecture (EISA) bus, or the like, and may be divided into an address bus, a data bus, a control bus and so on. For ease of representation only one thick line is drawn in Fig. 7, but this does not mean that there is only one bus or one type of bus.
The memory 703 may be a read-only memory (ROM) or another type of static storage device capable of storing static information and instructions, a random access memory (RAM) or another type of dynamic storage device capable of storing information and instructions, an electrically erasable programmable read-only memory (EEPROM), a compact disc read-only memory (CD-ROM) or other optical disc storage (including compact discs, laser discs, digital versatile discs, Blu-ray discs, etc.), a magnetic disk storage medium or other magnetic storage device, or any other medium that can be used to carry or store desired program code in the form of instructions or data structures and that can be accessed by a computer, but is not limited thereto.
The memory 703 is configured to store application program code for carrying out the solutions of the present disclosure, and execution is controlled by the processor 701. The processor 701 is configured to execute the application program code stored in the memory 703 to implement what is shown in the foregoing method embodiments.
The electronic device 700 includes, but is not limited to, mobile terminals such as mobile phones, notebook computers, digital broadcast receivers, personal digital assistants (PDAs), tablet computers (PADs), portable multimedia players (PMPs) and vehicle-mounted terminals (e.g. vehicle-mounted navigation terminals), as well as fixed terminals such as digital televisions (TVs) and desktop computers. The electronic device 700 shown in Fig. 7 is merely an example and should not impose any limitation on the functions and scope of use of the embodiments of the present disclosure.
The embodiments of the present disclosure provide a computer-readable storage medium on which a computer program is stored; when run on a computer, the program enables the computer to execute the corresponding content of the foregoing method embodiments. Compared with the related art, in the embodiments of the present disclosure two consecutive frames of pupil images are acquired, contour extraction is performed on the two images to obtain two corresponding pupil contours, and the two contours are compared with the reference contour. If both are similar to the reference contour, it is determined that the patient's pupil was not occluded when the two images were captured, and the offset of the center of one pupil contour relative to the other is calculated for use in the subsequent OCTA algorithm, so that a more accurate OCTA image can be obtained. This alleviates the problem in the related art that OCTA is not suitable for all patients, in particular that OCTA images become inaccurate when the patient has poor fixation, blinks frequently, or moves the eye; it improves the adaptability of OCTA and excludes the use of pupil images captured while fixation was poor or the eye was blinking or moving, thereby improving the accuracy of the OCTA image.
Although the steps in the flowcharts of the drawings are shown in the order indicated by the arrows, they are not necessarily executed in that order. Unless explicitly stated herein, there is no strict ordering constraint on the execution of these steps, and they may be executed in other orders. Moreover, at least some of the steps in the flowcharts may include multiple sub-steps or stages, which are not necessarily completed at the same moment but may be executed at different moments; their execution order is not necessarily sequential, and they may be executed in turn or alternately with other steps or with at least part of the sub-steps or stages of other steps.

Claims (10)

  1. An eye-tracking method for anterior segment optical coherence tomography angiography (OCTA), comprising:
    acquiring two consecutive frames of pupil images;
    performing contour extraction on the two pupil images respectively to obtain two corresponding pupil contours;
    determining whether the two pupil contours are similar to a reference contour;
    in response to the two pupil contours being similar to the reference contour, calculating the offset of the center position of one of the two pupil contours relative to the other pupil contour.
  2. The method according to claim 1, wherein performing contour extraction on one pupil image comprises:
    performing a polar-coordinate transformation on the pupil image to obtain a transformed pupil image;
    extracting the pupil boundary from the transformed pupil image based on a shortest-path algorithm;
    performing an inverse polar-coordinate transformation on the boundary to obtain the corresponding pupil contour.
  3. The method according to claim 1, wherein determining whether a pupil contour is similar to the reference contour comprises:
    calculating the distance between the pupil contour and the reference contour;
    determining, according to a preset distance range and the distance, whether the pupil contour is similar to the reference contour.
  4. The method according to claim 1, wherein calculating the offset of the center position of one of the two pupil contours relative to the other pupil contour comprises:
    obtaining the centroid of each pupil contour;
    calculating the center-position offset according to the two centroids.
  5. The method according to claim 1, further comprising:
    in response to the two pupil contours not being similar to the reference contour, reacquiring two consecutive frames of pupil images, and performing the operations of performing contour extraction on the two consecutive pupil images to obtain two corresponding pupil contours and of determining whether the two pupil contours are similar to the reference contour;
    wherein the reacquired two frames of pupil images are images adjacent to the two consecutive frames of pupil images.
  6. The method according to claim 1, wherein the reference contour is obtained through the following operations:
    obtaining a pupil reference image, wherein the pupil reference image is a reference image captured with the pupil unoccluded;
    performing a polar-coordinate transformation on the pupil reference image to obtain a transformed pupil reference image;
    extracting the pupil boundary from the transformed pupil reference image based on a shortest-path algorithm;
    performing an inverse polar-coordinate transformation on the boundary to obtain the reference contour of the pupil.
  7. An eye-tracking apparatus for anterior segment optical coherence tomography angiography (OCTA), comprising:
    an acquisition module configured to acquire two consecutive frames of pupil images;
    an extraction module configured to perform contour extraction on the two pupil images respectively to obtain two corresponding pupil contours;
    a determination module configured to determine whether the two pupil contours are similar to a reference contour;
    a calculation module configured to calculate, in response to the two pupil contours being similar to the reference contour, the offset of the center position of one of the two pupil contours relative to the other pupil contour.
  8. The apparatus according to claim 7, further comprising:
    a reacquisition module configured to reacquire two consecutive frames of pupil images in response to the two pupil contours not being similar to the reference contour, and to perform the operations of performing contour extraction on the two consecutive pupil images to obtain two corresponding pupil contours and of determining whether the two pupil contours are similar to the reference contour;
    wherein the reacquired two frames of pupil images are images adjacent to the two consecutive frames of pupil images.
  9. An electronic device, comprising a memory and a processor, wherein a computer program is stored in the memory, and the processor, when executing the computer program, implements the eye-tracking method for anterior segment optical coherence tomography angiography (OCTA) according to any one of claims 1 to 6.
  10. A computer-readable storage medium storing a computer program, wherein the computer program, when executed by a processor, implements the eye-tracking method for anterior segment optical coherence tomography angiography (OCTA) according to any one of claims 1 to 6.
PCT/CN2022/126616 2021-12-07 2022-10-21 Eye-tracking method, apparatus, device and storage medium for anterior segment OCTA WO2023103609A1 (zh)

Applications Claiming Priority (2)

Application Number Priority Date Filing Date Title
CN202111489187.2 2021-12-07
CN202111489187.2A CN114373216A (zh) 2021-12-07 2022-04-19 Eye-tracking method, apparatus, device and storage medium for anterior segment OCTA

Publications (1)

Publication Number Publication Date
WO2023103609A1 true WO2023103609A1 (zh) 2023-06-15

Family

ID=81139471

Family Applications (1)

Application Number Title Priority Date Filing Date
PCT/CN2022/126616 WO2023103609A1 (zh) 2021-12-07 2022-10-21 用于眼前节octa的眼动追踪方法、装置、设备和存储介质

Country Status (2)

Country Link
CN (1) CN114373216A (zh)
WO (1) WO2023103609A1 (zh)

Families Citing this family (1)

* Cited by examiner, † Cited by third party
Publication number Priority date Publication date Assignee Title
CN114373216A (zh) * 2021-12-07 2022-04-19 图湃(北京)医疗科技有限公司 用于眼前节octa的眼动追踪方法、装置、设备和存储介质

Citations (4)

* Cited by examiner, † Cited by third party
Publication number Priority date Publication date Assignee Title
CN106791353A (zh) * 2015-12-16 2017-05-31 深圳市汇顶科技股份有限公司 自动对焦的方法、装置和系统
CN110807427A (zh) * 2019-11-05 2020-02-18 中航华东光电(上海)有限公司 一种视线追踪方法、装置、计算机设备和存储介质
CN111148460A (zh) * 2017-08-14 2020-05-12 奥普托斯股份有限公司 视网膜位置跟踪
CN114373216A (zh) * 2021-12-07 2022-04-19 图湃(北京)医疗科技有限公司 用于眼前节octa的眼动追踪方法、装置、设备和存储介质

Family Cites Families (14)

* Cited by examiner, † Cited by third party
Publication number Priority date Publication date Assignee Title
CN101458766B (zh) * 2008-12-16 2011-04-27 南京大学 Method for target tracking by computer processing of grey-scale astronomical observation image information
CN105095840B (zh) * 2014-05-22 2019-05-07 兰州大学 Multi-directional nystagmus signal extraction method based on nystagmus images
CN104545788B (zh) * 2014-12-26 2017-01-04 温州医科大学附属第一医院 Real-time localization system for ocular tumor regions based on eye movement characteristics
CN107767392A (zh) * 2017-10-20 2018-03-06 西南交通大学 Ball trajectory tracking method adapted to occlusion scenes
CN107875526B (zh) * 2017-11-27 2020-01-24 温州医科大学附属第一医院 Precise control method for radiotherapy equipment during adaptive radiotherapy of ocular tumors
JP2022502221A (ja) * 2018-09-21 2022-01-11 マクロジックス インコーポレイテッドMaculogix, Inc. Method, apparatus and system for performing eye examination and measurement
CN109389622B (zh) * 2018-09-30 2019-12-13 佳都新太科技股份有限公司 Vehicle tracking method and apparatus, recognition device and storage medium
CN109664891A (zh) * 2018-12-27 2019-04-23 北京七鑫易维信息技术有限公司 Driving assistance method, apparatus, device and storage medium
CN109977833B (zh) * 2019-03-19 2021-08-13 网易(杭州)网络有限公司 Object tracking method, object tracking apparatus, storage medium and electronic device
CN112883767B (zh) * 2019-11-29 2024-03-12 Oppo广东移动通信有限公司 Saccade image processing method and related products
CN112732071B (zh) * 2020-12-11 2023-04-07 浙江大学 Calibration-free eye-tracking system and application
CN113040701A (zh) * 2021-03-11 2021-06-29 视微影像(河南)科技有限公司 Three-dimensional eye-tracking system and tracking method thereof
CN113129334A (zh) * 2021-03-11 2021-07-16 宇龙计算机通信科技(深圳)有限公司 Object tracking method and apparatus, storage medium and wearable electronic device
CN112991394B (zh) * 2021-04-16 2024-01-19 北京京航计算通讯研究所 KCF target tracking method based on cubic spline interpolation and Markov chains

Patent Citations (4)

* Cited by examiner, † Cited by third party
Publication number Priority date Publication date Assignee Title
CN106791353A (zh) * 2015-12-16 2017-05-31 深圳市汇顶科技股份有限公司 自动对焦的方法、装置和系统
CN111148460A (zh) * 2017-08-14 2020-05-12 奥普托斯股份有限公司 视网膜位置跟踪
CN110807427A (zh) * 2019-11-05 2020-02-18 中航华东光电(上海)有限公司 一种视线追踪方法、装置、计算机设备和存储介质
CN114373216A (zh) * 2021-12-07 2022-04-19 图湃(北京)医疗科技有限公司 用于眼前节octa的眼动追踪方法、装置、设备和存储介质

Also Published As

Publication number Publication date
CN114373216A (zh) 2022-04-19

Similar Documents

Publication Publication Date Title
AU2021202217B2 (en) Methods and systems for ocular imaging, diagnosis and prognosis
US20110137157A1 (en) Image processing apparatus and image processing method
Boyer et al. Automatic recovery of the optic nervehead geometry in optical coherence tomography
WO2021208739A1 (zh) 眼底彩照图像血管评估方法、装置、计算机设备和介质
JP7413147B2 (ja) 画像処理装置、画像処理方法、及びプログラム
EP2742460A1 (en) Motion correction and normalization of features in optical coherence tomography
Almazroa et al. An automatic image processing system for glaucoma screening
US20240005545A1 (en) Measuring method and measuring apparatus of blood vessel diameter of fundus image
WO2023103609A1 (zh) 用于眼前节octa的眼动追踪方法、装置、设备和存储介质
Mathai et al. Learning to segment corneal tissue interfaces in oct images
Abràmoff Image processing
KR20200075152A (ko) 안저 영상과 형광안저혈관조영 영상의 정합을 이용한 자동 혈관 분할 장치 및 방법
Aharony et al. Automatic characterization of retinal blood flow using OCT angiograms
Aruchamy et al. Automated glaucoma screening in retinal fundus images
Pan et al. Segmentation guided registration for 3d spectral-domain optical coherence tomography images
Septiarini et al. Peripapillary atrophy detection in fundus images based on sectors with scan lines approach
CN116030042A (zh) 一种针对医生目诊的诊断装置、方法、设备及存储介质
Yugander et al. Extraction of blood vessels from retinal fundus images using maximum principal curvatures and adaptive histogram equalization
WO2020172999A1 (zh) 冠状动脉造影图像序列的质量评分方法和装置
Almazroa A novel automatic optic disc and cup image segmentation system for diagnosing glaucoma using riga dataset
Krishna et al. Retinal vessel segmentation techniques
Karn et al. Advancing Ocular Imaging: A Hybrid Attention Mechanism-Based U-Net Model for Precise Segmentation of Sub-Retinal Layers in OCT Images
CN116309594B (zh) 眼前节oct图像处理方法
Garduno-Alvarado et al. Fast optic disc segmentation in fundus images
Cao et al. Microvasculature segmentation of co-registered retinal angiogram sequences

Legal Events

Date Code Title Description
121 Ep: the epo has been informed by wipo that ep was designated in this application

Ref document number: 22903033

Country of ref document: EP

Kind code of ref document: A1