CN112220448A - Fundus camera and fundus image synthesis method - Google Patents

Fundus camera and fundus image synthesis method

Info

Publication number
CN112220448A
CN112220448A
Authority
CN
China
Prior art keywords
fundus
image
quality
lens
fundus images
Prior art date
Legal status (The legal status is an assumption and is not a legal conclusion. Google has not performed a legal analysis and makes no representation as to the accuracy of the status listed.)
Granted
Application number
CN202011095594.0A
Other languages
Chinese (zh)
Other versions
CN112220448B (en)
Inventor
He Chao (和超)
Zhang Dalei (张大磊)
Current Assignee (The listed assignees may be inaccurate. Google has not performed a legal analysis and makes no representation or warranty as to the accuracy of the list.)
Shanghai Eaglevision Medical Technology Co Ltd
Beijing Airdoc Technology Co Ltd
Original Assignee
Shanghai Eaglevision Medical Technology Co Ltd
Priority date (The priority date is an assumption and is not a legal conclusion. Google has not performed a legal analysis and makes no representation as to the accuracy of the date listed.)
Filing date
Publication date
Application filed by Shanghai Eaglevision Medical Technology Co Ltd filed Critical Shanghai Eaglevision Medical Technology Co Ltd
Priority to CN202011095594.0A
Publication of CN112220448A
Application granted
Publication of CN112220448B
Legal status: Active (Current)
Anticipated expiration

Classifications

    • A: HUMAN NECESSITIES
    • A61: MEDICAL OR VETERINARY SCIENCE; HYGIENE
    • A61B: DIAGNOSIS; SURGERY; IDENTIFICATION
    • A61B 3/00: Apparatus for testing the eyes; Instruments for examining the eyes
    • A61B 3/10: Objective types, i.e. instruments for examining the eyes independent of the patients' perceptions or reactions
    • A61B 3/12: Objective types for looking at the eye fundus, e.g. ophthalmoscopes
    • A61B 3/14: Arrangements specially adapted for eye photography
    • A61B 3/15: Arrangements specially adapted for eye photography with means for aligning, spacing or blocking spurious reflection; with means for relaxing
    • A61B 3/152: Arrangements specially adapted for eye photography with means for aligning

Abstract

The invention provides a fundus camera and a fundus image synthesis method, the method comprising: acquiring a plurality of fundus images shot while the lens state remains unchanged; extracting a high-quality region from each of the fundus images; and synthesizing one fundus image from the plurality of high-quality regions.

Description

Fundus camera and fundus image synthesis method
Technical Field
The invention relates to the field of ophthalmic instruments, and in particular to a fundus camera and a fundus image synthesis method.
Background
The retina is the only tissue in the human body where capillaries and nerves can be observed directly; examining it can reveal not only ocular disease but also systemic conditions such as diabetic complications and hypertension. A fundus camera is a dedicated device for photographing the retina.
Existing fundus cameras can shoot fundus images automatically. Such a camera has a main camera mounted on a platform movable along the X, Y and Z axes for photographing the fundus, and an auxiliary camera installed near the main camera for photographing the face and the external eye. The automatic shooting process mainly involves automatically aligning the main lens with the pupil, automatically adjusting the axial distance between the main lens and the pupil, and automatically adjusting the focal length.
Although existing fundus cameras provide several assistive functions to ensure image quality, in actual use the subject still has to remain still for a long time. A blink or slight head movement during shooting can cause the capture to fail, and a professional must observe the result in real time and reshoot when the quality is poor. The shooting process therefore places high demands on the subject, and the shooting success rate is low.
Disclosure of Invention
In view of the above, the present invention provides a fundus image synthesis method comprising: acquiring a plurality of fundus images shot while the lens state remains unchanged; extracting a high-quality region from each of the fundus images; and synthesizing one fundus image from the plurality of high-quality regions.
The invention also provides a fundus image shooting method comprising: shooting a plurality of fundus images while keeping the lens state unchanged; determining the quality of each of the fundus images; and, when none of the fundus images reaches the set quality standard, synthesizing a fundus image according to the synthesis method above.
Accordingly, the present invention provides an electronic device comprising: at least one processor; and a memory communicatively coupled to the at least one processor; wherein the memory stores instructions executable by the at least one processor to cause the at least one processor to perform the above fundus image synthesis method.
Accordingly, the present invention provides an electronic device comprising: at least one processor; and a memory communicatively coupled to the at least one processor; wherein the memory stores instructions executable by the at least one processor to cause the at least one processor to perform the above fundus image shooting method.
Accordingly, the present invention provides a fundus camera comprising: an illumination assembly, a lens, a processor, and a memory communicatively coupled to the processor; wherein the memory stores instructions executable by the processor to cause the processor to perform the above fundus image synthesis method.
Accordingly, the present invention provides a fundus camera comprising: an illumination assembly, a lens, a processor, and a memory communicatively coupled to the processor; wherein the memory stores instructions executable by the processor to cause the processor to perform the above fundus image shooting method.
With the fundus camera and the fundus image synthesis and shooting methods provided by the embodiments of the invention, when every fundus image shot of a subject has defects, high-quality regions are extracted from the several fundus images and stitched and fused into one complete, high-quality fundus image. This reduces the difficulty of self-photographing fundus images and improves the shooting success rate.
Drawings
To explain the embodiments of the present invention or the prior-art solutions more clearly, the drawings needed for their description are briefly introduced below. The drawings described below are obviously only some embodiments of the present invention; those skilled in the art can derive other drawings from them without creative effort.
FIG. 1 is a configuration diagram of a fundus camera in an embodiment of the present invention;
FIG. 2 is a schematic view of the face patch assembly of a fundus camera in an embodiment of the present invention;
FIG. 3 is a schematic view of the lens and the positioning assemblies;
FIG. 4 is a flowchart of a fundus image shooting method according to an embodiment of the present invention;
FIG. 5 is a schematic view of pupil labeling;
FIG. 6 is a flowchart of a fundus image synthesis method according to an embodiment of the present invention;
FIG. 7 is a schematic view of a pupil larger than the illumination beam;
FIG. 8 is a schematic view of a pupil smaller than the illumination beam;
FIG. 9 is a schematic view of capturing fundus images when the pupil is smaller than the illumination beam;
FIG. 10 is an image of the cornea reflecting the illumination beam;
FIG. 11 is a schematic view of the distance between the lens barrel and the eyeball;
FIG. 12 is a schematic view of spot labeling;
FIG. 13 is an image of the cornea reflecting the illumination beam at the working distance;
FIG. 14 is a schematic view of optic disc labeling;
FIG. 15 is a schematic view of shifting the lens position according to the light spots when taking a fundus image;
FIG. 16 is a schematic view of two fundus images each containing an unusable area;
FIG. 17 is a schematic diagram of the fundus image synthesis method.
Detailed Description
The technical solutions of the present invention will be described clearly and completely below with reference to the accompanying drawings. The described embodiments are only some, not all, of the embodiments of the present invention; all other embodiments obtained by those skilled in the art from them without creative effort fall within the protection scope of the present invention.
In the description of the present invention, it should be noted that the terms "center", "upper", "lower", "left", "right", "vertical", "horizontal", "inner", "outer", etc., indicate orientations or positional relationships based on the orientations or positional relationships shown in the drawings, and are only for convenience of description and simplicity of description, but do not indicate or imply that the device or element being referred to must have a particular orientation, be constructed and operated in a particular orientation, and thus, should not be construed as limiting the present invention. Furthermore, the terms "first," "second," and "third" are used for descriptive purposes only and are not to be construed as indicating or implying relative importance.
In the description of the present invention, it should be noted that, unless otherwise explicitly specified or limited, the terms "mounted," "connected," and "coupled" are to be construed broadly: for example, as a fixed connection, a removable connection, or an integral connection; as a mechanical or an electrical connection; as a direct connection between two elements or an indirect connection through an intermediate medium; as internal communication between two elements; or as a wireless or a wired connection. The specific meanings of these terms in the present invention can be understood by those skilled in the art according to the specific case.
In addition, the technical features involved in the different embodiments of the present invention described below may be combined with each other as long as they do not conflict with each other.
Fig. 1 shows a fully automatic, portable, self-photographing fundus camera comprising a face patch assembly 01, a motion assembly, a positioning assembly 03 and a lens barrel 1. An illumination assembly, a focusing assembly, a lens (an ocular objective), an optical lens group, an imaging detector 10 and the like are arranged inside the lens barrel 1; for the internal structure of the lens barrel 1, reference may be made to Chinese patent document CN111134616A. The actual product also comprises a housing inside which the motion assembly and the lens barrel 1 are located. The face patch assembly 01 is sealed to the front of the housing and comprises a face patch body and, formed in it, a window through hole that receives the subject's eyes when the face is fitted against it. The face patch assembly 01 is the component that contacts the subject's eyes, and the lens barrel 1 collects the subject's fundus retinal image through the through hole of the face patch assembly 01.
The surface of the face patch body facing away from the lens barrel 1 is shaped to fit the facial contour around the subject's eyes. Specifically, the face patch assembly 01 is concave inward to match the curvature of the human head, and the through hole is sized to accommodate at least both eyes when the subject's face rests against the assembly. The inward-facing surface of the face patch assembly 01 (toward the housing and lens barrel) is provided with at least one specific position used for testing various functions of the camera. In a specific embodiment, referring to figs. 1 and 2, fig. 2 shows the inward surface of the face patch assembly 01: a protruding part 012 is provided at the upper edge of the middle of the through hole 011, placed so that the lens of the lens barrel 1 can be aligned with it and capture an image. More preferably, a marking or simple pattern is provided on the protruding part 012 as a target. This specific position serves several purposes, including checking whether the camera's illumination and focusing assemblies are normal and whether the subject's eyes are correctly fitted against the face patch assembly 01, as detailed below.
The motion assembly controls the movement of the lens barrel 1 in three-dimensional space; taking the coordinate system of fig. 1 as an example, it can move along the X, Y and Z axes shown. Note that even when the lens barrel 1 moves to its limit position along Z, its end does not protrude beyond the face patch assembly 01. As a specific example, the motion assembly comprises three rail assemblies: a first set of rails 021 controlling movement of the lens barrel 1 along X, a second set of rails 022 controlling movement along Y, and a third set of rails, not shown, controlling movement along Z. Specifically, the lens barrel 1 is mounted together with the second set of rails 022 on a platform (base); the first set of rails 021 can move the base as a whole, and the third set of rails can move the base together with the first set of rails 021 so that the whole approaches or withdraws from the face patch assembly 01.
The positioning assemblies 03 detect the movement of the lens barrel 1. Specifically, a positioning assembly 03 may be an electromagnetic sensor that senses, from an electromagnetic induction signal, that the lens barrel 1 has moved to its position. Referring to fig. 3, three positioning assemblies 03 are provided in this embodiment: two on either side of the movable base to detect movement of the lens barrel 1 along X, and a third on the base to detect its movement along Y. That is, the positioning assemblies 03 detect the movement of the lens barrel 1 in the XY plane.
In the fundus camera provided by the invention, the illumination assembly, the focusing assembly, the ocular objective, the optical lens group and the imaging detector are integrated in a single lens barrel, miniaturizing the optical path structure, reducing the camera's volume and improving portability. The face patch assembly of the fundus camera is provided with a window through hole for receiving the subject's eyes: the user can put the fundus camera on unaided and place the eyes at the window, whereupon the motion assembly drives the lens barrel to search for the pupil within the window range and adjust the working distance before the fundus image is shot. This scheme reduces the hardware complexity and difficulty of use of the fundus camera, lets users shoot fundus images on their own, and promotes the camera's adoption.
An embodiment of the present invention provides a fully automatic fundus image capture method, which may be executed by the fundus camera itself or by an electronic device such as a computer or server (as a control method), and which comprises the following steps:
S300, moving the fundus camera lens into alignment with the pupil.
S400, controlling the lens to approach the eyeball while collecting images, each an image of the illumination beam reflected by the cornea.
S500, determining the working distance from those images.
S600, adjusting the focal length, acquiring fundus images, and determining the shooting focal length from them.
S700, shooting a fundus image at the working distance using the shooting focal length.
In a preferred embodiment, before the above step S300, steps for checking the camera status and the user's wearing status may also be performed, and the method may further include:
S100, detecting whether the motion assembly, illumination assembly and focusing assembly of the fundus camera are normal. As an optional operation, this step may be performed at camera startup; if any component is found abnormal, subsequent shooting is terminated and a corresponding abnormality prompt is issued.
S200, detecting whether the user's head is fitted against the face patch assembly of the fundus camera. As an optional operation, if the head is found not to be fitted against the face patch assembly, the voice module can prompt and guide the user to wear the fundus camera correctly.
When the camera starts shooting, the pupil and the ocular objective are never perfectly aligned in a real scenario; the camera must judge the lens's position relative to the pupil from the pupil's image on the sensor, move the lens in front of the pupil, and then shoot. For the above step S300, an embodiment of the invention provides an automatic lens alignment method for a fundus camera, which may be executed by the camera itself or by an electronic device such as a computer or server (as a control method), and which comprises the following steps:
and S1, identifying the image collected by the lens of the fundus camera, and judging whether a pupil exists in the image. Specifically, after the user wears the fundus camera, the system continuously (for example, frame by frame) acquires images of the pupil, and if the pupil can be identified in the images, the pupil is indicated to be within the imaging range, and in this case, fine adjustment is performed so that the lens is completely aligned with the pupil, so that shooting can be performed. If the pupil cannot be identified in the image, the lens is greatly deviated from the pupil position, which may be caused by improper initial position of the lens, or abnormal wearing manner of the user, and the like.
There are various ways to identify the pupil's image; for example, a machine vision algorithm may detect the pupil's contour and location from graphical features in the image. However, because the fundus camera illuminates with infrared light before the final exposure, the pupil is not imaged very clearly, and corneal reflections add considerable difficulty to pupil detection, so computer vision algorithms misjudge easily; a preferred embodiment therefore uses a deep learning algorithm to solve this problem.
First, a large number of pictures of the pupil are taken, these pictures being images of different persons taken at different times and in different directions and distances from the above mentioned ocular objective of the fundus camera. The pupil in each image is then labeled, thereby yielding training data for training the neural network. The labeled data is used to train a neural network model (such as a YOLO network), and after training, the recognition result of the neural network model includes a detection box for representing the position and size of the pupil in the image.
In one specific embodiment, as shown in fig. 5, a square box 51 is used to mark the pupil in the training data, and the recognition result of the trained neural network model will also be a square detection box. In other embodiments, the labeling may be performed using a circular box, or other similar labeling methods are possible.
Whatever pupil detection method is adopted, in this step it is only necessary to determine whether a pupil is present in the image: if not, step S2 is executed; otherwise, step S3.
S2, controlling the fundus camera lens to move around its current position in search of the pupil. The motion assembly moves the lens barrel, for example along a spiral track that spreads gradually outward from the current position. Note that this embodiment concerns only movement in the XY plane; movement along the Z axis, which relates to the camera's optimal working distance, is deferred to the following embodiments.
If no pupil can be found even after moving to the limit position, the user is prompted to adjust the wearing state. If a pupil is found, the system further judges whether the user's eye lies beyond the motion assembly's range, e.g. whether the lens's travel exceeds a movement threshold; when it does, the user is prompted to shift the head slightly within the face patch assembly to fit the lens's range of motion, and the search then continues. When the travel does not exceed the threshold, step S3 is executed.
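A minimal sketch of this spiral search follows; detect_pupil, move_lens_xy and prompt are hypothetical stand-ins for the camera's detection and motion-assembly interfaces, and the step size, search limit and movement threshold are illustrative values, not figures from the patent:

```python
import math

def spiral_offsets(step_mm=0.5, max_radius_mm=8.0):
    """Yield XY offsets that spread outward from the current position
    along an Archimedean spiral (step S2)."""
    theta, r = 0.0, 0.0
    while r <= max_radius_mm:
        yield (r * math.cos(theta), r * math.sin(theta))
        theta += math.pi / 8                 # advance along the spiral
        r = step_mm * theta / (2 * math.pi)  # radius grows one step per turn

def search_pupil(detect_pupil, move_lens_xy, prompt, move_threshold_mm=6.0):
    """Walk the spiral until detect_pupil() reports a pupil."""
    for dx, dy in spiral_offsets():
        move_lens_xy(dx, dy)
        if detect_pupil():
            if math.hypot(dx, dy) > move_threshold_mm:
                prompt("eye near travel limit: shift your head slightly")
                continue                     # keep searching after the prompt
            return (dx, dy)                  # pupil found within range
    prompt("no pupil found: adjust how the camera is worn")
    return None
```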
S3, judging whether the pupil in the image meets the set conditions. Various conditions can be set, for example conditions on size, conditions on shape, and so on.
In an optional embodiment, the set conditions include a size threshold: it is judged whether the pupil's size in the image exceeds the size threshold, and if so, a pupil meeting the set conditions is deemed present; otherwise the user is prompted to close the eyes and rest for a while, and shooting starts once the pupil has dilated again. Because the two eyes are usually photographed in sequence and the pupil contracts after the first eye is shot, the system can also use this eyes-closed rest to let the pupil recover its size.
In another optional embodiment, the set conditions include a morphological feature: it is judged whether the pupil's shape in the image conforms to the set feature, and if so, a pupil meeting the set conditions is deemed present; otherwise the user is prompted to open the eyes, try not to blink, and so on. The set morphological feature is a circle or near-circle; a detected pupil that does not conform, for example one that appears flattened, is generally caused by the user's eyes not being fully open.
In a third alternative embodiment, pupil detection uses the neural network model, whose recognition result further includes confidence information for the pupil, i.e. a probability value expressing the model's certainty that a pupil is present in the image. The set conditions include a confidence threshold: it is judged whether the confidence reported by the neural network model exceeds the threshold, and if so, a pupil meeting the set conditions is deemed present; otherwise the user is prompted to open the eyes and remove occlusions such as hair. A low pupil confidence from the model indicates that, although a pupil is present in the image, it may be obstructed by other objects, and the user is prompted to adjust so as to improve shooting quality.
The three embodiments above may be used singly or in combination. Step S4 is executed when the pupil in the image meets the set conditions; otherwise the system waits for the user to adjust and keeps checking until the conditions are met.
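The three set conditions can be combined in a single check, as in the following sketch; the detection dictionary fields and every threshold here are illustrative assumptions, not values given in the patent:

```python
def pupil_ok(det, min_size_px=60, min_conf=0.5, max_aspect=1.3):
    """Check a pupil detection against the three set conditions.
    `det` is assumed to be {'w': ..., 'h': ..., 'conf': ...} from the
    pupil detector."""
    if det is None:
        return False, "no pupil detected"
    if min(det['w'], det['h']) < min_size_px:
        return False, "pupil too small: ask user to rest with eyes closed"
    aspect = max(det['w'], det['h']) / min(det['w'], det['h'])
    if aspect > max_aspect:                  # flattened box: eye not fully open
        return False, "pupil not round: ask user to open eyes wider"
    if det['conf'] < min_conf:               # likely occluded (hair, lashes)
        return False, "low confidence: ask user to remove occlusion"
    return True, "ok"
```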
S4, moving the fundus camera lens into alignment with the pupil according to the pupil's position in the image. The motion assembly moves the lens barrel; the direction and distance of the movement depend on the deviation between pupil and lens in the image. The center point of the collected image is taken as the lens's center point, and the pupil's center point is identified in the image; for example, when the neural network model detects the pupil, the center of the detection box may be taken as the pupil's center. Step S4 specifically includes:
S41, determining the moving distance and the moving direction according to the deviation between the central position of the detection frame and the central position of the image;
S42, moving the fundus camera lens to align with the pupil according to the determined moving distance and moving direction.
With the fundus image shooting method provided by this embodiment of the invention, the pupil state in the image is judged to determine automatically whether the subject's current pupil state suits fundus photography. When it does not, a corresponding prompt is sent so the subject can adjust; when it does, the pupil position is recognized, the lens aligns automatically, and shooting proceeds. This prevents unusable fundus images from being shot, needs no professional at any stage, and realizes autonomous shooting by the user.
In practice the pupil may be smaller than the annular illumination beam. Aligning such a pupil with the objective lets no light enter it, and the captured image is black.
To solve this problem, for the above step S700, an embodiment of the invention provides a preferred fundus image capturing method comprising the following steps:
s51, it is determined whether or not the pupil size in the image is smaller than the annular illumination light beam size of the fundus camera illumination unit. Fig. 7 shows a case where one pupil 72 size is larger than the size of the ring-shaped light beam 71, in which case step S52 is performed.
Fig. 8 shows the case where the size of two annular illumination beams is larger than the size of the pupil, the illumination source is a complete annular illumination lamp or a light source formed by a plurality of illumination lamps arranged in an annular shape, and the inner diameter of the annular beam 71 is larger than the diameter of the pupil 72.
Step S53 is performed when the pupil size is smaller than the annular illumination beam size, i.e. as is the case in fig. 8.
S52, a fundus image is captured at the current lens position. This is an image taken with the fundus well illuminated by the light source.
S53, shifting the lens relative to the pupil in several directions so that part of the annular illumination beam falls inside the pupil, and acquiring a fundus image at each offset. Taking the movement shown in fig. 9 as an example, in this embodiment the lens is moved in two horizontal directions: when it moves to one side so that a part 73 of the annular beam 71 enters the pupil 72, one fundus image is captured; when it moves to the other side so that another part 74 of the beam enters the pupil, a further fundus image is captured.
The movement and illumination shown in fig. 9 merely illustrate the shooting situation; in practice the lens may be moved in more directions to shoot more fundus images. A fundus image captured under such movement and illumination, however, may be overexposed in part of the frame and cannot serve directly as the shooting result, so step S54 is executed.
In addition, to reduce the overexposed area, a preferred embodiment moves and shoots as follows:
S531, determining the pupil's edge positions. Specifically, the left edge point 721 and right edge point 722 of the pupil 72 in fig. 9 can be obtained with a machine vision algorithm or with the neural network model described above.
S532, determining the moving distance from the pupil's edge positions. Specifically, the motion assembly's travel can be calculated from the positional relationship between the current lens center O (the image's center position) and the edge points 721 and 722.
S533, moving the lens in several directions by the determined distance, which is chosen so that the edge of the annular illumination beam coincides with the pupil's edge. As shown in fig. 9, the outer edge of the annular beam 71 coincides with the edge of the pupil 72, so the portion of the beam entering the fundus stays at the fundus's periphery, reducing its effect on imaging of the central fundus region.
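Under one reading of the geometry of fig. 9, with the beam and pupil concentric after alignment, the shift that brings the beam's outer edge onto the far pupil edge is simply the difference of the two radii. The sketch below assumes exactly that, so it should be treated as an illustration rather than the patent's formula:

```python
def offaxis_shift(pupil_radius_px, beam_outer_radius_px, mm_per_px):
    """Lateral lens shift that places the annular beam's outer edge on the
    far edge of the pupil (step S533), so only a peripheral band of the
    beam enters the pupil.  Assumes beam and pupil start concentric after
    alignment; mm_per_px is a hypothetical pixel-to-travel calibration."""
    shift_px = beam_outer_radius_px - pupil_radius_px
    return shift_px * mm_per_px

# two horizontal shots as in fig. 9: moves = [(+s, 0), (-s, 0)];
# more directions can be added for additional fundus images
```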
S54, merging the plurality of fundus images into one fundus image. In this step usable regions are extracted from the individual fundus images and stitched and fused into a complete fundus image. Various stitching and fusion approaches exist; as one alternative embodiment, step S54 specifically includes:
S541a, calculating displacement deviations of the plurality of fundus images from the lens shift distances at which the fundus images were acquired;
S542a, selecting an effective region in each of the plurality of fundus images;
S543a, stitching the plurality of effective regions based on the displacement deviations to obtain a stitched fundus image, and further fusing the stitched seams of the effective regions with an image fusion algorithm.
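A minimal sketch of steps S541a-S543a: each shot's known lens displacement places it on a common canvas, and overlapping effective regions are averaged. Grayscale images, wrap-around shifting and plain averaging are simplifications of this sketch; the patent itself only requires stitching by displacement deviation plus a fusion step:

```python
import numpy as np

def stitch_by_displacement(images, offsets_px, masks):
    """Place each grayscale fundus image on a common canvas using the
    known lens displacement of each shot, then average where the
    effective-region masks overlap.  np.roll wraps at the borders, which
    is tolerable only for small offsets; a real implementation would pad
    and feather the seams with a fusion algorithm."""
    h, w = images[0].shape
    canvas = np.zeros((h, w))
    weight = np.zeros((h, w))
    for img, (dx, dy), mask in zip(images, offsets_px, masks):
        shifted = np.roll(np.roll(img.astype(float), dy, axis=0), dx, axis=1)
        m = np.roll(np.roll(mask.astype(float), dy, axis=0), dx, axis=1)
        canvas += shifted * m
        weight += m
    return (canvas / np.maximum(weight, 1)).astype(np.uint8)
```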
As another alternative embodiment, step S54 specifically includes:
S541b, detecting feature points in each of the plurality of fundus images;
S542b, calculating the spatial transformation relationship of the plurality of fundus images from the positions of the feature points;
S543b, placing the plurality of fundus images in the same coordinate system according to the spatial transformation relationship;
S544b, selecting effective regions from the fundus images in the same coordinate system and stitching them to obtain a stitched fundus image.
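The patent does not name a feature detector or transform model; the sketch below instantiates steps S541b-S544b with ORB features and a RANSAC homography purely as one plausible choice (inputs are assumed to be grayscale arrays):

```python
import cv2
import numpy as np

def register_to_reference(ref, img):
    """Estimate the spatial transform between two fundus images from
    matched feature points and warp img into ref's coordinate system."""
    orb = cv2.ORB_create(1000)
    k1, d1 = orb.detectAndCompute(ref, None)
    k2, d2 = orb.detectAndCompute(img, None)
    matcher = cv2.BFMatcher(cv2.NORM_HAMMING, crossCheck=True)
    matches = sorted(matcher.match(d1, d2), key=lambda m: m.distance)[:200]
    src = np.float32([k2[m.trainIdx].pt for m in matches]).reshape(-1, 1, 2)
    dst = np.float32([k1[m.queryIdx].pt for m in matches]).reshape(-1, 1, 2)
    H, _ = cv2.findHomography(src, dst, cv2.RANSAC, 5.0)
    # both images now share one coordinate system for region selection
    return cv2.warpPerspective(img, H, (ref.shape[1], ref.shape[0]))
```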
With the fundus image shooting method provided by this embodiment of the invention, once the fundus camera's lens is aligned with the pupil, the pupil's size in the image is first compared with the size of the annular light beam the camera emits. If the pupil is too small for the illumination beam to reach the fundus normally, the lens is moved off the aligned position so that the annular beam partially enters the pupil, fundus images are acquired at several offset positions, and finally one fundus image is fused from them.
The following describes movement of the camera lens (lens barrel) along the Z axis, which relates to the fundus camera's optimal working distance. For steps S400 to S500 above, this embodiment provides a working distance adjustment method for a fundus camera, which may be executed by the fundus camera itself or by an electronic device such as a computer or server (as a control method). The method comprises the following steps:
s1, the lens is controlled to approach the eyeball and capture an image that is an image of the illumination beam reflected by the cornea. This step is performed with the lens aligned with the pupil in the XY plane according to the scheme of the above embodiment, and controlling the lens to approach the eyeball in this step means controlling the lens to move in the Z axis direction to the eyeball by the moving component. At the initial distance, the light source of the illumination assembly passes through the optical lens, and the reflected light irradiated on the cornea of the eye is imaged on the cmos to obtain the result as shown in fig. 10. In other embodiments, the illumination source may be shaped as shown in FIG. 8, and the captured image will show a corresponding shape or arrangement of spots.
S2, detecting whether the features of the light spot in the image conform to the set features. As shown in fig. 11, as the lens barrel 1 moves toward the eyeball 01 along the Z axis, the reflected corneal light image changes. In particular, the position, size and sharpness of the image are related to the distance between the objective lens and the cornea: the closer the distance, the larger the angle between the incident ray and the corneal normal, the stronger the scattering on reflection, and the larger, more divergent and dimmer the spot.
There are various ways to identify the spot features in the image; for example, a machine vision algorithm can detect the spot's contour and location from the pattern features. However, because the spot's sharpness, size and so on vary over a wide range, computer vision algorithms misjudge this situation easily, so a preferred embodiment uses a deep learning algorithm to solve the problem.
First, a large number of spot images are collected, acquired from different people at different times and at different directions and distances from the fundus camera's ocular objective. The spot in each image is then labeled, yielding training data for the neural network. A neural network model (such as a YOLO network) is trained on the labeled data; after training, its recognition result includes a detection box representing the spot's position and size in the image.
As shown in fig. 12, in a specific embodiment, a square frame 121 is used in the training data to mark the light spot, and the recognition result of the trained neural network model will also be a square detection frame. In other embodiments, the labeling may be performed using a circular box, or other similar labeling methods are possible.
Whatever spot detection method is adopted, this step checks whether the spot features in the current image conform to the set features. The set feature may concern size, e.g. conformity is determined when the spot in the image is smaller than a set size; or it may be spot disappearance, e.g. conformity is determined when no spot can be detected by the machine vision algorithm or the neural network.
If the spot in the image conforms to the set features, step S3 is executed; otherwise the process returns to step S1, continuing to move the lens and acquire images.
S3, determining that the working distance is reached. When the spot's features are found to conform to the set features, the lens-to-eyeball distance at that moment can be regarded as the working distance. In a specific embodiment a further distance compensation is applied on this basis according to the hardware parameters, which determine both the compensation's direction and its value. For example, fig. 13 shows an image whose spot conforms to the set features; the distance between the lens 1 and the eyeball 01 is then WD, and the lens is controlled to keep moving toward the eyeball by a preset distance d, reaching the more accurate working distance WD + d.
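The S1-S3 loop can be sketched as follows; capture, detect_spot and move_z are hypothetical interfaces, and the Z step and compensation distance are illustrative, the real d being a hardware parameter:

```python
def approach_working_distance(capture, detect_spot, move_z, d_mm=1.5):
    """Advance the lens toward the eye until the corneal reflection spot
    no longer meets the set feature (here: until it disappears), then
    apply the hardware-dependent compensation d to reach WD + d."""
    while True:
        image = capture()
        if detect_spot(image) is None:   # spot vanished: WD reached
            break
        move_z(0.2)                      # small Z step toward the eyeball
    move_z(d_mm)                         # preset compensation distance
```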
At the working distance, the fundus image can be shot by further adjusting the focal length. The manner of adjusting the focal length will be specifically described in the following embodiments.
With the working distance adjustment method provided by this embodiment of the invention, the image of the illumination beam reflected by the cornea is collected and recognized, and the lens-to-eyeball distance is judged and adjusted from the spot features in the image. No additional optics or hardware need be fitted to the fundus camera: providing a suitable illumination beam alone achieves accurate positioning of the working distance, which reduces the camera's cost and improves the efficiency of working distance adjustment.
Because the user may turn the head slightly while the lens moves toward the eyeball, leaving the lens no longer aligned with the pupil, the lens position is also adjusted in the XY plane during working distance adjustment to maintain alignment with the pupil. This embodiment provides a preferred working distance adjustment method comprising the following steps:
S1A, collecting the image of the illumination light beam reflected by the cornea;
and S2A, calling a neural network to detect the light spots in the image and judging whether the light spots exist in the image. Step S6A is performed when no light spot is present in the image, otherwise step S3A is performed.
And S3A, identifying the central point of the light spot in the image, and judging whether the central point of the light spot is coincident with the central point of the image. The center of the detection frame obtained by the neural network is regarded as the center of the light spot. The central point of the image is regarded as the center of the lens, if the central point of the image is coincident with the center of the light spot, the lens is aligned with the pupil, and step S5A is executed; if they do not coincide with each other, the lens is shifted to the aligned position, and step S4A is performed.
S4A, fine-tuning the lens position according to the offset between the spot's center point and the image's center point. Detect, adjust, re-detect is a feedback process; as a preferred embodiment a smoothed adjustment is used:
Adjustment(i) = a * Shift(i) + (1 - a) * Adjustment(i-1),
where Adjustment(i-1) is the displacement applied at the previous lens adjustment, Shift(i) is the current offset (the deviation between the detected center and the image center), Adjustment(i) is the displacement the lens now needs, and a is a coefficient between 0 and 1. Since the lens position is a two-dimensional coordinate in the XY plane, Adjustment and Shift are both two-dimensional vectors.
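This update is plain exponential smoothing of the feedback loop and can be written directly; only the coefficient value is illustrative:

```python
def smoothed_adjustment(shift, prev_adjustment, a=0.5):
    """Adjustment(i) = a * Shift(i) + (1 - a) * Adjustment(i-1).
    shift and prev_adjustment are (x, y) vectors in the XY plane;
    a = 0.5 is an illustrative coefficient with 0 < a < 1."""
    return (a * shift[0] + (1 - a) * prev_adjustment[0],
            a * shift[1] + (1 - a) * prev_adjustment[1])
```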
After the center point of the light spot and the center point of the image are adjusted to coincide with each other, step S5A is performed.
S5A, controlling the lens to move toward the eyeball so as to reduce the distance, then returning to step S1A. As the lens moves ever closer to the eyeball, the spot in the corresponding image shrinks; to capture the exact moment the spot disappears, every frame can be collected and judged and adjusted as above until an image without a spot is detected.
S6A, controlling the lens to continue moving toward the eyeball by the preset distance so as to reach the working distance.
In a preferred embodiment, the adjustment process also checks whether the spot in the image is complete. An incomplete spot, for example half of one, means the user is blinking or the eyes are not open, and the system then prompts the user by voice to open the eyes, avoid blinking, and so on.
With the working distance adjustment method provided by this embodiment of the invention, the lens position is fine-tuned according to the spot's position in the image while the lens-to-eyeball distance is being adjusted, so that the lens stays aligned with the pupil throughout the working distance adjustment.
After the working distance has been adjusted and alignment achieved automatically by the embodiments above, an appropriate focal length must be set to shoot a clear fundus image. For the above step S600, this embodiment provides a focal length adjustment method for a fundus camera, which may be executed by the fundus camera itself or by an electronic device such as a computer or server (as a control method). The method comprises the following steps:
S1, adjusting the focal length and acquiring fundus images. This step is performed once the fundus camera's lens is aligned with the pupil and at the working distance; the lens and eyeball are then positioned as in fig. 13. Note that acquiring images while adjusting the lens position and working distance in the embodiments above naturally also requires some fixed focal length; during working distance adjustment, for example, the focal length may be fixed at the 0 diopter position. If the subject's refraction is normal, the fundus image could be shot directly once the working distance is in place; in practice, however, the subject's actual diopter must be considered and an appropriate focal length set.
Before the fundus camera exposes to shoot the final fundus image, infrared light is used for imaging during automatic alignment, automatic working distance determination and the like, and the light source during this acquisition is still infrared. Although the current focal length does not yet image the fundus clearly, the acquired image already shows the fundus's basic features, with at least the optic disc visible, and is therefore still referred to as a fundus image.
S2, identifying the optic disc region in the fundus image. The optic disc region has the richest texture and the highest brightness in the fundus, which makes it the most suitable target for focusing.
There are a number of ways to identify the optic disc in the fundus image; for example, machine vision algorithms can detect the disc's contour and position from graphical features. However, infrared imaging is relatively blurred and poses a significant challenge to disc recognition, which computer vision algorithms easily misjudge, so a preferred embodiment uses a deep learning algorithm to address this problem.
First, a large number of fundus images of different persons, acquired at different focal lengths, are collected. The optic disc in each image is then labeled, yielding training data for the neural network. A neural network model (e.g. a YOLO network) is trained on the labeled data; after training, its recognition result includes a detection box characterizing the optic disc's position in the fundus image.
In a specific embodiment, as shown in fig. 14, the optic disc is labeled with a square frame 141 in the training data, and the recognition result of the trained neural network model will also be a square detection frame. In other embodiments, the labeling may be performed using a circular box, or other similar labeling methods are possible.
S3, determining the shooting focal length from the sharpness of the optic disc region. Specifically, starting from an initial focal length, the focal length can be increased stepwise while corresponding fundus images are collected, judging each time whether the disc's sharpness reaches a preset standard; as soon as it does, the current focal length is judged optimal and the search stops. Alternatively, every available focal length in the adjustable range can be used and corresponding fundus images collected; the fundus image with the sharpest optic disc is then determined among them, and the focal length at which it was collected is judged the best focal length.
In one specific embodiment, the focal length is first traversed over the set range 800 to 1300 in a first set step of 40, acquiring a first group of fundus images: one at focal length 800, one at 840, one at 880, and so on up to 1300. The optic disc region is identified in each of these images and each image's sharpness determined; in this embodiment the mean of the pixel values within the disc region is computed as the sharpness. The sharpest image of the first group can then be determined, and the focal length X used when acquiring it (the first focal length) taken as the shooting focal length.
For a better shooting result the search can continue: another traversal is made near focal length X with a second set step smaller than the first, for example 10, yielding a second group of fundus images at focal lengths X+10, X+20, X-10, X-20, and so on. The optic disc region is identified in each and each image's sharpness determined; if, say, the image at focal length X-20 proves the sharpest, then X-20 (the second focal length) is taken as the shooting focal length.
As for the range of this finer search, in a preferred embodiment it is centered on the first focal length X and extends one first set step on either side, i.e. the range is X ± 40.
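The two-pass search of this embodiment can be sketched as below, with acquire_at and disc_sharpness standing in for image acquisition and the disc-based sharpness measure (a concrete form of the scorer is sketched later, under step S5A):

```python
def best_focus(acquire_at, disc_sharpness,
               lo=800, hi=1300, coarse=40, fine=10):
    """Coarse pass over [lo, hi], then a fine pass over X +/- coarse
    around the best coarse result, returning the second focal length."""
    coarse_scores = {f: disc_sharpness(acquire_at(f))
                     for f in range(lo, hi + 1, coarse)}
    x = max(coarse_scores, key=coarse_scores.get)      # first focal length
    fine_scores = {f: disc_sharpness(acquire_at(f))
                   for f in range(x - coarse, x + coarse + 1, fine)}
    return max(fine_scores, key=fine_scores.get)       # second focal length
```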
With the focal length adjustment method provided by this embodiment of the invention, fundus images are collected at different focal lengths and the disc sharpness within them indicates whether the current focal length suits fundus photography. Only an image recognition algorithm is needed to find the best focusing position, with no additional optics or hardware on the fundus camera, which reduces the camera's cost and improves focusing efficiency.
Because the user may turn the head slightly during focusing and the lens may thus fall out of alignment with the pupil, the lens position is also adjusted in the XY plane during focal length adjustment to maintain alignment. Moreover, a fundus image is about to be shot at this stage, and if the subject blinks or closes the eyes now the capture fails, so blink and/or eye-closure detection must run during this process. This embodiment provides a preferred focal length adjustment method comprising the following steps:
S1A, a fundus image is acquired with the current focal length.
S2A, judging from the fundus image whether the subject is blinking and/or closing the eyes. If so, a prompt is given, for example a voice prompt not to blink or close the eyes, and the process returns to step S1A; otherwise step S3A is executed. Blink and eye-closure detection can likewise be realized with a machine vision or neural network algorithm: when the subject blinks or closes the eyes the acquired image is entirely black or very blurred, features distinctive enough that many methods can detect them, so the details are not repeated here.
S3A, identifying whether the fundus image contains a spot formed by the illumination beam reflected from the cornea. Unlike the embodiment above, in which the lens is kept aligned with the pupil while the working distance is adjusted, once the working distance has been reached an aligned lens should leave the corneally reflected illumination beam outside the imaging range: the spot should no longer appear in the fundus image, and a complete spot image is certainly impossible. Even a spot that does appear will be only part of the full spot pattern; one embodiment uses a light source formed from several lamps arranged in a ring, as shown in fig. 12, and if a spot appears in the fundus image during focusing it looks as in fig. 15, where only a part 151 of the spot is present. If the source is a complete ring lamp, the spot appears as a band in the image.
When a light spot is present in the fundus image, step S4A is executed, and otherwise step S5A is executed.
S4A, fine-tuning the lens position, according to at least the spots' positions, so as to remove the spots and keep the lens aligned with the pupil. A spot's size and brightness may differ depending on where it appears; as a preferred embodiment, a vector offset is calculated from the position, size and brightness of the spots in the image. Taking fig. 15 as an example, a coordinate system is established with the image center as the origin (0, 0), and the image radius is R. For each spot 151 an approximately circular area is computed, in this embodiment the smallest circle containing the spot 151; let the circle of the i-th spot be centered at (x_i, y_i) with radius r_i. The direction in which the i-th spot needs to move is then v_i = (x_i, y_i), and the distance m_i it needs to move is given by a formula, reproduced in the original publication only as an image, expressed in terms of k = x_i^2 + y_i^2, the radius r_i and the image radius R. The movement v_i m_i needed for each spot is obtained, the contributions of all spots are summed, and the movement vector 152 required of the lens is Σ v_i m_i.
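A sketch of this correction follows. Because the formula for m_i survives only as an image in the original publication, it is injected here as a caller-supplied function; the enclosing-circle step and the vector sum follow the text. Spot contours are assumed to be given in coordinates whose origin is the image center:

```python
import cv2
import numpy as np

def lens_correction_vector(spot_contours, move_distance):
    """Sum the per-spot corrections v_i * m_i (step S4A).  Each contour is
    an array of points with the image center as origin; move_distance(x, y, r)
    stands in for the patent's unrecovered formula for m_i."""
    total = np.zeros(2)
    for contour in spot_contours:
        (x, y), r = cv2.minEnclosingCircle(contour)  # smallest circle containing the spot
        v = np.array([x, y])                         # direction v_i = (x_i, y_i)
        m = move_distance(x, y, r)                   # distance m_i for this spot
        total += v * m
    return total                                     # lens movement: sum of v_i * m_i
```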
The process returns to step S1A after aligning the lens with the pupil again.
S5A, identifying the optic disc region in the fundus image and judging whether its sharpness meets the set standard. In this embodiment the optic disc is recognized with a MobileNet-YOLOv3 neural network model, whose output disc area contains the disc together with background. The disc's edge is then detected within that area by an edge detection algorithm (Sobel, Laplacian, etc.) to obtain an accurate disc image, and the mean value of that disc image is calculated as the sharpness value.
The sharpness value can then be compared with a threshold, for example, to judge whether the set standard is met. If the disc region's sharpness does not meet the standard, step S6A is executed. If it does, the current focal length is judged suitable for shooting the fundus image; the infrared light can then be switched off and the fundus image shot under white light exposure.
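One concrete reading of this sharpness measure, usable as the disc_sharpness scorer in the earlier focus-search sketch, is below; the (x, y, w, h) box format and the use of mean gradient magnitude as the "mean of the disc image" are assumptions of this sketch:

```python
import cv2
import numpy as np

def disc_sharpness(gray_fundus, disc_box):
    """Crop the detector's disc box, extract edges with one of the
    operators the text names (Sobel), and take the mean response as the
    sharpness value to compare against the set standard."""
    x, y, w, h = disc_box
    roi = gray_fundus[y:y + h, x:x + w].astype(np.float64)
    gx = cv2.Sobel(roi, cv2.CV_64F, 1, 0, ksize=3)
    gy = cv2.Sobel(roi, cv2.CV_64F, 0, 1, ksize=3)
    return float(np.mean(np.hypot(gx, gy)))
```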
S6A, adjusting the focal length and returning to step S1A. Depending on the initial focal length used in step S1A, for example when it is the minimum of the adjustable range, the focal length is increased by a fixed or variable step; otherwise it is decreased.
Once the lens has been aligned with the pupil, the optimal working distance reached and the focal length determined by the schemes of the embodiments above, shooting of the fundus image begins. Shooting a fundus image requires exposure by the illumination assembly (the camera of this embodiment uses a white light source). During exposure, however, the subject can still degrade the shot, through pupil constriction, eyelid occlusion, blinking, light leaking past the face patch assembly and the like, in which case unusable areas appear in the captured fundus image. To raise the shooting success rate, this embodiment provides, for the above step S700, a fundus image shooting method which may be executed by the fundus camera itself or by an electronic device such as a computer or server (as a control method). As shown in fig. 4, the method comprises the following steps:
S1, shooting a plurality of fundus images with the lens state unchanged. Specifically, with the lens fixed by the methods of the embodiments above at the XY position aligned with the pupil and at the working distance on the Z axis, and with the focal length fixed, the illumination assembly exposes and several fundus images are captured while the lens position, working distance and focal length all remain unchanged.
S2, determining the quality of each of the fundus images. Fundus image quality can be analyzed by various means; see, for example, the fundus image detection method of Chinese patent document CN108346149A. This embodiment analyzes image quality with a neural network model, which may perform a classification task that grades image quality (outputting, say, a high-quality or low-quality label) or a regression prediction task that quantifies it (outputting, say, a score of 1 to 10).
For model training, a large number of white-light-exposed fundus images are collected in advance and manually labeled, either as good/not good (suitable for a classification model) or with a quality score (e.g., 1 to 10, suitable for a regression model). A neural network is then trained with these fundus images and their labels or scores as training data; once the model converges it can assess the quality of new fundus images.
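The embodiment does not fix a network architecture or framework. A minimal sketch of such a quality classifier, using MobileNetV2 in PyTorch purely by way of assumption:

```python
import torch
import torch.nn as nn
from torchvision import models

# Binary quality classifier; swapping the head for a single output and an
# MSE loss would give the 1-10 score regression variant described above.
model = models.mobilenet_v2(weights=None)
model.classifier[1] = nn.Linear(model.classifier[1].in_features, 2)

criterion = nn.CrossEntropyLoss()
optimizer = torch.optim.Adam(model.parameters(), lr=1e-4)

def train_step(images, labels):
    """One optimization step on a batch of labeled fundus images."""
    optimizer.zero_grad()
    loss = criterion(model(images), labels)
    loss.backward()
    optimizer.step()
    return loss.item()
```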
S3, it is judged whether the quality of each fundus image meets a set standard. If any one of the fundus images meets the standard, that image may be taken as the photographing result and output. If none of the fundus images reaches the standard, step S4 is executed.
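Steps S3 and S4 reduce to a simple selection rule, sketched below; the synthesize callable stands for the synthesis method of steps S41-S43 described next, and all names are illustrative:

```python
def select_or_synthesize(frames, scores, threshold, synthesize):
    """Output the best frame if it meets the standard; otherwise synthesize (S4)."""
    best = max(range(len(frames)), key=lambda i: scores[i])
    if scores[best] >= threshold:
        return frames[best]        # a single usable image: output directly
    return synthesize(frames)      # fall back to fusing high-quality regions
```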
S4, one fundus image is synthesized from the plurality of fundus images as the photographing result. Although none of the continuously captured fundus images may be of good quality overall, each may contain some regions of good quality; these usable regions are stitched and fused to obtain a complete, high-quality fundus image.
According to the fundus image photographing method provided by this embodiment of the invention, a plurality of fundus images are captured while the lens state is kept unchanged, their quality is determined individually, and when all of the fundus images are judged unusable on their own, they are combined to synthesize one complete fundus image.
Further, an embodiment of the present invention provides a fundus image synthesis method. As shown in fig. 6, the method includes the following steps:
S41, a plurality of fundus images captured with the lens state unchanged are acquired. Each of these fundus images has both poor-quality and good-quality regions. Of course, if some fundus images are of extremely poor quality, such as all-black or all-white images scored 0, these completely unusable images may be removed directly.
S42, high-quality regions are extracted from each of the plurality of fundus images. In this step, brightness may be calculated from the pixel values of a fundus image and compared with brightness thresholds, removing over-bright and over-dark (over-exposed and under-exposed) regions and keeping regions of moderate brightness as the high-quality regions. Alternatively, sharpness may be calculated from the pixel values and compared with a sharpness threshold, removing low-sharpness (exposure-blurred) regions to obtain the high-quality regions. A combination of brightness and sharpness may also be used, as in the sketch following this paragraph.
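A minimal sketch of such a combined brightness-and-sharpness mask; all threshold values are illustrative assumptions, since the embodiment leaves them open:

```python
import cv2
import numpy as np

def quality_mask(image_bgr, lo=30, hi=220, sharp_thr=5.0, win=15):
    """Mask of pixels with moderate brightness and sufficient local sharpness."""
    gray = cv2.cvtColor(image_bgr, cv2.COLOR_BGR2GRAY)
    brightness_ok = (gray > lo) & (gray < hi)      # drop over-/under-exposure
    edge = np.abs(cv2.Laplacian(gray, cv2.CV_64F))
    local_sharp = cv2.blur(edge, (win, win))       # mean edge energy per window
    sharp_ok = local_sharp > sharp_thr             # drop exposure-blurred areas
    return (brightness_ok & sharp_ok).astype(np.uint8)
```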
The regions extracted according to the actual brightness and/or sharpness of a fundus image generally have irregular boundaries, such as the two high-quality regions shown in fig. 16: the region on the left, taken from the upper part of one fundus image, and the region on the right, taken from the lower part of another.
In other alternative embodiments, each fundus image may instead be divided into a fixed grid of cells, and the quality of each cell analyzed separately to extract the high-quality cells; this yields high-quality regions with regular boundaries (see the sketch below).
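A sketch of this grid-based variant, reusing the quality_mask helper above; the grid size and the keep ratio are illustrative:

```python
def grid_quality_cells(image_bgr, rows=8, cols=8, keep=0.6):
    """Keep the grid cells in which most pixels pass the quality mask."""
    mask = quality_mask(image_bgr)
    h, w = mask.shape
    cells = []
    for r in range(rows):
        for c in range(cols):
            cell = mask[r * h // rows:(r + 1) * h // rows,
                        c * w // cols:(c + 1) * w // cols]
            if cell.mean() >= keep:    # enough high-quality pixels in this cell
                cells.append((r, c))
    return cells                       # regular, rectangular high-quality regions
```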
S43, a fundus image is synthesized using the plurality of high-quality regions. Since the individual fundus images may be offset from one another, each image may first be mapped into a common coordinate system according to its offset, and the stitching and fusion processing performed there, so that the synthesis is more accurate.
As a preferred embodiment, as shown in fig. 17, abnormal-region detection is first performed on the plurality of fundus images to extract the high-quality regions. In step S43, feature points (also called key points) are extracted from each fundus image; these may be the center of the optic disc, blood vessel crossings, or other salient positions. Feature point matching is then performed to pair feature points across different fundus images, and the matching information is used to calculate the offset between the images (projection matrix calculation). The high-quality regions are then mapped into one fundus image according to these offsets. Where several high-quality regions overlap, as with the two regions in fig. 16 whose middle portions coincide, the pixel values of the overlapping portion may be determined from the pixel values of the overlapping regions and corresponding weights. This is a weighted-average fusion, which for two regions can be written as q1/(q1+q2)·image1 + q2/(q1+q2)·image2, where q1 and q2 are the weights corresponding to the first and second high-quality regions, and image1 and image2 are the first and second high-quality regions themselves (sketched below).
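The registration and weighted fusion just described might be sketched as follows. The embodiment names the optic disc center and vessel crossings as feature points; generic ORB features and OpenCV's RANSAC homography estimation are stand-ins for them here, and the masks are assumed to come from quality_mask above (enough matches and q1+q2 > 0 are also assumed):

```python
import cv2
import numpy as np

def fuse_pair(img1, img2, mask1, mask2, q1, q2):
    """Register img2 onto img1 and blend the overlap with quality weights."""
    orb = cv2.ORB_create(1000)
    k1, d1 = orb.detectAndCompute(cv2.cvtColor(img1, cv2.COLOR_BGR2GRAY), None)
    k2, d2 = orb.detectAndCompute(cv2.cvtColor(img2, cv2.COLOR_BGR2GRAY), None)
    matches = cv2.BFMatcher(cv2.NORM_HAMMING, crossCheck=True).match(d1, d2)
    src = np.float32([k2[m.trainIdx].pt for m in matches]).reshape(-1, 1, 2)
    dst = np.float32([k1[m.queryIdx].pt for m in matches]).reshape(-1, 1, 2)
    H, _ = cv2.findHomography(src, dst, cv2.RANSAC, 5.0)  # projection matrix

    h, w = img1.shape[:2]
    warped = cv2.warpPerspective(img2, H, (w, h))          # img2 in img1 frame
    wmask = cv2.warpPerspective(mask2, H, (w, h)).astype(bool)
    m1 = mask1.astype(bool)

    out = np.zeros_like(img1)
    out[m1] = img1[m1]                     # covered only by the first region
    out[wmask & ~m1] = warped[wmask & ~m1] # covered only by the second region
    overlap = m1 & wmask                   # weighted average on the overlap
    out[overlap] = (q1 * img1[overlap].astype(np.float64)
                    + q2 * warped[overlap]) / (q1 + q2)
    return out
```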
The values of the above weights are set according to the overall quality of the source fundus images. For example, if the first high-quality region is taken from a first fundus image and the second from a second fundus image, and the quality of the first image (e.g., the score output by the neural network) determined by the quality analysis above is higher than that of the second, then the corresponding weight q1 is set greater than q2.
The situations shown in figs. 16 and 17 are only examples illustrating the principle of this scheme; in practice more fundus images are captured, so that as many high-quality regions as possible are extracted and the synthesized fundus image is guaranteed to be complete.
According to the fundus image synthesis method provided by this embodiment of the invention, when each of the fundus images captured of a subject is defective, high-quality regions are extracted from the individual images by the above scheme and stitched and fused into a complete, high-quality fundus image; this lowers the difficulty of users photographing their own fundus and improves the shooting success rate.
As will be appreciated by one skilled in the art, embodiments of the present invention may be provided as a method, system, or computer program product. Accordingly, the present invention may take the form of an entirely hardware embodiment, an entirely software embodiment or an embodiment combining software and hardware aspects. Furthermore, the present invention may take the form of a computer program product embodied on one or more computer-usable storage media (including, but not limited to, disk storage, CD-ROM, optical storage, and the like) having computer-usable program code embodied therein.
The present invention is described with reference to flowchart illustrations and/or block diagrams of methods, apparatus (systems), and computer program products according to embodiments of the invention. It will be understood that each flow and/or block of the flow diagrams and/or block diagrams, and combinations of flows and/or blocks in the flow diagrams and/or block diagrams, can be implemented by computer program instructions. These computer program instructions may be provided to a processor of a general purpose computer, special purpose computer, embedded processor, or other programmable data processing apparatus to produce a machine, such that the instructions, which execute via the processor of the computer or other programmable data processing apparatus, create means for implementing the functions specified in the flowchart flow or flows and/or block diagram block or blocks.
These computer program instructions may also be stored in a computer-readable memory that can direct a computer or other programmable data processing apparatus to function in a particular manner, such that the instructions stored in the computer-readable memory produce an article of manufacture including instruction means which implement the function specified in the flowchart flow or flows and/or block diagram block or blocks.
These computer program instructions may also be loaded onto a computer or other programmable data processing apparatus to cause a series of operational steps to be performed on the computer or other programmable apparatus to produce a computer implemented process such that the instructions which execute on the computer or other programmable apparatus provide steps for implementing the functions specified in the flowchart flow or flows and/or block diagram block or blocks.
It should be understood that the above embodiments are given only for clarity of illustration and are not intended to limit the invention. Other variations and modifications will be apparent to those skilled in the art in light of the above description; it is neither necessary nor possible to exhaust all embodiments here, and obvious variations or modifications derived therefrom remain within the scope of the invention.

Claims (11)

1. A method of synthesizing a fundus image, comprising:
acquiring a plurality of fundus images shot under the condition that the lens state is not changed;
extracting high-quality regions in the plurality of fundus images, respectively;
synthesizing a fundus image using the plurality of high-quality regions.
2. The method according to claim 1, characterized in that in the step of extracting high-quality regions in the plurality of fundus images, respectively, luminance is calculated from pixel values for each of the fundus images, and a region of moderate luminance is extracted as the high-quality region.
3. The method according to claim 1 or 2, characterized in that in the step of extracting high-quality regions in the plurality of fundus images, respectively, sharpness is calculated from pixel values for the respective fundus images, and a region with higher sharpness is extracted as the high-quality region.
4. A method according to any one of claims 1-3, characterized in that the high-quality regions are regions with irregular boundaries determined on the basis of image brightness and/or sharpness.
5. The method according to any one of claims 1 to 4, wherein synthesizing a fundus image using a plurality of the high-quality regions specifically comprises:
extracting feature points in the plurality of fundus images, respectively;
calculating shift amounts of the plurality of fundus images by matching the feature points;
mapping the plurality of high-quality regions into one fundus image according to the offset amounts.
6. The method according to claim 1 or 5, characterized in that in the step of synthesizing the fundus image using a plurality of the high-quality regions, if there is an overlapping portion between a plurality of the high-quality regions, the pixel value of the overlapping portion is determined using the pixel values of the plurality of the high-quality regions and the corresponding weights.
7. The method of claim 6, further comprising: determining the weights corresponding to the high-quality regions according to the overall quality of the fundus images.
8. A fundus image photographing method, comprising:
capturing a plurality of fundus images while keeping the state of a lens unchanged;
determining the quality of the plurality of fundus images, respectively;
synthesizing a fundus image according to the method of any one of claims 1-7 when none of the plurality of fundus images reaches a set quality standard.
9. The method according to claim 8, wherein in the step of determining the quality of the plurality of fundus images, respectively, the plurality of fundus images are recognized using a neural network model, and a result of recognition on the image quality is output.
10. An electronic device, comprising: at least one processor; and a memory communicatively coupled to the at least one processor; wherein the memory stores instructions executable by the at least one processor to cause the at least one processor to perform the fundus image synthesizing method according to any one of claims 1 to 7 or the fundus image capturing method according to claim 8 or 9.
11. A fundus camera, comprising: an illumination assembly, a lens, and at least one processor; and a memory communicatively coupled to the at least one processor; wherein the memory stores instructions executable by the at least one processor to cause the at least one processor to perform the fundus image synthesizing method according to any one of claims 1 to 7 or the fundus image capturing method according to claim 8 or 9.
CN202011095594.0A 2020-10-14 2020-10-14 Fundus camera and fundus image synthesis method Active CN112220448B (en)

Priority Applications (1)

Application Number Priority Date Filing Date Title
CN202011095594.0A CN112220448B (en) 2020-10-14 2020-10-14 Fundus camera and fundus image synthesis method


Publications (2)

Publication Number Publication Date
CN112220448A true CN112220448A (en) 2021-01-15
CN112220448B CN112220448B (en) 2022-04-22

Family

ID=74112616

Family Applications (1)

Application Number Title Priority Date Filing Date
CN202011095594.0A Active CN112220448B (en) 2020-10-14 2020-10-14 Fundus camera and fundus image synthesis method

Country Status (1)

Country Link
CN (1) CN112220448B (en)


Patent Citations (32)

* Cited by examiner, † Cited by third party
Publication number Priority date Publication date Assignee Title
US5382989A (en) * 1992-09-17 1995-01-17 Atr Auditory And Visual Perception Research Laboratories Apparatus for examining gaze shift in depth direction
US5815242A (en) * 1993-11-09 1998-09-29 Optos Public Limited Company Wide field scanning laser opthalmoscope
JP2000023921A (en) * 1998-07-13 2000-01-25 Nippon Telegr & Teleph Corp <Ntt> Eyeground image synthesizing method, eyeground image synthesizing device and recording medium
CN101309725A (en) * 2005-11-16 2008-11-19 眼科医疗公司 Multiple spot photomedical treatment using a laser indirect ophthalmoscope
US20070292010A1 (en) * 2006-06-20 2007-12-20 Ophthalmic Imaging Systems Incorporated In The State Of California Device, method and system for automatic montage of segmented retinal images
CN101211356A (en) * 2006-12-30 2008-07-02 中国科学院计算技术研究所 Image inquiry method based on marking area
CN101408933A (en) * 2008-05-21 2009-04-15 浙江师范大学 Method for recognizing license plate character based on wide gridding characteristic extraction and BP neural network
CN101308543A (en) * 2008-07-04 2008-11-19 刘显福 Segmenting and recognizing method of image frame of data stream and apparatus thereof
EP2449957A1 (en) * 2010-11-05 2012-05-09 Nidek Co., Ltd. Control method of a fundus examination apparatus
EP2630908A1 (en) * 2012-02-21 2013-08-28 Canon Kabushiki Kaisha Imaging apparatus
CN102961119A (en) * 2012-11-26 2013-03-13 黄立贤 Centrometer
US20160113488A1 (en) * 2013-05-15 2016-04-28 Kabushiki Kaisha Topcon Fundus photographing apparatus
CN103308537A (en) * 2013-06-13 2013-09-18 中北大学 Gradient-energy X-ray imaging image fusion method
CN105496352A (en) * 2014-10-09 2016-04-20 安尼迪斯公司 Method and apparatus for imaging the choroid
US20170007799A1 (en) * 2015-03-16 2017-01-12 Magic Leap, Inc. Augmented and virtual reality display platforms and methods for delivering health treatments to a user
CN104766319A (en) * 2015-04-02 2015-07-08 西安电子科技大学 Method for improving registration precision of images photographed at night
CN104992406A (en) * 2015-06-16 2015-10-21 华南理工大学 Road bridge floor image obtaining method of non-closed traffic
CN105049706A (en) * 2015-06-26 2015-11-11 深圳市金立通信设备有限公司 Image processing method and terminal
CN106249236A (en) * 2016-07-12 2016-12-21 北京航空航天大学 A kind of spaceborne InSAR long-short baselines image associating method for registering
CN108022228A (en) * 2016-10-31 2018-05-11 天津工业大学 Based on the matched colored eye fundus image joining method of SIFT conversion and Otsu
CN111031894A (en) * 2017-08-14 2020-04-17 威里利生命科学有限责任公司 Dynamic illumination during continuous retinal imaging
CN111093470A (en) * 2017-08-29 2020-05-01 威里利生命科学有限责任公司 Focal stack for retinal imaging
CN107736872A (en) * 2017-10-18 2018-02-27 泰山医学院 Method for the human eye body mould and OCT image quality evaluation of eyeground fault imaging
CN111295128A (en) * 2017-10-30 2020-06-16 威里利生命科学有限责任公司 Active visual alignment stimulation in fundus photography
CN109658393A (en) * 2018-12-06 2019-04-19 代黎明 Eye fundus image joining method and system
CN109948719A (en) * 2019-03-26 2019-06-28 天津工业大学 A kind of eye fundus image quality automatic classification method based on the intensive module network structure of residual error
CN110522408A (en) * 2019-07-25 2019-12-03 北京爱诺斯科技有限公司 A kind of eye eyesight based on eccentricity cycles technology judges system and method
CN111080577A (en) * 2019-11-27 2020-04-28 北京至真互联网技术有限公司 Method, system, device and storage medium for evaluating quality of fundus image
CN111028218A (en) * 2019-12-10 2020-04-17 上海志唐健康科技有限公司 Method and device for training fundus image quality judgment model and computer equipment
CN111134615A (en) * 2020-02-25 2020-05-12 上海鹰瞳医疗科技有限公司 Refractive adjustment device of eye ground camera and eye ground camera
CN111614894A (en) * 2020-04-28 2020-09-01 深圳英飞拓智能技术有限公司 Image acquisition method and device and terminal equipment
CN111553436A (en) * 2020-04-30 2020-08-18 上海鹰瞳医疗科技有限公司 Training data generation method, model training method and device

Non-Patent Citations (2)

* Cited by examiner, † Cited by third party
Title
LUO, Y.H.: "Dehaze of cataractous retinal images using an unpaired generative adversarial network", IEEE Journal of Biomedical and Health Informatics *
LI Wenbo et al.: "Investigation and analysis of dry eye before and after fundus disease surgery", Chinese Journal of Ocular Trauma and Occupational Eye Disease *

Cited By (1)

* Cited by examiner, † Cited by third party
Publication number Priority date Publication date Assignee Title
WO2023087754A1 (en) * 2021-11-19 2023-05-25 北京鹰瞳科技发展股份有限公司 Method for repairing optic disc area of fundus image and related product

Also Published As

Publication number Publication date
CN112220448B (en) 2022-04-22

Similar Documents

Publication Publication Date Title
CN112043236B (en) Fundus camera and full-automatic fundus image shooting method
CN112075921B (en) Fundus camera and focal length adjusting method thereof
CN112220447B (en) Fundus camera and fundus image shooting method
JP5651119B2 (en) Eye imaging apparatus and method
US8644562B2 (en) Multimodal ocular biometric system and methods
JP4583527B2 (en) How to determine eye position
CN112075920B (en) Fundus camera and working distance adjusting method thereof
US20120050515A1 (en) Image processing apparatus and image processing method
JP2571318B2 (en) Stereoscopic fundus camera
CN112190227B (en) Fundus camera and method for detecting use state thereof
KR101992016B1 (en) fundus fluorescence image acquisition apparatus with optical source and focus automatic control function, and method thereof
CN112220448B (en) Fundus camera and fundus image synthesis method
CN112043237A (en) Full-automatic portable self-timer fundus camera
CN110996761A (en) Non-mydriatic, non-contact system and method for performing wide-field fundus photographic imaging of an eye
CN112190228B (en) Fundus camera and detection method thereof
CN212281326U (en) Full-automatic portable self-timer fundus camera
KR102263830B1 (en) Fundus image photography apparatus using auto focusing function
EP3695775B1 (en) Smartphone-based handheld optical device and method for capturing non-mydriatic retinal images
JP2021145914A (en) Ophthalmologic apparatus and operation method for the same
US20230000344A1 (en) Ophthalmology inspection device and pupil tracking method
KR102085285B1 (en) System for measuring iris position and facerecognition based on deep-learning image analysis
KR102183197B1 (en) Imaging apparatus for fundus and optical module location automatic adjusting method of the same
JP7283932B2 (en) ophthalmic equipment
WO2024084753A1 (en) Eye observation device
JP2000296109A (en) Optometer

Legal Events

Date Code Title Description
PB01 Publication
SE01 Entry into force of request for substantive examination
TA01 Transfer of patent application right

Effective date of registration: 20210727

Address after: Room 21, 4th Floor, Building 2, National Defense Science and Technology Park, Beijing Institute of Technology, Haidian District, Beijing, 100083

Applicant after: Beijing Yingtong Technology Development Co.,Ltd.

Applicant after: SHANGHAI YINGTONG MEDICAL TECHNOLOGY Co.,Ltd.

Address before: 200030 room 01, 8 building, 1 Yizhou Road, Xuhui District, Shanghai, 180

Applicant before: SHANGHAI YINGTONG MEDICAL TECHNOLOGY Co.,Ltd.

GR01 Patent grant