WO2022077800A1 - Fundus camera and fully automatic fundus image capturing method
- Publication number: WO2022077800A1 (application PCT/CN2021/073875)
- Authority: WIPO (PCT)
- Prior art keywords: image, fundus, lens, pupil, focal length
Classifications
- A61B3/12—Objective types, i.e. instruments for examining the eyes independent of the patients' perceptions or reactions, for looking at the eye fundus, e.g. ophthalmoscopes
- A61B3/0008—Apparatus for testing the eyes; instruments for examining the eyes provided with illuminating means
- A61B3/14—Arrangements specially adapted for eye photography
- A61B3/152—Arrangements specially adapted for eye photography with means for aligning, spacing or blocking spurious reflection; with means for relaxing; for aligning
- A61B2560/0431—Portable apparatus, e.g. comprising a handle or case
Definitions
- The invention relates to the field of ophthalmic instruments, and in particular to a fundus camera and a fully automatic fundus image capturing method.
- The retina is the only tissue in the human body in which capillaries and nerves can be observed directly. By observing the retina, not only eye health problems but also systemic diseases such as diabetic complications and hypertension can be detected.
- A fundus camera is a specialized device used to photograph the retina.
- Existing fundus cameras can capture fundus images automatically; the automatic shooting process mainly involves automatically aligning the main lens with the pupil, automatically adjusting the axial distance (working distance) between the lens and the pupil, and automatically adjusting the focal length.
- To achieve this, the camera is equipped with a main camera, an auxiliary camera, and many auxiliary optical devices.
- The main camera is installed on a platform that can move in the X, Y, and Z directions to capture the fundus; the auxiliary camera, installed near the main camera to capture the face, is mainly used to find the eye and achieve automatic pupil alignment; the auxiliary optics are used for focusing, adjusting the working distance, and so on.
- Such fundus cameras therefore require complex and expensive hardware modules and are complicated to use, which hinders the popularization of fundus cameras.
- The present invention provides a fully automatic fundus image capturing method, comprising:
- capturing a fundus image using the shooting focal length at the working distance.
- Before moving the lens of the fundus camera to align with the pupil, the method further includes: detecting whether the motion component, the illumination component, and the focusing component of the fundus camera are normal.
- Detecting whether the motion, illumination, and focusing components of the fundus camera are normal specifically includes:
- controlling the motion component to adjust the position of the lens, and detecting whether the lens can move to the position of each positioning component;
- controlling the motion component to move the lens to a set position, turning on the illumination component, controlling the focusing component to adjust to a first focal length, and capturing a first image;
- controlling the motion component to adjust the lens to a set depth position, controlling the focusing component to adjust to a second focal length, and capturing a second image;
- determining whether the imaging function is normal according to the image features of the object in the second image.
- Before moving the lens of the fundus camera to align with the pupil, the method further includes: detecting whether the human head fits against the face-contact component of the fundus camera.
- Detecting whether the human head fits against the face-contact component of the fundus camera specifically includes:
- determining, according to the second image, whether the human head fits against the face-contact component.
- using the image to determine the working distance specifically includes:
- using the fundus image to determine the focal length specifically includes:
- the shooting focal length is determined according to the sharpness of the optic disc area.
- using the shooting focal length to capture the fundus image at the working distance specifically includes:
- the multiple fundus images are fused into one fundus image.
- using the shooting focal length to capture the fundus image at the working distance specifically includes:
- a fundus image is synthesized using a plurality of the high-quality regions.
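The claim above describes synthesizing one fundus image from the high-quality regions of several captures. Below is a minimal sketch of such region-wise synthesis, assuming per-pixel quality maps (for example local contrast or absence of glare) are already computed for each capture; this is an illustration under those assumptions, not the patent's actual algorithm:

```python
import numpy as np

def synthesize_from_regions(images, quality_maps):
    """For each pixel, take the value from whichever capture scored
    highest on its per-pixel quality map (hypothetical inputs, e.g.
    local contrast or an absence-of-glare score)."""
    stack = np.stack([np.asarray(im, dtype=float) for im in images])   # (n, H, W)
    scores = np.stack([np.asarray(q, dtype=float) for q in quality_maps])
    best = np.argmax(scores, axis=0)                                   # best capture per pixel
    return np.take_along_axis(stack, best[None], axis=0)[0]
```

A real implementation would also blend across region boundaries to avoid visible seams between source captures.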
- The present invention provides an electronic device, comprising: at least one processor; and a memory communicatively connected to the at least one processor; wherein the memory stores instructions executable by the at least one processor, and the instructions are executed by the at least one processor to cause the at least one processor to perform the above fully automatic fundus image capturing method.
- The present invention provides a fundus camera, comprising: a face-contact component, a motion component, a focusing component, an illumination component, a lens, and at least one processor; and a memory communicatively connected to the at least one processor; wherein the memory stores instructions executable by the at least one processor, and the instructions are executed by the at least one processor to cause the at least one processor to perform the above fully automatic fundus image capturing method.
- The fundus camera can automatically align the main lens with the pupil, automatically adjust the working distance, and automatically adjust the focal length.
- This scheme requires no auxiliary cameras or auxiliary optical devices, which reduces the complexity and difficulty of use of the hardware, allows users to capture fundus images independently, and promotes the popularization of fundus cameras.
- FIG. 1 is a structural diagram of a fundus camera in an embodiment of the present invention;
- FIG. 2 is a schematic diagram of the face-contact component of a fundus camera in an embodiment of the present invention;
- FIG. 3 is a schematic diagram of the lens and the positioning components;
- FIG. 4 is a flowchart of a fully automatic fundus image capturing method according to an embodiment of the present invention;
- FIG. 5 is a schematic diagram of pupil labeling;
- FIG. 6 is a flowchart of a preferred fully automatic fundus image capturing method in an embodiment of the present invention;
- FIG. 7 is a schematic diagram of the pupil being larger than the illumination beam;
- FIG. 8 is a schematic diagram of the pupil being smaller than the illumination beam;
- FIG. 9 is a schematic diagram of capturing a fundus image when the pupil is smaller than the illumination beam;
- FIG. 10 shows the imaging of the illumination beam reflected by the cornea;
- FIG. 11 is a schematic diagram of the distance between the lens barrel and the eyeball;
- FIG. 12 is a schematic diagram of light-spot labeling;
- FIG. 13 shows the imaging of the illumination beam reflected by the cornea when the working distance is reached;
- FIG. 14 is a schematic diagram of optic disc labeling;
- FIG. 15 is a schematic diagram of moving the lens position according to the light spot when capturing a fundus image;
- FIG. 16 is a schematic diagram of two fundus images with unavailable areas;
- FIG. 17 is a schematic diagram of a method of synthesizing a fundus image;
- FIG. 18 is a structural diagram of an illumination lamp;
- FIG. 19 is a schematic diagram of the imaging of reflected illumination light when detecting the camera state;
- FIG. 20 is a schematic diagram of the imaging of the convex portion of the face-contact component when detecting the camera state;
- FIG. 21 is a schematic diagram of the imaging of the convex portion of the face-contact component provided with a target when detecting the camera state;
- FIG. 22 is an image of the area between the eyes collected when detecting the use state of the person being photographed.
- Unless otherwise expressly specified and limited, the terms "installed", "connected", and "coupled" should be understood in a broad sense: a connection may be fixed, detachable, or integral; mechanical or electrical; direct, indirect through an intermediate medium, or an internal connection between two components; and wireless or wired.
- Figure 1 shows a fully automatic portable self-portrait fundus camera.
- The camera includes a face-contact component 01, a motion component, positioning components 03, and a lens barrel 1.
- The lens barrel 1 is provided with an illumination component, a focusing component, a lens (an objective lens facing the eye), an optical lens group, an imaging detector 10, and so on; for the internal structure of the lens barrel 1, refer to Chinese patent document CN111134616A.
- The actual product also includes a housing; the motion component and the lens barrel 1 are located inside the housing.
- The face-contact component 01 is sealingly connected to the front of the housing and includes a face-contact body and a window through hole formed in the body for accommodating the subject's eyes when the face is pressed against it.
- The face-contact component 01 serves as the part that contacts the subject's face, and the lens barrel 1 collects the retinal image of the subject's eye through the through hole of the face-contact component 01.
- The side of the face-contact body facing away from the lens barrel 1 is shaped to conform to the facial contour around the subject's eyes.
- Specifically, the face-contact component 01 is formed in an inwardly concave shape to suit the curvature of the human head, and the through hole is large enough to accommodate at least both eyes when the subject's face rests against the component.
- FIG. 2 shows the inward-facing side of the face-contact component 01.
- The lens can be aimed at this part and capture an image of it.
- A more preferable solution is to place a pattern or simple figure on the raised portion 012 as a target.
- This specific position has various uses, including detecting whether the camera's illumination and focusing components are normal and detecting whether the subject's eyes are correctly positioned against the face-contact component 01, which will be described in detail below.
- The motion component is used to control the movement of the lens barrel 1 in three-dimensional space; taking the coordinate system in FIG. 1 as an example, it can move along the X, Y, and Z axes. It should be noted that when the lens barrel 1 moves to its limit position in the Z direction, its end does not protrude beyond the face-contact component 01.
- Specifically, the motion component includes three track assemblies: the first group of tracks 021 controls movement of the lens barrel 1 along the X axis, the second group of tracks 022 controls movement along the Y axis, and a third group of tracks (not shown in the figure) controls movement along the Z axis.
- The lens barrel 1 and the second group of tracks 022 are arranged on a platform (base); the first group of tracks 021 can drive the base as a whole, and the third group of tracks can drive the base together with the first group of tracks 021, so that the whole moves toward or away from the face-contact component 01.
- The positioning components 03 are used to detect the movement of the lens barrel 1.
- A positioning component 03 may be an electromagnetic sensor that senses, from an electromagnetic induction signal, that the lens barrel 1 has moved to the position of that positioning component 03.
- Three positioning components 03 are provided: two are arranged on either side of the movable base to detect movement of the lens barrel 1 along the X axis, and the third is provided on the base to detect movement along the Y axis; that is, the positioning components 03 detect the movement of the lens barrel 1 in the XY plane.
- The illumination component for imaging, the focusing component, the objective lens, the optical lens group, and the imaging detector are integrated in one lens barrel, which miniaturizes the optical path, reduces the volume of the fundus camera, and improves portability.
- The face-contact component of the fundus camera is provided with a window through hole for accommodating the eyes of the subject. The user can put on the fundus camera unaided and place the eyes at the window through hole; the motion component drives the lens barrel within the range of the window through hole to search for the pupil and adjust the working distance, and then the fundus image is captured. This scheme reduces the complexity and difficulty of use of the fundus camera hardware, enables users to capture fundus images independently, and promotes the popularization of the fundus camera.
- Embodiments of the present invention provide a fully automatic method for capturing a fundus image, which can be performed by the fundus camera itself or, as a control method, by an electronic device such as a computer or server. As shown in FIG. 4, the method includes the following steps:
- Before these steps, the camera state and the user's use state may also be checked.
- Specifically, the method may further include:
- S100: detecting whether the motion, illumination, and focusing components of the fundus camera are normal. This step is optional and can be performed when the fundus camera is powered on. If a component abnormality is detected, the subsequent shooting operation is terminated and a corresponding abnormality prompt is given.
- S200: detecting whether the human head fits against the face-contact component of the fundus camera. This step is also optional. If it is detected that the human head does not fit against the face-contact component, the user can be prompted through a voice module and guided to wear the fundus camera correctly.
- An embodiment of the present invention provides a fundus camera detection method, which can be performed by the fundus camera itself as a self-check, or by an electronic device such as a computer or server as a product test. The method includes the following steps:
- Step S1: control the motion component to adjust the position of the lens, and detect whether the lens can move to the position of each positioning component.
- This method is suitably executed when the fundus camera has just started.
- First, the lens (in the above embodiment the lens and the lens barrel are provided integrally, so moving the lens barrel moves the lens) is moved to an initial position.
- The motion component then adjusts the position of the lens, and it is detected whether the lens can move to the positions of the three positioning components. If it can, the motion component is considered to function normally and step S2 is performed; otherwise, step S6 is performed.
- Step S1 may be referred to as the XY-axis motion detection step of the motion component.
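The XY-axis self-check of step S1 can be sketched as follows. `move_to` and `sensor_triggered` stand in for hypothetical motor and positioning-sensor interfaces that are not specified in the text:

```python
def check_xy_motion(move_to, sensor_triggered, sensor_positions):
    """Drive the lens to each positioning sensor and confirm the sensor
    fires.  `move_to(pos)` and `sensor_triggered(idx)` are hypothetical
    hardware callbacks; returns True only if every sensor confirms the
    lens arrived at its position."""
    for idx, pos in enumerate(sensor_positions):
        move_to(pos)
        if not sensor_triggered(idx):
            return False  # motion component abnormal -> go to step S6
    return True           # motion component normal -> proceed to step S2
```

In use, a failed check would terminate the shooting flow and trigger the fault prompt of step S6.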
- This step detects whether the focusing component and the illumination component function normally. In theory, the lens need not be limited to a particular position, so there are various options for the set position described in this step.
- However, the external environment is uncertain; it may, for example, be relatively bright. If the external environment is bright when the first image is captured in this step, the content of the image may be disturbed.
- Therefore, in this step the lens is preferably moved toward a specific part of the face-contact component (such as the above-mentioned protrusion) so that as little of the external environment as possible is captured, and the face-contact component occupies a larger proportion of the image than the external environment.
- FIG. 18 shows the structure of an illumination lamp in the lens barrel: four lamp beads are arranged on a ring structure. These four lamp beads are turned on and the first focal length is used for imaging; an image as shown in FIG. 19 is expected to be obtained.
- The set first focal length causes the illumination component to be imaged but not the face-contact component. Therefore, only the illumination component should be present in the first image, without objects such as the protrusion of the face-contact component, which improves the accuracy of image recognition in the subsequent steps.
- S3: determine whether the focusing component and the illumination component are normal according to the image features of the illumination component in the first image.
- Using the set focal length should yield the image shown in FIG. 19, which has distinct features depending on the actual shape of the illumination component.
- Specifically, there should be four independent, clear dots in the first image, which are the imaging results of the four lamp beads. If the focal length at this time is not the first focal length, the dots in the image become larger and blurred, or smaller; if the illumination component is not turned on, no shapes appear in the image.
- By recognizing the first image with a machine vision algorithm or a neural network, it can be identified whether the expected features are present. If the focusing component and the illumination component are determined to be normal, step S4 is performed; otherwise, step S6 is performed. Steps S2-S3 may be referred to as the focusing and illumination component detection steps.
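The "four independent, clear dots" check of step S3 can be sketched with a simple threshold and connected-component count. This is an illustrative stand-in for the machine vision or neural network recognition mentioned above; the brightness threshold is an assumed value:

```python
import numpy as np
from collections import deque

def count_bright_blobs(gray, thresh=200):
    """Count 4-connected bright regions via breadth-first flood fill.
    A healthy first image of the four-bead illuminator should yield
    exactly four compact blobs."""
    mask = np.asarray(gray) > thresh
    seen = np.zeros_like(mask, dtype=bool)
    h, w = mask.shape
    blobs = 0
    for y in range(h):
        for x in range(w):
            if mask[y, x] and not seen[y, x]:
                blobs += 1
                q = deque([(y, x)])
                seen[y, x] = True
                while q:
                    cy, cx = q.popleft()
                    for ny, nx in ((cy - 1, cx), (cy + 1, cx),
                                   (cy, cx - 1), (cy, cx + 1)):
                        if 0 <= ny < h and 0 <= nx < w and mask[ny, nx] and not seen[ny, nx]:
                            seen[ny, nx] = True
                            q.append((ny, nx))
    return blobs

def illumination_ok(gray):
    """Focusing/illumination pass if exactly four lamp-bead dots appear."""
    return count_bright_blobs(gray) == 4
```

A production check would also validate the size and circularity of each blob, not only the count.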
- Step S4: control the motion component to adjust the lens to a set depth position, control the focusing component to adjust to a second focal length, and capture a second image.
- In this step, a known object needs to be imaged; in a preferred embodiment, the raised portion of the face-contact component serves as the known object.
- In step S2, the lens was first aligned in the XY plane with the raised portion of the face-contact component.
- In this embodiment, step S2 has already aligned with this part, so no adjustment is needed in this step; in other embodiments, if step S2 did not align with this part, it is adjusted in this step.
- This step requires adjusting the depth, that is, the position of the lens on the Z axis, which can be understood as adjusting the shooting distance to the known object, and then setting the focal length.
- The focal length at this time differs from that used in step S2 and should suit the current (depth) position of the lens; an image as shown in FIG. 20 is expected to be obtained.
- The set second focal length causes the face-contact component to be imaged but not the illumination component.
- Thus, the object to be photographed, such as the raised portion of the face-contact component, appears in the second image while the image of the illumination component does not, which improves the accuracy of image recognition in the subsequent steps.
- S5: determine whether the imaging function is normal according to the image features of the object in the second image.
- The second image is captured when the XY-axis movement of the motion component, the illumination component, and the focusing component are all normal. The purpose of this step is to detect whether the motion component moves normally on the Z axis: if the motion component can adjust the lens to the set depth position, the second image should show a clear object, such as the raised portion of the face-contact component shown in FIG. 20.
- Otherwise, step S6 is performed. Steps S4-S5 may be referred to as the Z-axis motion detection steps of the motion component.
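Whether the second image shows a clear object can be approximated with a simple sharpness measure such as the variance of a Laplacian response. This is an illustrative stand-in for the recognition step; the threshold is an assumption that would be tuned on real captures:

```python
import numpy as np

def laplacian_variance(gray):
    """Variance of a 4-neighbour Laplacian response; higher values
    indicate sharper edges in the image."""
    g = np.asarray(gray, dtype=float)
    lap = (-4.0 * g[1:-1, 1:-1]
           + g[:-2, 1:-1] + g[2:, 1:-1]
           + g[1:-1, :-2] + g[1:-1, 2:])
    return float(lap.var())

def object_is_clear(gray, min_sharpness=100.0):
    """Judge whether the second image shows a sharply imaged object
    (hypothetical threshold); a blurred or empty frame scores low."""
    return laplacian_variance(gray) >= min_sharpness
```

If the check fails, the Z-axis motion is deemed abnormal and the flow proceeds to the fault prompt of step S6.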
- S6: a voice module or an information display module can be provided in the fundus camera to announce or display the corresponding fault information to the user.
- In this embodiment, the positioning components verify whether the motion component can adjust the position of the lens normally; after the motion component is confirmed normal, the focal length is adjusted so that the illumination component is imaged, and the collected image is judged to determine whether the focusing and illumination components are normal; finally, the depth of the lens is adjusted by the motion component and the focal length is adjusted so that a known object is imaged, and the features of the object in the image are judged to verify whether the motion component can adjust the lens depth normally. In this way, it is automatically determined whether the important parts of the fundus camera work properly.
- This scheme allows the equipment to self-check its working status in a remote, unattended environment, thereby improving the convenience of taking fundus photographs and promoting the popularization of fundus cameras.
- In a preferred embodiment, a target is provided on the raised portion of the face-contact component; that is, the above-mentioned object to be photographed is a target on a set part of the face-contact component.
- The specific content of the target is not limited; one or more well-defined patterns or shapes are possible.
- The obtained second image is shown in FIG. 21, which includes a circular target 81.
- Step S5 specifically includes:
- An embodiment of the present invention provides a method for detecting the use state of a fundus camera, used to detect whether the user is wearing the fundus camera of the above embodiment correctly.
- The method can be performed by the fundus camera itself as a self-check, or by an electronic device such as a computer or server.
- The method is suitably executed after the important components of the camera have been confirmed normal by the above detection method, and includes the following steps:
- S1: the lens collects an image of the external environment through the through hole 011 shown in FIG. 2; it should be ensured that the face-contact component does not block the lens (the face-contact component is not within the imaging range).
- In this step, the illumination component must be kept off, that is, no beam is directed outward through the lens.
- The focal length used for this capture can be a fixed value, with the imaging plane set roughly on the surface of the human body.
- Step S2: judge whether the brightness of the first image reaches a set standard. If the subject's eyes are in close contact with the face-contact component 01 and there is no large gap around them, the captured first image should be very dark. The brightness of the first image is therefore judged first; if it reaches the set standard, step S3 is performed; otherwise, step S6 is performed.
- Specifically, the brightness value can be calculated from the pixel values of the image and compared with a threshold; alternatively, a neural network can be trained in advance with images of different brightness so that it can classify or predict the brightness of an image.
- The neural network is then used to recognize the first image and output a recognition result regarding its brightness.
- Alternatively, the first image is first converted into a grayscale image, and the brightness of the grayscale image is then identified.
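The threshold-based variant of the brightness judgment in step S2 can be sketched as follows. The grayscale weights are the common ITU-R BT.601 luma coefficients; the darkness threshold is an assumed value:

```python
import numpy as np

def image_is_dark(image, max_mean=30.0):
    """Convert to grayscale (if the image is RGB) and compare the mean
    brightness to a threshold.  The threshold is illustrative; a trained
    classifier could replace this rule, as the text notes."""
    gray = np.asarray(image, dtype=float)
    if gray.ndim == 3:
        # BT.601 luma weights for the R, G, B channels.
        gray = gray @ np.array([0.299, 0.587, 0.114])
    return float(gray.mean()) <= max_mean
```

A dark first image suggests the face is sealed against the component, so the flow continues to step S3.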
- S3: the illumination component is turned on, and a second image captured by the lens through the window of the face-contact component is acquired.
- At this time, the state of the lens and the subject has not changed; only the illumination light source is turned on, and the lens illuminates outward.
- The illumination beam irradiates the subject's eyes or skin and is reflected.
- In this embodiment, the lens is positioned to align with the center of the window of the face-contact component, and the light source used is infrared. If the human head is against the face-contact component, the lens is aimed at the area between the eyes, and an image as shown in FIG. 22 can be collected.
- S4: determine whether the human head fits against the face-contact component. If the subject's head is against the component, the skin reflects the illumination beam, so an obvious light spot appears in the image as shown in FIG. 22, with the characteristics of human skin around the spot. By judging whether the image has a brighter center and darker edges, it can be determined whether the human head fits against the face-contact component.
- Steps S1-S2 the camera is placed in a dark room, or the surface mount component is covered by other objects, the brightness of the first image will also be determined to meet the set standard, which requires further execution Steps S3-S4 are for judgment. If there is no object attached to the surface mount component, there will be no light spots in the second image collected; if other objects cover the surface mount component, light spots will appear in the second image, but due to different materials and surface shapes, for the illumination beam The reflection situation is different from that of the human body, so whether it is a human body can be judged by the characteristics of the light spot.
- the lens may also be aimed at other positions when the first and second images are collected, such as at the eyeball, and in step S4, the eyeball feature may be identified in the image to determine whether it is a human body.
- step S4 may first determine whether the brightness of the second image reaches a set standard. Similar to identifying the brightness of the first image, the second image may be converted into a grayscale image and its brightness value calculated, or a neural network may be used for recognition. If there is a gap between the surface-mounted component and the human body and light leakage occurs, the brightness of the second image will, due to the influence of ambient light, differ from its brightness when only the camera's own light source illuminates the scene. After light leakage is excluded, it is then judged whether the features in the second image conform to the features of human skin.
- Step S5 is performed when it is determined that the human head fits the face-sticking component, otherwise, step S6 is performed.
- S5 start capturing the fundus image. Specifically, it is necessary to automatically find the pupil, adjust the working distance, and adjust the focal length to set the imaging plane on the fundus, and finally capture the fundus image.
- a voice module may be set in the fundus camera to prompt the user how to properly wear the fundus camera, etc., and then return to step S1 to make a new judgment.
- an image is collected when the lighting assembly is turned off, and whether the face-sticking assembly is well covered by an object can be preliminarily determined from the brightness of that image; then an image is collected when the lighting assembly is turned on, and whether the cover is a human body is further determined from the image features, thereby automatically determining whether the subject is wearing the fundus camera correctly and using it in a suitable environment.
- This solution can automatically trigger the fundus camera to take fundus photos. No manual intervention is required to trigger shooting, and no professional manipulation is required, thereby improving the convenience of taking fundus photos and promoting the popularization of fundus cameras.
- an embodiment of the present invention provides a method for automatically aligning a lens of a fundus camera, which can be performed by the fundus camera itself, or performed by an electronic device such as a computer or a server (as a control method). It includes the following steps:
- S1 identify the image collected by the lens of the fundus camera, and determine whether there is a pupil in it. Specifically, after the user wears the above-mentioned fundus camera, the system continuously (for example, frame by frame) captures images of the pupil. If the pupil can be identified in the image, the pupil is already within the imaging range; in this case, fine-tuning is performed to align the lens perfectly with the pupil for shooting. If the pupil cannot be identified in the image, there is a large deviation between the positions of the lens and the pupil; the reason may be that the initial position of the lens is unsuitable, or that the user's wearing method is not standard.
- a large number of photos of pupils are collected. These photos are images collected by different people, at different directions and distances from the eyepiece objective of the above-mentioned fundus camera, and at different times. Then, the pupils in each image are marked to obtain training data for training the neural network. Use these labeled data to train a neural network model (such as the YOLO network). After training, the recognition result of the neural network model includes a detection box, which is used to characterize the position and size of the pupil in the image.
- a square frame 51 is used to mark the pupil in the training data, and the recognition result of the trained neural network model will also be a square detection frame.
- a circular frame may also be used for marking, or other similar marking methods are feasible.
- in this step it is only necessary to identify whether there is a pupil in the image; if there is no pupil in the image, step S2 is performed, otherwise, step S3 is performed.
- the user is prompted to adjust the wearing state; if a pupil is found, it is further judged whether the user's eyes are so far from the lens as to exceed the range the motion component can cover, for example by judging whether the required lens moving distance exceeds a moving threshold. When the moving distance exceeds the moving threshold, the user is prompted to move the head slightly within the face-sticking component to adapt to the lens's range of movement. Searching then continues, and step S3 is executed once the moving distance no longer exceeds the moving threshold.
- S3 determine whether the pupil in the image meets the set condition.
- various set conditions can be used, such as conditions related to size, conditions related to shape, and the like.
- the set condition includes a size threshold, and it is determined whether the pupil size in the image is larger than the size threshold.
- if the pupil size in the image is larger than the size threshold, it is determined that there is a pupil meeting the set condition; otherwise, the user is prompted to close their eyes and rest for a while so that the pupils dilate before shooting. When taking fundus images, it is generally necessary to photograph both eyes in sequence, and after the first eye is photographed the pupil narrows; therefore the system also lets the user close their eyes and rest to restore the pupil size.
- the set condition includes a morphological feature, and it is determined whether the pupil shape in the image conforms to the set morphological feature.
- when the pupil shape conforms to the set morphological feature, it is determined that there is a pupil meeting the set condition; otherwise, the user is prompted to open their eyes, try not to blink, and so on.
- the set morphological feature is circular or approximately circular. If the detected pupil does not conform to the preset morphological feature, for example if it is flattened, this is generally caused by the user's eyes not being fully opened.
- the above-mentioned neural network model needs to be used for pupil detection, and the recognition result of the neural network model also includes confidence information for the pupil, that is, a probability value indicating how certain the model is that a pupil exists in the image.
- the setting conditions include a confidence threshold, and it is determined whether the confidence information obtained by the neural network model is greater than the confidence threshold. When the confidence information is greater than the confidence threshold, it is determined that there are pupils that meet the set conditions; otherwise, the user is prompted to open their eyes and remove obstacles such as hair.
- when the confidence of the pupil obtained by the neural network model is relatively low, this indicates that although there is a pupil in the image, it may be disturbed by other objects; to improve the shooting quality, the user is prompted to make adjustments.
- Step S4 is performed when the pupil in the image meets the set condition, otherwise, it waits for the user to adjust his state and continues to judge until the set condition is met.
- Step S4 move the lens of the fundus camera to align the pupil according to the position of the pupil in the image.
- the lens barrel is moved by the above-mentioned motion components, and the direction and distance of the movement depend on the deviation of the pupil from the lens in the image.
- take the center point of the acquired image as the center point of the lens, and identify the center point of the pupil in the image.
- the center point of the detection frame may be regarded as the center point of the pupil.
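The alignment computation described in steps S4 and above can be sketched as follows. This is a minimal illustration: the detection-box format and the pixel-to-distance calibration of the motion component are assumptions, not given in the text.

```python
def alignment_offset(box, image_size):
    """Offset (dx, dy) in pixels from the image center to the pupil center.

    `box` is a detection box (x, y, w, h) as a YOLO-style detector might
    produce; its center is taken as the pupil center.  `image_size` is
    (width, height), and the image center stands in for the lens center.
    The motion component would translate the lens proportionally to this
    offset (the pixel-to-millimetre scale is device-specific).
    """
    x, y, w, h = box
    pupil_cx, pupil_cy = x + w / 2, y + h / 2
    img_cx, img_cy = image_size[0] / 2, image_size[1] / 2
    return pupil_cx - img_cx, pupil_cy - img_cy

dx, dy = alignment_offset((300, 180, 80, 80), (640, 480))
print(dx, dy)  # 20.0 -20.0
```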
- a fundus image capturing method is provided.
- from the pupil state in the image, it can be automatically determined whether the subject's current pupil state is suitable for capturing the fundus image. If it is not, the camera sends out corresponding prompts so that the subject can adjust their own state.
- when the state is suitable for shooting fundus images, the position of the pupil is recognized to perform automatic alignment, and then the shooting is performed, thereby avoiding unusable fundus images.
- No professional participation is required, enabling users to shoot autonomously.
- when the size of the pupil is smaller than the size of the annular illumination beam, aligning the pupil with the eyepiece will cause no light to enter the pupil, so the captured image is black.
- an embodiment of the present invention provides a preferred method for capturing a fundus image, and the method includes the following steps:
- FIG. 7 shows a case where the size of the pupil 72 is larger than the size of the annular light beam 71 , and step S52 is executed in this case.
- Fig. 8 shows the case where the size of the annular illumination beam is larger than the pupil size.
- the illumination light source is a complete annular illumination lamp or a light source formed by a plurality of illumination lamps arranged in a ring shape.
- the inner diameter of the annular beam 71 is larger than the diameter of the pupil 72.
- Step S53 is executed when the pupil size is smaller than the annular illumination beam size, that is, the situation as shown in FIG. 8 is met.
- step S54 is executed.
- S531 determine the edge position of the pupil. Specifically, a machine vision algorithm or the above-mentioned neural network model can be used to obtain the left edge point 721 and the right edge point 722 of the pupil 72 in FIG. 9 .
- S532 Determine the moving distance according to the edge position of the pupil. Specifically, the moving distance of the moving component can be calculated according to the positional relationship between the position O of the current lens center (image center position) and the left edge point 721 and the right edge point 722 .
- S533 respectively move the lens in multiple directions according to the determined moving distance, and the determined moving distance makes the edge of the annular illumination beam coincide with the edge of the pupil.
- the outer edge of the annular beam 71 just coincides with the edge of the pupil 72, so that the part of the annular beam 71 entering the fundus can be located at the edge of the fundus, reducing the impact on the imaging of the central area of the fundus.
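One plausible geometric reading of steps S531-S533 is sketched below, assuming the annular beam is centred on the lens axis and its outer radius is known from the optics; the function and parameter names are illustrative, not from the patent.

```python
def offset_positions(left_edge, right_edge, beam_outer_radius):
    """Horizontal lens-centre positions at which the outer edge of the
    annular beam coincides with one edge of the pupil.

    `left_edge` / `right_edge` are the x-coordinates of the pupil edge
    points (points 721 and 722 in Fig. 9) in the lens coordinate system;
    the same construction applies vertically.  With the lens centre at
    left_edge + R, the beam's leftmost outer point falls exactly on the
    pupil's left edge, and symmetrically on the right.
    """
    return (left_edge + beam_outer_radius,
            right_edge - beam_outer_radius)

pos_left, pos_right = offset_positions(100, 160, 50)
print(pos_left, pos_right)  # 150 110
```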
- step S54 fuse the multiple fundus images into one fundus image.
- the available areas will be extracted from each fundus image, and a complete fundus image will be stitched and fused using these fundus images.
- step S54 specifically includes:
- S543a splicing a plurality of effective areas according to the displacement deviation to obtain a spliced fundus image. Further, an image fusion algorithm is used to perform fusion processing at the splicing of each of the effective regions.
- step S54 specifically includes:
- a method for capturing a fundus image is provided.
- when the lens of the fundus camera is aligned with the pupil, the size of the pupil in the image and the size of the annular light beam emitted by the camera itself are first compared. If the pupil size is too small, the illumination beam cannot enter the pupil normally.
- the lens is moved to deviate from the current alignment position so that the annular illumination beam is partially illuminated in the pupil, and fundus images are acquired at multiple offset positions, and finally a fundus image is fused according to multiple fundus images.
- With this scheme, the fundus image can be taken even when the subject's pupil is small, without professionals participating in the shooting process, which reduces the requirements on the subject's pupil state and improves shooting efficiency.
- this embodiment provides a method for adjusting the working distance of a fundus camera, which can be performed by the fundus camera itself, or performed by an electronic device such as a computer or a server (as a control method).
- the method includes the following steps:
- control the lens to approach the eyeball and capture an image; the image is the imaging of the illumination beam reflected by the cornea.
- This step is performed according to the solution of the above-mentioned embodiment and is performed when the lens is aligned with the pupil in the XY plane.
- controlling the lens to approach the eyeball means controlling the lens to move toward the eyeball on the Z axis through the motion component.
- the light source of the illumination component passes through the optical lens, and the light reflected from the cornea of the eye is imaged on the CMOS sensor, giving the result shown in Figure 10.
- the light source consists of four light beads distributed in a cross shape on the four sides of the lighting assembly, so the imaging of this light source also shows four light spots.
- the illumination light source may be in the shape as shown in FIG. 8 , and the captured image will show light spots of corresponding shape or arrangement.
- the corneal reflected light imaging will change.
- the position, size, and sharpness of the imaged spots are related to the distance between the eyepiece objective and the cornea: the closer the distance, the greater the angle between the incident light and the corneal normal, the stronger the scattering on reflection, and hence the larger, more divergent, and dimmer the spots.
- images of a large number of light spots are collected, and these images are images collected by different people at different directions and distances from the eyepiece objective of the above-mentioned fundus camera and at different times. Then, the light spots in each image are marked to obtain training data for training the neural network. Use these labeled data to train a neural network model (such as the YOLO network).
- the recognition result of the neural network model includes a detection frame, which is used to characterize the position and size of the light spot in the image.
- a square frame 121 is used to mark the light spot in the training data, and the recognition result of the trained neural network model will also be a square detection frame.
- a circular frame may also be used for marking, or other similar marking methods are feasible.
- the set feature can be a feature about size, for example, when the spot size in the image is smaller than a set size, it is determined to meet the set feature; it can also be the disappearance of the spot, for example, when the machine vision algorithm or neural network cannot detect a light spot in the image, the light spot is judged to conform to the set characteristics.
- if the light spot in the image conforms to the set characteristics, step S3 is performed; otherwise, the process returns to step S1 to continue moving the lens and capturing images.
- the imaging of the illumination beam reflected by the cornea is collected and identified, and the distance between the lens and the eyeball is judged and adjusted according to the light spot features in the image. It is not necessary to add any additional optics or hardware to the fundus camera; only an appropriate illumination beam is needed to accurately locate the working distance, thereby reducing the cost of the fundus camera and improving the efficiency of working distance adjustment.
- This embodiment provides a preferred method for adjusting the working distance, and the method includes the following steps:
- in step S2A, the neural network is called to detect the light spot in the image, and it is judged whether there is a light spot in the image.
- if there is no light spot in the image, step S6A is performed, otherwise, step S3A is performed.
- in step S3A, the center point of the light spot in the image is identified, and it is determined whether the center point of the light spot coincides with the center point of the image.
- the center of the detection frame obtained by the neural network is regarded as the center of the light spot.
- the center point of the image is regarded as the center of the lens. If the center point of the image coincides with the center of the light spot, the lens is aligned with the pupil, and step S5A is performed; otherwise, the lens position is fine-tuned first.
- Detection-adjustment-re-detection is a feedback process.
- a smooth adjustment algorithm is used here:
- Adjustment(i-1) represents the displacement of the last lens adjustment
- Shift(i) represents the offset (the deviation between the pupil center and the image center)
- Adjustment(i) represents the displacement that needs to be adjusted this time
- a is a coefficient between 0 and 1. Because the position of the lens is a two-dimensional coordinate in the XY plane, both Adjustment and Shift are two-dimensional vectors.
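The excerpt names Shift(i), Adjustment(i-1) and the coefficient a, but not the exact combination rule. The sketch below assumes a simple exponential-smoothing form as one plausible reading; treat the formula as an illustration, not the patent's definitive algorithm.

```python
def smooth_adjustment(shift, prev_adjustment, a=0.5):
    """One smoothed 2-D lens-adjustment step.

    shift            -- Shift(i): current deviation between the detected
                        center and the image center (2-D vector).
    prev_adjustment  -- Adjustment(i-1): displacement applied last time.
    a                -- smoothing coefficient in (0, 1); larger a gives
                        more inertia and damps oscillation in the
                        detection-adjustment-re-detection feedback loop.

    Assumed form: Adjustment(i) = (1 - a) * Shift(i) + a * Adjustment(i-1).
    """
    return tuple((1 - a) * s + a * p for s, p in zip(shift, prev_adjustment))

step = smooth_adjustment((8.0, -4.0), (2.0, 2.0), a=0.25)
print(step)  # (6.5, -2.5)
```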
- after the adjustment, step S5A is performed.
- step S5A control the lens to move closer to the eyeball to reduce the distance. Then, returning to step S1A, with the repeated execution of making the lens gradually approach the eyeball, the size of the light spot in the corresponding image changes from large to small.
- each frame of image can be collected and the above judgment and adjustment can be made accordingly until the image of the disappearance of the light spot is detected.
- S6A control the lens to continue to move the preset distance in the direction close to the eyeball to reach the working distance.
- while performing the above adjustment process, the system will also detect whether the light spot in the image is complete.
- if the light spot is incomplete, for example only half of it is visible, the user is blinking or their eyes are not open; at this time, the system prompts the user by voice to open their eyes, try not to blink, and so on.
- the position of the lens is also fine-tuned according to the position of the light spot in the image, thereby keeping the lens aligned with the pupil while adjusting the working distance. It is not necessary to add any additional optics or hardware to the fundus camera; only an appropriate illumination beam is needed to accurately position the working distance and keep the lens aligned with the pupil, thereby reducing the cost of the fundus camera and improving the efficiency of fundus image capture.
- this embodiment provides a method for adjusting the focal length of the fundus camera, which can be performed by the fundus camera itself, or performed by an electronic device such as a computer or a server (as a control method).
- the method includes the following steps:
- S1 adjust the focus and collect fundus images. This step is performed when the lens of the fundus camera is aligned with the pupil and reaches the working distance, and the position of the lens and the eyeball at this time is shown in Figure 13. It should be noted that, in the process of adjusting the lens position and working distance in the above embodiment, when capturing images, it is of course also necessary to set a fixed focal length. If the subject's refraction is normal, when the working distance is adjusted in place, the fundus image can be taken directly. However, in practical application, it is necessary to consider the actual diopter of the person being photographed, so as to set a suitable focal length.
- infrared light is used for imaging.
- the light source used for capturing the image is still infrared light.
- the image collected at this time can basically reflect the characteristics of the fundus, and at least the optic disc can be displayed in the image, so the collected image is called the fundus image.
- the optic disc in each image is then annotated to obtain training data for training the neural network.
- Use these labeled data to train a neural network model (such as the YOLO network).
- the recognition result of the neural network model includes a detection frame, which is used to represent the position of the optic disc in the fundus image.
- a square frame 141 is used to mark the optic disc in the training data, and the recognition result of the trained neural network model will also be a square detection frame.
- a circular frame may also be used for marking, or other similar marking methods are feasible.
- the focal length can be continuously changed by means of gradient ascent and the corresponding fundus images can be collected to determine whether the clarity of the optic disc has reached the preset standard.
- when the clarity reaches the preset standard, the best focal length has been found and the search stops. Alternatively, all available focal lengths within the adjustable focal length range can be used, the corresponding fundus images collected, the fundus image with the highest optic disc definition determined from all of them, and the focal length at which that image was collected taken as the best focal length.
- the traversal method is used to first adjust the focal length with a first set step size of 40 within the set focal length range of 800-1300, and collect the first group of fundus images: the fundus image at focal length 800, the fundus image at focal length 840, the fundus image at focal length 880 ... up to the fundus image at focal length 1300.
- the optic disc area is identified in these fundus images respectively, and the sharpness of each fundus image is determined respectively.
- the mean value of the pixel values in the optic disc area is calculated as the sharpness.
- a fundus image with the highest definition can be determined from the first group of fundus images, and at this time, the focal length X (first focal length) used when collecting the fundus image can be taken as the shooting focal length.
- to refine the focal length, another traversal is performed near the above-mentioned focal length X, using a second set step size smaller than the first set step size, for example 10. This yields the second group of fundus images: the fundus images at focal lengths X+10, X+20, X-10, X-20, and so on. The optic disc area is then identified in each of these fundus images, the clarity of each is determined, and the focal length used to collect the clearest image (the second focal length) is taken as the shooting focal length.
- the second traversal range can take the first focal length X as its midpoint and extend one first set step size upward and downward, that is, the range X±40.
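The coarse-to-fine traversal above can be sketched as follows. The `sharpness` callable stands in for capturing an image at a given focal length and scoring its optic disc; the toy quadratic curve is purely illustrative.

```python
def coarse_to_fine_focus(sharpness, lo=800, hi=1300, coarse=40, fine=10):
    """Two-stage focal-length search.

    First traverse [lo, hi] with the coarse step (including the
    endpoint) and find the best coarse focal length X; then traverse
    X +/- coarse with the fine step and return the best focal length.
    `sharpness(f)` returns the optic-disc sharpness at focal length f.
    """
    coarse_grid = list(range(lo, hi + 1, coarse))
    if coarse_grid[-1] != hi:
        coarse_grid.append(hi)          # make sure 1300 itself is tried
    x = max(coarse_grid, key=sharpness)  # first focal length X
    fine_grid = range(max(lo, x - coarse), min(hi, x + coarse) + 1, fine)
    return max(fine_grid, key=sharpness)  # second focal length

# Toy sharpness curve peaking near focal length 1013 (hypothetical).
best = coarse_to_fine_focus(lambda f: -(f - 1013) ** 2)
print(best)  # 1010
```

The fine pass only re-scans one coarse step around X, so the total number of captured images stays small compared with a fine traversal of the whole 800-1300 range.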
- fundus images are collected at different focal lengths, and whether the current focal length is suitable for capturing fundus images is judged by the clarity of the optic disc in the fundus images.
- no additional hardware is required; only an image recognition algorithm is needed to find the best focus position, which can reduce the cost of the fundus camera and improve the efficiency of focus adjustment.
- This embodiment provides a preferred focal length adjustment method, which includes the following steps:
- S1A use the current focal length to acquire a fundus image.
- in step S2A, it is judged according to the fundus image whether the subject blinks and/or closes their eyes.
- if so, a prompt is given, for example a voice prompt telling the user not to blink or close their eyes, and the process returns to step S1A; otherwise, step S3A is performed.
- Blinking and eye-closing detection can be realized by a machine vision algorithm or a neural network algorithm. When the subject blinks or closes their eyes, the collected image will be completely black or very blurry; the features are quite obvious and various methods can be used for detection, so the details are not repeated here.
- S3A identifying whether there is a light spot formed by the illumination beam reflected by the cornea in the fundus image.
- when the working distance is correct, the illumination beam reflected by the cornea should not fall within the imaging range, and the above-mentioned light spot should not appear in the fundus image; in particular, a complete spot cannot be imaged. Even if a light spot appears, it will only be a part of the whole spot.
- a light source formed by a plurality of illuminating lamps arranged in a ring shape is used, and the complete light spot is shown in FIG. 12 .
- if a light spot appears in the fundus image while the focus is being adjusted, it will be the situation shown in FIG. 15, in which only part of the light spot 151 is present. If the light source itself is a complete ring light, a band will appear in the image.
- when there is a light spot in the fundus image, step S4A is performed, otherwise, step S5A is performed.
- the vector offset can be calculated by combining the position, size and brightness of the light spot in the image.
- a coordinate system is established with the image center as the origin (0,0), and the image radius is R.
- the approximate circular area of each light spot 151 is calculated, and in this embodiment, the approximate circular area is the smallest circular area including the light spot 151 .
- the center coordinate of the approximate circular area of the i-th light spot is (x_i, y_i), and the radius is r_i.
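The text sets up image-centred coordinates and per-spot circles (x_i, y_i, r_i) but the exact combination of position, size and brightness into a vector offset lies outside this excerpt. The sketch below weights each spot's direction by r², proportional to its area, as one plausible stand-in; brightness weighting is omitted.

```python
def spot_offset(spots):
    """Aggregate 2-D offset suggested by stray corneal light spots.

    `spots` is a list of (x, y, r) circles in the image-centred
    coordinate system: each spot's centre direction from the origin is
    weighted by r*r (proportional to spot area).  The result is the
    direction in which the lens is off-centre and should be corrected.
    """
    weights = [r * r for _, _, r in spots]
    total = sum(weights) or 1.0          # avoid division by zero
    ox = sum(x * w for (x, _, _), w in zip(spots, weights)) / total
    oy = sum(y * w for (_, y, _), w in zip(spots, weights)) / total
    return ox, oy

# Two equal-sized spots: offset is the mean of their centres.
print(spot_offset([(10, 0, 2), (0, -5, 2)]))  # (5.0, -2.5)
```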
- after the lens has been aligned with the pupil again, the process returns to step S1A.
- S5A Identify the optic disc area in the fundus image, and determine whether the clarity of the optic disc area reaches a set standard.
- the mobilenet-yolov3 neural network model is used to identify the optic disc, and the optic disc area output by the neural network is an area including the optic disc and some background. The edge of the optic disc is then detected within this area by an edge detection algorithm (such as Sobel or Laplace) to obtain an accurate optic disc image, and the mean value of the optic disc image is calculated as the sharpness value.
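The edge-detection stage can be sketched with a plain Sobel operator. This is a pure-Python illustration: here the mean Sobel gradient magnitude serves as the sharpness statistic, which is a common variant and may differ in detail from the patent's exact computation.

```python
def sobel_sharpness(gray):
    """Mean Sobel gradient magnitude over the interior of a region.

    `gray` is a 2-D list of grayscale values cropped to the optic-disc
    detection box.  A sharply focused disc boundary produces strong
    gradients, so a larger mean indicates a better-focused image.
    """
    kx = [[-1, 0, 1], [-2, 0, 2], [-1, 0, 1]]   # horizontal Sobel kernel
    ky = [[-1, -2, -1], [0, 0, 0], [1, 2, 1]]   # vertical Sobel kernel
    h, w = len(gray), len(gray[0])
    total, count = 0.0, 0
    for y in range(1, h - 1):
        for x in range(1, w - 1):
            gx = sum(kx[j][i] * gray[y + j - 1][x + i - 1]
                     for j in range(3) for i in range(3))
            gy = sum(ky[j][i] * gray[y + j - 1][x + i - 1]
                     for j in range(3) for i in range(3))
            total += (gx * gx + gy * gy) ** 0.5
            count += 1
    return total / count

sharp = [[0, 0, 255, 255]] * 4      # hard vertical edge (in focus)
blurry = [[0, 85, 170, 255]] * 4    # gradual ramp (out of focus)
print(sobel_sharpness(sharp) > sobel_sharpness(blurry))  # True
```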
- whether the set standard is reached can be determined by comparing the obtained definition value with a threshold value. If the definition of the optic disc area does not meet the set standard, step S6A is executed. If the clarity of the optic disc area has reached the set standard, it is determined that the current focal length is suitable for capturing the fundus image; the infrared light can then be turned off and white light used for exposure to capture the fundus image.
- if the initial focal length used in step S1A is the minimum value among the adjustable focal lengths, the focal length is increased according to a fixed or variable step size; otherwise, the focal length is decreased.
- then capturing of the fundus image begins.
- an illumination component needs to be used for exposure (the light source used by the camera in this embodiment is white light).
- the subject may still affect the shooting quality of the fundus image, for example if the pupil becomes smaller, the eyelid blocks the view, the eyes blink, or light leaks through the face-sticking component; when these situations occur, unusable areas will appear in the captured fundus image.
- this embodiment provides a fundus image shooting method, which can be executed by the fundus camera itself, or executed by an electronic device such as a computer or a server (as a control method), The method includes the following steps:
- S1: with the lens fixed at a position in the XY plane aligned with the pupil, positioned at the working distance on the Z axis, and using a fixed focal length (the lens position, working distance and focal length remain unchanged), the illumination assembly is used for exposure and multiple fundus images are captured.
- S2 respectively determine the quality of the multiple fundus images.
- a neural network model is used to analyze image quality. The neural network model can perform a classification task to classify image quality, for example outputting a classification result of good or poor quality; it can also perform a regression prediction task to quantify image quality, for example outputting a score of 1-10 to express the evaluation of the image quality.
- a large number of retinal images exposed to white light are collected in advance, and the image quality is manually marked as good or bad (for classification models), or the image quality is scored (for example, 1 to 10 points, suitable for regression prediction models).
- These fundus images and annotations or scores are used as training data to train a neural network model. After the model converges, it can be used to identify the quality of fundus images.
- step S3 determine whether the quality of each fundus image meets the set standard, and if any fundus image meets the set standard, the fundus image can be used as the shooting result (the shooting result is output). If the quality of the multiple fundus images does not meet the set standard, step S4 is executed.
- the lens state is kept unchanged, multiple fundus images are captured, and the quality of each fundus image is determined. When all of the fundus images are determined to be unusable on their own, the multiple fundus images are used to synthesize a complete fundus image, so that even if the subject interferes with the shooting process, the existing fundus images can be used to obtain a high-quality fundus image, reducing the number of re-shots, lowering the difficulty of use, and improving the success rate of capturing fundus images.
- an embodiment of the present invention provides a method for synthesizing a fundus image, the method comprising the following steps:
- the brightness can be calculated according to the pixel value of the fundus image, and by comparing with the brightness threshold, the regions with higher brightness and the regions with lower brightness can be removed, thereby removing the overexposed and underexposed regions.
- the area with moderate brightness, that is, the high-quality area, is retained.
- the sharpness can also be calculated according to the pixel values of the fundus image, and areas with lower sharpness can be removed by comparing against a sharpness threshold, thereby removing blurred areas and obtaining high-quality regions; alternatively, high-quality regions can be extracted comprehensively based on both brightness and sharpness.
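The brightness-based extraction can be sketched as a per-pixel mask. The threshold values below are illustrative; the patent does not specify them, and a sharpness test could be combined with the mask in the same way.

```python
def quality_mask(gray, low=30, high=220):
    """Per-pixel mask keeping moderately bright pixels.

    `gray` is a 2-D list of 0-255 values.  Pixels below `low` are
    treated as under-exposed and pixels above `high` as over-exposed;
    both are excluded, and what remains is the high-quality region.
    """
    return [[low <= v <= high for v in row] for row in gray]

mask = quality_mask([[5, 120, 250], [40, 200, 10]])
print(mask)  # [[False, True, False], [True, True, False]]
```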
- the regions extracted according to the actual brightness and/or sharpness of the fundus image are usually regions with irregular boundaries, such as the two high-quality regions shown in Figure 16: the region shown on the left is from the upper part of one fundus image, and the region shown on the right is from the lower part of another fundus image.
- each fundus image can also be divided into grids according to a fixed division, after which the quality of each grid cell is analyzed separately and the high-quality cells are extracted, so that high-quality areas with regular boundaries are obtained.
- each fundus image may have some offset, in order to synthesize the fundus image more accurately, each fundus image can be mapped to the same coordinate system according to the offset, and then stitching and fusion processing are performed.
- abnormal region detection is first performed on multiple fundus images to extract high-quality regions.
- step S43 firstly, feature points (or key points) are extracted from the multiple fundus images respectively, which may be the center point of the optic disc, the intersection of blood vessels, or other significant positions. Then, feature point matching is performed to match the feature points between different fundus images. After these feature points are matched, the matching information is used to calculate the offset between the respective fundus images (projection matrix calculation). Then according to the offset, multiple high-quality regions are mapped into a fundus image.
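The offset computation from matched feature points can be sketched in its simplest form. Only a pure translation is estimated here; the projection-matrix calculation mentioned in the text generalises this to rotation, scaling and perspective, and the matched points are assumed to be given.

```python
def estimate_offset(points_a, points_b):
    """Mean translation between two fundus images from matched points.

    `points_a[i]` and `points_b[i]` are the coordinates of the same
    feature (e.g. the optic-disc centre or a vessel crossing) in two
    images.  Averaging the per-pair differences gives the translation
    that maps image A's coordinates into image B's.
    """
    n = len(points_a)
    dx = sum(b[0] - a[0] for a, b in zip(points_a, points_b)) / n
    dy = sum(b[1] - a[1] for a, b in zip(points_a, points_b)) / n
    return dx, dy

print(estimate_offset([(10, 10), (30, 40)], [(15, 8), (35, 38)]))  # (5.0, -2.0)
```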
- the pixel values of the multiple high-quality regions and their corresponding weights can be used to determine the pixel values of the overlapping parts.
- This is a fusion process based on weighted average.
- the fusion process can be expressed as q1/(q1+q2)*image1 + q2/(q1+q2)*image2, where q1 represents the weight of the first high-quality region, q2 represents the weight of the second high-quality region, image1 represents the first high-quality region, and image2 represents the second high-quality region.
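The weighted-average formula above translates directly into code. This pixel-wise sketch assumes two equally sized overlapping regions represented as nested lists; the example weights are illustrative.

```python
# Pixel-wise transcription of the fusion formula from the text:
# fused = q1/(q1+q2)*image1 + q2/(q1+q2)*image2

def fuse(image1, image2, q1, q2):
    """Weighted average of two equally sized overlapping regions."""
    w1, w2 = q1 / (q1 + q2), q2 / (q1 + q2)
    return [[w1 * p1 + w2 * p2 for p1, p2 in zip(r1, r2)]
            for r1, r2 in zip(image1, image2)]

# The region from the higher-scoring source image gets the larger weight.
print(fuse([[100, 200]], [[50, 100]], q1=3, q2=1))  # [[87.5, 175.0]]
```

Normalizing by q1+q2 keeps the fused pixel values within the original intensity range regardless of how the raw quality scores are scaled.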
- the value of each weight is set according to the overall quality of the fundus image from which the region is taken. For example, if the first high-quality region is taken from a first fundus image, the second high-quality region is taken from a second fundus image, and, according to the quality analysis method described above, the quality of the first fundus image (for example, the score output by the neural network) is higher than that of the second fundus image, then the corresponding weight q1 is set greater than q2.
- with this fundus image synthesis method, when the multiple fundus images captured from the subject all have defects, high-quality regions can be extracted from them, then stitched and fused to obtain a complete fundus image of higher quality, thereby reducing the difficulty for the user of taking a self-portrait fundus image and improving the shooting success rate.
- embodiments of the present invention may be provided as a method, system, or computer program product. Accordingly, the present invention may take the form of an entirely hardware embodiment, an entirely software embodiment, or an embodiment combining software and hardware aspects. Furthermore, the present invention may take the form of a computer program product embodied on one or more computer-usable storage media having computer-usable program code embodied therein, including but not limited to disk storage, CD-ROM, optical storage, and the like.
- These computer program instructions may also be stored in a computer-readable memory capable of directing a computer or other programmable data processing apparatus to function in a particular manner, such that the instructions stored in the computer-readable memory produce an article of manufacture comprising instruction means which implement the functions specified in one or more flows of the flowcharts and/or one or more blocks of the block diagrams.
Landscapes
- Life Sciences & Earth Sciences (AREA)
- Health & Medical Sciences (AREA)
- Medical Informatics (AREA)
- Biophysics (AREA)
- Ophthalmology & Optometry (AREA)
- Engineering & Computer Science (AREA)
- Biomedical Technology (AREA)
- Heart & Thoracic Surgery (AREA)
- Physics & Mathematics (AREA)
- Molecular Biology (AREA)
- Surgery (AREA)
- Animal Behavior & Ethology (AREA)
- General Health & Medical Sciences (AREA)
- Public Health (AREA)
- Veterinary Medicine (AREA)
- Eye Examination Apparatus (AREA)
Abstract
Description
Claims (11)
- A fully automatic fundus image photographing method, characterized by comprising: moving the lens of a fundus camera to align with the pupil; controlling the lens to approach the eyeball and capturing an image, the image being an image of the illumination beam reflected by the cornea; determining a working distance using the image; adjusting the focal length and capturing fundus images, and determining a photographing focal length using the fundus images; and photographing a fundus image at the working distance using the photographing focal length.
- The method according to claim 1, characterized in that, before moving the lens of the fundus camera to align with the pupil, the method further comprises: detecting whether the motion assembly, the illumination assembly and the focusing assembly of the fundus camera are normal.
- The method according to claim 2, characterized in that detecting whether the motion assembly, the illumination assembly and the focusing assembly of the fundus camera are normal specifically comprises: controlling the motion assembly to adjust the position of the lens, and detecting whether the lens can move to the positions of the respective positioning assemblies; after the lens can move to the positions of the respective positioning assemblies, controlling the motion assembly to move the lens to a set position, turning on the illumination assembly, controlling the focusing assembly to adjust to a first focal length, and capturing a first image; judging, according to the image features of the illumination assembly in the first image, whether the focusing assembly and the illumination assembly are normal; when the focusing assembly and the illumination assembly are normal, controlling the motion assembly to adjust the lens to a set depth position, controlling the focusing assembly to adjust to a second focal length, and capturing a second image; and judging, according to the image features of the photographed object in the second image, whether the imaging function is normal.
- The method according to claim 1 or 2, characterized in that, before moving the lens of the fundus camera to align with the pupil, the method further comprises: detecting whether the human head fits against the face-rest assembly of the fundus camera.
- The method according to claim 4, characterized in that detecting whether the human head fits against the face-rest assembly of the fundus camera specifically comprises: turning off the illumination assembly, and acquiring a first image captured by the lens through the window of the face-rest assembly; judging whether the brightness of the first image meets a set standard; when the brightness of the first image meets the set standard, turning on the illumination assembly, and acquiring a second image captured by the lens through the window of the face-rest assembly; and determining, according to the second image, whether the human head fits against the face-rest assembly.
- The method according to any one of claims 1-5, characterized in that determining the working distance using the image specifically comprises: detecting whether the features of the light spot in the image conform to set features; and when the features of the light spot conform to the set features, determining that the working distance has been reached.
- The method according to any one of claims 1-5, characterized in that determining the photographing focal length using the fundus image specifically comprises: identifying the optic disc region in the fundus image; and determining the photographing focal length according to the sharpness of the optic disc region.
- The method according to any one of claims 1-5, characterized in that photographing a fundus image at the working distance using the photographing focal length specifically comprises: judging whether the pupil size is smaller than the size of the annular illumination beam of the illumination assembly of the fundus camera; when the pupil size is smaller than the annular illumination beam size, moving the lens in multiple directions respectively to produce offsets relative to the pupil, so that part of the annular illumination beam falls within the pupil, and capturing multiple fundus images; and fusing the multiple fundus images into one fundus image.
- The method according to any one of claims 1-5, characterized in that photographing a fundus image at the working distance using the photographing focal length specifically comprises: acquiring multiple fundus images captured with the lens state unchanged; extracting high-quality regions from the multiple fundus images respectively; and synthesizing a fundus image using the multiple high-quality regions.
- An electronic device, characterized by comprising: at least one processor; and a memory communicatively connected with the at least one processor; wherein the memory stores instructions executable by the at least one processor, and the instructions are executed by the at least one processor, so that the at least one processor performs the fully automatic fundus image photographing method according to any one of claims 1-9.
- A fundus camera, characterized by comprising: a face-rest assembly, a motion assembly, a focusing assembly, an illumination assembly, a lens and at least one processor; and a memory communicatively connected with the at least one processor; wherein the memory stores instructions executable by the at least one processor, and the instructions are executed by the at least one processor, so that the at least one processor performs the fully automatic fundus image photographing method according to any one of claims 1-9.
Priority Applications (3)
Application Number | Priority Date | Filing Date | Title |
---|---|---|---|
EP21878859.4A EP4230112A1 (en) | 2020-10-14 | 2021-01-27 | Fundus camera and fully-automatic photography method for fundus image |
US18/031,513 US20230404401A1 (en) | 2020-10-14 | 2021-01-27 | Fundus camera and fully-automatic photography method for fundus image |
JP2023519780A JP2023547595A (ja) | 2020-10-14 | 2021-01-27 | 眼底カメラ及び眼底画像全自動撮影方法 |
Applications Claiming Priority (2)
Application Number | Priority Date | Filing Date | Title |
---|---|---|---|
CN202011095133.3A CN112043236B (zh) | 2020-10-14 | 2020-10-14 | 眼底相机及眼底图像全自动拍摄方法 |
CN202011095133.3 | 2020-10-14 |
Publications (1)
Publication Number | Publication Date |
---|---|
WO2022077800A1 true WO2022077800A1 (zh) | 2022-04-21 |
Family
ID=73605181
Family Applications (1)
Application Number | Title | Priority Date | Filing Date |
---|---|---|---|
PCT/CN2021/073875 WO2022077800A1 (zh) | 2020-10-14 | 2021-01-27 | 眼底相机及眼底图像全自动拍摄方法 |
Country Status (5)
Country | Link |
---|---|
US (1) | US20230404401A1 (zh) |
EP (1) | EP4230112A1 (zh) |
JP (1) | JP2023547595A (zh) |
CN (1) | CN112043236B (zh) |
WO (1) | WO2022077800A1 (zh) |
Cited By (3)
Publication number | Priority date | Publication date | Assignee | Title |
---|---|---|---|---|
CN116369840A (zh) * | 2023-06-05 | 2023-07-04 | 广东麦特维逊医学研究发展有限公司 | Bright-spot-free projection illumination system and working method thereof |
CN116725479A (zh) * | 2023-08-14 | 2023-09-12 | 杭州目乐医疗科技股份有限公司 | Self-service optometry instrument and self-service optometry method |
CN117893529A (zh) * | 2024-03-14 | 2024-04-16 | 江苏富翰医疗产业发展有限公司 | Intelligent fundus photographing method |
Families Citing this family (7)
Publication number | Priority date | Publication date | Assignee | Title |
---|---|---|---|---|
CN112043236B (zh) * | 2020-10-14 | 2021-06-15 | 上海鹰瞳医疗科技有限公司 | Fundus camera and fully-automatic photography method for fundus image |
CN113476014B (zh) * | 2021-06-02 | 2023-11-14 | 北京鹰瞳科技发展股份有限公司 | System and method for establishing correspondence between two coordinate systems |
CN113349734B (zh) * | 2021-06-29 | 2023-11-14 | 北京鹰瞳科技发展股份有限公司 | Fundus camera and working distance calibration method thereof |
CN113729617A (zh) * | 2021-08-20 | 2021-12-03 | 北京鹰瞳科技发展股份有限公司 | Control method and control device for lens of a fundus camera |
CN114098632B (zh) * | 2022-01-27 | 2022-11-29 | 北京鹰瞳科技发展股份有限公司 | Method for controlling a motor in a fundus camera and related products |
CN115471552B (zh) * | 2022-09-15 | 2023-07-04 | 江苏至真健康科技有限公司 | Photographing positioning method and system for a portable non-mydriatic fundus camera |
CN116687339B (zh) * | 2023-08-01 | 2023-10-31 | 杭州目乐医疗科技股份有限公司 | Fundus-camera-based image photographing method, fundus camera, apparatus and medium |
Citations (9)
Publication number | Priority date | Publication date | Assignee | Title |
---|---|---|---|---|
JP2005278842A (ja) * | 2004-03-29 | 2005-10-13 | Nidek Co Ltd | Fundus camera |
CN1714740A (zh) * | 2004-06-14 | 2006-01-04 | 佳能株式会社 | Ophthalmic apparatus |
CN1810202A (zh) * | 2004-12-21 | 2006-08-02 | 佳能株式会社 | Ophthalmic apparatus |
CN103908221A (zh) * | 2013-01-08 | 2014-07-09 | 华晶科技股份有限公司 | Image capturing apparatus and image capturing method |
CN108346149A (zh) | 2018-03-02 | 2018-07-31 | 北京郁金香伙伴科技有限公司 | Image detection and processing method, apparatus and terminal |
CN109547677A (zh) * | 2018-12-06 | 2019-03-29 | 代黎明 | Fundus image capturing method, system and device |
CN111134616A (zh) | 2020-02-25 | 2020-05-12 | 上海鹰瞳医疗科技有限公司 | Fundus camera illumination system and fundus camera |
CN111449620A (zh) * | 2020-04-30 | 2020-07-28 | 上海美沃精密仪器股份有限公司 | Fully automatic fundus camera and automatic photographing method thereof |
CN112043236A (zh) * | 2020-10-14 | 2020-12-08 | 上海鹰瞳医疗科技有限公司 | Fundus camera and fully-automatic photography method for fundus image |
-
2020
- 2020-10-14 CN CN202011095133.3A patent/CN112043236B/zh active Active
-
2021
- 2021-01-27 EP EP21878859.4A patent/EP4230112A1/en active Pending
- 2021-01-27 WO PCT/CN2021/073875 patent/WO2022077800A1/zh active Application Filing
- 2021-01-27 JP JP2023519780A patent/JP2023547595A/ja active Pending
- 2021-01-27 US US18/031,513 patent/US20230404401A1/en active Pending
Cited By (5)
Publication number | Priority date | Publication date | Assignee | Title |
---|---|---|---|---|
CN116369840A (zh) * | 2023-06-05 | 2023-07-04 | 广东麦特维逊医学研究发展有限公司 | Bright-spot-free projection illumination system and working method thereof |
CN116369840B (zh) * | 2023-06-05 | 2023-08-01 | 广东麦特维逊医学研究发展有限公司 | Bright-spot-free projection illumination system and working method thereof |
CN116725479A (zh) * | 2023-08-14 | 2023-09-12 | 杭州目乐医疗科技股份有限公司 | Self-service optometry instrument and self-service optometry method |
CN116725479B (zh) * | 2023-08-14 | 2023-11-10 | 杭州目乐医疗科技股份有限公司 | Self-service optometry instrument and self-service optometry method |
CN117893529A (zh) * | 2024-03-14 | 2024-04-16 | 江苏富翰医疗产业发展有限公司 | Intelligent fundus photographing method |
Also Published As
Publication number | Publication date |
---|---|
JP2023547595A (ja) | 2023-11-13 |
CN112043236A (zh) | 2020-12-08 |
EP4230112A1 (en) | 2023-08-23 |
CN112043236B (zh) | 2021-06-15 |
US20230404401A1 (en) | 2023-12-21 |
Similar Documents
Publication | Publication Date | Title |
---|---|---|
WO2022077800A1 (zh) | Fundus camera and fully-automatic photography method for fundus image | |
CN112220447B (zh) | Fundus camera and fundus image photographing method | |
CN112075921B (zh) | Fundus camera and focal length adjustment method thereof | |
US7845797B2 (en) | Custom eyeglass manufacturing method | |
US7434931B2 (en) | Custom eyeglass manufacturing method | |
US8644562B2 (en) | Multimodal ocular biometric system and methods | |
JP2000113199A (ja) | Method for determining the position of an eye | |
CN111449620A (zh) | Fully automatic fundus camera and automatic photographing method thereof | |
CN106821303B (zh) | Autonomous fundus photography imaging system and method | |
CN112075920B (zh) | Fundus camera and working distance adjustment method thereof | |
CN111202495A (zh) | Skin analysis device | |
WO2021204211A1 (zh) | Face and iris image acquisition method and apparatus, readable storage medium, and device | |
JP5092120B2 (ja) | Eye movement measuring device | |
CN112190227B (zh) | Fundus camera and usage state detection method thereof | |
JP2017034569A (ja) | Imaging apparatus and control method thereof | |
CN112220448B (zh) | Fundus camera and fundus image synthesis method | |
CN112043237A (zh) | Fully automatic portable self-photographing fundus camera | |
JP3711053B2 (ja) | Gaze measuring device and method, gaze measuring program, and recording medium recording the program | |
CN112190228B (zh) | Fundus camera and detection method thereof | |
Borsato et al. | Episcleral surface tracking: challenges and possibilities for using mice sensors for wearable eye tracking | |
CN212281326U (zh) | Fully automatic portable self-photographing fundus camera | |
CN113658243B (zh) | Fundus three-dimensional model establishing method, fundus camera, device and storage medium | |
KR102263830B1 (ko) | Fundus image photographing apparatus using autofocus function | |
KR102085285B1 (ko) | Face recognition and iris position recognition system based on deep-learning image analysis | |
EP3695775B1 (en) | Smartphone-based handheld optical device and method for capturing non-mydriatic retinal images |
Legal Events
Date | Code | Title | Description |
---|---|---|---|
121 | Ep: the epo has been informed by wipo that ep was designated in this application |
Ref document number: 21878859 Country of ref document: EP Kind code of ref document: A1 |
|
WWE | Wipo information: entry into national phase |
Ref document number: 2023519780 Country of ref document: JP |
|
WWE | Wipo information: entry into national phase |
Ref document number: 18031513 Country of ref document: US |
|
NENP | Non-entry into the national phase |
Ref country code: DE |
|
ENP | Entry into the national phase |
Ref document number: 2021878859 Country of ref document: EP Effective date: 20230515 |