CN112190228B - Fundus camera and detection method thereof - Google Patents
- Publication number
- CN112190228B CN112190228B CN202011095178.0A CN202011095178A CN112190228B CN 112190228 B CN112190228 B CN 112190228B CN 202011095178 A CN202011095178 A CN 202011095178A CN 112190228 B CN112190228 B CN 112190228B
- Authority
- CN
- China
- Legal status (The legal status is an assumption and is not a legal conclusion. Google has not performed a legal analysis and makes no representation as to the accuracy of the status listed.)
- Active
Classifications
- A—HUMAN NECESSITIES
- A61—MEDICAL OR VETERINARY SCIENCE; HYGIENE
- A61B—DIAGNOSIS; SURGERY; IDENTIFICATION
- A61B3/00—Apparatus for testing the eyes; Instruments for examining the eyes
- A61B3/10—Objective types, i.e. instruments for examining the eyes independent of the patients' perceptions or reactions
- A61B3/12—Objective types, i.e. instruments for examining the eyes independent of the patients' perceptions or reactions for looking at the eye fundus, e.g. ophthalmoscopes
- A61B3/14—Arrangements specially adapted for eye photography
Abstract
The invention provides a fundus camera and a detection method thereof. The method comprises the following steps: controlling a motion assembly to adjust the position of a lens, and detecting whether the lens can move to the position of each positioning assembly; after confirming that the lens can move to each positioning assembly, controlling the motion assembly to move the lens to a set position, turning on the illumination assembly, controlling the focusing assembly to adjust to a first focal length, and capturing a first image; judging whether the focusing assembly and the illumination assembly are normal according to the image features of the illumination assembly in the first image; when the focusing assembly and the illumination assembly are normal, controlling the motion assembly to adjust the lens to a set depth position, controlling the focusing assembly to adjust to a second focal length, and capturing a second image; and judging whether the imaging function is normal according to the image features of the photographed object in the second image.
Description
Technical Field
The invention relates to the field of ophthalmic instruments, in particular to a fundus camera and a detection method thereof.
Background
The retina is the only tissue in the human body where capillaries and nerves can be observed directly; by examining it, not only eye health problems but also systemic pathologies such as diabetic complications and hypertension can be detected. The fundus camera is a dedicated device for photographing the retina. To align the lens with the pupil, hold the axial distance between lens and pupil, and achieve focus, a traditional fundus camera requires complicated and expensive hardware modules, which also makes it complicated to use and hinders its popularization.
An important hallmark of a fully automatic fundus camera is that subjects can photograph themselves without excessive manual intervention. With little manual intervention available, confirming that each module and component of the device works normally is critical to carrying out the photography.
Disclosure of Invention
In view of the above, the present invention provides a fundus camera detecting method, including:
controlling a motion assembly to adjust the position of a lens, and detecting whether the lens can move to the position of each positioning assembly;
after confirming that the lens can move to each positioning assembly, controlling the motion assembly to move the lens to a set position, turning on an illumination assembly, controlling a focusing assembly to adjust to a first focal length, and capturing a first image;
judging whether the focusing assembly and the illumination assembly are normal according to the image features of the illumination assembly in the first image;
when the focusing assembly and the illumination assembly are normal, controlling the motion assembly to adjust the lens to a set depth position, controlling the focusing assembly to adjust to a second focal length, and capturing a second image;
and judging whether the imaging function is normal according to the image features of the photographed object in the second image.
Optionally, at the set position, the lens is aligned with a set portion of the face patch assembly.
Optionally, when the first image and the second image are captured, the external environment occupies a smaller proportion of the captured image than the face patch assembly does.
Optionally, the first focal length is capable of imaging the illumination assembly but not the face patch assembly.
Optionally, the second focal length is capable of imaging the face patch assembly but not the illumination assembly.
Optionally, the photographed object is a target on a set portion of the face patch assembly.
Optionally, the determining whether the focusing assembly and the illumination assembly are normal according to the image feature of the illumination assembly in the first image specifically includes:
identifying whether there are independent and distinct dots in the first image;
determining that the focusing assembly and the illumination assembly are normal when there are independent and sharp dots in the first image.
Optionally, the determining whether the imaging function is normal according to the image feature of the photographed object in the second image specifically includes:
identifying whether a sharp image of the target is present in the second image;
determining that imaging function is normal when there is a sharp image of the target in the second image.
Accordingly, the present invention provides a fundus camera detecting apparatus comprising: at least one processor; and a memory communicatively coupled to the at least one processor; wherein the memory stores instructions executable by the at least one processor to cause the at least one processor to perform the fundus camera detection method described above.
Accordingly, the present invention provides a fundus camera comprising: a motion assembly, a positioning assembly, an illumination assembly, a focusing assembly, a face patch assembly, a lens, a processor, and a memory communicatively coupled to the processor; wherein the memory stores instructions executable by the processor to cause the processor to perform the fundus camera detection method described above.
According to the fundus camera and the detection method provided by the invention, the positioning assemblies verify whether the motion assembly can normally adjust the position of the lens; after the motion assembly is confirmed normal, the focal length is adjusted so that the illumination assembly is imaged, and the acquired image is judged to determine whether the focusing assembly and the illumination assembly are normal; finally, the motion assembly adjusts the depth of the lens and the focal length is adjusted so that the photographed object is imaged, and the object's features in the image are judged to verify whether the motion assembly can normally adjust the lens depth. In this way, whether each important part of the fundus camera works normally is determined automatically. The scheme allows self-service inspection of the device's working state in a remote, unattended environment, improving the convenience of fundus photography and promoting the popularization of the fundus camera.
Drawings
In order to more clearly illustrate the embodiments of the present invention or the technical solutions in the prior art, the drawings used in the description of the embodiments or the prior art will be briefly described below, and it is obvious that the drawings in the following description are some embodiments of the present invention, and other drawings can be obtained by those skilled in the art without creative efforts.
Fig. 1 is a configuration diagram of a fundus camera in an embodiment of the present invention;
FIG. 2 is a schematic view of the face patch assembly of a fundus camera in an embodiment of the present invention;
FIG. 3 is a schematic view of a lens and a positioning assembly;
fig. 4 is a flowchart of a fundus camera detection method according to an embodiment of the present invention;
FIG. 5 is a block diagram of an illumination lamp;
FIG. 6 is a schematic view of the illumination assembly's reflected light being imaged during camera state detection;
FIG. 7 is a schematic view of the protrusion of the face patch assembly being imaged during camera state detection;
FIG. 8 is a schematic view of the protrusion of the face patch assembly bearing the target being imaged during camera state detection;
fig. 9 is an image of an area between both eyes acquired when the usage state of the subject is detected.
Detailed Description
The technical solutions of the present invention will be described clearly and completely with reference to the accompanying drawings, and it should be understood that the described embodiments are some, but not all embodiments of the present invention. All other embodiments, which can be derived by a person skilled in the art from the embodiments given herein without making any creative effort, shall fall within the protection scope of the present invention.
In the description of the present invention, it should be noted that the terms "center", "upper", "lower", "left", "right", "vertical", "horizontal", "inner", "outer", etc., indicate orientations or positional relationships based on the orientations or positional relationships shown in the drawings, and are only for convenience of description and simplicity of description, but do not indicate or imply that the device or element being referred to must have a particular orientation, be constructed and operated in a particular orientation, and thus, should not be construed as limiting the present invention. Furthermore, the terms "first," "second," and "third" are used for descriptive purposes only and are not to be construed as indicating or implying relative importance.
In the description of the present invention, it should be noted that, unless otherwise explicitly specified or limited, the terms "mounted," "connected," and "connected" are to be construed broadly, e.g., as meaning either a fixed connection, a removable connection, or an integral connection; can be mechanically or electrically connected; the two elements may be directly connected or indirectly connected through an intermediate medium, or may be communicated with each other inside the two elements, or may be wirelessly connected or wired connected. The specific meanings of the above terms in the present invention can be understood in specific cases to those skilled in the art.
In addition, the technical features involved in the different embodiments of the present invention described below may be combined with each other as long as they do not conflict with each other.
Fig. 1 shows a fully automatic portable self-photographing fundus camera, which includes a face patch assembly 01, a motion assembly, positioning assemblies 03, and a lens barrel 1. An illumination assembly, a focusing assembly, a lens (objective lens), an optical lens group, an imaging detector 10, and so on are arranged inside the lens barrel 1; for the internal structure of the lens barrel 1, reference may be made to Chinese patent document CN111134616A. The actual product also comprises a housing inside which the motion assembly and the lens barrel 1 are located. The face patch assembly 01 is sealingly connected to the front of the housing and comprises a face patch body and a window through-hole formed in the body for accommodating the subject's eyes when the face is fitted against it. The face patch assembly 01 is the member that contacts the subject's eyes, and the lens barrel 1 collects the subject's fundus retinal image through the through-hole of the face patch assembly 01.
The surface of the face patch body facing away from the lens barrel 1 is shaped to fit the contour of the face around the subject's eyes. Specifically, the face patch assembly 01 is concave inward to fit the curvature of the human head, and the through-hole is at least large enough to accommodate both eyes when the subject's face is fitted against the assembly. The inward-facing surface of the face patch assembly 01 (toward the housing and lens barrel) is provided with at least one specific location used for detecting various functions of the camera. In a specific embodiment, referring to figs. 1 and 2, fig. 2 shows the inward surface of the face patch assembly 01: a protrusion 012 is provided at the upper edge of the middle of the through-hole 011, so that the lens of the lens barrel 1 can be aligned with this surface and capture an image. More preferably, a pattern or simple figure is provided on the protrusion 012 as a target. This specific location serves multiple purposes, including detecting whether the camera's illumination assembly and focusing assembly are normal and detecting whether the subject's eyes are correctly fitted to the face patch assembly 01, as described in detail below.
The motion assembly is used to move the lens barrel 1 in three-dimensional space; taking the coordinate system in fig. 1 as an example, it can move along the three axes X, Y, and Z shown in the drawing. Note that even when the lens barrel 1 moves to its extreme position in the Z direction, its end does not protrude beyond the face patch assembly 01. As a specific example, the motion assembly includes three rail assemblies: a first set of rails 021 for moving the lens barrel 1 along the X axis, a second set of rails 022 for moving it along the Y axis, and a third set of rails (not shown) for moving it along the Z axis. Specifically, the lens barrel 1 is disposed together with the second set of rails 022 on a platform (base); the first set of rails 021 can drive the base to move as a whole, and the third set of rails can drive the base and the first set of rails 021 to move, so that the whole approaches or retreats from the face patch assembly 01.
The positioning assemblies 03 are used to detect the movement of the lens barrel 1. Specifically, each positioning assembly 03 may be an electromagnetic sensor that senses, via an electromagnetic induction signal, when the lens barrel 1 has moved to its position. Referring to fig. 3, in the present embodiment three positioning assemblies 03 are provided: two are disposed on the two sides of the movable base to detect the movement of the lens barrel 1 along the X axis, and the third is disposed on the base to detect its movement along the Y axis; that is, the positioning assemblies 03 detect the movement of the lens barrel 1 in the XY plane.
According to the fundus camera provided by the invention, the illumination assembly, the focusing assembly, the objective lens, the optical lens group, and the imaging detector are integrated in one lens barrel, miniaturizing the optical path, reducing the camera's volume, and improving portability. The face patch assembly of the fundus camera is provided with a window through-hole for accommodating the subject's eyes; the user can put on the fundus camera unassisted and place the eyes at the window, and the motion assembly drives the lens barrel to search for the pupil within the window and adjust the working distance, thereby capturing the fundus image. This scheme reduces the complexity and the difficulty of use of the fundus camera hardware, lets users capture fundus images by themselves, and promotes the popularization of the fundus camera.
Embodiments of the present invention provide a fundus camera detection method, which may be performed by a fundus camera itself as a self-detection method, or may be performed by an electronic device such as a computer or a server as a product detection method. As shown in fig. 4, the method includes the steps of:
and S1, controlling the motion assembly to adjust the position of the lens, and detecting whether the lens can move to the position of each positioning assembly. The method is adapted to be carried out when the fundus camera has just been activated, first the lens (according to the above-described embodiment, the lens is provided integrally with the barrel, i.e. the shift lens) is shifted to the initial position. Then, referring to fig. 3, the moving assembly adjusts the position of the lens to detect whether the lens can move to the position of the 3 positioning assemblies. If the motion components can be moved to these positions, the motion components are considered to be functioning properly, step S2 may be performed, otherwise step S6 is performed. Step S1 may be referred to as a moving component XY axis movement detection step.
S2: control the motion assembly to move the lens to a set position, turn on the illumination assembly, control the focusing assembly to adjust to a first focal length, and capture a first image. The purpose of this step is to detect whether the focusing assembly and the illumination assembly work properly; in theory the lens could be aimed anywhere, so there are many options for the set position. In an actual working environment, however, the external environment is uncertain; a relatively bright environment, for instance, could disturb the content of the first image captured in this step. To adapt to real working environments, the lens is moved to face a specific part of the face patch assembly (such as the protrusion) so that as little of the external environment as possible is captured and the face patch assembly occupies a larger proportion of the image than the external environment. Of course, the shape of the face patch assembly and its through-hole could also be modified so that the image captured at this step contains no external environment at all.
By setting an appropriate focal length, the illumination assembly can be imaged. For example, fig. 5 shows the structure of the illumination lamp in one lens barrel: 4 lamp beads are arranged on a ring-shaped structure. These 4 beads are turned on and imaging is performed at the first focal length, expecting an image as shown in fig. 6.
In a preferred embodiment, to keep the background in the captured image from affecting the imaging of the illumination assembly, the first focal length is set so that the illumination assembly is imaged but the face patch assembly is not. Thus only the illumination assembly can appear in the first image, without objects such as the protrusion of the face patch assembly, improving the accuracy of image recognition in the subsequent step.
S3: judge whether the focusing assembly and the illumination assembly are normal according to the image features of the illumination assembly in the first image. If the focusing assembly works properly, using the set focal length should yield an image as shown in fig. 6, whose distinct features depend on the actual shape of the illumination assembly. For example, in this embodiment there should be 4 independent and distinct dots in the first image, the imaging result of the 4 lamp beads. If the adjusted focal length is not the first focal length, the dots in the image become larger and blurred, or smaller; if the illumination assembly is not on, no shape appears in the image at all.
A machine vision algorithm or a neural network algorithm is used to recognize the first image and determine whether the expected features are present. If the focusing assembly and the illumination assembly are determined to be normal, step S4 is performed; otherwise step S6 is performed. Steps S2-S3 may be referred to as the focusing assembly and illumination assembly detection steps.
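As a hedged sketch of the classical machine-vision route for step S3 (the patent equally allows a trained neural network), the check for "4 independent and distinct dots" can be implemented as bright-blob counting; the brightness threshold and 4-connectivity below are illustrative choices, not taken from the patent.

```python
# Count bright, well-separated dots in the first image and compare against
# the expected number of lamp beads (4 in this embodiment).
import numpy as np

def count_bright_dots(gray, thresh=200):
    """Count 4-connected bright blobs in a grayscale image via flood fill."""
    mask = gray > thresh
    visited = np.zeros_like(mask, dtype=bool)
    h, w = mask.shape
    count = 0
    for i in range(h):
        for j in range(w):
            if mask[i, j] and not visited[i, j]:
                count += 1                     # found a new blob
                stack = [(i, j)]               # flood-fill the whole blob
                visited[i, j] = True
                while stack:
                    y, x = stack.pop()
                    for dy, dx in ((1, 0), (-1, 0), (0, 1), (0, -1)):
                        ny, nx = y + dy, x + dx
                        if (0 <= ny < h and 0 <= nx < w
                                and mask[ny, nx] and not visited[ny, nx]):
                            visited[ny, nx] = True
                            stack.append((ny, nx))
    return count

def illumination_and_focus_ok(gray, expected_dots=4):
    """Step S3 verdict: exactly the expected number of sharp dots present."""
    return count_bright_dots(gray) == expected_dots
```

A defocused image merges or loses blobs and an unlit illumination assembly yields zero blobs, so both fault modes described in step S3 change the count away from 4.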
S4: control the motion assembly to adjust the lens to a set depth position, control the focusing assembly to adjust to a second focal length, and capture a second image. In this step a known object is imaged; as a preferred embodiment, the protrusion of the face patch assembly serves as the known object. Specifically, the lens is first aligned in the XY plane with the protrusion of the face patch assembly. In this embodiment step S2 already aligned the lens with this part, so no adjustment is needed here; in other embodiments, if the lens was not aligned with this location in step S2, it is adjusted in this step. This step then adjusts the depth, i.e. the position of the lens on the Z axis, which can be understood as adjusting the shooting distance to the known object, and then sets the focal length.
To image this external object, the focal length now differs from the one used in step S2 and should be adapted to the current lens position (depth position); an image as shown in fig. 7 is expected.
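Why the second focal length must track the lens depth can be illustrated with the standard thin-lens equation 1/f = 1/d_o + 1/d_i: changing the object distance d_o (the lens-to-face-patch distance set by the Z-axis move) changes the focal length f needed to keep the image on a fixed sensor plane at distance d_i. The distances below are made-up numbers, since the patent gives no optical constants.

```python
# Thin-lens relation: focal length required to focus an object at
# d_object onto a sensor plane fixed at d_image behind the lens.

def required_focal_length(d_object_mm, d_image_mm):
    """Solve 1/f = 1/d_o + 1/d_i for f (all distances in mm)."""
    return 1.0 / (1.0 / d_object_mm + 1.0 / d_image_mm)

# Example with a sensor plane fixed 20 mm behind the lens:
f_near = required_focal_length(50.0, 20.0)   # lens close to the face patch
f_far = required_focal_length(100.0, 20.0)   # lens pulled back on the Z axis
```

Pulling the lens back (larger d_o) raises the required focal length, which is why step S4 pairs each set depth position with its own second focal length.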
In a preferred embodiment, to keep the image of the illumination assembly from affecting the image of the photographed object, the second focal length is set so that the face patch assembly is imaged but the illumination assembly is not. Thus only the photographed object, such as the protrusion of the face patch assembly, can appear in the second image, without any image of the illumination assembly, improving the accuracy of image recognition in the subsequent step.
S5: judge whether the imaging function is normal according to the image features of the photographed object in the second image. The second image is captured under the condition that the motion assembly's XY-axis movement, the illumination assembly, and the focusing assembly are all normal; the purpose of this step is to detect whether the motion assembly's movement along the Z axis is normal. If the motion assembly was able to adjust the lens to the set depth position in step S4, the captured second image should show a clear photographed object, such as the protrusion of the face patch assembly shown in fig. 7.
A machine vision algorithm or a neural network algorithm is used to recognize the second image and determine whether the expected features are present. If the motion assembly's movement along the Z axis is determined to be normal, the detection ends and each main component of the fundus camera is judged to function normally; otherwise step S6 is executed. Steps S4-S5 may be referred to as the motion assembly Z-axis movement detection steps.
S6: determine that the state of the fundus camera is abnormal. The specific faulty part is indicated to the user according to which component is abnormal. A voice module or an information display module can be provided in the fundus camera to announce or display the corresponding fault information to the user.
According to the fundus camera detection method provided by the embodiment of the invention, the positioning assemblies verify whether the motion assembly can normally adjust the position of the lens; after the motion assembly is confirmed normal, the focal length is adjusted so that the illumination assembly is imaged, and the acquired image is judged to determine whether the focusing assembly and the illumination assembly are normal; finally, the motion assembly adjusts the depth of the lens and the focal length is adjusted so that the photographed object is imaged, and the object's features in the image are judged to verify whether the motion assembly can normally adjust the lens depth. In this way, whether each important part of the fundus camera works normally is determined automatically. The scheme allows self-service inspection of the device's working state in a remote, unattended environment, improving the convenience of fundus photography and promoting the popularization of the fundus camera.
In a preferred embodiment, the protrusion of the face patch assembly is provided with a target; that is, the photographed object mentioned above is a target on the set portion of the face patch assembly. The specific content of the target is not limited; one or more clear patterns or shapes are feasible. The resulting second image is shown in fig. 8 and includes a circular target 81. Step S5 then specifically includes:
S51: identify whether a clear image of the target exists in the second image;
S52: when a clear image of the target exists in the second image, determine that the imaging function is normal.
Target recognition with a machine vision algorithm or a neural network algorithm gives a more accurate result: if the target's outline is absent from the image or is not clear, this is readily detected, further improving the accuracy of the camera function judgment.
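One common classical score for "clear" versus "unclear" target images, offered here only as an illustrative stand-in for the machine vision algorithm the patent leaves unspecified, is the variance of the image Laplacian: high for crisp edges, near zero for blur or a featureless frame. The threshold of 100 is an assumption.

```python
# Laplacian-variance focus measure for the step S51 "clear target" check.
import numpy as np

def laplacian_variance(gray):
    """Variance of a discrete 4-neighbour Laplacian (a standard sharpness metric)."""
    g = gray.astype(float)
    lap = (-4 * g[1:-1, 1:-1]
           + g[:-2, 1:-1] + g[2:, 1:-1]
           + g[1:-1, :-2] + g[1:-1, 2:])
    return lap.var()

def target_image_sharp(gray, thresh=100.0):
    """True when the target's edges are crisp enough (steps S51/S52)."""
    return bool(laplacian_variance(gray) > thresh)
```

A featureless or defocused frame scores near zero, while a frame containing the target's sharp outline scores orders of magnitude higher, matching the pass/fail split of S52.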
An embodiment of the invention further provides a method for detecting the usage state of a fundus camera, used to detect whether a user is wearing the fundus camera of the above embodiment correctly. The method may be executed by the fundus camera itself as a self-inspection method, or by an electronic device such as a computer or server. It is suitable to be executed after each important part of the camera has been confirmed normal by the detection method above, and comprises the following steps:
and S1, acquiring a first image acquired by the lens through the window of the surface patch assembly. In the scheme, the lens collects images of the external environment through the through hole 011 shown in fig. 2, the face sticker component is prevented from shielding the lens (the face sticker component is not in an imaging range), when a photographed person correctly wears the eye fundus camera, eyes are attached to the face sticker component 01, two eyes of a human body and surrounding skin are in a window (the through hole 011), and the lens collects corresponding first images. It is necessary to keep the illumination assembly in the off state during this step, i.e. without shining a light beam outward through the lens. In the scheme, the requirement on the definition of the collected image is not high, the focal distance used for collecting the image can be a fixed value, and the imaging plane is approximately arranged on the surface of the human body. Of course, the illumination assembly can be turned on first to perform automatic focusing, and the illumination assembly can be turned off after the imaging plane is more accurately arranged on the surface of the human body.
S2: determine whether the brightness of the first image meets the set criterion. If the subject's eyes rest against the face patch assembly 01 and no large gaps remain around them, the acquired first image should be dark. The brightness of the first image is judged; if it meets the set criterion, step S3 is executed, otherwise step S6 is executed.
There are various ways to judge whether the image brightness meets the set criterion: for example, a brightness value can be computed from the image's pixel values and compared with a threshold; alternatively, a neural network can be trained in advance on images of different brightness so that it can classify or regress image brightness, and then be used to recognize the first image and output a brightness result.
In a preferred embodiment, the first image is converted to a grayscale image and then the brightness of the grayscale image is identified.
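The pixel-value route for this brightness check can be sketched as grayscale conversion followed by a mean-brightness threshold. The BT.601 luma weights are a standard choice, and the threshold of 30 (out of 255) is illustrative; the patent leaves both open and also permits a trained classifier instead.

```python
# Darkness check for step S2: grayscale conversion plus mean-brightness threshold.
import numpy as np

def to_grayscale(rgb):
    """RGB (H, W, 3) uint8 -> grayscale float array using BT.601 luma weights."""
    return rgb[..., 0] * 0.299 + rgb[..., 1] * 0.587 + rgb[..., 2] * 0.114

def image_dark_enough(rgb, max_mean=30.0):
    """True when the eyes-on-face-patch image is dark enough (little light leaks in)."""
    return bool(to_grayscale(rgb).mean() <= max_mean)
```

A well-sealed fit yields a near-black frame that passes the threshold, while a gap around the eyes admits ambient light and pushes the mean above it.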
S3: turn on the illumination assembly and acquire a second image collected by the lens through the window of the face patch assembly. The positions of the lens and the subject are unchanged, but the illumination light source is turned on to shine outward through the lens; the illumination beam strikes the subject's eyes or skin and is reflected. In a preferred embodiment, the lens is aligned with the center of the window of the face patch assembly and the light source used is infrared; if a human head is fitted against the face patch assembly, the lens is aligned with the area between the eyes and an image as shown in fig. 9 can be acquired.
S4: determine from the second image whether a human head is fitted against the face patch assembly. If the subject's head is fitted against the face patch assembly, human skin reflects the illumination beam, so an obvious light spot appears in the image shown in fig. 9, with the features of human skin around it. Whether a human head is fitted against the face patch assembly can therefore be determined by judging whether the image is brighter at the center and gradually darker toward the edges.
Suppose that in steps S1-S2 no object is fitted against the face patch assembly but the camera is placed in a dark room, or that the face patch assembly is covered by some other object: the brightness of the first image would still be judged to meet the set criterion, which is why steps S3-S4 are needed for further determination. If no object is fitted against the face patch assembly, no light spot appears in the collected second image; if another object covers the face patch assembly, a light spot may appear in the second image, but because the material and surface shape differ, the illumination beam is reflected differently than by a human body, so the spot's features reveal whether a human body is present.
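The "bright center, gradually darker edges" test can be sketched as a center-versus-border brightness ratio on the second image. The region radii and the contrast ratio of 2.0 below are illustrative assumptions, not values from the patent.

```python
# Step S4 sketch: detect a reflected illumination spot by comparing the
# mean brightness of a central disc with the mean in the border region.
import numpy as np

def centre_edge_contrast(gray):
    """Ratio of mean brightness in the central disc to the outer border."""
    h, w = gray.shape
    yy, xx = np.mgrid[0:h, 0:w]
    r = np.hypot(yy - (h - 1) / 2.0, xx - (w - 1) / 2.0)  # radius from center
    centre = gray[r < min(h, w) / 4.0].mean()
    edge = gray[r > min(h, w) / 2.5].mean()
    return centre / max(edge, 1.0)  # guard against division by ~zero

def looks_like_skin_spot(gray, min_ratio=2.0):
    """True when a reflected spot (bright center, dark rim) is present."""
    return bool(centre_edge_contrast(gray) >= min_ratio)
```

A flat frame (no object against the face patch, or uniform ambient light leaking in) scores a ratio near 1 and is rejected; a skin reflection produces a pronounced central peak and passes.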
In other alternative embodiments, the lens may be aligned with other positions when the first and second images are acquired, such as with the eyeball, and in step S4 eyeball features may be identified in the image to determine whether a human body is present.
In a preferred embodiment, step S4 may first judge whether the brightness of the second image meets a set criterion. As with the first image, the second image may be converted to a grayscale image and its brightness value calculated, or a neural network may be used for recognition. If light leaks through a gap where the face patch assembly meets the human body, ambient light makes the brightness of the second image differ from the brightness observed when only the camera's own light source provides illumination. After light leakage has been ruled out, it is judged whether the features in the second image match those of human skin.
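The brightness gate described in this step can be sketched as follows. The grayscale conversion uses the standard Rec. 601 luma weights; the acceptance band `lo`/`hi` is an assumed placeholder, since the patent leaves the set criterion unspecified.

```python
import numpy as np

def second_image_brightness_ok(rgb, lo=20.0, hi=120.0):
    """Convert the second image to grayscale and require its mean
    intensity to fall in a band consistent with illumination by the
    camera's own source alone. [lo, hi] is an illustrative assumption."""
    gray = rgb[..., 0] * 0.299 + rgb[..., 1] * 0.587 + rgb[..., 2] * 0.114
    # Too bright suggests ambient light leaking through a gap at the
    # face patch assembly; too dark suggests the source did not fire.
    return lo <= gray.mean() <= hi

img = np.full((8, 8, 3), 60.0)      # plausibly sealed fit, source lit
leaky = np.full((8, 8, 3), 200.0)   # ambient light leaking in
print(second_image_brightness_ok(img), second_image_brightness_ok(leaky))
```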
Step S5 is performed when it is determined that a human head is attached to the face patch assembly; otherwise step S6 is performed.
S5, capturing of the fundus image begins. Specifically, the pupil is automatically located, the working distance and the focal length are adjusted so that the imaging plane lies on the fundus, and finally the fundus image is obtained by shooting.
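The capture sequence of step S5 can be expressed as an ordered series of operations. Every name below (`locate_pupil`, `set_working_distance`, `focus_on_fundus`, `shoot`, `FakeCamera`) is hypothetical; the patent names the steps, not an API.

```python
def capture_fundus_image(camera):
    """Order of operations in step S5: find the pupil, set the working
    distance, place the imaging plane on the fundus, then shoot."""
    pupil = camera.locate_pupil()        # automatically find the pupil
    camera.set_working_distance(pupil)   # move the lens into position
    camera.focus_on_fundus()             # adjust focal length to the fundus
    return camera.shoot()

class FakeCamera:
    """Stand-in used only to exercise the sequence above."""
    def __init__(self):
        self.log = []
    def locate_pupil(self):
        self.log.append("locate")
        return (120, 96)                 # illustrative pupil coordinates
    def set_working_distance(self, pupil):
        self.log.append("distance")
    def focus_on_fundus(self):
        self.log.append("focus")
    def shoot(self):
        self.log.append("shoot")
        return "fundus-image"

cam = FakeCamera()
print(capture_fundus_image(cam))  # fundus-image
print(cam.log)                    # ['locate', 'distance', 'focus', 'shoot']
```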
And S6, prompting the user to wear the fundus camera correctly. For example, a voice module may be provided in the fundus camera to prompt the user how to wear it correctly; thereafter, the process may return to step S1 for a new judgment.
According to the method for detecting the use state of a fundus camera provided by the embodiment of the invention, an image is first acquired with the lighting assembly turned off, and its brightness gives a preliminary judgment of whether the face patch assembly is well covered by an object. An image is then acquired with the lighting assembly turned on, and its image features determine whether the covering object is a human body. In this way it is automatically determined whether the photographed person is wearing the fundus camera correctly and using it in a suitable environment.
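The two-stage decision summarized above reduces to a small piece of logic. `detect_wearing_state` and its boolean inputs are illustrative names for the outcomes of the image checks described in steps S1-S4.

```python
def detect_wearing_state(first_image_dark_enough, spot_looks_like_skin):
    """Two-stage decision: a sufficiently dark first image (lighting
    assembly off) suggests the face patch assembly is covered; the spot
    features of the second image (lighting assembly on) decide whether
    the cover is a human head."""
    if not first_image_dark_enough:
        return "prompt-user"    # S6: not covered, or too much ambient light
    if not spot_looks_like_skin:
        return "prompt-user"    # S6: covered, but not by a human head
    return "start-capture"      # S5: begin shooting the fundus image

print(detect_wearing_state(True, True))   # start-capture
print(detect_wearing_state(True, False))  # prompt-user
```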
As will be appreciated by one skilled in the art, embodiments of the present invention may be provided as a method, system, or computer program product. Accordingly, the present invention may take the form of an entirely hardware embodiment, an entirely software embodiment or an embodiment combining software and hardware aspects. Furthermore, the present invention may take the form of a computer program product embodied on one or more computer-usable storage media (including, but not limited to, disk storage, CD-ROM, optical storage, and the like) having computer-usable program code embodied therein.
The present invention is described with reference to flowchart illustrations and/or block diagrams of methods, apparatus (systems), and computer program products according to embodiments of the invention. It will be understood that each flow and/or block of the flow diagrams and/or block diagrams, and combinations of flows and/or blocks in the flow diagrams and/or block diagrams, can be implemented by computer program instructions. These computer program instructions may be provided to a processor of a general purpose computer, special purpose computer, embedded processor, or other programmable data processing apparatus to produce a machine, such that the instructions, which execute via the processor of the computer or other programmable data processing apparatus, create means for implementing the functions specified in the flowchart flow or flows and/or block diagram block or blocks.
These computer program instructions may also be stored in a computer-readable memory that can direct a computer or other programmable data processing apparatus to function in a particular manner, such that the instructions stored in the computer-readable memory produce an article of manufacture including instruction means which implement the function specified in the flowchart flow or flows and/or block diagram block or blocks.
These computer program instructions may also be loaded onto a computer or other programmable data processing apparatus to cause a series of operational steps to be performed on the computer or other programmable apparatus to produce a computer implemented process such that the instructions which execute on the computer or other programmable apparatus provide steps for implementing the functions specified in the flowchart flow or flows and/or block diagram block or blocks.
It should be understood that the above examples are given only for clarity of illustration and are not intended to limit the embodiments. Other variations and modifications will be apparent to persons skilled in the art in light of the above description; it is neither necessary nor possible to enumerate all embodiments here, and obvious variations or modifications derived therefrom remain within the scope of the invention.
Claims (6)
1. A method for detecting a fundus camera, comprising:
controlling a moving assembly to adjust the position of a lens, and detecting whether the lens can move to the position of each positioning assembly;
after the lens can be moved to the position of each positioning assembly, controlling the moving assembly to move the lens to a set position, starting the lighting assembly, controlling the focusing assembly to adjust to a first focal length, and shooting to obtain a first image; when the first image is shot, the proportion of the external environment in the image is smaller than that of the face patch assembly; when the lens moves to the set position, in order to suit the actual working environment, the lens is aligned with a set position of the face patch assembly so that as little of the external environment as possible is shot and the face patch assembly occupies a larger proportion of the image than the external environment; in order to prevent the background in the shot image from affecting the imaging of the lighting assembly, the first focal length allows the lighting assembly to be imaged but does not allow the face patch assembly to be imaged;
judging whether the focusing assembly and the lighting assembly are normal according to the image features of the lighting assembly in the first image;
when the focusing assembly and the lighting assembly are normal, controlling the moving assembly to adjust the lens to a set depth position, controlling the focusing assembly to adjust to a second focal length, and shooting to obtain a second image; when the second image is shot, the proportion of the external environment in the image is smaller than that of the face patch assembly; in order to prevent the image of the lighting assembly from affecting the image of the photographed object, the second focal length allows the face patch assembly to be imaged but does not allow the lighting assembly to be imaged;
and judging whether the imaging function is normal according to the image features of the photographed object in the second image.
2. The method of claim 1, wherein the photographed object is a target disposed at a set position of the face patch assembly.
3. The method according to claim 1 or 2, wherein the determining whether the focusing assembly and the illumination assembly are normal according to the image characteristics of the illumination assembly in the first image specifically comprises:
identifying whether there are independent and distinct dots in the first image;
determining that the focusing assembly and the illumination assembly are normal when there are independent and distinct dots in the first image.
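One plausible realization of the claim-3 check, offered only as a sketch: threshold the first image and count connected bright regions, treating several small, separate regions as the "independent and distinct dots" imaged from the lighting sources. The claim prescribes no algorithm; the threshold and minimum region size are assumptions.

```python
import numpy as np
from collections import deque

def count_bright_dots(gray, thresh=180, min_px=2):
    """Count 4-connected components of pixels brighter than `thresh`,
    ignoring components smaller than `min_px` pixels (noise)."""
    mask = gray > thresh
    seen = np.zeros_like(mask, dtype=bool)
    h, w = mask.shape
    dots = 0
    for sy in range(h):
        for sx in range(w):
            if mask[sy, sx] and not seen[sy, sx]:
                # Flood-fill one bright component and measure its size.
                queue = deque([(sy, sx)])
                seen[sy, sx] = True
                size = 0
                while queue:
                    y, x = queue.popleft()
                    size += 1
                    for dy, dx in ((1, 0), (-1, 0), (0, 1), (0, -1)):
                        ny, nx = y + dy, x + dx
                        if (0 <= ny < h and 0 <= nx < w
                                and mask[ny, nx] and not seen[ny, nx]):
                            seen[ny, nx] = True
                            queue.append((ny, nx))
                if size >= min_px:
                    dots += 1
    return dots

first_image = np.zeros((16, 16))
first_image[2:4, 2:4] = 255      # one illumination dot
first_image[10:12, 11:13] = 255  # a second, separate dot
print(count_bright_dots(first_image))  # 2
```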
4. The method according to claim 2, wherein judging whether the imaging function is normal according to the image feature of the photographed object in the second image specifically comprises:
identifying whether a sharp image of the target is present in the second image;
determining that imaging function is normal when there is a sharp image of the target in the second image.
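A common proxy for the "sharp image of the target" test in claim 4 is the variance of the discrete Laplacian; the claim itself does not prescribe a sharpness metric, so the kernel and threshold below are assumptions.

```python
import numpy as np

def is_sharp(gray, thresh=50.0):
    """High variance of the 4-neighbor discrete Laplacian indicates
    strong local contrast, i.e. a focused image of the target."""
    lap = (-4.0 * gray[1:-1, 1:-1]
           + gray[:-2, 1:-1] + gray[2:, 1:-1]
           + gray[1:-1, :-2] + gray[1:-1, 2:])
    return lap.var() > thresh

checker = np.indices((16, 16)).sum(0) % 2 * 255.0  # high-contrast target
blurred = np.full((16, 16), 128.0)                 # featureless frame
print(is_sharp(checker), is_sharp(blurred))
```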
5. A fundus camera detection apparatus, comprising: at least one processor; and a memory communicatively coupled to the at least one processor; wherein the memory stores instructions executable by the at least one processor to cause the at least one processor to perform the fundus camera inspection method of any of claims 1-4.
6. A fundus camera, comprising: a motion assembly, a positioning assembly, an illumination assembly, a focusing assembly, a face patch assembly, a lens, a processor, and a memory in communication connection with the processor; wherein the memory stores instructions executable by the processor to cause the processor to perform a fundus camera inspection method according to any one of claims 1 to 4.
Priority Applications (1)
Application Number | Priority Date | Filing Date | Title |
---|---|---|---|
CN202011095178.0A CN112190228B (en) | 2020-10-14 | 2020-10-14 | Fundus camera and detection method thereof |
Publications (2)
Publication Number | Publication Date |
---|---|
CN112190228A CN112190228A (en) | 2021-01-08 |
CN112190228B true CN112190228B (en) | 2022-04-22 |
Family
ID=74009682
Family Applications (1)
Application Number | Title | Priority Date | Filing Date |
---|---|---|---|
CN202011095178.0A Active CN112190228B (en) | 2020-10-14 | 2020-10-14 | Fundus camera and detection method thereof |
Country Status (1)
Country | Link |
---|---|
CN (1) | CN112190228B (en) |
Families Citing this family (1)
Publication number | Priority date | Publication date | Assignee | Title |
---|---|---|---|---|
CN114025151B (en) * | 2021-09-23 | 2023-11-14 | 北京鹰瞳科技发展股份有限公司 | Fundus camera fault detection method, fundus camera fault detection device and storage medium |
Family Cites Families (7)
Publication number | Priority date | Publication date | Assignee | Title |
---|---|---|---|---|
CN103096118B (en) * | 2011-10-28 | 2015-10-07 | 浙江大华技术股份有限公司 | Camera zoom detection method and device |
CN106101561B (en) * | 2016-08-09 | 2019-06-04 | 青岛海信移动通信技术股份有限公司 | Camera focusing detection method and device |
TWI692969B (en) * | 2019-01-15 | 2020-05-01 | 沅聖科技股份有限公司 | Camera automatic focusing method and device thereof |
CN211355388U (en) * | 2019-09-25 | 2020-08-28 | 刘奕慧 | Vision detection device |
CN111449620A (en) * | 2020-04-30 | 2020-07-28 | 上海美沃精密仪器股份有限公司 | Full-automatic fundus camera and automatic photographing method thereof |
CN111419176B (en) * | 2020-06-10 | 2020-10-16 | 北京青燕祥云科技有限公司 | Fundus camera with image transmission function, fundus camera system and control method thereof |
CN111631677A (en) * | 2020-07-06 | 2020-09-08 | 重庆康华瑞明科技股份有限公司 | Full-automatic system of full compatible ophthalmology inspection instrument of modularization |
Similar Documents
Publication | Publication Date | Title |
---|---|---|
CN112043236B (en) | Fundus camera and full-automatic fundus image shooting method | |
CN112220447B (en) | Fundus camera and fundus image shooting method | |
CN112075921B (en) | Fundus camera and focal length adjusting method thereof | |
JP4750721B2 (en) | Custom glasses manufacturing method | |
US7726814B2 (en) | Reflection microscope and method | |
JP4583527B2 (en) | How to determine eye position | |
US8534836B2 (en) | Fundus camera | |
US7845797B2 (en) | Custom eyeglass manufacturing method | |
JP5694161B2 (en) | Pupil detection device and pupil detection method | |
CN112190227B (en) | Fundus camera and method for detecting use state thereof | |
US20120050515A1 (en) | Image processing apparatus and image processing method | |
CN107495919A (en) | Ophthalmoligic instrument | |
US20150335242A1 (en) | Ophthalmic apparatus and control method for the same | |
JP5601179B2 (en) | Gaze detection apparatus and gaze detection method | |
CN112075920B (en) | Fundus camera and working distance adjusting method thereof | |
CN112190228B (en) | Fundus camera and detection method thereof | |
CN112220448B (en) | Fundus camera and fundus image synthesis method | |
CN112043237A (en) | Full-automatic portable self-timer fundus camera | |
CN212281326U (en) | Full-automatic portable self-timer fundus camera | |
JPH11206713A (en) | Equipment for detecting line of sight and line-of-sight detector using the same | |
JP3813015B2 (en) | Image input device and individual identification device | |
CN116849602A (en) | Fundus image shooting method and device and main control equipment | |
JP2021027876A (en) | Ophthalmologic apparatus | |
JPH11150678A (en) | Image pickup device and image pick up method | |
JP2022076390A (en) | Ophthalmologic apparatus and ophthalmologic apparatus control program |
Legal Events
Date | Code | Title | Description |
---|---|---|---|
PB01 | Publication | ||
SE01 | Entry into force of request for substantive examination | ||
GR01 | Patent grant | ||