CN112190227B - Fundus camera and method for detecting use state thereof - Google Patents

Info

Publication number
CN112190227B
CN112190227B
Authority
CN
China
Prior art keywords
image
assembly
lens
brightness
fundus camera
Prior art date
Legal status (The legal status is an assumption and is not a legal conclusion. Google has not performed a legal analysis and makes no representation as to the accuracy of the status listed.)
Active
Application number
CN202011095173.8A
Other languages
Chinese (zh)
Other versions
CN112190227A (en)
Inventor
胡丁山
和超
张大磊
Current Assignee (The listed assignees may be inaccurate. Google has not performed a legal analysis and makes no representation or warranty as to the accuracy of the list.)
Shanghai Eaglevision Medical Technology Co Ltd
Beijing Airdoc Technology Co Ltd
Original Assignee
Shanghai Eaglevision Medical Technology Co Ltd
Beijing Airdoc Technology Co Ltd
Priority date (The priority date is an assumption and is not a legal conclusion. Google has not performed a legal analysis and makes no representation as to the accuracy of the date listed.)
Filing date
Publication date
Application filed by Shanghai Eaglevision Medical Technology Co Ltd, Beijing Airdoc Technology Co Ltd filed Critical Shanghai Eaglevision Medical Technology Co Ltd
Priority to CN202011095173.8A priority Critical patent/CN112190227B/en
Publication of CN112190227A publication Critical patent/CN112190227A/en
Application granted granted Critical
Publication of CN112190227B publication Critical patent/CN112190227B/en
Active legal-status Critical Current
Anticipated expiration legal-status Critical

Classifications

    • A: HUMAN NECESSITIES
    • A61: MEDICAL OR VETERINARY SCIENCE; HYGIENE
    • A61B: DIAGNOSIS; SURGERY; IDENTIFICATION
    • A61B 3/00: Apparatus for testing the eyes; Instruments for examining the eyes
    • A61B 3/10: Objective types, i.e. instruments for examining the eyes independent of the patients' perceptions or reactions
    • A61B 3/12: Objective types, i.e. instruments for examining the eyes independent of the patients' perceptions or reactions, for looking at the eye fundus, e.g. ophthalmoscopes
    • A: HUMAN NECESSITIES
    • A61: MEDICAL OR VETERINARY SCIENCE; HYGIENE
    • A61B: DIAGNOSIS; SURGERY; IDENTIFICATION
    • A61B 3/00: Apparatus for testing the eyes; Instruments for examining the eyes
    • A61B 3/10: Objective types, i.e. instruments for examining the eyes independent of the patients' perceptions or reactions
    • A61B 3/14: Arrangements specially adapted for eye photography

Landscapes

  • Life Sciences & Earth Sciences (AREA)
  • Health & Medical Sciences (AREA)
  • Medical Informatics (AREA)
  • Biophysics (AREA)
  • Ophthalmology & Optometry (AREA)
  • Engineering & Computer Science (AREA)
  • Biomedical Technology (AREA)
  • Heart & Thoracic Surgery (AREA)
  • Physics & Mathematics (AREA)
  • Molecular Biology (AREA)
  • Surgery (AREA)
  • Animal Behavior & Ethology (AREA)
  • General Health & Medical Sciences (AREA)
  • Public Health (AREA)
  • Veterinary Medicine (AREA)
  • Eye Examination Apparatus (AREA)

Abstract

The invention provides a fundus camera and a method for detecting its use state, the method comprising: turning off the illumination assembly and acquiring a first image captured by the lens through the window of the face patch assembly; judging whether the brightness of the first image reaches a set standard; when the brightness of the first image reaches the set standard, turning on the illumination assembly and acquiring a second image captured by the lens through the window of the face patch assembly; and determining, according to the second image, whether a human head is attached to the face patch assembly.

Description

Fundus camera and method for detecting use state thereof
Technical Field
The invention relates to the field of ophthalmic instruments, in particular to a fundus camera and a use state detection method thereof.
Background
The retina is the only tissue of the human body in which capillaries and nerves can be observed directly; observing it can reveal not only eye health problems but also systemic conditions such as diabetic complications and hypertension. A fundus camera is a dedicated device for photographing the retina. To solve the problems of aligning the lens with the pupil, fixing the axial distance between lens and pupil, and focusing, a traditional fundus camera requires complex and expensive hardware modules and is also complicated to operate, which hinders its popularization.
Before fundus photographs are taken, it must be ensured that the subject is using the fundus camera correctly, for example that the head is placed in the correct position. Existing fundus camera systems require the subject to prepare according to a prescribed procedure, the whole process requires the assistance of professionals, and photographs are taken only after a professional has judged the use state, so convenience needs to be improved.
Disclosure of Invention
In view of the above, the present invention provides a method for detecting the use state of a fundus camera, including:
turning off the illumination assembly, and acquiring a first image captured by the lens through the window of the face patch assembly;
judging whether the brightness of the first image reaches a set standard;
when the brightness of the first image reaches the set standard, turning on the illumination assembly and acquiring a second image captured by the lens through the window of the face patch assembly;
and determining, according to the second image, whether a human head is attached to the face patch assembly.
Optionally, the lens is aligned with the center of the window of the face patch assembly, so that it is aligned with the area between the two eyes when a human head is attached to the face patch assembly.
Optionally, judging whether the brightness of the first image reaches the set standard specifically includes:
converting the first image into a grayscale image;
and calculating a brightness value from the pixel values of the grayscale image.
Optionally, turning on the illumination assembly specifically means turning on an infrared light source in the illumination assembly.
Optionally, determining whether a human head is attached to the face patch assembly according to the second image specifically includes:
judging whether the features in the second image conform to human skin features;
and when the features in the second image conform to human skin features, determining that a human head is attached to the face patch assembly.
Optionally, determining whether a human head is attached to the face patch assembly according to the second image specifically includes:
judging whether the brightness of the second image reaches a set standard;
when the brightness of the second image reaches the set standard, judging whether the features in the second image conform to human skin features;
and when the features in the second image conform to human skin features, determining that a human head is attached to the face patch assembly.
Optionally, judging whether the brightness of the second image reaches the set standard specifically includes:
converting the second image into a grayscale image;
and calculating a brightness value from the pixel values of the grayscale image.
Accordingly, the present invention provides a fundus camera detecting apparatus comprising: at least one processor; and a memory communicatively coupled to the at least one processor; wherein the memory stores instructions executable by the at least one processor to cause the at least one processor to perform the above method for detecting the use state of a fundus camera.
Accordingly, the present invention provides a fundus camera comprising: an illumination assembly, a face patch assembly, a lens, a processor, and a memory communicatively connected to the processor; wherein the memory stores instructions executable by the processor to cause the processor to perform the above method for detecting the use state of a fundus camera.
According to the fundus camera and its use state detection method, an image is acquired with the illumination assembly turned off, and the brightness of that image allows a preliminary judgment of whether the face patch assembly is well covered by an object; an image is then acquired with the illumination assembly turned on, and its image features allow a further judgment of whether the covering object is a human body. Whether the subject is wearing the fundus camera correctly and using it in a suitable environment is thus determined automatically.
Drawings
In order to more clearly illustrate the embodiments of the present invention or the technical solutions in the prior art, the drawings used in the description of the embodiments or the prior art will be briefly described below, and it is obvious that the drawings in the following description are some embodiments of the present invention, and other drawings can be obtained by those skilled in the art without creative efforts.
Fig. 1 is a configuration diagram of a fundus camera in an embodiment of the present invention;
FIG. 2 is a schematic view of a patch assembly of a fundus camera in an embodiment of the present invention;
FIG. 3 is a schematic view of a lens and a positioning assembly;
fig. 4 is a flowchart of a method for detecting a usage status of a fundus camera according to an embodiment of the present invention;
FIG. 5 is a block diagram of an illumination lamp;
FIG. 6 is a schematic view of imaging illumination reflected light while detecting a camera state;
FIG. 7 is a schematic view of the projection of the face mount assembly being imaged while detecting the camera status;
FIG. 8 is a schematic view of the imaging of the raised portion of the surface mount component with the target during the detection of the camera state;
fig. 9 is an image of an area between both eyes acquired when the usage state of the subject is detected.
Detailed Description
The technical solutions of the present invention will be described clearly and completely with reference to the accompanying drawings, and it should be understood that the described embodiments are some, but not all embodiments of the present invention. All other embodiments, which can be derived by a person skilled in the art from the embodiments given herein without making any creative effort, shall fall within the protection scope of the present invention.
In the description of the present invention, it should be noted that the terms "center", "upper", "lower", "left", "right", "vertical", "horizontal", "inner", "outer", etc., indicate orientations or positional relationships based on the orientations or positional relationships shown in the drawings, and are only for convenience of description and simplicity of description, but do not indicate or imply that the device or element being referred to must have a particular orientation, be constructed and operated in a particular orientation, and thus, should not be construed as limiting the present invention. Furthermore, the terms "first," "second," and "third" are used for descriptive purposes only and are not to be construed as indicating or implying relative importance.
In the description of the present invention, it should be noted that, unless otherwise explicitly specified or limited, the terms "mounted," "connected," and "coupled" are to be construed broadly, e.g., as a fixed connection, a removable connection, or an integral connection; the connection may be mechanical or electrical; two elements may be connected directly or indirectly through an intermediate medium, may communicate with each other internally, or may be connected wirelessly or by wire. The specific meanings of the above terms in the present invention can be understood by those skilled in the art on a case-by-case basis.
In addition, the technical features involved in the different embodiments of the present invention described below may be combined with each other as long as they do not conflict with each other.
Fig. 1 shows a fully automatic portable self-photographing fundus camera, which includes a face patch assembly 01, a motion assembly, a positioning assembly 03 and a lens barrel 1. An illumination assembly, a focusing assembly, a lens (objective), an optical lens group, an imaging detector 10 and the like are arranged inside the lens barrel 1; for its internal structure, reference may be made to Chinese patent document CN 111134616A. The actual product also comprises a housing, inside which the motion assembly and the lens barrel 1 are located. The face patch assembly 01 is sealingly connected to the front of the housing and comprises a face patch body and a window through hole formed in the body for accommodating the subject's eyes when the face is fitted against it. The face patch assembly 01 is the member that contacts the subject's eyes, and the lens barrel 1 collects the subject's fundus retinal image through the through hole of the face patch assembly 01.
The surface of the face patch body facing away from the lens barrel 1 is shaped to fit the contour of the face around the subject's eyes. Specifically, the face patch assembly 01 is concave inward to fit the curvature of the human head, and the through hole is at least large enough to accommodate both eyes when the subject's face is fitted against the assembly. The surface of the face patch assembly 01 facing inward (toward the housing and lens barrel) is provided with at least one specific position used to detect various functions of the camera. In a specific embodiment, referring to Figs. 1 and 2, Fig. 2 shows the inward surface of the face patch assembly 01; a protrusion 012 is provided at the upper edge of the middle of the through hole 011, so that the lens of the lens barrel 1 can be aligned with this surface and capture an image. More preferably, a pattern or simple figure is provided on the protrusion 012 as a target. This specific position serves multiple purposes, including detecting whether the illumination assembly and focusing assembly of the camera are normal and detecting whether the subject's eyes are correctly fitted against the face patch assembly 01, as described in detail below.
The motion assembly is used to move the lens barrel 1 in three-dimensional space; taking the coordinate system in Fig. 1 as an example, it can move along the three axes X, Y and Z. Note that when the lens barrel 1 moves to its extreme position in the Z direction, its end does not protrude beyond the face patch assembly 01. As a specific example, the motion assembly includes three rail assemblies: a first set of rails 021 for moving the lens barrel 1 along the X axis, a second set of rails 022 for the Y axis, and a third set of rails, not shown, for the Z axis. Specifically, the lens barrel 1 is arranged together with the second set of rails 022 on a platform (base); the first set of rails 021 drives the base as a whole, and the third set of rails drives the base and the first set of rails 021 so that the whole approaches or moves away from the face patch assembly 01.
The positioning assembly 03 is used to detect the movement of the lens barrel 1. Specifically, the positioning assembly 03 may be an electromagnetic sensor that senses, from an electromagnetic induction signal, that the lens barrel 1 has moved to its position. Referring to Fig. 3, in this embodiment three positioning assemblies 03 are provided: two are arranged on either side of the movable base to detect movement of the lens barrel 1 along the X axis, and a third is arranged on the base to detect movement along the Y axis; that is, the positioning assemblies 03 detect the movement of the lens barrel 1 in the XY plane.
In the fundus camera provided by the invention, the illumination assembly, focusing assembly, objective lens, optical lens group and imaging detector are integrated in a single lens barrel, miniaturizing the optical path, reducing the camera's volume and improving portability. The face patch assembly of the fundus camera is provided with a window through hole for accommodating the subject's eyes; the user can put on the fundus camera unaided and place the eyes at the window, while the motion assembly drives the lens barrel to search for the pupil within the window and adjust the working distance so as to capture the fundus image. This scheme reduces the complexity and difficulty of use of the fundus camera hardware, lets users capture fundus images on their own, and promotes the popularization of the fundus camera.
Embodiments of the present invention provide a fundus camera detection method, which may be performed by a fundus camera itself as a self-detection method, or may be performed by an electronic device such as a computer or a server as a product detection method. The method comprises the following steps:
and S1, controlling the motion assembly to adjust the position of the lens, and detecting whether the lens can move to the position of each positioning assembly. The method is adapted to be carried out when the fundus camera has just been activated, first the lens (according to the above-described embodiment, the lens is provided integrally with the barrel, i.e. the shift lens) is shifted to the initial position. Then, referring to fig. 3, the moving assembly adjusts the position of the lens to detect whether the lens can move to the position of the 3 positioning assemblies. If the motion components can be moved to these positions, the motion components are considered to be functioning properly, step S2 may be performed, otherwise step S6 is performed. Step S1 may be referred to as a moving component XY axis movement detection step.
S2, control the motion assembly to move the lens to a set position, turn on the illumination assembly, control the focusing assembly to adjust to a first focal length, and shoot to obtain a first image. The purpose of this step is to detect whether the focusing assembly and illumination assembly work normally; in theory the lens position need not be restricted, so there are many options for the set position in this step. In an actual working environment, however, the external environment is uncertain: a relatively bright environment, for instance, may disturb the image content if the first image were captured against it. To adapt to the actual working environment, the lens is therefore moved to face a specific part of the face patch assembly (such as the protrusion), so that as little of the external environment as possible is photographed and the face patch assembly occupies a larger proportion of the image than the external environment does. Of course, the shape of the face patch assembly and its through hole could also be modified so that the image captured in this step contains no external environment at all.
By setting an appropriate focal length, the illumination assembly itself can be imaged. For example, Fig. 5 shows the structure of the illumination lamp in one lens barrel: four lamp beads arranged on a ring-shaped structure. The four beads are turned on and imaging is performed at the first focal length; an image as shown in Fig. 6 is expected.
In a preferred embodiment, in order to avoid the background of the captured image affecting the imaging of the illumination assembly, the first focal length is set so that the illumination assembly is imaged but the face patch assembly is not. Thus only the illumination assembly appears in the first image, without objects such as the protrusion of the face patch assembly, which improves the accuracy of image recognition in the subsequent step.
S3, judge whether the focusing assembly and the illumination assembly are normal according to the image features of the illumination assembly in the first image. If the focusing assembly works normally, using the set focal length should yield an image as shown in Fig. 6, whose distinct features depend on the actual shape of the illumination assembly. For example, in this embodiment there should be four separate, distinct dots in the first image, the imaging result of the four lamp beads. If the adjusted focal length is not the first focal length, the dots in the image become larger and blurred, or smaller; if the illumination assembly is not on, no shape appears in the image at all.
Recognizing the first image with a machine vision algorithm or a neural network algorithm reveals whether the expected features are present. If the focusing assembly and the illumination assembly are judged normal, step S4 is performed; otherwise step S6 is performed. Steps S2-S3 may be called the focusing assembly and illumination assembly detection step.
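The dot-counting check of steps S2-S3 can be sketched with plain image operations. The sketch below is illustrative only: the patent leaves recognition to a machine vision or neural network algorithm, and the brightness threshold, 4-connectivity, and expected blob count of four are assumptions taken from the four-bead embodiment of Figs. 5-6.

```python
import numpy as np

def count_bright_blobs(gray, threshold=200):
    """Count 4-connected bright regions in a grayscale image.

    In the four-bead embodiment, a normal focusing and illumination
    state should yield exactly 4 blobs, one per lamp bead. The
    threshold and connectivity here are illustrative choices.
    """
    mask = gray >= threshold
    visited = np.zeros_like(mask, dtype=bool)
    h, w = mask.shape
    blobs = 0
    for y in range(h):
        for x in range(w):
            if mask[y, x] and not visited[y, x]:
                blobs += 1            # found a new connected component
                stack = [(y, x)]
                visited[y, x] = True
                while stack:          # flood-fill the component
                    cy, cx = stack.pop()
                    for ny, nx in ((cy - 1, cx), (cy + 1, cx),
                                   (cy, cx - 1), (cy, cx + 1)):
                        if (0 <= ny < h and 0 <= nx < w
                                and mask[ny, nx] and not visited[ny, nx]):
                            visited[ny, nx] = True
                            stack.append((ny, nx))
    return blobs

def illumination_ok(gray, expected=4):
    """Step S3 sketch: the first image should show the expected dots."""
    return count_bright_blobs(gray) == expected

# Synthetic first image: four bright 5x5 dots on a dark background.
frame = np.zeros((100, 100), dtype=np.uint8)
for cy, cx in [(25, 25), (25, 75), (75, 25), (75, 75)]:
    frame[cy - 2:cy + 3, cx - 2:cx + 3] = 255
```

A blurred or off lamp would merge, shrink, or remove the dots, so the count would no longer match and step S6 would follow.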
S4, control the motion assembly to adjust the lens to a set depth position, control the focusing assembly to adjust to a second focal length, and shoot to obtain a second image. In this step a known object is imaged; as a preferred embodiment, the protrusion of the face patch assembly is used as the known object. Specifically, the lens is first aligned with the protrusion of the face patch assembly in the XY plane; in this embodiment step S2 has already aligned it with this part, so no adjustment is needed here, while in other embodiments, if step S2 did not align with this position, the adjustment is made in this step. This step then adjusts the depth, i.e. the position of the lens on the Z axis, which can be understood as adjusting the shooting distance to the known object, and then sets the focal length.
In order to image the external object, the focal length at this point differs from that used in step S2 and should be adapted to the current lens position (depth position); an image as shown in Fig. 7 is expected.
In a preferred embodiment, in order to prevent the image of the illumination assembly from affecting the image of the photographed object, the second focal length is set so that the face patch assembly is imaged but the illumination assembly is not. Thus only the photographed object, such as the protrusion of the face patch assembly, can appear in the second image; the image of the illumination assembly does not appear, which improves the accuracy of image recognition in the subsequent step.
S5, judge whether the imaging function is normal according to the image features of the photographed object in the second image. The second image is captured under the condition that the XY-axis movement of the motion assembly, the illumination assembly and the focusing assembly are all normal; the purpose of this step is to detect whether the movement of the motion assembly along the Z axis is normal. If the motion assembly could adjust the lens to the set depth position in step S4, a clear photographed object, such as the protrusion of the face patch assembly shown in Fig. 7, should appear in the captured second image.
Recognize the second image with a machine vision algorithm or a neural network algorithm to identify whether the expected features are present. If the movement of the motion assembly along the Z axis is judged normal, the detection ends and each main component of the fundus camera is determined to function normally; otherwise step S6 is executed. Steps S4-S5 may be called the motion assembly Z-axis movement detection step.
S6, judge that the state of the fundus camera is abnormal, and indicate the specific faulty part to the user according to the abnormal component. A voice module or an information display module may be provided in the fundus camera to announce or display the corresponding fault information to the user.
According to the fundus camera detection method provided by this embodiment of the invention, the positioning assemblies verify whether the motion assembly can adjust the lens position normally; after the motion assembly is confirmed normal, the focal length is adjusted so that the illumination assembly is imaged, and the acquired image is judged to determine whether the focusing assembly and illumination assembly are normal; finally, the motion assembly adjusts the lens depth and the focal length is adjusted so that the photographed object is imaged, and the object's features in the image are judged to verify whether the motion assembly can adjust the lens depth normally. Whether each important part of the fundus camera can work normally is thus determined automatically. This scheme allows self-service inspection of the equipment's working state in a remote, unattended environment, improving the convenience of fundus photography and promoting the popularization of the fundus camera.
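The S1 to S6 flow above is essentially a sequential self-test that stops at the first failing stage and reports it. A minimal sketch of that control flow, with stage names and check callables invented for illustration:

```python
def run_self_test(stages):
    """Run the self-test stages in order (S1, then S2-S3, then S4-S5);
    stop at the first failing stage and return its name so it can be
    reported to the user (step S6). Return None when every main
    component passes. Stage names and check callables are illustrative,
    not from the patent.
    """
    for name, check in stages:
        if not check():
            return name
    return None

# Simulated run in which the focusing/illumination check fails.
fault = run_self_test([
    ("motion assembly XY axes", lambda: True),     # step S1
    ("focusing and illumination", lambda: False),  # steps S2-S3
    ("motion assembly Z axis", lambda: True),      # steps S4-S5
])
```

The returned stage name would be handed to the voice or display module described in step S6.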
In a preferred embodiment, the protrusion of the face patch assembly is provided with a target, i.e. the photographed object mentioned above is a target on the set position of the face patch assembly; its specific content is not limited, and one or more clear patterns or shapes are feasible. The second image obtained is shown in Fig. 8 and includes a circular target 81. Step S5 then specifically includes:
s51, identifying whether a clear target image exists in the second image;
and S52, determining that the imaging function is normal when a clear image of the target exists in the second image.
Target identification with a machine vision algorithm or a neural network algorithm is more accurate: if the target contour is absent from the image or is not clear, this is more easily recognized, which further improves the accuracy of the camera function judgment.
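Judging whether a "clear target image" exists (steps S51-S52) can be approximated with a simple sharpness score. The Laplacian-variance measure and the score threshold below are illustrative assumptions, not the recognition algorithm of the patent, which may be machine vision or a neural network:

```python
import numpy as np

def laplacian_variance(gray):
    """Variance of a discrete 4-neighbour Laplacian: a simple sharpness
    score. Crisp target edges give a high score; a blurred or missing
    target gives a low one."""
    g = gray.astype(float)
    lap = (-4.0 * g[1:-1, 1:-1] + g[:-2, 1:-1] + g[2:, 1:-1]
           + g[1:-1, :-2] + g[1:-1, 2:])
    return float(lap.var())

def target_is_clear(gray, min_score=50.0):
    """Steps S51-S52 sketch: decide whether a clear target image
    exists. The score threshold is an illustrative assumption."""
    return laplacian_variance(gray) >= min_score

# A hard-edged synthetic "target" versus a featureless frame.
sharp = np.zeros((64, 64), dtype=np.uint8)
sharp[16:48, 16:48] = 255
featureless = np.full((64, 64), 128, dtype=np.uint8)
```

A blurred target, like the featureless frame, yields near-zero edge response, so the imaging function would be judged abnormal and step S6 would follow.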
This embodiment of the invention provides a method for detecting the use state of a fundus camera, used to detect whether the user is correctly wearing the fundus camera of the embodiment above. The method may be executed by the fundus camera itself as a self-inspection method, or by an electronic device such as a computer or server. It is suitably executed after each important component of the camera has been determined to be normal according to the detection method above and, as shown in Fig. 4, comprises the following steps:
and S1, acquiring a first image acquired by the lens through the window of the surface patch assembly. In the scheme, the lens collects images of the external environment through the through hole 011 shown in fig. 2, the face sticker component is prevented from shielding the lens (the face sticker component is not in an imaging range), when a photographed person correctly wears the eye fundus camera, eyes are attached to the face sticker component 01, two eyes of a human body and surrounding skin are in a window (the through hole 011), and the lens collects corresponding first images. It is necessary to keep the illumination assembly in the off state during this step, i.e. without shining a light beam outward through the lens. In the scheme, the requirement on the definition of the collected image is not high, the focal distance used for collecting the image can be a fixed value, and the imaging plane is approximately arranged on the surface of the human body. Of course, the illumination assembly can be turned on first to perform automatic focusing, and the illumination assembly can be turned off after the imaging plane is more accurately arranged on the surface of the human body.
S2, judge whether the brightness of the first image meets the set criterion. If the subject's eyes fit against the face patch assembly 01 and there are no large gaps around them, the acquired first image should be dark. The brightness of the first image is judged; if it reaches the set standard, step S3 is executed, otherwise step S6 is executed.
There are various ways to judge whether the image brightness meets the set standard: for example, a brightness value can be calculated from the image's pixel values and compared with a threshold; alternatively a neural network, trained in advance on images of different brightness so that it can classify or regress image brightness, can be used to recognize the first image and output a recognition result for the brightness.
In a preferred embodiment, the first image is converted to a grayscale image and then the brightness of the grayscale image is identified.
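The grayscale-and-threshold judgment of steps S1-S2 can be sketched as follows. The BT.601 luma weights and the darkness threshold are illustrative choices, since the patent only requires a brightness value computed from the grayscale pixel values and compared against a set standard:

```python
import numpy as np

def mean_brightness(image_rgb):
    """Grayscale-convert an RGB image (BT.601 luma weights, an
    illustrative choice) and return its mean brightness."""
    gray = (0.299 * image_rgb[..., 0]
            + 0.587 * image_rgb[..., 1]
            + 0.114 * image_rgb[..., 2])
    return float(gray.mean())

def first_image_is_dark(image_rgb, threshold=30.0):
    """Step S2 sketch: with the illumination assembly off, a properly
    worn camera should yield a dark first image. The threshold is an
    illustrative assumption, not a value from the patent."""
    return mean_brightness(image_rgb) < threshold

# A nearly black frame passes the check; a bright one fails it.
dark = np.full((480, 640, 3), 5, dtype=np.uint8)
bright = np.full((480, 640, 3), 200, dtype=np.uint8)
```

A bright first image means light is leaking past the face patch assembly, so the flow would branch to step S6 and prompt the user.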
S3, turn on the illumination assembly and acquire a second image captured by the lens through the window of the face patch assembly. The positions of the lens and the subject are unchanged, but the illumination light source is now turned on to shine outward through the lens; the beam strikes the subject's eyes or skin and is reflected. In a preferred embodiment the lens is aligned with the center of the window of the face patch assembly and the light source used is infrared; if a human head is attached to the face patch assembly, the lens is aligned with the area between the eyes and an image as shown in Fig. 9 can be acquired.
S4, determine from the second image whether a human head is attached to the face patch assembly. If the subject's head is attached, human skin reflects the illumination beam, so an obvious light spot appears in the image shown in Fig. 9, with human skin features around it; whether a human head is attached to the face patch assembly can be determined by judging whether the image is brighter in the center and gradually darker toward the edges.
Suppose that in steps S1-S2 no object is attached to the face patch assembly but the camera is placed in a dark room, or that the assembly is covered by some other object; the brightness of the first image would still be judged to meet the set criterion, which is why steps S3-S4 are needed for further judgment. If no object is attached to the face patch assembly, no light spot appears in the acquired second image; if another object covers it, a light spot does appear, but because the material and surface shape differ, the reflection of the illumination beam differs from that of a human body, so the spot's features reveal whether a human body is present.
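The "brighter center, darker edges" judgment of step S4 can be approximated by comparing the mean brightness of a central patch with that of the image border. The region sizes and the ratio threshold below are illustrative assumptions:

```python
import numpy as np

def center_edge_ratio(gray):
    """Ratio of mean brightness in a central patch to mean brightness
    along the image border. A skin-reflected infrared spot should be
    bright in the center and darker toward the edges."""
    h, w = gray.shape
    cy, cx, dy, dx = h // 2, w // 2, h // 8, w // 8
    center = gray[cy - dy:cy + dy, cx - dx:cx + dx].astype(float)
    border = np.concatenate([
        gray[:dy].ravel(), gray[-dy:].ravel(),
        gray[:, :dx].ravel(), gray[:, -dx:].ravel(),
    ]).astype(float)
    return center.mean() / max(border.mean(), 1.0)

def looks_like_skin_reflection(gray, min_ratio=1.5):
    """Step S4 sketch: a pronounced central hot spot suggests the head
    is attached; a flat profile suggests it is not. The region sizes
    and ratio threshold are illustrative assumptions."""
    return center_edge_ratio(gray) >= min_ratio

# Synthetic radial spot (bright center fading outward) versus a flat frame.
yy, xx = np.mgrid[0:100, 0:100]
radius = np.sqrt((yy - 50) ** 2 + (xx - 50) ** 2)
spot = np.clip(255 - 4 * radius, 0, 255).astype(np.uint8)
flat = np.full((100, 100), 40, dtype=np.uint8)
```

A non-skin covering would produce a different reflection profile, so in practice this check would be combined with the skin-feature recognition described above.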
In other alternative embodiments, the lens may be aimed at other positions when the first and second images are acquired, such as at the eyeball, in which case step S4 identifies eyeball features in the image to determine whether a human body is present.
In a preferred embodiment, step S4 may first judge whether the brightness of the second image meets a set criterion. As with the first image, the second image may be converted to a grayscale image and a brightness value calculated, or a neural network may be used for the recognition. If light leaks through a gap where the face patch assembly meets the person's face, ambient light makes the brightness of the second image differ from the brightness obtained when only the camera's own light source illuminates the scene. Once light leakage has been ruled out, the method judges whether the features in the second image match those of human skin.
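The brightness criterion described above reduces to a grayscale conversion followed by a mean-luminance range test. The BT.601 luma weights and the brightness bounds in this sketch are illustrative assumptions; the patent does not specify them:

```python
import numpy as np

def mean_brightness(image: np.ndarray) -> float:
    """Mean luminance of an image; RGB input is converted to gray first."""
    if image.ndim == 3:
        # Assumed RGB layout; ITU-R BT.601 luma weights (sum to 1.0).
        image = image @ np.array([0.299, 0.587, 0.114])
    return float(image.mean())

def brightness_in_range(image: np.ndarray, lo: float = 30.0, hi: float = 120.0) -> bool:
    # Too dark: the light source may be off or fully blocked; too bright:
    # ambient light is leaking through a gap. Bounds are placeholders only.
    return lo <= mean_brightness(image) <= hi
```

The same routine serves both the first-image check (steps 2-3) and the second-image check, differing only in the bounds chosen.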
When it is determined that a human head is attached to the face patch assembly, step S5 is performed; otherwise step S6 is performed.
S5, beginning to capture the fundus image. Specifically, the camera automatically locates the pupil, adjusts the working distance and the focal length so that the imaging plane falls on the fundus, and finally captures the fundus image.
S6, prompting the user to wear the fundus camera correctly. For example, a voice module may be provided in the fundus camera to tell the user how to wear it correctly; the process may then return to step S1 for a fresh judgment.
In the method for detecting the use state of a fundus camera provided by this embodiment of the invention, an image is first acquired with the illumination assembly off, and its brightness gives a preliminary judgment of whether the face patch assembly is well covered by an object; an image is then acquired with the illumination assembly on, and its features determine whether the covering object is a human body. The camera thus automatically determines whether the photographed person is wearing the fundus camera correctly and using it in a suitable environment.
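The S1-S6 flow summarized here can be sketched as a small decision routine. The `camera` interface below (`capture`, `set_illumination`, `is_dark_enough`, `looks_like_skin`) is hypothetical, standing in for the checks described in steps S2 and S4:

```python
def detect_use_state(camera) -> str:
    """Sketch of steps S1-S6: returns "shoot" to begin fundus capture (S5)
    or "prompt" to ask the user to re-wear the camera (S6)."""
    camera.set_illumination(False)
    first = camera.capture()                  # S1: image with lighting off
    if not camera.is_dark_enough(first):      # S2: ambient light leaking in
        return "prompt"                       # S6
    camera.set_illumination(True)
    second = camera.capture()                 # S3: image with lighting on
    if camera.looks_like_skin(second):        # S4: spot + skin features
        return "shoot"                        # S5
    return "prompt"                           # S6, then loop back to S1
```

In a real device the routine would loop, re-entering S1 after each prompt until the wearer is detected.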
As will be appreciated by one skilled in the art, embodiments of the present invention may be provided as a method, system, or computer program product. Accordingly, the present invention may take the form of an entirely hardware embodiment, an entirely software embodiment or an embodiment combining software and hardware aspects. Furthermore, the present invention may take the form of a computer program product embodied on one or more computer-usable storage media (including, but not limited to, disk storage, CD-ROM, optical storage, and the like) having computer-usable program code embodied therein.
The present invention is described with reference to flowchart illustrations and/or block diagrams of methods, apparatus (systems), and computer program products according to embodiments of the invention. It will be understood that each flow and/or block of the flow diagrams and/or block diagrams, and combinations of flows and/or blocks in the flow diagrams and/or block diagrams, can be implemented by computer program instructions. These computer program instructions may be provided to a processor of a general purpose computer, special purpose computer, embedded processor, or other programmable data processing apparatus to produce a machine, such that the instructions, which execute via the processor of the computer or other programmable data processing apparatus, create means for implementing the functions specified in the flowchart flow or flows and/or block diagram block or blocks.
These computer program instructions may also be stored in a computer-readable memory that can direct a computer or other programmable data processing apparatus to function in a particular manner, such that the instructions stored in the computer-readable memory produce an article of manufacture including instruction means which implement the function specified in the flowchart flow or flows and/or block diagram block or blocks.
These computer program instructions may also be loaded onto a computer or other programmable data processing apparatus to cause a series of operational steps to be performed on the computer or other programmable apparatus to produce a computer implemented process such that the instructions which execute on the computer or other programmable apparatus provide steps for implementing the functions specified in the flowchart flow or flows and/or block diagram block or blocks.
It should be understood that the above examples are only for clarity of illustration and are not intended to limit the embodiments; they are neither required nor exhaustive. Other variations and modifications will be apparent to persons skilled in the art in light of the above description, and obvious variations or modifications derived therefrom remain within the scope of the invention.

Claims (6)

1. A method for detecting the use state of a fundus camera, characterized by comprising the following steps:
step 1, turning off an illumination assembly and acquiring a first image collected by a lens through a window of a face patch assembly;
wherein the lens is aligned with the center of the window of the face patch assembly, so that the lens is aimed at the area between the two eyes when a human head is attached to the face patch assembly;
step 2, converting the first image into a grayscale image;
step 3, calculating a brightness value from the pixel values of the grayscale image to judge whether the brightness of the first image meets a set criterion;
step 4, when the brightness of the first image meets the set criterion, turning on the illumination assembly and acquiring a second image collected by the lens through the window of the face patch assembly;
step 5, determining from the second image whether a human head is attached to the face patch assembly;
wherein determining from the second image whether a human head is attached to the face patch assembly comprises:
judging whether features in the second image match human skin features;
and when the features in the second image match human skin features, determining that a human head is attached to the face patch assembly.
2. A method for detecting the use state of a fundus camera, characterized by comprising the following steps:
step 1, turning off an illumination assembly and acquiring a first image collected by a lens through a window of a face patch assembly;
wherein the lens is aligned with the center of the window of the face patch assembly, so that the lens is aimed at the area between the two eyes when a human head is attached to the face patch assembly;
step 2, converting the first image into a grayscale image;
step 3, calculating a brightness value from the pixel values of the grayscale image to judge whether the brightness of the first image meets a set criterion;
step 4, when the brightness of the first image meets the set criterion, turning on the illumination assembly and acquiring a second image collected by the lens through the window of the face patch assembly;
step 5, determining from the second image whether a human head is attached to the face patch assembly;
wherein determining from the second image whether a human head is attached to the face patch assembly comprises:
judging whether the brightness of the second image meets a set criterion;
when the brightness of the second image meets the set criterion, judging whether features in the second image match human skin features;
and when the features in the second image match human skin features, determining that a human head is attached to the face patch assembly.
3. The method according to claim 1 or 2, characterized in that turning on the illumination assembly specifically comprises turning on an infrared light source in the illumination assembly.
4. The method of claim 2, wherein judging whether the brightness of the second image meets the set criterion comprises:
converting the second image into a grayscale image;
and calculating a brightness value from the pixel values of the grayscale image.
5. A fundus camera detection apparatus, comprising: at least one processor; and a memory communicatively coupled to the at least one processor; wherein the memory stores instructions executable by the at least one processor to cause the at least one processor to perform the fundus camera use status detection method according to any one of claims 1 to 4.
6. A fundus camera, comprising: the system comprises a lighting assembly, a surface paste assembly, a lens, a processor and a memory which is in communication connection with the processor; wherein the memory stores instructions executable by the processor to cause the processor to perform the fundus camera use status detection method according to any one of claims 1 to 4.
CN202011095173.8A 2020-10-14 2020-10-14 Fundus camera and method for detecting use state thereof Active CN112190227B (en)

Priority Applications (1)

Application Number Priority Date Filing Date Title
CN202011095173.8A CN112190227B (en) 2020-10-14 2020-10-14 Fundus camera and method for detecting use state thereof


Publications (2)

Publication Number Publication Date
CN112190227A CN112190227A (en) 2021-01-08
CN112190227B true CN112190227B (en) 2022-01-11

Family

ID=74009609

Family Applications (1)

Application Number Title Priority Date Filing Date
CN202011095173.8A Active CN112190227B (en) 2020-10-14 2020-10-14 Fundus camera and method for detecting use state thereof

Country Status (1)

Country Link
CN (1) CN112190227B (en)

Families Citing this family (2)

* Cited by examiner, † Cited by third party
Publication number Priority date Publication date Assignee Title
CN114025151B (en) * 2021-09-23 2023-11-14 北京鹰瞳科技发展股份有限公司 Fundus camera fault detection method, fundus camera fault detection device and storage medium
CN116687338B (en) * 2023-08-01 2023-11-03 杭州目乐医疗科技股份有限公司 Fundus camera state detection method, fundus camera, and storage medium

Citations (10)

Publication number Priority date Publication date Assignee Title
CN86100707A (en) * 1986-01-24 1987-01-31 航天工业部第二研究院二十五所 Microwave measurement device for testing saturation of water contained in physical model of petroleum rock core
EP0680860A2 (en) * 1994-04-26 1995-11-08 Omron Corporation Sensor for detecting raindrops, wiper drive apparatus using the device, and vehicle using them
EP2268193A1 (en) * 2008-03-27 2011-01-05 Neuroptix Corporation Ocular imaging
CN104406916A (en) * 2014-11-04 2015-03-11 百度在线网络技术(北京)有限公司 Method and apparatus for detecting food
CN108596895A (en) * 2018-04-26 2018-09-28 上海鹰瞳医疗科技有限公司 Eye fundus image detection method based on machine learning, apparatus and system
CN109471264A (en) * 2019-01-02 2019-03-15 京东方科技集团股份有限公司 Mask and headset equipment comprising it
CN109758115A (en) * 2019-02-26 2019-05-17 上海鹰瞳医疗科技有限公司 Guide assembly and fundus camera
CN110146979A (en) * 2018-02-13 2019-08-20 纬创资通股份有限公司 Head-mounted display apparatus and its method of adjustment
CN209693332U (en) * 2018-12-27 2019-11-26 歌尔科技有限公司 Wear display equipment
CN110536036A (en) * 2019-07-19 2019-12-03 华为技术有限公司 Information cuing method and equipment

Family Cites Families (19)

Publication number Priority date Publication date Assignee Title
CN103654715B (en) * 2012-09-24 2016-04-06 荣晶生物科技股份有限公司 Image detection device and image detecting method
CN103049747B (en) * 2012-12-30 2016-08-24 信帧电子技术(北京)有限公司 The human body image utilizing the colour of skin knows method for distinguishing again
US9325936B2 (en) * 2013-08-09 2016-04-26 Samsung Electronics Co., Ltd. Hybrid visual communication
CN106157286B (en) * 2015-04-03 2020-11-17 研祥智能科技股份有限公司 Image processing method and screen light leakage detection method
CN204964931U (en) * 2015-07-24 2016-01-13 苏州世纪福智能装备股份有限公司 Light leakage detection device
CN105093532A (en) * 2015-08-03 2015-11-25 京东方科技集团股份有限公司 Virtual reality glasses and display method
CN106780521B (en) * 2016-12-08 2020-01-07 广州视源电子科技股份有限公司 Screen light leakage detection method, system and device
CN206400191U (en) * 2016-12-20 2017-08-11 浙江大华技术股份有限公司 A kind of head-wearing display device nose support and head-wearing display device
JP2018121288A (en) * 2017-01-27 2018-08-02 日本精機株式会社 Display device and head-up display device
CN106873159A (en) * 2017-02-27 2017-06-20 阿里巴巴集团控股有限公司 Virtual reality helmet
CN208641025U (en) * 2017-07-06 2019-03-26 周晓伟 It can sticking type eyeshade
CN108038469B (en) * 2017-12-27 2019-10-25 百度在线网络技术(北京)有限公司 Method and apparatus for detecting human body
CN108346149B (en) * 2018-03-02 2021-03-12 北京郁金香伙伴科技有限公司 Image detection and processing method and device and terminal
CN108577803B (en) * 2018-04-26 2020-09-01 上海鹰瞳医疗科技有限公司 Fundus image detection method, device and system based on machine learning
JP2020109453A (en) * 2019-01-07 2020-07-16 矢崎総業株式会社 Head-up display device
CN109788281B (en) * 2019-03-05 2020-11-27 信利光电股份有限公司 Light leakage detection method and device of camera module
CN211348857U (en) * 2019-09-30 2020-08-25 联想(北京)有限公司 Wearable device
CN111080577B (en) * 2019-11-27 2023-05-26 北京至真互联网技术有限公司 Fundus image quality evaluation method, fundus image quality evaluation system, fundus image quality evaluation apparatus, and fundus image storage medium
CN111479002B (en) * 2020-04-14 2021-06-15 Oppo(重庆)智能科技有限公司 Photographing light leakage detection method and device, terminal equipment and storage medium


Also Published As

Publication number Publication date
CN112190227A (en) 2021-01-08

Similar Documents

Publication Publication Date Title
CN112043236B (en) Fundus camera and full-automatic fundus image shooting method
JP4750721B2 (en) Custom glasses manufacturing method
CN112220447B (en) Fundus camera and fundus image shooting method
JP4583527B2 (en) How to determine eye position
US8534836B2 (en) Fundus camera
CN112075921B (en) Fundus camera and focal length adjusting method thereof
US7845797B2 (en) Custom eyeglass manufacturing method
JP5694161B2 (en) Pupil detection device and pupil detection method
WO2010035472A1 (en) Line-of-sight direction determination device and line-of-sight direction determination method
CN112190227B (en) Fundus camera and method for detecting use state thereof
US20120050515A1 (en) Image processing apparatus and image processing method
CN107495919A (en) Ophthalmoligic instrument
US20150335242A1 (en) Ophthalmic apparatus and control method for the same
JP5601179B2 (en) Gaze detection apparatus and gaze detection method
CN112075920B (en) Fundus camera and working distance adjusting method thereof
CN112190228B (en) Fundus camera and detection method thereof
CN112220448B (en) Fundus camera and fundus image synthesis method
WO2007026158A1 (en) Imaging apparatus, portable image capture device and method of assembling composite images from component images
CN112043237A (en) Full-automatic portable self-timer fundus camera
CN212281326U (en) Full-automatic portable self-timer fundus camera
KR101731972B1 (en) Automatic focusing apparatus of ophthalmometer and methdo thereof
JPH11206713A (en) Equipment for detecting line of sight and line-of-sight detector using the same
CN116849602A (en) Fundus image shooting method and device and main control equipment
JP2021027876A (en) Ophthalmologic apparatus
JPH02213322A (en) Method and apparatus for detecting visual axis

Legal Events

Date Code Title Description
PB01 Publication
SE01 Entry into force of request for substantive examination
TA01 Transfer of patent application right

Effective date of registration: 20210722

Address after: 100083 room 21, 4th floor, building 2, national defense science and Technology Park, beipolytechnic, Haidian District, Beijing

Applicant after: Beijing Yingtong Technology Development Co.,Ltd.

Applicant after: SHANGHAI YINGTONG MEDICAL TECHNOLOGY Co.,Ltd.

Address before: 200030 room 01, 8 building, 1 Yizhou Road, Xuhui District, Shanghai, 180

Applicant before: SHANGHAI YINGTONG MEDICAL TECHNOLOGY Co.,Ltd.

GR01 Patent grant