CN110101363B - Collecting device for fundus images - Google Patents

Collecting device for fundus images

Info

Publication number
CN110101363B
CN110101363B (application CN201910399827.7A)
Authority
CN
China
Prior art keywords
image
module
feature
fundus
acquisition
Prior art date
Legal status (The legal status is an assumption and is not a legal conclusion. Google has not performed a legal analysis and makes no representation as to the accuracy of the status listed.)
Active
Application number
CN201910399827.7A
Other languages
Chinese (zh)
Other versions
CN110101363A (en)
Inventor
陈意 (Chen Yi)
王追 (Wang Zhui)
陈志 (Chen Zhi)
Current Assignee (The listed assignees may be inaccurate. Google has not performed a legal analysis and makes no representation or warranty as to the accuracy of the list.)
Shenzhen Silicon Based Intelligent Technology Co ltd
Original Assignee
Shenzhen Silicon Based Intelligent Technology Co ltd
Priority date (The priority date is an assumption and is not a legal conclusion. Google has not performed a legal analysis and makes no representation as to the accuracy of the date listed.)
Filing date
Publication date
Application filed by Shenzhen Silicon Based Intelligent Technology Co ltd
Priority to CN201910399827.7A
Publication of CN110101363A
Application granted
Publication of CN110101363B
Legal status: Active
Anticipated expiration

Classifications

    • A HUMAN NECESSITIES
    • A61 MEDICAL OR VETERINARY SCIENCE; HYGIENE
    • A61B DIAGNOSIS; SURGERY; IDENTIFICATION
    • A61B3/00 Apparatus for testing the eyes; Instruments for examining the eyes
    • A61B3/10 Objective types, i.e. instruments for examining the eyes independent of the patients' perceptions or reactions
    • A61B3/12 Objective types, i.e. instruments for examining the eyes independent of the patients' perceptions or reactions, for looking at the eye fundus, e.g. ophthalmoscopes
    • A HUMAN NECESSITIES
    • A61 MEDICAL OR VETERINARY SCIENCE; HYGIENE
    • A61B DIAGNOSIS; SURGERY; IDENTIFICATION
    • A61B3/00 Apparatus for testing the eyes; Instruments for examining the eyes
    • A61B3/10 Objective types, i.e. instruments for examining the eyes independent of the patients' perceptions or reactions
    • A61B3/14 Arrangements specially adapted for eye photography

Landscapes

  • Life Sciences & Earth Sciences (AREA)
  • Health & Medical Sciences (AREA)
  • Medical Informatics (AREA)
  • Biophysics (AREA)
  • Ophthalmology & Optometry (AREA)
  • Engineering & Computer Science (AREA)
  • Biomedical Technology (AREA)
  • Heart & Thoracic Surgery (AREA)
  • Physics & Mathematics (AREA)
  • Molecular Biology (AREA)
  • Surgery (AREA)
  • Animal Behavior & Ethology (AREA)
  • General Health & Medical Sciences (AREA)
  • Public Health (AREA)
  • Veterinary Medicine (AREA)
  • Eye Examination Apparatus (AREA)

Abstract

The present disclosure provides an acquisition apparatus of fundus images, characterized by comprising: an acquisition module for acquiring an image and capable of capturing a fundus image by adjusting an optical mechanism; a comparison module for comparing the acquired image with a preset feature library and identifying whether a feature image exists in the acquired image; and a control module configured to put the acquisition module in a non-operating state when no feature image exists in the images acquired within a preset time. In the disclosure, the device identifies whether the image acquired by the acquisition module contains a feature image matching the preset feature library, so that the acquisition module is placed in a non-operating state when the shooting target is lost. The power consumption of the fundus image acquisition device can therefore be reduced and its battery life extended.

Description

Collecting device for fundus images
Technical Field
The present disclosure relates to an acquisition apparatus of fundus images.
Background
With the popularization of fundus screening programs, fundus cameras are being used more and more widely. To meet fundus screening needs, for example at primary-level clinics and in remote mountainous areas, it is often necessary to employ a portable fundus camera. Portable fundus cameras typically have a power module for powering image capture. When the portable fundus camera is used, an operator must hold and operate the camera and align it with the fundus of the subject to acquire a fundus image.
However, current fundus cameras are often configured such that, once turned on, they remain in an operating state even if no subject is in front of the lens; such a portable fundus camera therefore tends to consume a large amount of power and have insufficient battery life. For example, some handheld fundus cameras capture images at high resolution with many pixels (e.g., more than 2 million pixels), so the main chip of the fundus camera carries a heavy processing load, must process the images at full rate, and consumes considerable power. In addition, the infrared lamp and other auxiliary lighting provided in the fundus camera also consume a large amount of energy, so the battery-powered operating time of the fundus camera as a whole is significantly reduced.
Disclosure of Invention
The present disclosure has been made in view of the above circumstances, and an object thereof is to provide an acquisition apparatus for fundus images that can reduce the power consumption of the apparatus and extend its battery life.
To this end, the present disclosure provides an acquisition device of a fundus image, characterized by comprising: an acquisition module for acquiring an image and capable of capturing a fundus image by adjusting an optical mechanism; the comparison module is used for comparing the acquired image with a preset feature library and identifying whether a feature image exists in the acquired image or not; and a control module configured to put the acquisition module in a non-operating state when the feature image does not exist in the acquired image within a preset time.
In the disclosure, the comparison module compares the image acquired by the acquisition module with a preset feature library and identifies whether a feature image exists in the acquired image, and the control module puts the acquisition module in a non-operating state when no feature image exists in the images acquired within a preset time. In this case, the acquisition module enters a non-operating state when it loses the photographic target. Therefore, the power consumption of the fundus image acquisition device can be reduced and its battery life extended.
In addition, in the acquisition apparatus according to the present disclosure, optionally, the preset feature library is a face feature library, and the feature image is a face image. In this case, it is possible to recognize whether or not a face image exists in the image acquired by the acquisition module.
In addition, the acquisition apparatus according to the present disclosure may optionally further include a power module for supplying power, and when the feature image does not exist in the acquired image within a preset time, the control module is configured to limit a supply current of the power module. Therefore, the power consumption of the fundus image acquisition device can be reduced and its battery life extended.
In addition, in the acquisition apparatus according to the present disclosure, optionally, the control module is further configured to calculate a view ratio of the feature image to the acquired image when the feature image exists in the acquired image within a preset time, and to determine, according to the view ratio, whether to put the acquisition module in a non-operating state. In this case, the switching between the operating state and the non-operating state of the acquisition module can be rapidly controlled according to the view ratio.
In addition, in the acquisition apparatus according to the present disclosure, optionally, the control module is further configured to put the acquisition module in a non-operating state when the view ratio is smaller than a prescribed ratio within a prescribed time. In this case, when the view ratio is smaller than the prescribed ratio within the prescribed time, the power consumption of the fundus image acquisition device can be reduced and its battery life extended.
In addition, in the acquisition apparatus according to the present disclosure, optionally, the control module is further configured to detect whether the feature image is located within a preset detection area in the acquired image when the feature image exists in the acquired image within a preset time, and to put the acquisition module in a non-operating state when the feature image is not located within the preset detection area. In this case, when the feature image is not located within the preset detection area, the power consumption of the fundus image acquisition device can be reduced and its battery life extended.
In addition, in the acquisition apparatus according to the present disclosure, optionally, the control module is further configured to compare the feature image with at least one specific face image when the feature image exists in the acquired image, and to put the acquisition module in a non-operating state when the feature image does not match the at least one specific face image. In this case, when the face image does not match the specific face image, the power consumption of the fundus image acquisition device can be reduced and its battery life extended.
In addition, in the acquisition apparatus according to the present disclosure, optionally, the preset feature library is an eyeball feature library, and the feature image is an eyeball image. In this case, it is possible to identify whether or not an eyeball image exists in the image captured by the capture module.
In addition, in the acquisition apparatus according to the present disclosure, optionally, the control module is further configured to acquire state information of the acquisition module before the acquisition module acquires an image, and to switch the acquisition module to a high-resolution shooting mode or a low-resolution shooting mode according to the state information. In this case, the fundus image acquisition device can switch the shooting mode as needed to reduce power consumption and extend battery life.
In addition, in the acquisition apparatus according to the present disclosure, optionally, the state information includes at least a focus state and an exposure state, and when the state information is the focus state, the control module is configured to switch the acquisition module to the low-resolution shooting mode. In this case, when the acquisition module is in the focus state, the power consumption of the fundus image acquisition device can be reduced and its battery life extended.
According to the present disclosure, an acquisition apparatus for fundus images that can reduce energy consumption during use and extend battery life can be provided.
Drawings
Fig. 1 is a perspective view schematically illustrating an example of a fundus image acquisition apparatus according to the present disclosure.
Fig. 2 is a right side view of the fundus image capturing apparatus of fig. 1.
Fig. 3 is a block diagram of an acquisition apparatus of a fundus image according to an example of the present disclosure.
Fig. 4 is a schematic configuration diagram of an acquisition module of an acquisition apparatus of a fundus image according to an example of the present disclosure.
Fig. 5 is a schematic diagram of a face image according to an example of the present disclosure.
Fig. 6 is a schematic view of a fundus image according to an example of the present disclosure.
Fig. 7 is a schematic view of a detection area of the fundus image capturing apparatus of fig. 1.
Fig. 8 is a block diagram of modification 1 of the fundus image capturing apparatus according to the example of the present disclosure.
Detailed Description
Hereinafter, preferred embodiments of the present disclosure will be described in detail with reference to the accompanying drawings. In the following description, the same components are denoted by the same reference numerals, and redundant description thereof is omitted. The drawings are schematic and the ratio of the dimensions of the components and the shapes of the components may be different from the actual ones.
It is noted that the terms "comprises," "comprising," and "having," and any variations thereof in this disclosure, are intended to cover a non-exclusive inclusion; for example, a process, method, system, article, or apparatus that comprises or has a list of steps or elements is not necessarily limited to those steps or elements expressly listed, but may include or have other steps or elements not expressly listed or inherent to such process, method, article, or apparatus.
In some of the flows described in the specification and claims of this disclosure and in the above-described figures, a number of operations are included that occur in a particular order; however, it should be clearly understood that these operations may be performed out of the order in which they appear herein or in parallel. The sequence numbers of the operations are merely used to distinguish between the various operations and do not by themselves represent any required order of execution. Additionally, the flows may include more or fewer operations, and the operations may be performed sequentially or in parallel. It should be noted that the descriptions of "first," "second," etc. in this document are used to distinguish different messages, devices, modules, and the like; they do not represent a sequential order, nor do they require "first" and "second" to be of different types.
Fig. 1 is a perspective view schematically showing an example of a fundus image capturing apparatus 1 according to the present disclosure. Fig. 2 is a right side view of the fundus image capturing apparatus 1 of fig. 1. Fig. 3 is a block diagram of the fundus image acquisition apparatus 1 according to an example of the present disclosure.
(fundus image capturing device 1)
The present disclosure relates to an acquisition apparatus (hereinafter sometimes simply referred to as "acquisition apparatus") 1 of a fundus image. The acquisition device 1 of the fundus image can acquire an image and process the acquired image. With the acquisition apparatus 1 of a fundus image, a target image of a patient, such as a fundus image, can be acquired, and the acquired image can be processed. In some examples, the acquisition device 1 of fundus images may be a portable fundus camera, such as the handheld fundus camera shown in fig. 1. In addition, in some examples, the acquisition device 1 of the fundus image may also be a desktop fundus camera.
In some examples, as shown in fig. 1, 2, and 3, the acquisition apparatus 1 of a fundus image may include an acquisition module 10, a comparison module 20, a control module 30, and a power supply module 40. In the acquisition apparatus 1, the acquisition module 10 can acquire an external image. The comparison module 20 can identify whether a feature image, such as a human face image (described later), exists in the image captured by the acquisition module 10, and the control module 30 can put the acquisition module 10 in a non-operating state when no feature image exists in the images captured within a preset time. In addition, the power supply module 40 of the fundus image capturing apparatus 1 may be omitted, with power supplied directly from an external power supply (e.g., a DC power supply).
Fig. 4 is a schematic configuration diagram of an acquisition module 10 of the acquisition apparatus 1 of a fundus image according to an example of the present disclosure.
In some examples, acquisition module 10 may be used to acquire images. The captured image is formed by the acquisition module 10 capturing a scene within the camera's field of view. The imaging field of view is the spatial range that can be imaged when an image is captured, and the parameters that determine it include the angle of view. For the fundus image acquired by the acquisition module 10, the field angle may be, for example, 30° to 60°, preferably 40° to 50°. In some examples, the field angle of the fundus image may be, for example, 30°, 35°, 40°, 45°, 50°, 55°, or 60°. Additionally, a scene may refer to an environment or setting, including but not limited to a landscape, a person, or another creature.
In some examples, the image captured by the capture module 10 may be a color image (e.g., an RGB image) or may be a grayscale image. In some examples, as shown in fig. 4, the acquisition module 10 may include an optical mechanism 11 and an imaging unit 12. At the time of capturing an image, light enters the imaging unit 12 via the optical mechanism 11, and an image (i.e., a captured image) is formed in the imaging unit 12.
In some examples, the acquired image may be a fundus image. Reflected light from the fundus of the human eye (e.g., reflected light R1 or reflected light R2) can be focused on the imaging unit 12 by the optical mechanism 11 (see fig. 4). That is, the human eye fundus (particularly, the human eye retina region) can be imaged on the imaging unit 12 by the optical mechanism 11.
In some examples, the optical mechanism 11 may be an optical assembly consisting of a lens, a diaphragm, and the like. Wherein the lens can change the propagation direction of the light path (e.g. reflected light R1 or reflected light R2). The diaphragm can be used to control the size of the image.
In the present disclosure, the acquisition module 10 may capture a fundus image by adjusting the optical mechanism 11. That is, in order to capture a clear fundus image, the optical mechanism 11 of the acquisition module 10 may be focused. In some examples, the optical mechanism 11 may implement automatic focusing via a motor. In other examples, the optical mechanism 11 may be focused manually. Thereby, the fundus of the human eye can be clearly imaged on the imaging unit 12. In addition, a clear fundus image makes it easier for the acquisition device 1 to analyze the fundus image and obtain a more accurate judgment result.
In some examples, the imaging unit 12 may be a photosensitive element (may also be referred to as an "image sensor") that converts reflected light (e.g., reflected light R1 or reflected light R2) into an electrical signal. The light sensing element may be, for example, a CMOS or CCD chip. In some examples, the image size (i.e., the number of pixels) of the imaging unit 12 may be, for example, 256 × 256, 512 × 512, 1024 × 1024, or the like. However, examples of the present disclosure are not limited thereto, and may be 128 × 128, 768 × 768, 2048 × 2048, or the like, for example.
As shown in fig. 4, the acquisition module 10 may include an illumination unit 13. The illumination unit 13 can provide illumination light. The illumination light (for example, illumination light L1 or illumination light L2) generated by the illumination unit 13 reaches the fundus of the human eye through the optical mechanism 11 and generates reflected light (for example, reflected light R1 or reflected light R2) at the fundus. In fundus imaging, the reflected light produced when external light enters the human eye is weak, so providing the illumination unit 13 in the acquisition module 10 helps the acquisition module 10 capture sufficient fundus reflected light.
As can be seen from the above, since the illumination unit 13 is often required to provide illumination during fundus image acquisition, the acquisition module 10 consumes a large share of the battery power of the acquisition device 1. For this reason, in the present disclosure, the acquisition device 1 is put on standby, for example, when it is not in an operating state, so that the battery life of the acquisition device 1 can be extended (described in detail later).
In some examples, the illumination unit 13 may have one or more light sources. The light source of the illumination unit 13 may be a white light source, an infrared light source, or both a white light source and an infrared light source. In the present embodiment, the wavelength range of the light emitted from the white light source of the illumination unit 13 may be 390 nm to 760 nm, and the wavelength of the light emitted from the infrared light source may be 780 nm.
In some examples, the light source of the illumination unit 13 may be a ring-shaped light source. This can provide relatively uniform illumination light.
In some examples, the acquisition module 10 may include a positioning unit (not shown). The positioning unit may provide a plurality of fixation points when capturing the fundus image, and the acquisition module 10 can acquire images of different regions of the fundus by having the human eye look at different fixation points.
In some examples, when a doctor or a nurse uses the acquisition device 1 (for example, the handheld fundus camera shown in fig. 1) to acquire a fundus image of a patient, the acquisition device 1 is aligned with the patient's eye, the distance between the acquisition device 1 and the eye is adjusted to a set distance range, the acquisition device 1 is then focused, and after focusing is completed, the shutter is pressed to shoot. At the moment of exposure, the illumination unit 13 of the acquisition module 10 provides illumination light (e.g., illumination light L1 or illumination light L2) into the patient's eye, and reflected light (e.g., reflected light R1 or reflected light R2) is formed at the fundus. The reflected light (e.g., the reflected light R1 or the reflected light R2) reaches the imaging unit 12 through the optical mechanism 11 and forms a fundus image in the imaging unit 12.
In some examples, when the acquisition module 10 includes a positioning unit, the patient may look at different fixation points through the taking lens under the direction of the doctor or nurse, which makes it easier for the doctor or nurse to focus the acquisition device 1 and to acquire fundus images of different areas.
In other examples, images are continuously acquired by acquisition module 10. When the acquisition module 10 continues to acquire images, the acquisition module 10 is considered to be in an operating state, in other words, the acquisition device 1 of fundus images is in an operating state.
In some examples, acquisition module 10 is powered by the power module 40 (described later). Specifically, the imaging unit 12 of the acquisition module 10 requires a supply current to convert the reflected light (e.g., the reflected light R1 or the reflected light R2) into an electrical signal. The illumination unit 13 of the acquisition module 10 requires a supply current to generate illumination light (e.g., illumination light L1 or illumination light L2). In addition, the positioning unit of the acquisition module 10 also requires a supply current to generate the fixation points. When the supply current from the power module 40 is limited or turned off, the illumination unit 13 stops illuminating and the acquisition module 10 stops capturing or storing images; that is, the acquisition module 10 is in a non-operating state, such as a standby state or a power-off state. This can reduce the power consumption of the fundus image acquisition apparatus 1 and extend its battery life.
In some examples, the acquisition module 10 may be in different operating states (focus state and exposure state). The status information of the acquisition module 10 may be directly transmitted to the control module 30 (described later) so that the control module 30 controls the acquisition module 10 to perform mode switching. The modes of the acquisition apparatus 1 of fundus images may include a low resolution photographing mode and a high resolution photographing mode. In the high-resolution shooting mode, the quality of the picture acquired by the acquisition module 10 is high, and the operation load of the acquisition device 1 of the fundus image is large and the power consumption is large. In the low resolution shooting mode, the quality of the picture acquired by the acquisition module 10 is low, and the operation load of the acquisition device 1 of the fundus image is small and the power consumption is small.
(comparison Module 20)
Fig. 5 is a schematic diagram of a face image according to an example of the present disclosure. Fig. 6 is a schematic view of a fundus image according to an example of the present disclosure. Fig. 7 is a schematic view of a detection area of the fundus image capturing apparatus 1 of fig. 1.
In some examples, the comparison module 20 may be configured to compare the captured image with a preset feature library and identify whether a feature image exists in the captured image. The comparison module 20 generates a recognition result indicating either that a feature image exists in the acquired image or that no feature image exists in the acquired image.
In some examples, the preset feature library may be a feature library containing feature information of preset feature images. The preset feature library may store a feature template for identifying the feature image, and the feature template may include feature information of the feature image. By comparing and matching the acquired image with the feature template, the similarity between the acquired image and the feature template can be obtained. When the similarity between the captured image and the feature template exceeds a first set threshold H1, the comparison module 20 may determine that the feature image exists in the captured image. The first set threshold H1 may be a fixed threshold set inside the fundus image capturing apparatus 1, or may be user-adjustable.
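As a purely illustrative sketch (not part of the patent disclosure), the threshold test against H1 might look like the following, using OpenCV normalized cross-correlation as a stand-in similarity measure; the function name and the concrete H1 value are assumptions.

```python
import cv2
import numpy as np

FIRST_THRESHOLD_H1 = 0.7  # hypothetical value; the patent only says H1 is configurable

def feature_image_present(captured_gray: np.ndarray, template_gray: np.ndarray) -> bool:
    """Return True if the feature template is found in the captured frame above H1.

    Normalized cross-correlation is used here only as an illustrative similarity
    measure; the template must be no larger than the captured frame.
    """
    result = cv2.matchTemplate(captured_gray, template_gray, cv2.TM_CCOEFF_NORMED)
    best_similarity = float(result.max())
    return best_similarity >= FIRST_THRESHOLD_H1
```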
In some examples, the preset feature library may be a face feature library, and the feature image may be a face image. The feature image may be, for example, a face F as shown in fig. 5. The face feature library may be a feature library containing feature information of a preset face image. The feature information of the face image may include, for example, the key point information of the face F shown in fig. 5. The keypoint information may be, for example, face contour information, eyes (e.g., eye f1 or eye f2), etc. The comparison module 20 may be configured to compare the acquired image with a face feature library, and identify whether a face image exists in the acquired image. In this case, it is possible to recognize whether or not a face image exists in the image acquired by the acquisition module 10. For example, after the handheld fundus camera collects an image, the comparison module 20 may compare and identify whether a human face image exists in the collected image.
In other examples, the preset feature library may be an eyeball feature library, and the feature image may be an eyeball image (see fig. 6). The eyeball feature library may be a feature library containing feature information of a preset eyeball image. The comparison module 20 may be configured to compare the acquired image with an eyeball feature library, and identify whether an eyeball image exists in the acquired image. In this case, it is possible to identify whether or not an eyeball image exists in the image captured by the capture module 10. For example, after the handheld fundus camera acquires an image, the comparison module 20 may compare and identify whether an eyeball image exists in the acquired image.
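For illustration only, a face or eyeball presence check of this kind could be approximated with OpenCV's stock Haar cascades; the patent does not disclose its feature library or matching algorithm, so the cascades and parameters below are stand-ins.

```python
import cv2

# Stock OpenCV cascades used purely as stand-ins for the preset feature library.
face_cascade = cv2.CascadeClassifier(cv2.data.haarcascades + "haarcascade_frontalface_default.xml")
eye_cascade = cv2.CascadeClassifier(cv2.data.haarcascades + "haarcascade_eye.xml")

def detect_feature(frame_bgr, feature="face"):
    """Return bounding boxes of detected faces or eyes in a BGR frame."""
    gray = cv2.cvtColor(frame_bgr, cv2.COLOR_BGR2GRAY)
    cascade = face_cascade if feature == "face" else eye_cascade
    return cascade.detectMultiScale(gray, scaleFactor=1.1, minNeighbors=5)
```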
In some examples, the comparison module 20 may be, for example, a digital signal processor (DSP), a CPU, an application-specific integrated circuit chip, an FPGA chip, or the like. In some examples, the comparison module 20 is powered by the power module 40 (described later). When the supply current is limited or turned off, the comparison module 20 (for example, an FPGA chip) stops comparing and recognizing the acquired image, thereby further reducing the power consumption of the fundus image acquisition apparatus 1 and extending its battery life.
(control Module 30)
In some examples, the control module 30 may receive the recognition result from the comparison module 20. In some examples, the control module 30 may be configured to put the acquisition module 10 in a non-operating state when no feature image is present in the images acquired within a preset time T1. Here, the preset time T1 may be a fixed value set inside the fundus image acquisition apparatus 1. For example, the preset time T1 may be 1 minute, 2 minutes, 3 minutes, or more. However, the examples of the present disclosure are not limited thereto, and the preset time T1 may be preset according to the usage habits of different users. For example, the preset time T1 may be set to 30 seconds, 1 minute, 10 minutes, or more.
In some examples, when a characteristic image does not exist in the image captured within the preset time T1, it is indicated that a photographed target (e.g., an object capable of forming a characteristic image) has left the area captured by the capturing module 10. In this case, the acquisition module 10 is put in a non-operating state. In some examples, the non-operational state may be a condition in which the supply current required by the module is limited or turned off.
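A minimal sketch of this timeout logic, assuming a hypothetical power-module interface (limit_supply_current / restore_supply_current) and an illustrative T1 value:

```python
import time

PRESET_TIME_T1 = 120.0  # seconds; hypothetical, the patent allows e.g. 1 to 3 minutes

class AcquisitionController:
    """Puts the acquisition module into a non-operating state when no feature
    image has been seen for longer than T1 (illustrative sketch only)."""

    def __init__(self, power_module):
        self.power_module = power_module
        self.last_feature_seen = time.monotonic()
        self.operating = True

    def on_frame(self, feature_present: bool) -> None:
        now = time.monotonic()
        if feature_present:
            self.last_feature_seen = now
            if not self.operating:
                self.power_module.restore_supply_current()  # assumed interface
                self.operating = True
        elif self.operating and now - self.last_feature_seen > PRESET_TIME_T1:
            self.power_module.limit_supply_current()        # assumed interface
            self.operating = False
```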
In addition, in some examples, the control module 30 may be further configured to calculate a view ratio of the feature image to the captured image when the feature image exists in the captured image within the preset time T1. The view ratio may be a proportion of pixels. Specifically, after the control module 30 obtains the feature image, it counts the pixels of the feature image, counts the pixels of the acquired image, and then calculates the proportion of the feature-image pixels among the pixels of the acquired image. Examples of the present disclosure are not limited thereto; the view ratio may also be an area ratio.
In some examples, the control module 30 may determine whether to deactivate the acquisition module 10 based on the view ratio. In this case, the changing trend of the view ratio can determine whether the acquisition module 10 of the fundus image acquisition device 1 should be in an operating state. Therefore, the switching between the operating state and the non-operating state of the acquisition module 10 can be rapidly controlled according to the view ratio. The changing trend may be the changing trend of the proportion of pixels.
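Computed from a detected bounding box, the pixel-proportion view ratio reduces to a short calculation; the box and shape conventions below are assumptions for illustration.

```python
def view_ratio(feature_box, frame_shape) -> float:
    """Proportion of the captured frame occupied by the feature image (sketch).

    feature_box: (x, y, w, h) bounding box of the detected feature image.
    frame_shape: (height, width) of the captured image.
    """
    _, _, w, h = feature_box
    frame_h, frame_w = frame_shape[:2]
    return (w * h) / float(frame_h * frame_w)
```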
Additionally, in some examples, the control module 30 may be further configured to place the acquisition module 10 in a non-operating state when the view ratio is smaller than a prescribed ratio within a prescribed time T2. In this case, when the view ratio is smaller than the prescribed ratio within the prescribed time T2, the power consumption of the fundus image capturing apparatus 1 can be reduced and its battery life extended. The prescribed ratio may be a fixed value set inside the fundus image acquisition apparatus 1, but examples of the present disclosure are not limited thereto, and the prescribed ratio may also be set by the user. The prescribed time T2 may be a fixed value set inside the fundus image acquisition apparatus 1. For example, the prescribed time T2 may be 1 minute, 2 minutes, 3 minutes, 5 minutes, or more. But examples of the present disclosure are not limited thereto; the prescribed time T2 may be set according to the usage habits of different users, for example, 30 seconds, 1 minute, 2 minutes, 5 minutes, 10 minutes, or more.
In some examples, when the proportion of the feature-image pixels among the pixels of the captured image decreases and becomes smaller than the prescribed ratio within the prescribed time T2, this may indicate that the subject is moving away from the lens or has left the lens's capture range. In this case, the acquisition module 10 of the fundus image acquisition apparatus 1 can be rapidly controlled into a non-operating state to reduce power consumption and extend the battery life of the acquisition apparatus 1. Thereby, the acquisition apparatus 1 can quickly control the switching between the operating state and the non-operating state of the acquisition module 10 based on the change in the view ratio.
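One possible way to test for "smaller than the prescribed ratio and decreasing within T2" is to keep a short history of view ratios; the thresholds below are hypothetical placeholders, not values from the patent.

```python
from collections import deque
import time

PRESCRIBED_RATIO = 0.10    # hypothetical; the patent leaves this configurable
PRESCRIBED_TIME_T2 = 60.0  # seconds; hypothetical

class ViewRatioMonitor:
    """Tracks view ratios over the last T2 seconds (illustrative sketch)."""

    def __init__(self):
        self.history = deque()  # (timestamp, ratio) pairs

    def update(self, ratio: float) -> bool:
        """Return True when the acquisition module should be deactivated."""
        now = time.monotonic()
        self.history.append((now, ratio))
        # Drop samples older than T2.
        while self.history and now - self.history[0][0] > PRESCRIBED_TIME_T2:
            self.history.popleft()
        ratios = [r for _, r in self.history]
        below = all(r < PRESCRIBED_RATIO for r in ratios)
        decreasing = len(ratios) >= 2 and ratios[-1] <= ratios[0]
        return below and decreasing
```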
In addition, in some examples, control module 30 may be configured to detect whether the feature image is located within a preset detection region in the captured image when the feature image is present in the captured image within a preset time T1. For example, the acquisition device 1 of fundus images may be a hand-held fundus camera. The image acquired by the acquisition device 1 may be presented on the display screen of a hand-held fundus camera. The preset detection region in the image acquired by the acquisition apparatus 1 may be a detection region a as shown in fig. 7. The detection area a may be a partial or entire area of the display screen.
As shown in fig. 7, in the case where the feature image is a face image, when the patient's face moves into the detection area A of the handheld fundus camera and the edge of the face substantially coincides with the edge of the detection area A or lies within it, the feature image is located within the preset detection area in the acquired image. The control module 30 may leave the acquisition module 10 in a non-operating state if the patient's face remains at the periphery of the detection area A, or occupies too small a proportion of it, and the patient has not moved the face into the detection area A for a period of time.
Additionally, in some examples, the control module 30 may be configured to put the acquisition module 10 in a non-operating state when the feature image is not within the preset detection area. In this case, when the feature image is not located within the preset detection area, the power consumption of the fundus image capturing apparatus 1 can be reduced and its battery life extended.
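Treating the detection area A and the feature image as rectangles in image coordinates, the containment check is a simple comparison; the coordinate conventions are assumptions for illustration.

```python
def inside_detection_area(feature_box, detection_area) -> bool:
    """True if the feature bounding box lies entirely within detection area A.

    Both arguments are (x, y, w, h) rectangles in captured-image coordinates.
    """
    fx, fy, fw, fh = feature_box
    ax, ay, aw, ah = detection_area
    return fx >= ax and fy >= ay and fx + fw <= ax + aw and fy + fh <= ay + ah
```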
In some examples, the control module 30 may be configured both to detect whether the user's feature image is within the detection area and to detect whether the proportion of the feature image relative to the detection area is appropriate.
In addition, in some examples, when the preset feature library is a face feature library and the feature image is a face image, the control module 30 may be configured to compare the feature image with at least one specific face image when the feature image exists in the images acquired within the preset time T1. Here, the control module 30 may include a specific face feature library, through which it can recognize whether at least one specific face image exists in the feature image.
In some examples, the specific face feature library may be a feature library containing feature information of at least one specific face image. The specific face feature library may store feature templates for identifying specific face images, and a feature template may include feature information of a particular face image. The feature image is compared and matched with the feature template to obtain the similarity between them. When the similarity between the feature image and the feature template exceeds a second threshold H2, the control module 30 may determine that at least one specific face image exists in the feature image.
In some examples, the control module 30 may put the acquisition module 10 in a non-operating state when the feature image does not contain the at least one specific face image, i.e., when the feature image does not match the at least one specific face image. In this case, when the face image does not match the specific face image, the power consumption of the fundus image acquisition apparatus 1 can be reduced and its battery life extended.
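A sketch of the verification step against the specific-face feature library, assuming face descriptors compared by cosine similarity and a hypothetical threshold H2; the patent does not specify the descriptor or metric.

```python
import numpy as np

SECOND_THRESHOLD_H2 = 0.8  # hypothetical value

def matches_specific_face(feature_vec: np.ndarray, specific_face_vecs) -> bool:
    """Return True if the feature image matches at least one specific face.

    feature_vec and each entry of specific_face_vecs are face descriptors
    (the descriptor itself is an assumption, not defined by the patent).
    """
    f = feature_vec / np.linalg.norm(feature_vec)
    for ref in specific_face_vecs:
        similarity = float(np.dot(f, ref / np.linalg.norm(ref)))
        if similarity >= SECOND_THRESHOLD_H2:
            return True
    return False
```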
In this case, since the control module 30 is able to match the feature image with at least one specific face image, the fundus image acquisition apparatus 1 can be applied to the field of face authentication. Thereby, the acquisition apparatus 1 can be made available to one or a limited number of specifically authorized users. In some examples, the control module 30 can automatically compare the feature image with the specific face images. In other examples, the control module 30 may compare the feature image with the specific face images after receiving a face recognition control instruction. The control instruction may be issued by the user by clicking, touching, or speaking.
In some examples, the specific face image may be formed by locating reference points in a face that is to be pre-stored and extracting a face graph whose similarity to an existing face bunch graph is maximized; after elastic graph matching, the newly extracted face graph is stored as the specific face image. Recognition then uses the obtained specific face images as features: all similarities between the face in the test feature image and the pre-stored specific face images are calculated, and the identity of the specific face image with the greatest similarity is taken as the identity of the test face, thereby completing the face recognition.
In other examples, as shown in fig. 3, the control module 30 may receive state information from the acquisition module 10. Based on the state information of the acquisition module 10, the control module 30 may control the acquisition module 10 to switch modes. In other words, the control module 30 may be configured to acquire state information of the acquisition module 10 before the acquisition module 10 acquires an image, and to switch the acquisition module 10 to the high-resolution shooting mode or the low-resolution shooting mode according to the state information. In this case, the fundus image acquisition apparatus 1 can switch the shooting mode as needed to reduce power consumption and extend battery life.
Additionally, in some examples, the state information may include at least a focus state and an exposure state. When the state information indicates the focus state, the control module 30 may be configured to switch the acquisition module 10 to the low-resolution shooting mode. In this case, when the acquisition module 10 is focusing, the power consumption of the fundus image acquisition apparatus 1 can be reduced and its battery life extended. When the state information indicates the exposure state, the control module 30 may be configured to switch the acquisition module 10 to the high-resolution shooting mode. In this case, a high-quality picture can be obtained. Therefore, by switching appropriately between the two shooting modes according to the state information or the required shooting quality, energy consumption can be reduced and the battery life of the acquisition apparatus 1 extended.
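The state-driven mode switch can be written as a small mapping; the enum names and function below are illustrative assumptions rather than the patent's interface.

```python
from enum import Enum, auto

class AcquisitionState(Enum):
    FOCUSING = auto()
    EXPOSING = auto()

class ShootingMode(Enum):
    LOW_RESOLUTION = auto()
    HIGH_RESOLUTION = auto()

def select_mode(state: AcquisitionState) -> ShootingMode:
    """Low-resolution preview while focusing, full resolution at exposure."""
    if state is AcquisitionState.FOCUSING:
        return ShootingMode.LOW_RESOLUTION
    return ShootingMode.HIGH_RESOLUTION
```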
In some examples, the control module 30 may be, for example, the processing chip of the handheld fundus camera shown in fig. 1, such as a digital signal processor (DSP). In some examples, the control module 30 is powered by the power module 40 (described later).
In some examples, control module 30 may output a control signal to power module 40 based on the identification result to enable control of the supply current of acquisition module 10 or comparison module 20. In other words, the control module 30 may control the operation state of the acquisition module 10 or the comparison module 20 based on the recognition result.
(Power supply module 40)
In some examples, power module 40 may include a stored power source (e.g., a battery) and a supply current delivery circuit for connecting the stored power source with other modules. In some examples, as shown in fig. 3, power module 40 may provide power. Specifically, the power module 40 may provide the power supply current to the acquisition module 10, the comparison module 20, and the control module 30, respectively.
In some examples, the control module 30 may be configured to limit or turn off the supply current of the power module 40 when no feature image is present in the images captured within the preset time T1. In other words, when no feature image exists in the images captured within the preset time T1, the power module 40 receives the control signal sent by the control module 30 and reduces or turns off the supply current to the acquisition module 10 or the comparison module 20. This can reduce the power consumption of the fundus image acquisition apparatus 1 and extend its battery life.
In some examples, the fundus image acquisition device 1 may also omit the power supply module 40. In that case, the supply current required by the acquisition module 10, the comparison module 20, and the control module 30 may be provided directly by an external power source, and the acquisition apparatus 1 may include a supply current delivery circuit for connecting the external power supply to the other modules. Thus, the control module 30 may limit or switch off the supply current required by the acquisition module 10 or the comparison module 20 by controlling the corresponding supply current delivery circuit.
(Listening module 50)
Fig. 8 is a block diagram of modification 1 of the fundus image capturing apparatus 1 according to the example of the present disclosure.
In some examples, as shown in fig. 8, the fundus image acquisition apparatus 1 may include a listening module 50. The listening module 50 may pick up user voice signals from the surrounding environment.
In some examples, the control module 30 may keep the acquisition module 10 in an operating state when the listening module 50 detects a user voice signal. When the listening module 50 does not detect a user voice signal, the control module 30 may send a control signal to the power module 40 according to the judgments in the above examples to set the acquisition module 10 to the operating state or the non-operating state.
In some examples, when the control module 30 has placed the acquisition module 10 in the non-operating state, if the listening module 50 then detects a user voice signal, the control module 30 may return the acquisition module 10 from the non-operating state to the operating state. That is, the control module 30 may send a control signal to the power module 40 so that the power module 40 restores the supply current to the acquisition module 10. In this case, the previously limited supply current of the power module 40 returns to normal.
In some examples, the listening module 50 may include a sound sensor and an analog-to-digital converter. If the listening module 50 detects a user voice signal, the sound sensor converts the received sound into an electrical signal, which is then fed to the analog-to-digital converter to obtain a digital sound signal. The digital sound signal is input to the control module 30, so that the power supply module 40 restores the supply current to the acquisition module 10 or the comparison module 20.
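A minimal sketch of the listening module's wake-up path, using a simple energy threshold as a stand-in for voice detection and the same assumed power-module interface as in the earlier controller sketch.

```python
import numpy as np

VOICE_ENERGY_THRESHOLD = 0.01  # hypothetical; depends on microphone and ADC scaling

def voice_detected(samples: np.ndarray) -> bool:
    """Crude voice-activity check on one block of digitized audio samples."""
    return float(np.mean(np.square(samples))) > VOICE_ENERGY_THRESHOLD

def on_audio_block(samples: np.ndarray, power_module) -> None:
    """Restore the supply current to the acquisition module when voice is heard."""
    if voice_detected(samples):
        power_module.restore_supply_current()  # assumed interface, as in the controller sketch
```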
In other examples, when the user needs to continue acquiring images with the fundus image acquisition apparatus 1, the apparatus may be manually restored from the non-operating state (e.g., a standby state) to the operating state. Here, the manual operation may include, for example, touching the display screen of the acquisition apparatus 1 or pressing an adjustment button.
While the present disclosure has been described in detail above with reference to the drawings and the embodiments, it should be understood that the above description does not limit the present disclosure in any way. Those skilled in the art can make modifications and variations to the present disclosure as needed without departing from the true spirit and scope of the disclosure, which fall within the scope of the disclosure.

Claims (5)

1. An acquisition device of fundus images, characterized by comprising:
an acquisition module for acquiring an image and capable of capturing a fundus image by focusing an optical mechanism, the acquisition module including a positioning unit for providing a plurality of fixation points when capturing the fundus image;
the comparison module is used for comparing the acquired image with a preset feature library and identifying whether a feature image exists in the acquired image or not; and
a control module configured to put the acquisition module in a non-operating state when the feature image does not exist in the acquired image within a preset time, and to calculate a view ratio of the feature image to the acquired image when the feature image exists in the acquired image within the preset time, and put the acquisition module in a non-operating state when the view ratio is smaller than a prescribed ratio and in a decreasing trend within a prescribed time,
the preset feature library is an eyeball feature library, and the feature image is an eyeball image.
2. The acquisition device of claim 1,
the control module is configured to limit a supply current of the power module when the characteristic image does not exist in the acquired image within a preset time.
3. The acquisition device of claim 1,
the control module is further configured to detect whether the feature image is located within a preset detection area in the captured image when the feature image is present in the captured image within a preset time, and to put the capturing module in a non-operating state when the feature image is not located within the preset detection area.
4. The acquisition device of claim 1,
the control module is further configured to acquire state information of the acquisition module before the acquisition module acquires an image, and switch the acquisition module to a high resolution shooting mode or a low resolution shooting mode according to the state information.
5. The acquisition device of claim 4,
the state information at least comprises a focusing state and an exposure state, and when the state information is in the focusing state, the control module is configured to switch the acquisition module to the low-resolution shooting mode.
CN201910399827.7A 2019-05-14 2019-05-14 Collecting device for fundus images Active CN110101363B (en)

Priority Applications (1)

Application Number Priority Date Filing Date Title
CN201910399827.7A CN110101363B (en) 2019-05-14 2019-05-14 Collecting device for fundus images

Applications Claiming Priority (1)

Application Number Priority Date Filing Date Title
CN201910399827.7A CN110101363B (en) 2019-05-14 2019-05-14 Collecting device for fundus images

Publications (2)

Publication Number Publication Date
CN110101363A CN110101363A (en) 2019-08-09
CN110101363B (en) 2019-05-14 2020-08-21

Family

ID=67490021

Family Applications (1)

Application Number Title Priority Date Filing Date
CN201910399827.7A Active CN110101363B (en) 2019-05-14 2019-05-14 Collecting device for fundus images

Country Status (1)

Country Link
CN (1) CN110101363B (en)

Citations (11)

* Cited by examiner, † Cited by third party
Publication number Priority date Publication date Assignee Title
JP2000201894A (en) * 1999-01-19 2000-07-25 Matsushita Electric Ind Co Ltd Optometric instrument and iris information obtaining system using this
JP2007061508A (en) * 2005-09-01 2007-03-15 Nidek Co Ltd Optometer
CN101795619A (en) * 2007-09-04 2010-08-04 卡尔蔡司医疗技术股份公司 Energy-conservation Medical Equipment
CN103702155A (en) * 2013-12-06 2014-04-02 乐视致新电子科技(天津)有限公司 TV control method and device
CN104092822A (en) * 2014-07-01 2014-10-08 惠州Tcl移动通信有限公司 Mobile phone state switching method and system based on face detection and eyeball tracking
CN106022247A (en) * 2016-05-16 2016-10-12 京东方科技集团股份有限公司 Display device and method
CN106488130A (en) * 2016-11-15 2017-03-08 上海斐讯数据通信技术有限公司 A kind of screening-mode changing method and its switched system
CN106821697A (en) * 2017-03-23 2017-06-13 郑州诚优成电子科技有限公司 The automatic sight training instrument of retina scanning Intelligent Recognition
CN107088049A (en) * 2017-05-26 2017-08-25 苏州微清医疗器械有限公司 hand-held fundus camera
CN109547677A (en) * 2018-12-06 2019-03-29 代黎明 Eye fundus image image pickup method and system and equipment
CN109739101A (en) * 2018-12-30 2019-05-10 武汉市新源科创科技有限公司 Intelligent home control system based on recognition of face

Family Cites Families (1)

* Cited by examiner, † Cited by third party
Publication number Priority date Publication date Assignee Title
KR102023611B1 (en) * 2012-05-04 2019-09-23 삼성전자 주식회사 Terminal and method for iris scanning and proximity sensing

Also Published As

Publication number Publication date
CN110101363A (en) 2019-08-09

Similar Documents

Publication Publication Date Title
US10395097B2 (en) Method and system for biometric recognition
JP4182117B2 (en) IMAGING DEVICE, ITS CONTROL METHOD, PROGRAM, AND STORAGE MEDIUM
US8169530B2 (en) Camera having an autofocusing system
US8514277B2 (en) Video infrared retinal image scanner
CN104573667B (en) A kind of iris identification device for the iris image quality for improving mobile terminal
US8953849B2 (en) Method and system for biometric recognition
JP5789091B2 (en) IMAGING DEVICE AND IMAGING DEVICE CONTROL METHOD
US20130329029A1 (en) Digital camera system
JP5171468B2 (en) IMAGING DEVICE AND IMAGING DEVICE CONTROL METHOD
JP2003016434A (en) Individual authenticating device
US9197818B2 (en) Imaging apparatus
US20100033591A1 (en) Image capturing apparatus and control method therefor
JP2005004524A (en) Identifying system, and personal authenticating system
CN109451233A (en) A kind of device acquiring fine definition face-image
CN110101363B (en) Collecting device for fundus images
JP5109779B2 (en) Imaging device
JP2021150760A (en) Imaging apparatus and method for controlling the same
JP2003289468A (en) Imaging apparatus
CN216352422U (en) Multi-modal image acquisition device
CN109426762B (en) Biological recognition system, method and biological recognition terminal
CN112998645A (en) Fundus imaging device, fundus imaging system, and fundus imaging method
KR20200107167A Apparatus and Method for Making a Facial Image Suitable for Facial Recognition by Using Infrared at Natural Lighting
CN111210438A (en) Mirror
CN211723122U (en) Fundus imaging apparatus and fundus imaging system
CN217739940U (en) Identity recognition and authentication system based on multi-mode biological information

Legal Events

Date Code Title Description
PB01 Publication
SE01 Entry into force of request for substantive examination
CB03 Change of inventor or designer information

Inventor after: Chen Yi

Inventor after: Wang Zhui

Inventor after: Chen Zhi

Inventor before: Chen Yi

Inventor before: Chen Zhi

GR01 Patent grant