CN116421140B - Fundus camera control method, fundus camera, and storage medium - Google Patents

Fundus camera control method, fundus camera, and storage medium

Info

Publication number
CN116421140B
Authority
CN
China
Prior art keywords
fundus
similarity
similar
historical
images
Prior art date
Legal status
Active
Application number
CN202310687970.2A
Other languages
Chinese (zh)
Other versions
CN116421140A (en)
Inventor
程得集
徐冰
程香云
吕兴正
Current Assignee
Hangzhou Mocular Medical Technology Inc
Original Assignee
Hangzhou Mocular Medical Technology Inc
Priority date
Filing date
Publication date
Application filed by Hangzhou Mocular Medical Technology Inc filed Critical Hangzhou Mocular Medical Technology Inc
Priority to CN202310687970.2A priority Critical patent/CN116421140B/en
Publication of CN116421140A publication Critical patent/CN116421140A/en
Application granted granted Critical
Publication of CN116421140B publication Critical patent/CN116421140B/en
Active legal-status Critical Current
Anticipated expiration legal-status Critical

Classifications

    • G PHYSICS
    • G06 COMPUTING; CALCULATING OR COUNTING
    • G06V IMAGE OR VIDEO RECOGNITION OR UNDERSTANDING
    • G06V40/00 Recognition of biometric, human-related or animal-related patterns in image or video data
    • G06V40/10 Human or animal bodies, e.g. vehicle occupants or pedestrians; Body parts, e.g. hands
    • G06V40/18 Eye characteristics, e.g. of the iris
    • G06V40/19 Sensors therefor
    • A HUMAN NECESSITIES
    • A61 MEDICAL OR VETERINARY SCIENCE; HYGIENE
    • A61B DIAGNOSIS; SURGERY; IDENTIFICATION
    • A61B3/00 Apparatus for testing the eyes; Instruments for examining the eyes
    • A61B3/10 Objective types, i.e. instruments for examining the eyes independent of the patients' perceptions or reactions
    • A61B3/12 Objective types, i.e. instruments for examining the eyes independent of the patients' perceptions or reactions, for looking at the eye fundus, e.g. ophthalmoscopes
    • A HUMAN NECESSITIES
    • A61 MEDICAL OR VETERINARY SCIENCE; HYGIENE
    • A61B DIAGNOSIS; SURGERY; IDENTIFICATION
    • A61B3/00 Apparatus for testing the eyes; Instruments for examining the eyes
    • A61B3/10 Objective types, i.e. instruments for examining the eyes independent of the patients' perceptions or reactions
    • A61B3/14 Arrangements specially adapted for eye photography
    • G PHYSICS
    • G06 COMPUTING; CALCULATING OR COUNTING
    • G06V IMAGE OR VIDEO RECOGNITION OR UNDERSTANDING
    • G06V10/00 Arrangements for image or video recognition or understanding
    • G06V10/70 Arrangements for image or video recognition or understanding using pattern recognition or machine learning
    • G06V10/74 Image or video pattern matching; Proximity measures in feature spaces
    • G06V10/761 Proximity, similarity or dissimilarity measures
    • G PHYSICS
    • G06 COMPUTING; CALCULATING OR COUNTING
    • G06V IMAGE OR VIDEO RECOGNITION OR UNDERSTANDING
    • G06V40/00 Recognition of biometric, human-related or animal-related patterns in image or video data
    • G06V40/10 Human or animal bodies, e.g. vehicle occupants or pedestrians; Body parts, e.g. hands
    • G06V40/18 Eye characteristics, e.g. of the iris
    • G06V40/193 Preprocessing; Feature extraction
    • Y GENERAL TAGGING OF NEW TECHNOLOGICAL DEVELOPMENTS; GENERAL TAGGING OF CROSS-SECTIONAL TECHNOLOGIES SPANNING OVER SEVERAL SECTIONS OF THE IPC; TECHNICAL SUBJECTS COVERED BY FORMER USPC CROSS-REFERENCE ART COLLECTIONS [XRACs] AND DIGESTS
    • Y02 TECHNOLOGIES OR APPLICATIONS FOR MITIGATION OR ADAPTATION AGAINST CLIMATE CHANGE
    • Y02D CLIMATE CHANGE MITIGATION TECHNOLOGIES IN INFORMATION AND COMMUNICATION TECHNOLOGIES [ICT], I.E. INFORMATION AND COMMUNICATION TECHNOLOGIES AIMING AT THE REDUCTION OF THEIR OWN ENERGY USE
    • Y02D30/00 Reducing energy consumption in communication networks
    • Y02D30/70 Reducing energy consumption in communication networks in wireless communication networks

Abstract

The invention provides a fundus camera control method, a fundus camera and a storage medium, belonging to the technical field of diagnostic instruments. The method specifically comprises the following steps: a coding sequence is determined from the local texture features of the fundus image; the similarity with the historical fundus images, and the historical similar fundus images, are determined from the coding sequence through a Hamming distance function; when a historical similar user is found to exist according to the number and similarity of the historical fundus images and the similarity of the historical similar fundus images, a feature vector of the user's iris is obtained through a Haar wavelet, and the comprehensive similarity of the historical similar users is determined by combining the local texture features, the inner and outer eye corners and the similarity with the historical similar users; when a matched fundus image is found to exist according to the comprehensive similarity, the motor control movement amount is determined based on the motor movement amount and position data corresponding to the matched fundus image, thereby further improving the efficiency of fundus camera focal-length control.

Description

Fundus camera control method, fundus camera, and storage medium
Technical Field
The invention belongs to the field of diagnostic instruments, and particularly relates to a fundus camera control method, a fundus camera and a storage medium.
Background
Fundus cameras are used more and more widely to perform fundus imaging of patients, to determine the progression of chronic diseases from the imaging results and so prevent conditions from worsening. During use, however, the focal length usually has to be determined manually, which is time-consuming and labour-intensive, and the resulting accuracy is often inadequate.
Therefore, to achieve automatic focal-length control of the fundus camera, patent CN114098632B, entitled "Method for controlling a motor in a fundus camera and related products", obtains the position data of the pupil center determined by a secondary camera and inputs it into a motor control model to obtain the motor movement amount for moving the primary camera to the working-distance position. This approach, however, raises the following technical problems:
the above patent mode of the invention can lead to lower control accuracy of the motor movement amount if the training set of the model is from all users, and lower control accuracy of the motor movement amount if the training set of the model is from the user, because the number of the training sets is less, the accuracy of the model is lower, and the control accuracy of the motor movement amount is also lower.
Identification of the user is not considered in combination with the user's fundus image and other features. In general, the vascular distribution in a user's fundus image is uniquely identifying, and unique, stable ocular marks are distributed in the sclera on both sides of the user's iris. If user identification is not considered, the user's historical data cannot be retrieved, and the movement distance cannot be determined at the first attempt from the user's historical motor movement amounts.
In view of the above technical problems, the present invention provides a fundus camera control method, a fundus camera, and a storage medium.
Disclosure of Invention
In order to achieve the purpose of the invention, the invention adopts the following technical scheme:
according to an aspect of the present invention, there is provided a fundus camera control method.
A fundus camera control method, characterized by comprising:
S11, acquiring a fundus image of a user through a camera of the fundus camera, determining position data of the pupil center from the fundus image of the user, and proceeding to step S12 when it is determined from the position data that the focal length needs to be controlled;
S12, determining a coding sequence from the local texture features of the fundus image, determining the similarity with the historical fundus images and the historical similar fundus images through a Hamming distance function according to the coding sequence, and determining whether a historical similar user exists according to the number and similarity of the historical fundus images and the similarity of the historical similar fundus images; if yes, proceeding to step S13, otherwise proceeding to step S14;
S13, acquiring a feature vector of the user's iris through a Haar wavelet, determining the comprehensive similarity of the historical similar users by combining the local texture features, the inner and outer eye corners and the similarity with the historical similar users, and determining from the comprehensive similarity whether a matched fundus image exists; if yes, determining the motor control movement amount based on the motor movement amount and position data corresponding to the matched fundus image, and if not, proceeding to step S14;
S14, determining an initial motor movement amount according to the position data of the pupil center, and determining the motor control movement amount according to the position data of the pupil center after the initial motor movement and the sharpness of the fundus image.
Further, the position data of the pupil center is the coordinates of the pupil center, determined according to the recognition angle of the pupil image in the fundus image.
Further, the specific steps of determining the historical similar fundus images are as follows:
determining the deviation amount between the coding sequence of the fundus image and the coding sequence of a historical fundus image through the Hamming distance function, determining the similarity between the fundus image and the historical fundus image based on the deviation amount, and determining the historical similar fundus images according to the similarity.
Further, determining whether a matched fundus image exists according to the comprehensive similarity specifically comprises:
taking the fundus image of the historical similar user with the largest comprehensive similarity as the matched fundus image.
Further, determining the motor control movement amount according to the position data of the pupil center after the initial motor movement and the sharpness of the fundus image specifically comprises:
judging whether the position data of the pupil center after the initial motor movement deviates by less than a preset deviation; if yes, proceeding to the next step, and if not, proceeding to the next step once the position data of the pupil center deviates by less than the preset deviation;
acquiring the sharpness of the fundus image once the position data of the pupil center is within the preset deviation, wherein the sharpness of the fundus image is determined according to the image noise of the fundus image, and determining the motor movement amount according to the sharpness of the fundus image until the sharpness of the fundus image meets the requirement.
In a second aspect, the present invention provides a fundus camera applying the above fundus camera control method, comprising:
a position data determination module; a history similar user determination module; a matching fundus image determination module; a motor control movement amount determination module;
the position data determination module is responsible for acquiring the fundus image of a user through the camera of the fundus camera and determining the position data of the pupil center according to the fundus image of the user;
the history similar user determination module is responsible for determining a coding sequence through the local texture features of the fundus image, determining the similarity with the historical fundus images and the historical similar fundus images through a Hamming distance function according to the coding sequence, and determining whether a historical similar user exists according to the number and similarity of the historical fundus images and the similarity of the historical similar fundus images;
the matching fundus image determination module is responsible for acquiring the feature vector of the user's iris through a Haar wavelet, determining the comprehensive similarity of the historical similar users by combining the local texture features, the inner and outer eye corners and the similarity with the historical similar users, and determining from the comprehensive similarity whether a matched fundus image exists;
the motor control movement amount determination module is responsible for determining the initial motor movement amount according to the position data of the pupil center, and determining the motor control movement amount according to the position data of the pupil center after the initial motor movement and the sharpness of the fundus image.
In a third aspect, the present invention provides a computer storage medium having stored thereon a computer program which, when executed in a computer, causes the computer to execute a fundus camera control method as described above.
The invention has the beneficial effects that:
The similarity with the historical fundus images and the historical similar fundus images are determined through the Hamming distance function according to the coding sequence, and whether a historical similar user exists is determined according to the number and similarity of the historical fundus images and the similarity of the historical similar fundus images. A preliminary screening of the historical fundus images from the angle of local texture features, and a multi-angle screening of historical candidate users, are thereby realized, which reduces the amount of data required for the further comprehensive similarity analysis while also ensuring the control efficiency of the motor movement amount.
The feature vector of the user's iris is acquired through a Haar wavelet, the comprehensive similarity of the historical similar users is determined by combining the local texture features, the inner and outer eye corners and the similarity with the historical similar users, and whether a matched fundus image exists is determined from the comprehensive similarity. The comprehensive similarity is thus determined from three angles, namely the iris feature quantity, the fundus image and the eye-corner image, which further ensures the screening accuracy of the matched fundus image.
By confirming the motor control movement amount differently according to whether a matched fundus image exists, the adjustment efficiency for users with historical similar images is ensured, and the accuracy of the automatic adjustment for users without historical similar images is also ensured.
Additional features and advantages will be set forth in the description which follows, and in part will be apparent from the description, or may be learned by practice of the invention. The objectives and other advantages of the invention will be realized and attained by the structure particularly pointed out in the written description and drawings.
In order to make the above objects, features and advantages of the present invention more comprehensible, preferred embodiments accompanied with figures are described in detail below.
Drawings
The above and other features and advantages of the present invention will become more apparent by describing in detail exemplary embodiments thereof with reference to the attached drawings;
FIG. 1 is a flow chart of a fundus camera control method;
FIG. 2 is a flowchart showing specific steps of the determination of the number of screening users;
FIG. 3 is a flow chart of a method of comprehensive similarity determination of historically similar fundus images;
fig. 4 is a frame diagram of a fundus camera;
fig. 5 is a frame diagram of a computer storage medium.
Detailed Description
In order to make the technical solutions in the present specification better understood by those skilled in the art, the technical solutions in the embodiments of the present specification will be clearly and completely described below with reference to the drawings in the embodiments of the present specification, and it is obvious that the described embodiments are only some embodiments of the present specification, not all embodiments. All other embodiments, which can be made by one of ordinary skill in the art based on the embodiments herein without making any inventive effort, shall fall within the scope of the present disclosure.
In order to solve the above-mentioned problems, according to an aspect of the present invention, as shown in fig. 1, there is provided a fundus camera control method, comprising:
S11, acquiring a fundus image of a user through a camera of the fundus camera, determining position data of the pupil center from the fundus image of the user, and proceeding to step S12 when it is determined from the position data that the focal length needs to be controlled;
The position data of the pupil center, i.e. the coordinates of the pupil center, is determined according to the recognition angle of the pupil image in the fundus image.
In actual operation, the eyes, pupils, sclera and other regions can be recognized by means of image feature recognition, so that the inclination angle of the eyes can be judged and the coordinates of the pupil center determined according to that inclination angle.
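As an illustration of the pupil-center localization described above, the following minimal sketch estimates the pupil center as the centroid of the darkest, roughly circular region of a preview frame. It assumes an OpenCV-style pipeline; the blur kernel, the threshold value of 40 and the use of image moments are illustrative choices rather than steps prescribed by the patent.

```python
import cv2
import numpy as np

def pupil_center(frame: np.ndarray):
    """Estimate (x, y) pupil-center coordinates from a preview frame.

    Illustrative approach: the pupil is the darkest, roughly circular
    region, so threshold the blurred grayscale image and take the
    centroid of the largest dark blob via image moments.
    """
    gray = cv2.cvtColor(frame, cv2.COLOR_BGR2GRAY)
    gray = cv2.GaussianBlur(gray, (7, 7), 0)
    # Dark pixels (assumed pupil) become foreground; 40 is an illustrative threshold.
    _, mask = cv2.threshold(gray, 40, 255, cv2.THRESH_BINARY_INV)
    contours, _ = cv2.findContours(mask, cv2.RETR_EXTERNAL, cv2.CHAIN_APPROX_SIMPLE)
    if not contours:
        return None
    largest = max(contours, key=cv2.contourArea)
    m = cv2.moments(largest)
    if m["m00"] == 0:
        return None
    return (m["m10"] / m["m00"], m["m01"] / m["m00"])  # (x, y) centroid
```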
S12, determining a coding sequence from the local texture features of the fundus image, determining the similarity with the historical fundus images and the historical similar fundus images through a Hamming distance function according to the coding sequence, and determining whether a historical similar user exists according to the number and similarity of the historical fundus images and the similarity of the historical similar fundus images; if yes, proceeding to step S13, otherwise proceeding to step S14;
specifically, the steps for determining the historical similar fundus image are as follows:
and determining the deviation amount of an image coding sequence of the fundus and an image coding sequence of a historical fundus image through a hamming distance function, determining the similarity of the fundus image and the historical fundus image based on the deviation amount, and determining the historical similar fundus image according to the similarity.
In information theory, the Hamming distance between two equal-length strings is the number of positions at which the corresponding characters differ. In other words, it is the number of characters that need to be replaced to transform one string into the other.
It can be understood that when there is a large difference between the coding sequence of the fundus image and the coding sequence of a historical fundus image, the Hamming distance is also large and the similarity is correspondingly poor.
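The following sketch illustrates this comparison under stated assumptions: the coding sequence is taken to be a simple local binary pattern (LBP) over the fundus image, and the Hamming distance is normalized by the sequence length to obtain a similarity score; neither the exact texture coding nor this normalization is fixed by the patent.

```python
import numpy as np

def lbp_code_sequence(gray: np.ndarray) -> np.ndarray:
    """Binary coding sequence from local texture: every interior pixel is
    compared with its 8 neighbours (a basic local binary pattern, used
    here only as an illustrative texture coding)."""
    c = gray[1:-1, 1:-1]
    neighbours = (gray[:-2, :-2], gray[:-2, 1:-1], gray[:-2, 2:],
                  gray[1:-1, 2:], gray[2:, 2:], gray[2:, 1:-1],
                  gray[2:, :-2], gray[1:-1, :-2])
    return np.concatenate([(n >= c).astype(np.uint8).ravel() for n in neighbours])

def hamming_similarity(code_a: np.ndarray, code_b: np.ndarray) -> float:
    """Similarity = 1 - normalized Hamming distance between two
    equal-length coding sequences (the normalization is an assumption)."""
    assert code_a.shape == code_b.shape
    deviation = int(np.count_nonzero(code_a != code_b))  # Hamming distance
    return 1.0 - deviation / code_a.size
```

Historical fundus images whose similarity exceeds a chosen cut-off would then be retained as the historical similar fundus images.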
Specifically, determining whether a historical similar user exists specifically comprises:
s21, acquiring the similarity of the historical similar fundus images, determining whether a historical similar user exists or not according to the similarity of the historical similar fundus images, if so, determining that the historical similar user exists, and entering a step S23, otherwise, entering a step S22;
specifically, in the present embodiment, when there is a fundus image with a higher similarity in the history similar fundus image, it may be determined that there is a history similar user, and specifically may be implemented by a threshold.
S22, acquiring the number of the historical similar fundus images and determining whether this number meets the requirement; if so, proceeding to the next step, and if not, determining that no historical similar user exists;
specifically, in the present embodiment, when the number of history-like fundus images is small, it may be determined that there is no history-like user, and specifically, it may also be realized by means of a threshold value.
S23, determining a similarity threshold from the number of the historical fundus images and the similarity of the historical fundus images, and judging whether the number of historical similar fundus images whose similarity is larger than the similarity threshold meets the requirement; if yes, proceeding to step S24; if not, taking the historical users whose historical similar fundus images have a similarity larger than the similarity threshold as the historical similar users;
the similarity threshold is related to the number of the history fundus images and the similarity of the history fundus images, wherein the larger the number of the history fundus images is, the larger the average value of the similarity of the history fundus images is, and the larger the similarity threshold is.
S24, determining the number of screening users according to the number and similarity of the historical fundus images and the number and similarity of the historical similar fundus images, and determining the historical similar users according to the number of screening users and the similarity of the screening users.
Specifically, as shown in fig. 2, the specific steps of determining the number of the screening users are as follows:
s31, acquiring the number of the historical fundus images, taking the historical similar fundus images with the similarity larger than the similarity threshold value as screening similar images, taking the ratio of the screening similar images to the number of the historical fundus images as screening image ratios, determining whether the number of the screening similar images meets the requirement or not according to the screening image ratios, if so, taking the number of the screening similar images as the number of the historical similar users, and if not, entering step S32;
it should be noted that, the ratio of the screening images to the ratio of the screening similar images to all the historical fundus images is smaller, and when the ratio is smaller, the selected screening similar images can correspond to the historical similar users, so that the number of the historical similar users is ensured, and the accuracy of the final evaluation of the comprehensive similarity is ensured.
S32, acquiring the number of the historical similar fundus images, taking the ratio of the number of screening similar images to the number of historical similar fundus images as the screening similar image ratio, and determining from this ratio whether the number of screening similar images meets the requirement; if so, taking the number of screening similar images as the number of historical similar users, and if not, proceeding to step S33;
s33, obtaining an average value of the similarity of the historical fundus images and an average value of the similarity of the historical similar fundus images, taking a difference value between the average value of the similarity of the screening images and the average value of the similarity of the historical fundus images as a historical image similarity deviation amount, taking a difference value between the average value of the similarity of the screening images and the average value of the similarity of the historical similar fundus images as an approximate image similarity deviation amount, determining whether the number of the screening similar images meets the requirement according to the historical image similarity deviation amount and the approximate image similarity deviation amount, if yes, taking the number of the screening similar images as the number of the historical similar users, and if no, entering a step S34;
s34 determines the number of the filtering users by the filtering image ratio, the filtering approximate image ratio, the approximate image similarity deviation amount, and the history image similarity deviation amount.
It should be noted that the number of screening users needs to be kept within a set range; if it exceeds that range, the overall calculation efficiency becomes low.
It should be noted that determining whether the number of screening similar images meets the requirement according to the screening image ratio specifically comprises:
when the screening image ratio is smaller than a preset ratio, determining that the number of screening similar images meets the requirement.
In this embodiment, the similarity with the historical fundus images and the historical similar fundus images are determined through the Hamming distance function according to the coding sequence, and whether a historical similar user exists is determined according to the number and similarity of the historical fundus images and the similarity of the historical similar fundus images. A preliminary screening of the historical fundus images from the angle of local texture features, and a multi-angle screening of historical candidate users, are thereby realized, which reduces the amount of data required for the further comprehensive similarity analysis while also ensuring the control efficiency of the motor movement amount.
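A compact sketch of the screening cascade of steps S31 to S34 described above is given below. All cut-off values, and the weighted estimate used in the final fallback, are illustrative assumptions; the patent specifies only which ratios and deviation amounts drive each decision and that the result must stay within a set range.

```python
def screening_user_count(n_history: int, n_history_similar: int,
                         sims_history: list, sims_similar: list, sims_screen: list,
                         ratio_limit: float = 0.1, similar_ratio_limit: float = 0.5,
                         deviation_limit: float = 0.15, hard_cap: int = 20) -> int:
    """Decision cascade for the number of screening users (steps S31-S34).

    sims_history, sims_similar and sims_screen hold the similarity values of
    all historical fundus images, of the historical similar fundus images and
    of the screening similar images; every cut-off is an illustrative assumption.
    """
    def mean(xs):
        return sum(xs) / len(xs) if xs else 0.0

    n_screen = len(sims_screen)
    screen_ratio = n_screen / max(n_history, 1)                   # S31: share of all history images
    if screen_ratio < ratio_limit:
        return n_screen
    screen_similar_ratio = n_screen / max(n_history_similar, 1)   # S32: share of similar images
    if screen_similar_ratio < similar_ratio_limit:
        return n_screen
    hist_dev = mean(sims_screen) - mean(sims_history)             # S33: historical image similarity deviation
    approx_dev = mean(sims_screen) - mean(sims_similar)           # S33: approximate image similarity deviation
    if hist_dev > deviation_limit and approx_dev >= 0.0:
        return n_screen
    # S34: combine the ratios and deviations into an estimate, clamped to the
    # set range so that overall calculation efficiency is preserved.
    estimate = int(round(n_screen * (1.0 - screen_ratio) * (1.0 + max(hist_dev, 0.0))))
    return max(1, min(estimate, hard_cap))
```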
S13, acquiring a feature vector of the user's iris through a Haar wavelet, determining the comprehensive similarity of the historical similar users by combining the local texture features, the inner and outer eye corners and the similarity with the historical similar users, and determining from the comprehensive similarity whether a matched fundus image exists; if yes, determining the motor control movement amount based on the motor movement amount and position data corresponding to the matched fundus image, and if not, proceeding to step S14;
specifically, as shown in fig. 3, the method for determining the comprehensive similarity of the historical similar fundus images includes:
s41, acquiring an eye surrounding image of the user through the fundus camera, extracting characteristic quantities of the inner and outer eye angles, the highest point and the lowest point of the eyelid according to the eye surrounding image of the user, determining eye similarity of the user and the history similar user according to the inner and outer eye angles, the highest point and the lowest point of the eyelid, determining eye correction similarity according to the similarity and the eye similarity, screening the history similar user according to the eye correction similarity to obtain a history screening similar user, determining whether the eye surrounding image can be matched according to the deviation quantity of the maximum value of the eye correction similarity of the history screening similar user and the average value of the eye correction similarity of the history screening similar user, if yes, entering step S42, otherwise entering step S43;
it should be noted that, by considering the eye periphery image first, the problem of high extraction difficulty of local texture features of other fundus images or feature vectors of irises is avoided, and meanwhile, further screening of the number of similar historical users is also realized.
S42, taking the difference between the maximum eye correction similarity of the historical screening similar users and the eye correction similarity of each of the other historical screening similar users as the comparison difference, and determining whether the fundus image of the historical screening similar user corresponding to the maximum eye correction similarity is the matched fundus image according to the minimum comparison difference, the number of historical screening similar users whose comparison difference is smaller than a preset value, the proportion of such users among the historical screening similar users, and the average comparison difference; if yes, taking that fundus image as the matched fundus image, and if not, proceeding to step S43;
s43, obtaining local texture features, color moments and roundness of the fundus image of the user, determining image feature quantities of the fundus image of the user based on the local texture features, color moments and roundness of the fundus image of the user, determining image similarity of the fundus image according to the image feature quantities and the image feature quantities of the history screening similar users, and performing secondary screening on the history screening similar users according to the image similarity to obtain a matched user to be screened;
s44, obtaining the characteristic vector of the iris of the user through Haar wavelet, determining the similarity between the user and the iris of the user to be screened according to the characteristic vector of the iris of the user and the characteristic vector of the iris of the user to be screened, and determining the comprehensive similarity between the user and the user to be screened according to the similarity, the eye similarity and the image similarity of the iris.
Specifically, determining whether a matched fundus image exists according to the comprehensive similarity specifically comprises:
taking the fundus image of the historical similar user with the largest comprehensive similarity as the matched fundus image.
In this embodiment, the feature vector of the user's iris is obtained through a Haar wavelet, the comprehensive similarity of the historical similar users is determined by combining the local texture features of the fundus image, the inner and outer eye corners and the similarity with the historical similar users, and whether a matched fundus image exists is determined from the comprehensive similarity. The comprehensive similarity is thus determined from three angles, namely the iris feature quantity, the fundus image and the eye-corner image, which further ensures the screening accuracy of the matched fundus image.
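The following sketch illustrates this three-angle combination under stated assumptions: the Haar decomposition uses the PyWavelets package, and the per-sub-band statistics chosen as the iris feature vector, the cosine comparison of feature vectors and the weights of the comprehensive similarity are all illustrative and not fixed by the patent.

```python
import numpy as np
import pywt  # PyWavelets, assumed available for the Haar decomposition

def iris_feature_vector(iris_gray: np.ndarray, level: int = 3) -> np.ndarray:
    """Feature vector from a Haar wavelet decomposition of the iris region:
    mean absolute value and standard deviation of every detail sub-band
    (an illustrative choice of statistics)."""
    coeffs = pywt.wavedec2(iris_gray.astype(float), "haar", level=level)
    feats = []
    for detail_level in coeffs[1:]:          # (horizontal, vertical, diagonal) per level
        for sub in detail_level:
            feats.extend([float(np.mean(np.abs(sub))), float(np.std(sub))])
    return np.asarray(feats)

def cosine_similarity(a: np.ndarray, b: np.ndarray) -> float:
    """Similarity between two iris feature vectors (assumed cosine measure)."""
    return float(a @ b / (np.linalg.norm(a) * np.linalg.norm(b) + 1e-9))

def comprehensive_similarity(iris_sim: float, eye_sim: float, image_sim: float,
                             w_iris: float = 0.5, w_eye: float = 0.2,
                             w_image: float = 0.3) -> float:
    """Weighted combination of the iris, eye-periphery and fundus-image
    similarities; the weights are illustrative, not given by the patent."""
    return w_iris * iris_sim + w_eye * eye_sim + w_image * image_sim
```

The historical similar user with the largest comprehensive similarity would then supply the matched fundus image and, with it, the stored motor movement amount and position data.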
S14, determining an initial motor movement amount according to the position data of the pupil center, and determining the motor control movement amount according to the position data of the pupil center after the initial motor movement and the sharpness of the fundus image.
It can be understood that determining the motor control movement amount according to the position data of the pupil center after the initial motor movement and the sharpness of the fundus image specifically comprises:
judging whether the position data of the pupil center after the initial motor movement deviates by less than a preset deviation; if yes, proceeding to the next step, and if not, proceeding to the next step once the position data of the pupil center deviates by less than the preset deviation;
acquiring the sharpness of the fundus image once the position data of the pupil center is within the preset deviation, wherein the sharpness of the fundus image is determined according to the image noise of the fundus image, and determining the motor movement amount according to the sharpness of the fundus image until the sharpness of the fundus image meets the requirement.
In this embodiment, by confirming the motor control movement amount differently according to whether a matched fundus image exists, not only is the adjustment efficiency for users with historical similar images ensured, but the accuracy of the automatic adjustment for users without historical similar images is also ensured.
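A minimal sketch of the adjustment loop of step S14 follows. The camera and motor objects are hypothetical interfaces (pupil_offset(), grab_gray() and move() are assumed names), and the Laplacian-variance score merely stands in for the noise-based sharpness measure mentioned above.

```python
import numpy as np

def sharpness(gray: np.ndarray) -> float:
    """Illustrative sharpness score: variance of a discrete Laplacian, a common
    blur/noise proxy (the patent only says the sharpness is derived from the
    image noise, without fixing the estimator)."""
    g = gray.astype(float)
    lap = (np.roll(g, 1, 0) + np.roll(g, -1, 0)
           + np.roll(g, 1, 1) + np.roll(g, -1, 1) - 4.0 * g)
    return float(lap.var())

def adjust_motor(camera, motor, initial_move: float,
                 max_offset: float = 5.0, target_sharpness: float = 120.0,
                 step: float = 0.5, max_iters: int = 50) -> None:
    """Sketch of step S14; `camera` and `motor` are hypothetical interfaces
    assumed to provide pupil_offset(), grab_gray() and move(amount)."""
    motor.move(initial_move)                      # initial motor movement amount
    for _ in range(max_iters):
        if camera.pupil_offset() >= max_offset:   # wait until the pupil-center
            continue                              # deviation is within the preset limit
        if sharpness(camera.grab_gray()) >= target_sharpness:
            break                                 # sharpness requirement met
        motor.move(step)                          # otherwise keep refining the focus
```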
On the other hand, as shown in fig. 4, the present invention provides a fundus camera applying the above fundus camera control method, comprising:
a position data determination module; a history similar user determination module; a matching fundus image determination module; a motor control movement amount determination module;
the position data determination module is responsible for acquiring the fundus image of a user through the camera of the fundus camera and determining the position data of the pupil center according to the fundus image of the user;
the history similar user determination module is responsible for determining a coding sequence through the local texture features of the fundus image, determining the similarity with the historical fundus images and the historical similar fundus images through a Hamming distance function according to the coding sequence, and determining whether a historical similar user exists according to the number and similarity of the historical fundus images and the similarity of the historical similar fundus images;
the matching fundus image determination module is responsible for acquiring the feature vector of the user's iris through a Haar wavelet, determining the comprehensive similarity of the historical similar users by combining the local texture features, the inner and outer eye corners and the similarity with the historical similar users, and determining from the comprehensive similarity whether a matched fundus image exists;
the motor control movement amount determination module is responsible for determining the initial motor movement amount according to the position data of the pupil center, and determining the motor control movement amount according to the position data of the pupil center after the initial motor movement and the sharpness of the fundus image.
An eye-periphery image of the user is acquired through the fundus camera; feature quantities of the inner and outer eye corners and of the highest and lowest points of the eyelid are extracted from the eye-periphery image; the eye similarity between the user and each historical similar user is determined according to the inner and outer eye corners and the highest and lowest points of the eyelid; the eye correction similarity is determined according to the similarity and the eye similarity; the historical similar users are screened according to the eye correction similarity to obtain the historical screening similar users; and the next step is entered when it is determined, according to the deviation between the maximum eye correction similarity of the historical screening similar users and their average eye correction similarity, that the fundus image cannot be matched on this basis;
the local texture features, color moments and roundness of the fundus image of the user are obtained; the image feature quantity of the fundus image of the user is determined based on these; the image similarity of the fundus image is determined according to this image feature quantity and the image feature quantities of the historical screening similar users; and a secondary screening of the historical screening similar users is performed according to the image similarity to obtain the matched users to be screened;
the feature vector of the user's iris is acquired through a Haar wavelet; the iris similarity between the user and each matched user to be screened is determined according to the feature vector of the user's iris and the feature vector of that user's iris; and the comprehensive similarity between the user and each matched user to be screened is determined according to the iris similarity, the eye similarity and the image similarity.
On the other hand, as shown in fig. 5, the present invention provides a computer storage medium having stored thereon a computer program which, when executed in a computer, causes the computer to execute the above-described fundus camera control method.
The fundus camera control method specifically comprises the following steps:
acquiring a fundus image of a user through the camera of the fundus camera, determining position data of the pupil center according to the fundus image of the user, and proceeding to the next step when it is determined from the position data that the focal length needs to be controlled;
determining a coding sequence through the local texture features of the fundus image, and determining the similarity with the historical fundus images and the historical similar fundus images through a Hamming distance function according to the coding sequence;
acquiring the similarity of the historical similar fundus images, and proceeding to the next step when it is determined from the similarity of the historical similar fundus images that a historical similar user exists;
determining a similarity threshold according to the number of the historical fundus images and the similarity of the historical fundus images, and proceeding to the next step when it is judged that the number of historical similar fundus images whose similarity is larger than the similarity threshold meets the requirement;
The similarity threshold is related to the number of the historical fundus images and the similarity of the historical fundus images: the larger the number of the historical fundus images and the larger the average value of their similarity, the larger the similarity threshold.
The historical similar users are then determined according to the number of the screening users and the similarity of the screening users.
A specific example of the method for determining the number of the screening users is as follows:
acquiring the number of the historical fundus images, taking the historical similar fundus images whose similarity is larger than the similarity threshold as screening similar images, taking the ratio of the number of screening similar images to the number of historical fundus images as the screening image ratio, and proceeding to the next step when it is determined from the screening image ratio that the number of screening similar images does not meet the requirement;
It should be noted that the screening image ratio is the proportion of the screening similar images among all the historical fundus images. When this ratio is small, each selected screening similar image can be associated with a historical similar user, which both ensures the number of historical similar users and ensures the accuracy of the final comprehensive similarity evaluation.
acquiring the number of the historical similar fundus images, taking the ratio of the number of screening similar images to the number of historical similar fundus images as the screening similar image ratio, and proceeding to the next step when it is determined from the screening similar image ratio that the number of screening similar images does not meet the requirement;
acquiring the average similarity of the historical fundus images and the average similarity of the historical similar fundus images, taking the difference between the average similarity of the screening similar images and the average similarity of the historical fundus images as the historical image similarity deviation amount, taking the difference between the average similarity of the screening similar images and the average similarity of the historical similar fundus images as the approximate image similarity deviation amount, and, when it is determined from these deviation amounts that the number of screening similar images does not meet the requirement, determining the number of screening users through the screening image ratio, the screening similar image ratio, the approximate image similarity deviation amount and the historical image similarity deviation amount;
acquiring the feature vector of the user's iris through a Haar wavelet, determining the comprehensive similarity of the historical similar users by combining the local texture features of the fundus image, the inner and outer eye corners and the similarity with the historical similar users, and, when it is determined from the comprehensive similarity that a matched fundus image exists, determining the motor control movement amount based on the motor movement amount and position data corresponding to the matched fundus image.
In this specification, each embodiment is described in a progressive manner, and identical and similar parts of each embodiment are all referred to each other, and each embodiment mainly describes differences from other embodiments. In particular, for apparatus, devices, non-volatile computer storage medium embodiments, the description is relatively simple, as it is substantially similar to method embodiments, with reference to the section of the method embodiments being relevant.
The foregoing describes specific embodiments of the present disclosure. Other embodiments are within the scope of the following claims. In some cases, the actions or steps recited in the claims can be performed in a different order than in the embodiments and still achieve desirable results. In addition, the processes depicted in the accompanying figures do not necessarily require the particular order shown, or sequential order, to achieve desirable results. In some embodiments, multitasking and parallel processing are also possible or may be advantageous.
The foregoing is merely one or more embodiments of the present description and is not intended to limit the present description. Various modifications and alterations to one or more embodiments of this description will be apparent to those skilled in the art. Any modification, equivalent replacement, improvement, or the like, which is within the spirit and principles of one or more embodiments of the present description, is intended to be included within the scope of the claims of the present description.

Claims (10)

1. A fundus camera control method, characterized by comprising:
S11, acquiring a fundus image of a user through a camera of the fundus camera, determining position data of the pupil center according to the fundus image of the user, and proceeding to step S12 when it is determined from the position data that the focal length needs to be controlled;
S12, determining a coding sequence from the local texture features of the fundus image, determining the similarity with the historical fundus images and the historical similar fundus images through a Hamming distance function according to the coding sequence, and determining whether a historical similar user exists according to the number and similarity of the historical fundus images and the similarity of the historical similar fundus images; if yes, proceeding to step S13, otherwise proceeding to step S14;
S13, acquiring a feature vector of the user's iris through a Haar wavelet, determining the comprehensive similarity of the historical similar users by combining the local texture features, the inner and outer eye corners and the similarity with the historical similar users, and determining from the comprehensive similarity whether a matched fundus image exists; if yes, determining the motor control movement amount based on the motor movement amount and position data corresponding to the matched fundus image, and if not, proceeding to step S14;
S14, determining an initial motor movement amount according to the position data of the pupil center, and determining the motor control movement amount according to the position data of the pupil center after the initial motor movement and the sharpness of the fundus image.
2. A fundus camera control method according to claim 1, wherein the position data of the pupil center is the coordinates of the pupil center, determined according to the recognition angle of the pupil image in the fundus image.
3. A fundus camera control method according to claim 1, wherein the determination of the historical similar fundus images comprises the specific steps of:
determining the deviation amount between the coding sequence of the fundus image and the coding sequence of a historical fundus image through the Hamming distance function, determining the similarity between the fundus image and the historical fundus image based on the deviation amount, and determining the historical similar fundus images according to the similarity.
4. A fundus camera control method according to claim 1, wherein determining whether a historical similar user exists comprises:
s21, acquiring the similarity of the historical similar fundus images, determining whether a historical similar user exists or not according to the similarity of the historical similar fundus images, if so, determining that the historical similar user exists, and entering a step S23, otherwise, entering a step S22;
s22, acquiring the number of the historical similar fundus images, determining whether the number of the historical similar fundus images meets the requirement, if so, entering the next step, and if not, determining that no historical similar user exists;
s23, determining a similarity threshold value through the number of the historical fundus images and the similarity of the historical fundus images, judging whether the number of the historical similar fundus images with the similarity larger than the similarity threshold value meets the requirement, and if yes, entering a step S24; if not, taking the history user corresponding to the similarity of the history similar fundus images being larger than the similarity threshold as the history similar user;
s24, determining the number of screening users according to the number of the historical fundus images, the similarity of the historical fundus images, the number of the historical similar fundus images and the similarity, and determining the historical similar users according to the number of the screening users and the similarity of the screening users.
5. A fundus camera control method according to claim 4, wherein the similarity threshold is related to the number of the historical fundus images and the similarity of the historical fundus images: the larger the number of the historical fundus images and the larger the average value of their similarity, the larger the similarity threshold.
6. A fundus camera control method according to claim 4, wherein the step of determining the number of the filtering users comprises:
acquiring the number of the historical fundus images, taking the historical similar fundus images with the similarity larger than the similarity threshold value as screening similar images, taking the ratio of the screening similar images to the number of the historical fundus images as screening image ratios, determining whether the number of the screening similar images meets the requirement or not according to the screening image ratios, if so, taking the number of the screening similar images as the number of the historical similar users, and if not, entering the next step;
acquiring the number of the historical similar images, taking the ratio of the number of the screening similar images to the number of the historical similar fundus images as a screening similar image ratio, determining whether the number of the screening similar images meets the requirement or not according to the screening similar image ratio, if so, taking the number of the screening similar images as the number of the historical similar users, and if not, entering the next step;
acquiring an average value of the similarity of the historical fundus images and an average value of the similarity of the historical similar fundus images, taking a difference value between the average value of the similarity of the screening images and the average value of the similarity of the historical fundus images as a historical image similarity deviation amount, taking a difference value between the average value of the similarity of the screening images and the average value of the similarity of the historical similar fundus images as an approximate image similarity deviation amount, determining whether the number of the screening similar images meets the requirement according to the historical image similarity deviation amount and the approximate image similarity deviation amount, if yes, taking the number of the screening similar images as the number of the historical similar users, and if no, entering the next step;
and determining the number of the screening users through the screening image ratio, the screening approximate image ratio, the approximate image similarity deviation amount and the historical image similarity deviation amount.
7. A fundus camera control method according to claim 6, wherein determining whether the number of the screening similar images satisfies a requirement by the screening image ratio comprises:
and when the screening image ratio is smaller than a preset ratio, determining that the quantity of the screening similar images meets the requirement.
8. The fundus camera control method according to claim 1, wherein determining the motor control movement amount based on the position data of the pupil center after the initial motor movement and the sharpness of the fundus image specifically comprises:
judging whether the position data of the pupil center after the initial motor movement deviates by less than a preset deviation; if yes, proceeding to the next step, and if not, proceeding to the next step once the position data of the pupil center deviates by less than the preset deviation;
acquiring the sharpness of the fundus image once the position data of the pupil center is within the preset deviation, wherein the sharpness of the fundus image is determined according to the image noise of the fundus image, and determining the motor movement amount according to the sharpness of the fundus image until the sharpness of the fundus image meets the requirement.
9. A fundus camera employing a fundus camera control method according to any one of claims 1 to 8, characterized by comprising:
a position data determination module; a history similar user determination module; a matching fundus image determination module; a motor control movement amount determination module;
the position data determination module is responsible for acquiring the fundus image of a user through the camera of the fundus camera and determining the position data of the pupil center according to the fundus image of the user;
the history similar user determination module is responsible for determining a coding sequence through the local texture features of the fundus image, determining the similarity with the historical fundus images and the historical similar fundus images through a Hamming distance function according to the coding sequence, and determining whether a historical similar user exists according to the number and similarity of the historical fundus images and the similarity of the historical similar fundus images;
the matching fundus image determination module is responsible for acquiring the feature vector of the user's iris through a Haar wavelet, determining the comprehensive similarity of the historical similar users by combining the local texture features, the inner and outer eye corners and the similarity with the historical similar users, and determining from the comprehensive similarity whether a matched fundus image exists;
the motor control movement amount determination module is responsible for determining the initial motor movement amount according to the position data of the pupil center, and determining the motor control movement amount according to the position data of the pupil center after the initial motor movement and the sharpness of the fundus image.
10. A computer storage medium having stored thereon a computer program which, when executed in a computer, causes the computer to execute a fundus camera control method according to any one of claims 1 to 8.
CN202310687970.2A 2023-06-12 2023-06-12 Fundus camera control method, fundus camera, and storage medium Active CN116421140B (en)

Priority Applications (1)

Application Number Priority Date Filing Date Title
CN202310687970.2A CN116421140B (en) 2023-06-12 2023-06-12 Fundus camera control method, fundus camera, and storage medium

Applications Claiming Priority (1)

Application Number Priority Date Filing Date Title
CN202310687970.2A CN116421140B (en) 2023-06-12 2023-06-12 Fundus camera control method, fundus camera, and storage medium

Publications (2)

Publication Number Publication Date
CN116421140A CN116421140A (en) 2023-07-14
CN116421140B true CN116421140B (en) 2023-09-05

Family

ID=87091077

Family Applications (1)

Application Number Title Priority Date Filing Date
CN202310687970.2A Active CN116421140B (en) 2023-06-12 2023-06-12 Fundus camera control method, fundus camera, and storage medium

Country Status (1)

Country Link
CN (1) CN116421140B (en)

Citations (13)

* Cited by examiner, † Cited by third party
Publication number Priority date Publication date Assignee Title
JPS5471896A (en) * 1977-11-17 1979-06-08 Minolta Camera Kk Eyeeground camera easily focused
CA2145659A1 (en) * 1991-07-15 1994-04-28 John G. Daugman Biometric personal identification system based on iris analysis
WO2003105086A1 (en) * 2002-06-07 2003-12-18 Carl Zeiss Meditec Ag Method and arrangement for evaluating images taken with a fundus camera
JP2008210328A (en) * 2007-02-28 2008-09-11 Gifu Univ Apparatus and program for authentication of fundus image
CN101732031A (en) * 2008-11-25 2010-06-16 中国大恒(集团)有限公司北京图像视觉技术分公司 Method for processing fundus images
CN110276333A (en) * 2019-06-28 2019-09-24 上海鹰瞳医疗科技有限公司 Eyeground identification model training method, eyeground personal identification method and equipment
CN110269588A (en) * 2018-03-16 2019-09-24 株式会社拓普康 Ophthalmoligic instrument and its control method
CN110458217A (en) * 2019-07-31 2019-11-15 腾讯医疗健康(深圳)有限公司 Image-recognizing method and device, eye fundus image recognition methods and electronic equipment
CN110516100A (en) * 2019-08-29 2019-11-29 武汉纺织大学 A kind of calculation method of image similarity, system, storage medium and electronic equipment
CN112075921A (en) * 2020-10-14 2020-12-15 上海鹰瞳医疗科技有限公司 Fundus camera and focal length adjusting method thereof
CN114098632A (en) * 2022-01-27 2022-03-01 北京鹰瞳科技发展股份有限公司 Method for controlling a motor in a fundus camera and related product
CN116138729A (en) * 2022-12-21 2023-05-23 宁波明星科技发展有限公司 Parameter determining method and system based on fundus camera and intelligent terminal
CN116228668A (en) * 2022-12-30 2023-06-06 华中科技大学同济医学院附属同济医院 Fundus image identification method and related equipment

Family Cites Families (1)

* Cited by examiner, † Cited by third party
Publication number Priority date Publication date Assignee Title
JP7080075B2 (en) * 2018-03-16 2022-06-03 株式会社トプコン Ophthalmic device and its control method


Also Published As

Publication number Publication date
CN116421140A (en) 2023-07-14

Similar Documents

Publication Publication Date Title
US11775056B2 (en) System and method using machine learning for iris tracking, measurement, and simulation
US11849998B2 (en) Method for pupil detection for cognitive monitoring, analysis, and biofeedback-based treatment and training
US8942436B2 (en) Image processing device, imaging device, image processing method
US8184870B2 (en) Apparatus, method, and program for discriminating subjects
US7599519B2 (en) Method and apparatus for detecting structural elements of subjects
JP4861806B2 (en) Red-eye detection and correction
US9098760B2 (en) Face recognizing apparatus and face recognizing method
US20070189584A1 (en) Specific expression face detection method, and imaging control method, apparatus and program
US20060204052A1 (en) Method, apparatus, and program for detecting red eye
CN108985210A (en) A kind of Eye-controlling focus method and system based on human eye geometrical characteristic
CN106056064A (en) Face recognition method and face recognition device
CN110634116B (en) Facial image scoring method and camera
CN105706108A (en) Frequency spectrum resource scheduling device, method and system
US11188771B2 (en) Living-body detection method and apparatus for face, and computer readable medium
CN109635761B (en) Iris recognition image determining method and device, terminal equipment and storage medium
JP4901229B2 (en) Red-eye detection method, apparatus, and program
CN111839455A (en) Eye sign identification method and equipment for thyroid-associated ophthalmopathy
CN115423870A (en) Pupil center positioning method and device
CN114360039A (en) Intelligent eyelid detection method and system
KR20220107022A (en) Fundus Image Identification Method, Apparatus and Apparatus
CN116421140B (en) Fundus camera control method, fundus camera, and storage medium
CN105279764B (en) Eye image processing apparatus and method
JP5653003B2 (en) Object identification device and object identification method
WO2021229663A1 (en) Learning device, learning method, and recording medium
WO2021229659A1 (en) Determination device, determination method, and recording medium

Legal Events

Date Code Title Description
PB01 Publication
SE01 Entry into force of request for substantive examination
GR01 Patent grant