CN114937307B - Method for myopia prediction and related products - Google Patents

Method for myopia prediction and related products

Info

Publication number
CN114937307B
Authority
CN
China
Prior art keywords
myopia
fundus
prediction
user
machine learning
Prior art date
Legal status
Active
Application number
CN202210847259.4A
Other languages
Chinese (zh)
Other versions
CN114937307A
Inventor
宋凯敏
贺婉佶
王界闻
张弘
史晓宇
Current Assignee
Beijing Airdoc Technology Co Ltd
Original Assignee
Beijing Airdoc Technology Co Ltd
Priority date
Filing date
Publication date
Application filed by Beijing Airdoc Technology Co Ltd filed Critical Beijing Airdoc Technology Co Ltd
Priority to CN202210847259.4A priority Critical patent/CN114937307B/en
Publication of CN114937307A publication Critical patent/CN114937307A/en
Application granted granted Critical
Publication of CN114937307B publication Critical patent/CN114937307B/en
Active legal-status Critical Current
Anticipated expiration legal-status Critical

Classifications

    • G - PHYSICS
    • G06 - COMPUTING; CALCULATING OR COUNTING
    • G06V - IMAGE OR VIDEO RECOGNITION OR UNDERSTANDING
    • G06V40/00 - Recognition of biometric, human-related or animal-related patterns in image or video data
    • G06V40/10 - Human or animal bodies, e.g. vehicle occupants or pedestrians; Body parts, e.g. hands
    • G06V40/18 - Eye characteristics, e.g. of the iris
    • G - PHYSICS
    • G06 - COMPUTING; CALCULATING OR COUNTING
    • G06N - COMPUTING ARRANGEMENTS BASED ON SPECIFIC COMPUTATIONAL MODELS
    • G06N20/00 - Machine learning
    • G - PHYSICS
    • G06 - COMPUTING; CALCULATING OR COUNTING
    • G06N - COMPUTING ARRANGEMENTS BASED ON SPECIFIC COMPUTATIONAL MODELS
    • G06N3/00 - Computing arrangements based on biological models
    • G06N3/02 - Neural networks
    • G06N3/08 - Learning methods

Landscapes

  • Engineering & Computer Science (AREA)
  • Theoretical Computer Science (AREA)
  • Physics & Mathematics (AREA)
  • Software Systems (AREA)
  • General Physics & Mathematics (AREA)
  • Health & Medical Sciences (AREA)
  • Artificial Intelligence (AREA)
  • General Engineering & Computer Science (AREA)
  • Mathematical Physics (AREA)
  • Data Mining & Analysis (AREA)
  • Evolutionary Computation (AREA)
  • General Health & Medical Sciences (AREA)
  • Computing Systems (AREA)
  • Biophysics (AREA)
  • Molecular Biology (AREA)
  • Computational Linguistics (AREA)
  • Biomedical Technology (AREA)
  • Life Sciences & Earth Sciences (AREA)
  • Computer Vision & Pattern Recognition (AREA)
  • Medical Informatics (AREA)
  • Ophthalmology & Optometry (AREA)
  • Human Computer Interaction (AREA)
  • Multimedia (AREA)
  • Eye Examination Apparatus (AREA)
  • Management, Administration, Business Operations System, And Electronic Commerce (AREA)

Abstract

The invention discloses a method for myopia prediction and related products. The method comprises: generating, with a machine learning model, an average myopia prediction result for a user whose myopia is to be predicted; determining, based on a fundus picture of the user, an eyeball quality index of the user related to myopia; and generating an individual myopia prediction result associated with the user based on the average myopia prediction result and the eyeball quality index. With the prediction scheme of the invention, individual predictions for myopes can be provided, giving a sound basis for subsequent human intervention.

Description

Method for myopia prediction and related products
Technical Field
The present invention relates generally to the field of image analysis. More particularly, the present invention relates to a method, apparatus and computer-readable storage medium for myopia prediction.
Background
The myopia problem is becoming more severe due to the overuse of devices with electronic display screens. To understand and curb its further development, retinal fundus pictures can now be examined in addition to conventional eye-chart testing and axial-length measurement. For example, myopia-related measurements such as the axial length, choroidal thickness and other eye dimensions all have corresponding image features in the fundus picture. In addition, in the early stage of myopia, slight deformation of the eyeball may stretch the region around the optic disc, forming fundus arc-shaped plaques, and different characteristic appearances arise depending on the stretched region.
For a teenager, the more prominent myopia-related features such as axial length and choroidal thickness appear in fundus photographs, the faster the teenager's myopia will progress in the future. Accordingly, accurately monitoring fundus image features at different ages and under different myopia conditions in the early stage of myopia, so as to predict the future trend of myopia development, has become an important link in the prevention and control of adolescent vision.
Disclosure of Invention
In view of the above technical problem, the present invention provides a solution for myopia prediction. With the prediction scheme of the invention, effective and accurate myopia prediction for myopes can be achieved, facilitating timely human intervention and preventing the degree of myopia from worsening.
To this end, in a first aspect, the invention provides a method for myopia prediction, comprising: generating an average myopia prediction result for a user whose myopia is to be predicted by using a machine learning model; determining an eyeball quality index of the user related to myopia based on a fundus picture of the user; and generating an individual myopia prediction result associated with the user based on the average myopia prediction result and the eyeball quality index.
In one embodiment, the method further comprises: forming non-fundus feature quantities to be input to the machine learning model by using the collected non-fundus features; and performing early training on the machine learning model by using the non-fundus feature quantities.
In one embodiment, the non-fundus features include one or more of age, gender, eye laterality and current diopter.
In one embodiment, determining the eyeball quality index of the user related to myopia based on the fundus picture of the user comprises: performing fundus feature extraction on the fundus picture to obtain fundus feature quantities related to myopia; and generating the eyeball quality index related to myopia from the fundus feature quantities and the non-fundus feature quantities.
In one embodiment, performing the feature extraction operation on the fundus picture comprises: performing a segmentation operation on the fundus picture using at least one image segmentation model related to fundus features, so as to obtain the fundus feature quantities related to myopia.
In one embodiment, the fundus features comprise fundus leopard streaks and/or fundus arc-shaped plaques.
In one embodiment, generating the eyeball quality index related to myopia from the fundus feature quantities and non-fundus feature quantities comprises: inputting the fundus feature quantities and non-fundus feature quantities into a machine learning model to obtain an average diopter of the user; and calculating the eyeball quality index from the average diopter.
In one embodiment, the user is an adolescent aged between 6 and 18 years.
In a second aspect, the invention provides an apparatus for myopia prediction, comprising: a processor; and a memory having stored thereon computer program code for myopia prediction, which when executed by the processor, implements the method according to the first aspect and its various embodiments.
In a third aspect, the invention provides a computer readable storage medium having stored thereon computer program code for myopia prediction, which when executed by a processor, implements the method according to the first aspect and its various embodiments.
With the above myopia prediction scheme, the development of a myope's myopia can be predicted accurately and effectively. In particular, the invention innovatively combines an average myopia prediction generated for the user (i.e., the myope) by a machine learning model with an eyeball quality index obtained by analyzing the user's fundus photograph, so as to generate an individual myopia prediction associated with the user. By performing the average myopia prediction with machine learning, the invention obtains an average myopia prediction for the user that follows the distribution of a large-scale population. In addition, by analyzing the fundus picture, the scheme introduces eyeball quality into the myopia prediction. On this basis, the myopia prediction approach can provide an individual prediction for the myope, giving a sound basis for subsequent human intervention. Further, the accurate myopia prediction of the invention also makes timely and effective myopia prevention and control possible. When the user is an adolescent (e.g., a myope between 6 and 18 years of age), the scheme also provides effective myopia prediction, prevention and control for users of that age group.
Drawings
The above and other objects, features and advantages of exemplary embodiments of the present disclosure will become readily apparent from the following detailed description read in conjunction with the accompanying drawings. Several embodiments of the present disclosure are illustrated by way of example, and not by way of limitation, in the figures of the accompanying drawings, in which like reference numerals refer to similar or corresponding parts, and in which:
FIG. 1 is a simplified flow diagram illustrating a method for myopia prediction according to an embodiment of the present invention;
FIG. 2 is a detailed flow diagram illustrating a method for myopia prediction according to one embodiment of the present invention;
FIG. 3 is a detailed flow chart illustrating a method for myopia prediction according to another embodiment of the present invention;
FIG. 4 is a graph showing the average myopia prediction results according to an embodiment of the present invention;
FIG. 5 is a graph showing an eyeball quality index distribution according to an embodiment of the present invention;
FIG. 6 is a graph illustrating the results of a myopia prediction according to one embodiment of the present invention;
FIG. 7 is a graph illustrating the results of a myopia prediction according to another embodiment of the present invention;
FIG. 8 is a functional block diagram illustrating an apparatus for myopia prediction according to an embodiment of the present invention; and
FIG. 9 is a block diagram illustrating a system for myopia prediction according to an embodiment of the present invention.
Detailed Description
The technical solutions in the embodiments of the present disclosure will be described clearly and completely below with reference to the accompanying drawings. It is apparent that the described embodiments are some, but not all, embodiments of the present disclosure. All other embodiments obtained by a person skilled in the art from the embodiments disclosed herein without creative effort shall fall within the protection scope of the present disclosure.
As mentioned above, in view of the urgent need for rapid and effective myopia prevention and control among today's large myopic population (especially teenagers between 6 and 18 years of age), the present invention proposes using a machine learning model in combination with analysis of fundus pictures to make an effective myopia prediction, thereby providing a reference and basis for early warning of myopia development and for timely, effective human intervention. Specifically, the invention obtains an average myopia prediction for the prediction subject (the "user" in the context of the invention) by using a machine learning model, and further obtains an eyeball quality index by analyzing the subject's fundus picture. By taking both the average myopia prediction and the eyeball quality index into account, an individual myopia prediction associated with the subject is finally generated, thereby providing a personalized myopia prediction.
The following detailed description of embodiments of the invention refers to the accompanying drawings.
FIG. 1 is a simplified flow diagram illustrating a method 100 for myopia prediction according to an embodiment of the present invention. It is understood that the method 100 herein may be performed by the apparatus illustrated in fig. 8.
As shown in FIG. 1, at step S102, an average myopia prediction result for a user whose myopia is to be predicted is generated using a machine learning model. As known to those skilled in the art, the machine learning model here has been trained on a large amount of training data and can perform inference operations. To realize the average myopia prediction, the invention proposes to collect, over a large population, non-fundus feature data including age, gender, eye laterality (left eye/right eye) and current diopter, and to input these four types of non-fundus feature data (or non-fundus feature quantities) into the machine learning model for early training, so that the average myopia prediction result (such as the average refraction development trend) can finally be obtained. As an example, the machine learning model here may be a random forest regression model, a nonlinear tree-based ensemble learning method using decision trees plus bagging.
According to the scheme of the invention, the random forest regression model performs well on this kind of data set: because of its two sources of randomness (random selection of data subsets and random selection of features), it is not prone to overfitting and has good noise resistance. In addition, the random forest regression model can handle high-dimensional data without explicit feature selection, so it adapts well to the data set. It can also handle both discrete and continuous data, so the data set does not need to be normalized. When the random forest is built, an unbiased estimate of the generalization error is obtained. In terms of training, the random forest regression model trains quickly, yields a variable importance ranking, and can detect interactions among features during training. In terms of implementation, it is easy to parallelize, so the implementation is relatively simple.
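A minimal sketch of this step follows, assuming a scikit-learn random forest regressor, a toy table of records, and an illustrative "future_diopter" label; the patent does not prescribe a particular library, data layout or column names.

```python
# Illustrative only: train a random forest regression model on the four non-fundus
# features named in the text (age, gender, eye laterality, current diopter) to
# regress a future refraction value. Data, column names and the label are assumptions.
import pandas as pd
from sklearn.ensemble import RandomForestRegressor
from sklearn.model_selection import train_test_split

records = pd.DataFrame({
    "age":             [7, 9, 11, 13, 15, 17],
    "gender":          ["F", "M", "F", "M", "F", "M"],
    "eye":             ["left", "right", "left", "right", "left", "right"],
    "current_diopter": [-0.5, -1.0, -1.5, -2.25, -3.0, -3.5],
    "future_diopter":  [-1.0, -1.5, -2.25, -3.0, -3.75, -4.0],  # hypothetical follow-up label
})

# Simple feature engineering: one-hot encode the categorical non-fundus features.
X = pd.get_dummies(records[["age", "gender", "eye", "current_diopter"]],
                   columns=["gender", "eye"])
y = records["future_diopter"]
X_train, X_test, y_train, y_test = train_test_split(X, y, test_size=0.33, random_state=0)

# Bagged decision trees with random feature selection: no normalization is needed,
# and mixed discrete/continuous inputs are handled directly.
model = RandomForestRegressor(n_estimators=200, random_state=0)
model.fit(X_train, y_train)

print(model.predict(X_test))                             # population-level (average) prediction
print(dict(zip(X.columns, model.feature_importances_)))  # variable importance ranking
```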
Returning to the flow, at step S104, an eyeball quality index of the user related to myopia is determined based on the user's fundus picture. Here, the fundus picture may be obtained by photographing the user's fundus with a fundus camera (as shown at 603 of FIG. 6 and 703 of FIG. 7). Further, the determination involves a segmentation calculation of one or more fundus features in the fundus picture. As an example, the aforementioned fundus features may include fundus leopard streaks and/or fundus arc-shaped plaques.
At step S106, an individual myopia prediction associated with the user is generated from the average myopia prediction result and the eyeball quality index. In one implementation scenario, the individual myopia prediction here may be the user's final individual refractive prediction curve, such as curve 602 in FIG. 6 and curve 702 in FIG. 7, which gives a diopter prediction between the ages of 6 and 18. By observing the individual refractive prediction curve, the user's progression of myopia over the next several years can be clearly seen.
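Purely as an illustration of what such a prediction curve looks like, the short sketch below plots an assumed population-average refraction trend next to an assumed individual prediction over ages 6 to 18; the numbers and the matplotlib usage are not taken from the disclosure.

```python
# Illustrative plot of a diopter prediction between ages 6 and 18; all values are assumed.
import matplotlib.pyplot as plt

ages = list(range(6, 19))
average_curve = [-0.5 - 0.25 * (a - 6) for a in ages]  # assumed average refraction trend (D)
individual_curve = [d - 0.75 for d in average_curve]   # assumed individual offset (D)

plt.plot(ages, average_curve, label="average prediction")
plt.plot(ages, individual_curve, label="individual prediction")
plt.xlabel("age (years)")
plt.ylabel("predicted refraction (D)")
plt.legend()
plt.show()
```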
The method 100 for myopia prediction of the present invention is described above in conjunction with FIG. 1. By implementing the method, the invention can provide effective and personalized myopia prediction for users, particularly teenagers aged 6 to 18, thereby making timely human intervention possible.
Figure 2 is a detailed flow diagram illustrating a method 200 for myopia prediction, according to one embodiment of the present invention. It should be understood that the method 200 illustrated in fig. 2 may be considered as one possible implementation of the method 100 illustrated in fig. 1, and thus the description of the method 100 with respect to fig. 1 is equally applicable to the description of the method 200 with respect to fig. 2 below.
As shown in FIG. 2, at step S202, non-fundus feature quantities to be input to the machine learning model are formed using the collected non-fundus features. As previously mentioned, the non-fundus features here may include the user's age, gender, eye laterality (left/right eye) and current diopter, and the invention is not limited in this respect. Further, by applying feature-engineering transformations to these four types of non-fundus features, they can be converted into inputs suitable for a machine learning model. Next, at step S204, the machine learning model is pre-trained (including, for example, forward and backward training) using the non-fundus feature quantities, resulting in a machine learning model that can perform inference operations. As an example, the machine learning model here may be the random forest regression model described previously.
Next, at step S206, an average myopia prediction result for the user whose myopia is to be predicted is generated using the machine learning model described above. By way of example, the average myopia prediction is indicated at 402 in FIG. 4. At step S208, a fundus feature extraction operation is performed on the fundus picture to obtain fundus feature quantities related to myopia. At step S210, the eyeball quality index related to myopia is generated from the fundus feature quantities and the non-fundus feature quantities. In one embodiment, the fundus feature quantities may relate to fundus leopard streaks and fundus arc-shaped plaques. On this basis, in one implementation scenario, the aforementioned fundus feature quantities and non-fundus feature quantities can be quantized and converted, by feature engineering (e.g., one-hot encoding), into data inputs suitable for the machine learning model, so that the average diopter of the user's fundus can be regressed by the machine learning model. For example only, the eyeball quality may be determined as the difference between the regressed average diopter and the diopter actually measured for the user: the larger the difference, the poorer the eyeball quality; conversely, the smaller the difference, the better the eyeball quality.
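A minimal sketch of this eyeball-quality computation follows, under assumed feature names and an assumed sign convention; the "trained model" is taken to be the regression model of step S204, and the one-hot column names are illustrative only.

```python
# Illustrative only: regress the user's population-"average" diopter with the trained
# model and take its difference from the actually measured diopter as the index.
import pandas as pd

def eyeball_quality_index(model, feature_row: pd.DataFrame, measured_diopter: float) -> float:
    """Per the text, a larger difference reflects poorer eyeball quality.
    Using the absolute difference is an assumption about the sign convention."""
    average_diopter = float(model.predict(feature_row)[0])  # regressed average diopter
    return abs(average_diopter - measured_diopter)

# Hypothetical single-user input combining one-hot-encoded non-fundus features with
# quantified fundus features (e.g. leopard-streak and arc-plaque area ratios).
user_features = pd.DataFrame([{
    "age": 12, "current_diopter": -2.0,
    "gender_F": 1, "gender_M": 0, "eye_left": 1, "eye_right": 0,
    "leopard_streak_ratio": 0.18, "arc_plaque_ratio": 0.02,
}])
# quality = eyeball_quality_index(trained_model, user_features, measured_diopter=-2.75)
```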
Finally, at step S212, an individual myopia prediction associated with the user is generated based on the average myopia prediction result and the eyeball quality index. As an example, the individual myopia prediction may be calculated using equation (1) of the disclosure (given there only as an image and not reproduced here), in which one term is the average myopia prediction result from step S206, another is the eyeball quality, i.e., the eyeball quality index from step S210, and the normal-distribution parameter is obtained by selecting, from a database holding a large number of fundus pictures, the sub-population with the same eyeball quality and taking the probability density distribution of that population's actual diopters.
FIG. 3 is a detailed flow diagram illustrating a method 300 for myopia prediction according to another embodiment of the present invention. It should be understood that the method 300 illustrated in fig. 3 may be considered as one possible implementation of the method 100 illustrated in fig. 1, and thus the description of the method 100 with respect to fig. 1 is equally applicable to the description of the method 300 with respect to fig. 3 below.
As shown in FIG. 3, at step S302, one or more of age, gender, eye laterality and current diopter are acquired as non-fundus features. As previously described, the acquisition may cover a large myopic population, so as to form training data of sufficient magnitude for training the machine learning model. Next, at step S304, the non-fundus features are input to the machine learning model for training. As mentioned above, the machine learning model may be, for example, a random forest regression model, and the invention is not limited to any specific machine learning model. When training is complete, at step S306, an average myopia prediction result for the user whose myopia is to be predicted is generated using the aforementioned machine learning model; the result may have the form of a prediction curve as shown at 402 in FIG. 4, for example.
At step S308, a segmentation operation is performed on the fundus picture using at least one image segmentation model related to fundus features to obtain fundus feature quantities related to myopia. Depending on the implementation scenario, there may be multiple image segmentation models, such as a leopard-streak segmentation model for fundus leopard streaks and an arc-plaque segmentation model for fundus arc-shaped plaques. Taking the leopard-streak segmentation model as an example, in an exemplary operation it performs pixel-level segmentation on the fundus picture, producing leopard-streak segmentation information that identifies each pixel as belonging to a leopard-streak or non-leopard-streak region. Then, several regions of interest in which leopard streaks affect vision to different degrees can be delineated on the fundus picture, producing region segmentation information that identifies the region of interest to which each pixel belongs. Thereafter, the fundus leopard streaks can be quantitatively analyzed from the leopard-streak segmentation information and the region segmentation information, yielding a quantitative fundus index for the leopard streaks.
Similarly, a segmentation operation may also be performed for the fundus arc-shaped plaques in the fundus photograph, yielding a quantitative fundus index for the arc-shaped plaques. The quantitative index for the leopard streaks and the quantitative index for the arc-shaped plaques may then be combined to obtain the fundus feature quantities related to myopia referred to in the present invention. It is understood that leopard-streak segmentation and arc-plaque segmentation are only exemplary and not restrictive; following the teachings of the invention, those skilled in the art can also segment and quantitatively analyze other fundus features reflecting myopia to obtain the fundus feature quantities referred to herein.
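By way of illustration only, the following sketch turns a leopard-streak segmentation mask and region-of-interest masks into per-region quantitative indices using an area-fraction metric; the mask names and the metric are assumptions, since the disclosure does not fix the exact quantification formula.

```python
# Illustrative quantification: streak area fraction inside each region of interest.
import numpy as np

def streak_index_per_roi(streak_mask: np.ndarray, roi_masks: dict) -> dict:
    """streak_mask: HxW bool array (True = leopard-streak pixel).
    roi_masks: mapping roi_name -> HxW bool array of the region of interest."""
    index = {}
    for name, roi in roi_masks.items():
        roi_area = roi.sum()
        index[name] = float((streak_mask & roi).sum()) / roi_area if roi_area else 0.0
    return index

# Toy masks; in practice these would come from the pixel-level segmentation models
# applied to the fundus photograph.
h, w = 256, 256
streak = np.zeros((h, w), dtype=bool); streak[100:140, 100:200] = True
rois = {"peripapillary": np.zeros((h, w), dtype=bool), "macular": np.zeros((h, w), dtype=bool)}
rois["peripapillary"][80:160, 80:220] = True
rois["macular"][150:220, 40:120] = True
print(streak_index_per_roi(streak, rois))
```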
Next, at step S310, the fundus feature quantities and the non-fundus feature quantities are input into the machine learning model to obtain the user's average diopter. Thereafter, at step S312, the eyeball quality index is calculated from the average diopter. As mentioned above, the average diopter can be regarded as one reference quantity for the eyeball quality of the invention; other reference quantities can be used by those skilled in the art based on the principles and teachings of the invention, so the invention is not limited in this respect. As an example, the eyeball quality index here may be obtained by comparing the average diopter obtained from the machine learning model with the diopter actually measured for the user, and taking the difference between the two as the final eyeball quality index.
Finally, at step S314, an individual myopia prediction associated with the user is generated based on the average myopia prediction result and the eyeball quality index. As an example, the calculation may be performed using equation (1) described previously to obtain a graph of the individual myopia prediction, such as shown at 602 in FIG. 6 and at 702 in FIG. 7. Based on such prediction graphs, parents or medical workers can learn the adolescent's myopia development trend in advance and propose, in time, a plan to effectively control or inhibit its worsening.
FIG. 4 is a graph illustrating average myopia prediction results according to an embodiment of the present invention. As shown in the figure, the abscissa indicates age and the ordinate indicates predicted refractive power. For ease of reference and comparison, a conventional average myopia prediction curve 401 obtained without a machine learning model and the average myopia prediction curve 402 obtained with the machine learning model of the invention are shown. For prediction accuracy, the invention also proposes removing from the training data the non-fundus feature records (e.g., age, gender, eye laterality and current diopter) of users whose myopic refraction became shallower after atropine or other myopia interventions.
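As a small illustration of this data-cleaning step, the sketch below filters such records out of a hypothetical training table; the column names (used_intervention, diopter_before, diopter_after) are assumptions.

```python
# Illustrative cleaning: drop records where an intervention was used and refraction
# became shallower (less negative, i.e. less myopic) afterwards.
import pandas as pd

def drop_intervened_records(df: pd.DataFrame) -> pd.DataFrame:
    improved = df["diopter_after"] > df["diopter_before"]
    return df[~(df["used_intervention"] & improved)].reset_index(drop=True)

records = pd.DataFrame({
    "age": [10, 11, 12],
    "diopter_before": [-1.5, -2.0, -2.5],
    "diopter_after": [-2.0, -1.25, -3.0],       # the second record became shallower
    "used_intervention": [False, True, False],  # e.g. atropine
})
print(drop_intervened_records(records))  # the second record is removed
```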
FIG. 5 is a graph showing an eyeball quality index distribution according to an embodiment of the present invention. Specifically, FIG. 5 shows the distribution of the eyeball quality index (which may also be expressed as a diopter level) together with exemplary corresponding population proportions. The distribution approximately satisfies the normal distribution of equation (2) below and reflects the diopter levels of the population having the same fundus findings (such as leopard streaks/arc-shaped plaques) as the user to be predicted:

f(x) = (1 / (σ · √(2π))) · exp(−(x − μ)² / (2σ²))     (2)

where x represents the actual diopter of the user whose myopia is to be predicted, μ represents the mean diopter (e.g., regressed at step S310 of FIG. 3), and σ represents the distribution offset. The exemplary percentage values in the figure represent the proportion of the population falling within the corresponding interval: the population whose abscissa lies within [μ − σ, μ + σ] accounts for about 68.3%, and the population whose abscissa lies within [μ − 2σ, μ + 2σ] accounts for about 95.4%.
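The quoted proportions can be checked with a short sketch against a normal distribution with an assumed mean diopter μ and offset σ; scipy is used here for convenience and is not part of the disclosure.

```python
# Verify the ~68.3% and ~95.4% proportions for a normal distribution of actual diopters.
from scipy.stats import norm

mu, sigma = -2.0, 0.75          # assumed mean diopter and distribution offset
dist = norm(loc=mu, scale=sigma)

within_1_sigma = dist.cdf(mu + sigma) - dist.cdf(mu - sigma)          # ~0.683
within_2_sigma = dist.cdf(mu + 2 * sigma) - dist.cdf(mu - 2 * sigma)  # ~0.954
print(round(within_1_sigma, 3), round(within_2_sigma, 3))

# The same density can be used to read off how typical a user's actually measured
# diopter is among people with the same fundus findings:
print(dist.pdf(-2.75))
```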
FIG. 6 is a graph illustrating the results of a myopia prediction according to one embodiment of the present invention. As shown on the left side of FIG. 6, the abscissa indicates age and the ordinate indicates predicted refractive power. Using the operational procedure described above in connection with FIGS. 1-3, a final myopia prediction curve 602 corresponding to the fundus picture 603 on the right can be obtained. For reference, the left side also shows the myopia prediction curve 601 obtained without the solution of the present invention. Similarly, FIG. 7 is a graph illustrating the results of a myopia prediction according to another embodiment of the present invention. As in FIG. 6, curve 702 is the myopia prediction curve corresponding to the fundus picture 703, obtained using the scheme of the invention, while curve 701 is the myopia prediction curve obtained without it. To verify the effectiveness and advantages of the scheme, the applicant also conducted follow-up surveys on a large population; the follow-up data show that, once the fundus picture is taken into account, the individual myopia prediction curve is more accurate than the existing average curve and therefore of greater reference value. Given this higher accuracy, and the clear difference in the figures between the prior-art prediction curves 601 and 701 and the inventive prediction curves 602 and 702, the inventive scheme is more accurate and effective than the prior art.
FIG. 8 is a functional block diagram illustrating an apparatus 800 for myopia prediction according to an embodiment of the present invention. It will be appreciated that the apparatus 800 may perform the method steps described in connection with fig. 1-3.
As shown in FIG. 8, the apparatus 800 of the present invention may comprise a processor 803 and a memory 802, which may store program instructions for myopia prediction. Additionally or alternatively, the memory 802 may also store algorithm code for analyzing the fundus picture (e.g., program code of the aforementioned machine learning model and/or image segmentation models). Depending on the implementation scenario, the processor 803 may be a general-purpose processor or a special-purpose processor (e.g., an artificial intelligence processor). Further, when the program in the memory 802 is executed by the processor 803, the apparatus receives a fundus picture 804, e.g., through its interface, and performs the method steps described in connection with FIGS. 1-3, finally outputting a myopia prediction result (as shown at 702 of FIG. 7) for reference by, for example, an adolescent's parents and healthcare workers, so that timely human intervention can be provided to prevent further deterioration of the myopia.
FIG. 9 is a block diagram illustrating a system 900 for myopia prediction according to an embodiment of the present invention. The system 900 may include a prediction device 901 (which may be equivalent to the apparatus 800 shown in FIG. 8) according to an embodiment of the present invention, together with its peripherals and an external network, wherein the prediction device 901 is configured to perform operations such as analysis and calculation on the personal data of the user to be predicted, so as to implement the prediction scheme of the invention described in conjunction with FIGS. 1-7.
As shown in FIG. 9, the prediction device 901 may include a CPU 9011, which may be a general-purpose CPU, a dedicated CPU, or another execution unit for information processing and program execution. Further, the prediction device 901 may also include a mass storage 9012 and a read-only memory (ROM) 9013, where the mass storage 9012 may be configured to store various kinds of data, including user data, intermediate results and final myopia prediction results, as well as the various programs needed to run the machine learning model, and the ROM 9013 may be configured to store the data required for the power-on self-test of the prediction device 901, initialization of the functional modules of the system, drivers for the system's basic input/output, and booting of the operating system.
Further, the prediction device 901 may also include other hardware platforms or components, such as the illustrated tensor processing unit ("TPU") 9014, graphics processing unit ("GPU") 9015, field programmable gate array ("FPGA") 9016 and machine learning unit ("MLU") 9017. It is understood that although various hardware platforms or components are shown in the prediction device 901, this is merely exemplary and not limiting, and those skilled in the art can add or remove hardware according to actual needs. For example, the prediction device 901 may include only a general-purpose CPU, a well-known hardware platform, or another dedicated hardware platform as the prediction hardware platform of the invention.
The prediction device 901 of the invention also includes a communication interface 9018, through which it can be connected to a local area network/wireless local area network (LAN/WLAN) 905, and in turn to a local server 906 via the LAN/WLAN or to the Internet 907. Alternatively or additionally, the prediction device 901 may also connect directly to the Internet or a cellular network through the communication interface 9018 based on wireless communication technology, such as third-generation ("3G"), fourth-generation ("4G") or fifth-generation ("5G") wireless communication technology. In some application scenarios, the prediction device 901 may also access a server 908 of an external network and possibly a database 909 as needed, in order to obtain various known neural network models, data and modules, and may remotely store various measured or collected data (including training data for training machine learning models).
The peripherals of the prediction device 901 may include a display device 902, an input device 903 and a data transmission interface 904. In one embodiment, the display device 902 may, for example, comprise one or more loudspeakers and/or one or more visual displays configured for voice prompting and/or visual display of the computation process or the final prediction result of the prediction device. The input device 903 may include, for example, a keyboard, a mouse, a microphone, a fundus camera, or other input buttons or controls configured to receive image data (such as the fundus picture of the invention) or user instructions. The data transmission interface 904 may include, for example, a serial interface, a parallel interface, a universal serial bus ("USB") interface, a small computer system interface ("SCSI"), serial ATA, FireWire, PCI Express or a high-definition multimedia interface ("HDMI"), configured for data transfer and interaction with other devices or systems. According to the invention, the data transmission interface 904 may also receive various data (e.g., the various feature quantities of the invention) and transmit various types of data and results to the prediction device 901.
The CPU 9011, mass storage 9012, read-only memory ROM 9013, TPU 9014, GPU 9015, FPGA 9016, MLU 9017 and communication interface 9018 of the prediction device 901 may be connected to one another via a bus 9019, through which data interaction with the peripherals can also be implemented. In one embodiment, the CPU 9011 may control the other hardware components of the prediction device 901 and its peripherals via the bus 9019.
In operation, the CPU 9011 of the prediction device 901 may receive various input data via the input device 903 or the data transmission interface 904 and invoke the computer program instructions or code stored in the memory 9012 (e.g., code relating to the machine learning model or the image segmentation models) to process the received input data, so as to obtain the myopia prediction result for the user whose myopia is to be predicted. In addition, the prediction device 901 may upload the prediction result to a network, such as the remote database 909, through the communication interface 9018. In one application scenario, the database 909 may be a hospital database, so that a medical professional can directly retrieve the user's prediction result and provide medical intervention to avoid a deterioration of the user's myopia.
It should also be appreciated that any module, unit, component, server, computer, terminal or device executing instructions of the examples of the invention may include or otherwise have access to a computer-readable medium, such as a storage medium, computer storage medium, or data storage device (removable and/or non-removable) such as, for example, a magnetic disk, an optical disk or a magnetic tape. Computer storage media may include volatile and nonvolatile, removable and non-removable media implemented in any method or technology for storage of information, such as computer-readable instructions, data structures, program modules or other data.
Based on the above, the present invention also discloses a computer readable storage medium having stored therein computer program code for myopia prediction, which when loaded and executed by a processor, implements the operational flow of the method described in connection with fig. 1-3.
The computer-readable storage medium may be any suitable magnetic or magneto-optical storage medium, such as resistive random access memory (RRAM), dynamic random access memory (DRAM), static random access memory (SRAM), enhanced dynamic random access memory (EDRAM), high-bandwidth memory (HBM) or hybrid memory cube (HMC), or any other medium that can be used to store the desired information and that can be accessed by an application, a module, or both. Any such computer storage media may be part of, or accessible by or connectable to, the device. Any applications or modules described herein may be implemented using computer-readable/executable instructions that may be stored or otherwise maintained by such computer-readable media.
It should be understood that terms such as "first" or "second" that may appear in the claims, description and drawings of the present disclosure are used to distinguish different objects, not to describe a particular order. The terms "comprises" and "comprising", when used in the specification and claims of this disclosure, specify the presence of the stated features, integers, steps, operations, elements and/or components, but do not preclude the presence or addition of one or more other features, integers, steps, operations, elements, components and/or groups thereof.
It is also to be understood that the terminology used in the description of the invention herein is for the purpose of describing particular embodiments only, and is not intended to be limiting of the invention disclosed. As used in the specification and claims of this disclosure, the singular forms "a", "an" and "the" are intended to include the plural forms as well, unless the context clearly indicates otherwise. It should be further understood that the term "and/or" as used in this disclosure and in the claims refers to any and all possible combinations of one or more of the associated listed items and includes such combinations.
Although the embodiments of the present invention are described above, the descriptions are only examples for facilitating understanding of the present invention, and are not intended to limit the scope and application scenarios of the present invention. It will be understood by those skilled in the art that various changes in form and details may be made therein without departing from the spirit and scope of the invention as defined by the appended claims.

Claims (8)

1. A method for myopia prediction, comprising:
generating an average myopia prediction result of a user with myopia to be predicted by using a machine learning model;
determining an eyeball quality index of the user related to myopia based on the fundus picture of the user; and
generating an individual myopia prediction result associated with the user based on the average myopia prediction result and the eyeball quality index,
wherein determining the eyeball quality index of the user related to myopia based on the fundus picture of the user comprises:
performing fundus feature extraction on the fundus picture to obtain fundus feature quantities related to myopia;
inputting the fundus feature quantity and non-fundus feature quantity into a machine learning model to obtain an average diopter of the user; and
comparing the average diopter with the diopter actually measured for the user, and taking the difference between the two as the eyeball quality index.
2. The method of claim 1, further comprising:
forming non-fundus feature quantities to be input to the machine learning model by using the collected non-fundus features; and
performing early training on the machine learning model by using the non-fundus feature quantities.
3. The method of claim 2, wherein the non-fundus features comprise one or more of age, gender, eye laterality and current diopter.
4. The method of claim 1, wherein performing a feature extraction operation on the fundus photograph comprises:
performing a segmentation operation on the fundus image using at least one image segmentation model associated with fundus features to obtain fundus feature quantities associated with the myopia.
5. The method according to claim 4, wherein the fundus features comprise fundus leopard streaks and/or fundus arc-shaped plaques.
6. The method of any of claims 1-5, wherein the user comprises an adolescent aged between 6 and 18 years.
7. An apparatus for myopia prediction, comprising:
a processor; and
memory having stored thereon computer program code for myopia prediction, which when executed by the processor implements the method according to any of claims 1-6.
8. A computer readable storage medium having stored thereon computer program code for myopia prediction, which when executed by a processor implements a method according to any of claims 1-6.
CN202210847259.4A 2022-07-19 2022-07-19 Method for myopia prediction and related products Active CN114937307B (en)

Priority Applications (1)

Application Number Priority Date Filing Date Title
CN202210847259.4A CN114937307B (en) 2022-07-19 2022-07-19 Method for myopia prediction and related products

Applications Claiming Priority (1)

Application Number Priority Date Filing Date Title
CN202210847259.4A CN114937307B (en) 2022-07-19 2022-07-19 Method for myopia prediction and related products

Publications (2)

Publication Number Publication Date
CN114937307A CN114937307A (en) 2022-08-23
CN114937307B true CN114937307B (en) 2023-04-18

Family

ID=82868560

Family Applications (1)

Application Number Title Priority Date Filing Date
CN202210847259.4A Active CN114937307B (en) 2022-07-19 2022-07-19 Method for myopia prediction and related products

Country Status (1)

Country Link
CN (1) CN114937307B (en)

Family Cites Families (9)

* Cited by examiner, † Cited by third party
Publication number Priority date Publication date Assignee Title
CN107358036A (en) * 2017-06-30 2017-11-17 北京机器之声科技有限公司 A kind of child myopia Risk Forecast Method, apparatus and system
IL264530B1 (en) * 2019-01-29 2024-03-01 Eyeway Vision Ltd Eye tracking device and a method thereof
CN109754885A (en) * 2019-03-18 2019-05-14 杭州镜之镜科技有限公司 Near-sighted forecasting system and method
CN109998477B (en) * 2019-04-12 2021-12-10 复旦大学附属眼耳鼻喉科医院 Intelligent prognosis system for high-myopia cataract surgery
CN114078596A (en) * 2020-08-17 2022-02-22 上海市静安区市北医院 Children and teenagers myopia prediction system, method and robot
CN112365107B (en) * 2020-12-16 2024-01-23 北京易华录信息技术股份有限公司 Myopia risk assessment method, device and system based on artificial intelligence
CN113768460B (en) * 2021-09-10 2023-11-14 北京鹰瞳科技发展股份有限公司 Fundus image analysis system, fundus image analysis method and electronic equipment
CN113989217A (en) * 2021-10-26 2022-01-28 北京工业大学 Human eye diopter detection method based on deep learning
CN114300136A (en) * 2021-12-28 2022-04-08 复旦大学附属眼耳鼻喉科医院 Artificial intelligence assisted and optimized high myopia intraocular lens power calculator

Also Published As

Publication number Publication date
CN114937307A (en) 2022-08-23


Legal Events

Date Code Title Description
PB01 Publication
SE01 Entry into force of request for substantive examination
GR01 Patent grant