CN114557670A - Physiological age prediction method, apparatus, device and medium


Info

Publication number
CN114557670A
Authority: CN (China)
Prior art keywords: image, result, age, fundus, determining
Legal status: Pending (The legal status is an assumption and is not a legal conclusion. Google has not performed a legal analysis and makes no representation as to the accuracy of the status listed.)
Application number: CN202210163247.XA
Other languages: Chinese (zh)
Inventors: 苏昊, 吕彬, 吕传峰
Current Assignee: Ping An Technology Shenzhen Co Ltd
Original Assignee: Ping An Technology Shenzhen Co Ltd
Application filed by Ping An Technology Shenzhen Co Ltd
Priority: CN202210163247.XA
Publication: CN114557670A

Classifications

    • A HUMAN NECESSITIES
    • A61 MEDICAL OR VETERINARY SCIENCE; HYGIENE
    • A61B DIAGNOSIS; SURGERY; IDENTIFICATION
    • A61B 3/00 Apparatus for testing the eyes; Instruments for examining the eyes
    • A61B 3/10 Objective types, i.e. instruments for examining the eyes independent of the patients' perceptions or reactions
    • A61B 3/12 Objective types for looking at the eye fundus, e.g. ophthalmoscopes
    • A61B 3/14 Arrangements specially adapted for eye photography
    • G PHYSICS
    • G06 COMPUTING; CALCULATING OR COUNTING
    • G06T IMAGE DATA PROCESSING OR GENERATION, IN GENERAL
    • G06T 7/00 Image analysis
    • G06T 7/0002 Inspection of images, e.g. flaw detection
    • G06T 7/0012 Biomedical image inspection
    • G06T 2207/00 Indexing scheme for image analysis or image enhancement
    • G06T 2207/30 Subject of image; Context of image processing
    • G06T 2207/30004 Biomedical image processing
    • G06T 2207/30041 Eye; Retina; Ophthalmic
    • G06T 2207/30096 Tumor; Lesion

Abstract

The application provides a physiological age prediction method, apparatus, device and medium. The method acquires a fundus image, performs age-bracket prediction on the eye features in the fundus image, and determines an age offset from the lesion features in the fundus image; the physiological age of the user is then obtained from the age-bracket prediction result and the age offset. In addition, the resolution of the fundus image can be adjusted so that age-bracket prediction is performed on a low-resolution image and lesion detection on a high-resolution image. The physiological age can thus be predicted from a fundus image alone, reducing the prediction cost and shortening the prediction time.

Description

Physiological age prediction method, apparatus, device and medium
Technical Field
The present application relates to the field of digital medical technology, and in particular, to a method, an apparatus, a device, and a medium for predicting a physiological age.
Background
Everyone ages at a different rate, and physiological age reflects a person's degree of aging more accurately than chronological age. Many methods currently exist for assessing physiological age, such as measuring telomere length, evaluating the deoxyribonucleic acid (DNA) methylation clock, or predicting with magnetic resonance imaging (MRI), but all of these methods have high detection costs and long detection cycles.
Predicting physiological age is therefore difficult, costly and time-consuming, and these problems urgently need to be solved.
Disclosure of Invention
The application provides a physiological age prediction method, apparatus, device and medium. The method acquires a fundus image, performs age-bracket prediction on the eye features in the fundus image, and determines an age offset from the lesion features in the fundus image; the physiological age of the user is then obtained from the age-bracket prediction result and the age offset.
The above and other objects are achieved by the features of the independent claims. Further implementations are presented in the dependent claims, the description and the drawings.
In a first aspect, the present application provides a physiological age prediction method comprising: acquiring a fundus image of a user; determining a first result and a second result from the fundus image, the first result being determined from the probability that the fundus image belongs to each age group, the second result being determined from the number and types of lesions in the fundus image; and determining the physiological age of the user from the first result and the second result.
In a second aspect, the present application provides a physiological age prediction apparatus comprising: an acquisition unit for acquiring a fundus image of a user; and a determination unit for determining, from the fundus image, a first result determined from the probability that the fundus image belongs to each age group and a second result determined from the number and types of lesions in the fundus image, the determination unit being further configured to determine the physiological age of the user from the first result and the second result.
In a third aspect, the present application provides a computer device, comprising: a processor and a memory, the memory storing a computer program, the processor executing the computer program in the memory to perform the method as described in the first aspect.
In a fourth aspect, the present application provides a computer-readable storage medium storing a computer program, which when run on a computer causes the computer to perform the method as described in the first aspect.
In summary, the physiological age prediction method provided in the embodiments of the present application acquires a fundus image, performs age-bracket prediction on the eye features in the fundus image, and determines an age offset from the lesion features in the fundus image; the physiological age of the user is then obtained from the age-bracket prediction result and the age offset. In addition, the resolution of the fundus image can be adjusted so that age-bracket prediction is performed on a low-resolution image and lesion detection on a high-resolution image. The physiological age can thus be predicted from a fundus image alone, reducing the prediction cost and shortening the prediction time.
Drawings
In order to more clearly illustrate the embodiments of the present application or the technical solutions in the prior art, the drawings used in the description of the embodiments or the prior art will be briefly described below.
Fig. 1 is a schematic structural diagram of an AI system according to an embodiment of the present disclosure;
fig. 2 is a schematic flowchart of a method for predicting a physiological age according to an embodiment of the present disclosure;
fig. 3 is a schematic flowchart of a method for predicting a physiological age in an application scenario according to an embodiment of the present disclosure;
fig. 4 is a schematic structural diagram of a physiological age prediction device according to an embodiment of the present disclosure;
fig. 5 is a schematic structural diagram of an electronic device according to an embodiment of the present application.
Detailed Description
The terminology used in the following embodiments of the present application is for the purpose of describing particular embodiments only and is not intended to limit the present application. As used in the specification of the present application and the appended claims, the singular forms "a", "an" and "the" are intended to include the plural forms as well, unless the context clearly indicates otherwise. It should also be understood that the term "and/or" as used herein refers to and encompasses any and all possible combinations of one or more of the listed items.
The embodiments of the present application acquire and process the relevant data based on artificial intelligence technology. Artificial intelligence (AI) is the theory, method, technique and application system that uses a digital computer, or a machine controlled by a digital computer, to simulate, extend and expand human intelligence, perceive the environment, acquire knowledge, and use that knowledge to obtain the best results.
The artificial intelligence infrastructure generally includes technologies such as sensors, dedicated artificial intelligence chips, cloud computing, distributed storage, big data processing technologies, operation/interaction systems, mechatronics, and the like. The artificial intelligence software technology mainly comprises a computer vision technology, a robot technology, a biological recognition technology, a voice processing technology, a natural language processing technology, machine learning/deep learning and the like.
An AI model is a set of mathematical methods for implementing AI. A model to be trained is trained with a large amount of sample data to obtain a target model with predictive capability; the data to be predicted is then input into the target model to obtain a prediction result.
The structure of the AI system is explained below. As shown in fig. 1, fig. 1 is an architecture diagram of an AI system, the system 100 is a system architecture commonly used in the AI field, and the system 100 includes: a database 110, a training device 120, and an execution device 130.
The database 110 stores a sample set, where the samples may be graphics, images, speech, text, etc.; the database 110 also supplies samples to the training device 120 for model training. In a medical application scenario, the sample data may be medical images, and the objects contained in the sample data are lesions, i.e., parts of the body where pathological change has occurred. Medical images are images of internal tissues, e.g., fundus, stomach, abdomen, heart, knee or brain, obtained non-invasively for medical treatment or medical research, such as images generated by medical instruments using computed tomography (CT), magnetic resonance imaging (MRI), ultrasound (US), X-ray imaging, electroencephalography, or optical photography.
The training apparatus 120 trains the model to be trained using the samples. Specifically: one iteration of training is performed on the model with a batch of samples; the output of the model is compared with the labels of the samples, the model parameters are adjusted according to the comparison result, and the next iteration begins; training terminates once a preset termination condition is met, yielding the target model. The preset termination condition may be that a set number of training iterations is reached, or that the value of a loss function or objective function falls below a preset value, where the loss function and objective function measure the difference between the model's output and the sample label, i.e., between the predicted value and the target value.
The execution device 130 is used to implement various functions according to the target model trained by the training device 120. The method specifically comprises the following steps: the execution device 130 obtains data to be predicted from the user, and then inputs the user data into the target model to obtain a prediction result.
In summary, applications in the AI field depend on AI models; different functions, such as classification, recognition and detection, are implemented by AI models, and an AI model must be trained in advance on a sample set before being deployed into the execution device 130 for use.
Currently, the physiological age, which reflects a person's degree of aging more accurately than chronological age, can be predicted by measuring chromosome telomere length, evaluating the deoxyribonucleic acid (DNA) methylation clock, or using MRI and other techniques. However, these detection methods have high detection costs and long detection cycles and cannot be applied at scale.
To address the difficulty, high cost and long time required to predict physiological age, this scheme provides a physiological age prediction method 200, which can be applied to the AI system 100.
S210, the execution apparatus 130 acquires a fundus image of the user.
The fundus image of the user acquired by the execution device 130 may be an ordinary fundus color photograph or an ultra-wide-angle fundus color photograph. It is understood that an ultra-wide-angle camera has a larger imaging field angle, so a single ultra-wide-angle fundus image can capture more fundus information than an ordinary fundus color photograph. Therefore, if no ultra-wide-angle fundus color photograph is available, multiple ordinary fundus color photographs need to be acquired and stitched together to obtain the fundus image.
In some embodiments, the fundus image is a clear and complete fundus image of the user obtained through a cleaning and screening operation. For example, fundus images that are blurred, show a closed eye, or are incorrectly positioned are screened out; for fundus images that do not meet the requirements, a clear and complete fundus image must be acquired again.
In some embodiments, the fundus image is a standardized image. Specifically: a clearly imaged fundus image is selected from the database as the standard image, and the fundus image is standardized against this standard image; the specific process is given by the following formula (1):
sta = (x - μ) / σ (1)
where sta is the pixel-value matrix of the standardized fundus image, x is the pixel-value matrix of the original fundus image, μ is the mean of the standard image's pixel values, and σ is the standard deviation of the standard image's pixel values.
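As a minimal sketch of formula (1) (not part of the patent; the toy pixel arrays below are invented purely for illustration), the standardization can be applied per image with NumPy:

```python
import numpy as np

def standardize(fundus: np.ndarray, standard: np.ndarray) -> np.ndarray:
    """Formula (1): sta = (x - mu) / sigma, where mu and sigma are the
    mean and standard deviation of the reference 'standard' image."""
    mu = standard.mean()
    sigma = standard.std()
    return (fundus - mu) / sigma

# Toy 2x2 "images" purely for illustration.
standard = np.array([[100.0, 120.0], [140.0, 160.0]])
fundus = np.array([[110.0, 130.0], [150.0, 170.0]])
sta = standardize(fundus, standard)
```

Per the next paragraph, the same function could be applied separately to each color channel rather than to the whole image.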
In some embodiments, the standardization may be performed separately on the pixel values of each of the three RGB channels of the fundus image. Alternatively, since some fundus images are formed by red and green laser light, there is little useful data in the blue channel, and the standardization may be performed only on the pixel values of the R and G channels.
In some embodiments, the fundus image is an image that has been processed with Contrast Limited Adaptive Histogram Equalization (CLAHE). CLAHE changes image contrast by computing local histograms of the image and then redistributing brightness, so that more image detail can be obtained in subsequent feature extraction.
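CLAHE itself equalizes tile by tile with a clip limit on each local histogram (in practice via a library routine such as OpenCV's `createCLAHE`, an assumed dependency not named in the patent). The brightness-redistribution idea it builds on can be sketched as plain global histogram equalization:

```python
import numpy as np

def hist_equalize(img: np.ndarray) -> np.ndarray:
    """Global histogram equalization of an 8-bit single-channel image;
    assumes the image is not constant. CLAHE refines this by equalizing
    each local tile and clipping its histogram to limit contrast
    amplification."""
    hist = np.bincount(img.ravel(), minlength=256)
    cdf = hist.cumsum()
    cdf_min = cdf[cdf > 0][0]  # first non-zero CDF entry
    # Map each grey level through the normalized cumulative histogram.
    lut = np.round((cdf - cdf_min) / (cdf[-1] - cdf_min) * 255).astype(np.uint8)
    return lut[img]

# Toy 2x2 image: the grey levels are spread out to span [0, 255].
img = np.array([[50, 50], [100, 200]], dtype=np.uint8)
eq = hist_equalize(img)
```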
In some embodiments, the fundus image is a normalized image. Specifically: each pixel value in the fundus image is expressed as a value in the [0,1] interval according to the maximum and minimum pixel values in the image, yielding a normalized fundus image; the specific process is given by the following formula (2):
nor = (y - min) / (max - min) (2)
where nor is a pixel value of the normalized fundus image, y is a pixel value of the original fundus image, min is the minimum pixel value of the fundus image, and max is the maximum pixel value. It is to be understood that the normalization may be performed separately on the pixel values of each of the three RGB channels.
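Formula (2) is standard min-max rescaling; a sketch (the toy array is invented for illustration):

```python
import numpy as np

def min_max_normalize(img: np.ndarray) -> np.ndarray:
    """Formula (2): nor = (y - min) / (max - min), mapping pixel
    values into [0, 1]; assumes the image is not constant."""
    lo, hi = img.min(), img.max()
    return (img - lo) / (hi - lo)

img = np.array([[0.0, 64.0], [128.0, 255.0]])
nor = min_max_normalize(img)
```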
In some embodiments, the fundus image is standardized, CLAHE-processed and normalized, or processed with any two of the above in combination. The fundus images acquired by the training apparatus 120 from the database 110 may also be subjected to the same processing.
In some embodiments, the execution device 130 may also obtain user information, such as the user's gender, actual age, height and lifestyle habits, where the lifestyle habits include whether the user smokes, often stays up late, often drinks, etc.
In some embodiments, the user information is normalized data; for example, discrete data such as whether the user smokes, drinks or stays up late are encoded with "yes" as 1 and "no" as 0.
It will be appreciated that, to ensure prediction accuracy, in some embodiments it may also be necessary to ensure that the user's fundus image and user information were acquired within a preset length of time; for example, that neither was acquired more than one year ago.
S220, the execution apparatus 130 performs age bracket prediction from the fundus image, and obtains a first result.
The execution apparatus 130 performs age-bracket prediction on the fundus image to obtain a score for each age interval, then multiplies each interval's representative value by its score and sums the products to obtain a preliminary predicted age, i.e., the first result.
The method specifically comprises the following steps: the execution apparatus 130 extracts fundus features of the fundus image, such as a disc size, a ring size, a retinal arteriovenous diameter, and the like, first, then compresses the extracted feature vectors in the pooling layer to obtain feature vectors, then obtains a score of the fundus image for each age zone through the full-connected layer and softmax, and then multiplies the scores of the respective age zones by the representative values of the age zones and adds them to obtain a first result.
The score of each age interval may be the similarity between the extracted fundus features and the fundus features typical of that interval; that is, fundus images of each age interval are stored in the database, and the similarity between the user's fundus image and the fundus images of each interval is calculated. Alternatively, the score may be determined from the conditional probability of the observed fundus feature appearing in each age interval, as stored in the database. For example, if the user's retinal arteriovenous diameter is k, the database stores the probability of that diameter occurring in each age interval, and this probability is the fundus image's score for that interval.
The representative value of an age interval may be any value within the interval; for example, taking the interval minimum as the representative value, the first result can be expressed by the following formula (3):
age = Σ_{i=1}^{N} Min(range_i) × score_i (3)

where age is the first result, Min(range_i) denotes the minimum value of the i-th age interval, score_i is the score of the i-th age interval, N is the number of intervals, and i and N are positive integers.
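A sketch of formula (3); the four intervals and the softmax-style scores below are invented for illustration and are not values from the patent:

```python
# Illustrative age intervals [min, max) and per-interval scores
# (a softmax output, so the scores sum to 1).
intervals = [(1, 11), (11, 21), (21, 31), (31, 41)]
scores = [0.1, 0.2, 0.6, 0.1]

# Formula (3): first result = sum over i of Min(range_i) * score_i,
# using each interval's minimum as its representative value.
first_result = sum(lo * s for (lo, _hi), s in zip(intervals, scores))
```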
For example, ages from 1 to 111 years may be divided into 11 intervals of 10 years each, i.e., [1,11), [11,21), …, [101,111).
It should be understood that the prediction model performing the age-bracket prediction may be a classification model, such as a residual network (ResNet-50) or Inception v3; the type of the prediction model is not particularly limited in this application. During training, the training device 120 may use the mean squared error (MSE) as the loss, which is given by the following formula (4):
MSE_L = (1/M) Σ_{i=1}^{M} (y_i - z_i)² (4)

where MSE_L is the MSE loss value, M is the number of samples used in one training iteration, y_i is the label value, and z_i is the predicted value.
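Formula (4) written out directly (the label and prediction values are invented for illustration):

```python
def mse_loss(labels, preds):
    """Formula (4): mean squared error over the M samples of one batch."""
    m = len(labels)
    return sum((y - z) ** 2 for y, z in zip(labels, preds)) / m

# Two samples: squared errors are 4 and 9, so the mean is 6.5.
loss = mse_loss([30.0, 42.0], [28.0, 45.0])
```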
In some embodiments, before performing the age-bracket prediction, the execution apparatus 130 further adjusts the resolution of the fundus image down to a low resolution to obtain a first image, and performs the age-bracket prediction on the first image to obtain the first result. For example, the resolution of the fundus image may be adjusted to 256 × 256 to obtain the first image.
It should be appreciated that the first image may also be standardized, CLAHE-processed, normalized, or processed with any combination thereof. However, when the first image is standardized, the resolution of the standard image must first be reduced to the same low resolution as the first image to obtain a first standard image, which is then used to standardize the first image.
And S230, performing focus detection by the executing equipment 130 according to the fundus image to obtain a second result.
The second result indicates an age offset derived from the lesion state of the fundus image and is used to correct the first result, yielding a more accurate predicted age. Fundus lesions reflect the user's physical condition: the more severe the fundus lesions, the greater the physiological age corresponding to the fundus image.
The second result is calculated as follows: the execution device 130 obtains lesion features from the database, each lesion type carrying a different weight; a lesion detection model detects the lesions in the fundus image, yielding a similarity between the fundus image and each lesion's features; each lesion's similarity is then multiplied by its weight, and the products for all lesions are summed to obtain the second result.
For example, the lesion types in the database may include microaneurysms, hard exudates, vitreous oil, hemorrhage and geographic atrophy, with weights increasing in that order. That is, geographic atrophy carries a large weight, so if the fundus image yields a high probability (a high confidence score) that the user has geographic atrophy, the second result is large, indicating a higher physiological age.
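The weighted sum that produces the second result can be sketched as follows; the lesion names are from the example above, but the confidence values and the weight scale are hypothetical illustrations, not values from the patent:

```python
# Hypothetical detection confidences and severity weights; neither the
# numbers nor the weight scale are given in the patent.
confidences = {"microaneurysm": 0.8, "hard_exudate": 0.3, "geographic_atrophy": 0.1}
weights = {"microaneurysm": 1.0, "hard_exudate": 2.0, "geographic_atrophy": 8.0}

# Second result (age offset): sum over lesions of confidence * weight.
second_result = sum(c * weights[name] for name, c in confidences.items())
```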
It should be understood that the lesion detection model may be YOLO (You Only Look Once) or a region-based convolutional neural network (R-CNN); the type of the lesion detection model is not particularly limited in this application. Before lesion detection is performed, the lesion detection model must be trained; the loss function in training may be the Smooth L1 loss, which is given by the following formula (5):
SmoothL1(a) = 0.5a², if |a| < 1; |a| - 0.5, otherwise (5)

where a is the difference between the predicted value and the label value.
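Formula (5) as a one-line function; the two branches meet continuously at |a| = 1:

```python
def smooth_l1(a: float) -> float:
    """Formula (5): quadratic for |a| < 1, linear otherwise; both
    branches take the value 0.5 at |a| = 1."""
    return 0.5 * a * a if abs(a) < 1 else abs(a) - 0.5
```

The quadratic branch keeps gradients small near zero error, while the linear branch limits the influence of outliers, which is why this loss is common in detection-box regression.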
In some embodiments, before performing the lesion detection, the execution device 130 further adjusts the resolution of the fundus image up to a high resolution to obtain a second image, and performs the lesion detection on the second image to obtain the second result. The resolution of the second image is higher than that of the fundus image used for age-bracket prediction; when the age-bracket prediction uses the first image, the second image's resolution is higher than the first image's. For example, when the first image is 256 × 256, the resolution of the fundus image may be adjusted to 512 × 512 to obtain the second image.
It should be understood that the second image may also be standardized, CLAHE-processed, normalized, or processed with any combination thereof. However, when the second image is standardized, the resolution of the standard image must first be raised to the same high resolution as the second image to obtain a second standard image, which is then used to standardize the second image.
In some embodiments, the execution device 130 may first perform step S230 (lesion detection on the fundus image to obtain the second result) and then perform step S220 (age-bracket prediction on the fundus image to obtain the first result). Alternatively, steps S220 and S230 may be performed synchronously.
And S240, obtaining the physiological age according to the first result and the second result.
The execution device 130 adds the first result and the second result; the sum is the physiological age of the user, where the first result is the age predicted by the prediction model and the second result is the age offset from the lesion detection model. That is, the final physiological age combines the fundus features with the lesion condition.
In some embodiments, the physiological age may be obtained by adding the first result and the second result after being weighted respectively.
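Step S240 and its weighted variant can be sketched together (the weights default to 1, recovering the plain sum; all numeric values are illustrative, not from the patent):

```python
def physiological_age(first_result: float, second_result: float,
                      w1: float = 1.0, w2: float = 1.0) -> float:
    """Combine the age-bracket prediction (first result) with the
    lesion-based age offset (second result), optionally weighted."""
    return w1 * first_result + w2 * second_result

age = physiological_age(38.0, 2.2)                     # plain sum of S240
age_weighted = physiological_age(38.0, 2.2, 1.0, 0.5)  # weighted variant
```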
In summary, the physiological age prediction method 200 provided by the present application acquires a fundus image, performs age-bracket prediction on the eye features in the fundus image, and determines an age offset from the lesion features in the fundus image; the physiological age of the user is then obtained from the age-bracket prediction result and the age offset. In addition, the resolution of the fundus image can be adjusted so that age-bracket prediction is performed on a low-resolution image and lesion detection on a high-resolution image. The physiological age can thus be predicted from a fundus image alone, reducing the prediction cost and shortening the prediction time.
The physiological age prediction method 200 of the present application is exemplified below, and as shown in fig. 3, fig. 3 exemplarily shows a process of acquiring a fundus image of the user a and predicting the physiological age of the user a from the fundus image of the user a. The steps of the method are described in detail below with reference to fig. 3.
Step 1, acquiring a fundus image of a user.
After the fundus image 300 of user A is taken with a fundus imaging technique, the execution apparatus acquires this fundus image 300. It should be understood that the execution apparatus may also screen the fundus image to ensure that the fundus image 300 is clear and complete; if the fundus image is blurred, shows a closed eye, or is incorrectly positioned, a clear and complete fundus image must be acquired again.
After the execution apparatus 130 acquires the fundus image 300, the resolution of the fundus image 300 is adjusted to obtain a first image and a second image, where the resolution of the first image is smaller than the resolution of the second image.
The execution device 130 may also standardize the first image and the second image. When standardizing the first image, the resolution of the standard image must first be adjusted to the same low resolution as the first image to obtain a first standard image, which is then used to standardize the first image. When standardizing the second image, the resolution of the standard image must be adjusted to the same high resolution as the second image to obtain a second standard image, which is then used to standardize the second image.
The execution device 130 may also apply one or more of standardization, CLAHE processing and normalization to the first and second images. For the detailed process, refer to the related description of step S210, which is not repeated here.
And 2, carrying out age group prediction according to the first image to obtain a first result.
The execution apparatus 130 first extracts the fundus features of the first image; in this example, it extracts the diameter of the fundus artery, which for user A is d, where d is a positive number.
The execution device 130 then obtains from the database the probability that an artery diameter of d belongs to each age group: the probability that an artery diameter of d falls in the 1-10 age group is P_1, in the 11-20 age group is P_2, …, and in the 91-100 age group is P_10.
The execution apparatus 130 takes the maximum value of each age group as the representative value of the age group, multiplies the probability of each age group by the representative value of each age group, and then adds up to obtain the first result X. The detailed process can refer to the related description of step S220, which is not described herein again.
And 3, detecting the focus according to the second image to obtain a second result.
The execution device 130 obtains existing lesion samples from the database, each sample including a lesion image, the lesion type and the lesion's weight. Fig. 3 exemplarily shows the image of lesion A with its corresponding weight a, and the image of lesion B with its corresponding weight b.
The executing device 130 calculates the similarity between the second image and each lesion sample one by one, finally obtaining a similarity of K_a between user A's fundus image and lesion A, and a similarity of K_b with lesion B.
Finally, the performing device 130 multiplies the similarity of each lesion by the corresponding weight of the lesion, and then adds the results of all the lesions to obtain a second result Y. For a specific process, reference may be made to the related description of step S230, which is not described herein again.
It should be understood that the performing device 130 may perform step 3 to perform lesion detection according to the second image to obtain the second result, and then perform step 2 to perform age bracket prediction according to the first image to obtain the first result. Alternatively, step 2 and step 3 may be performed simultaneously.
Step 4: obtain the physiological age according to the first result and the second result.
The execution device 130 adds the first result and the second result, and the sum is the physiological age of user A. It is to be understood that the physiological age thus combines an age-group prediction based on the fundus features and an age offset based on the lesion features in the fundus image. Specifically, reference may be made to the related description of the foregoing step S240, which is not repeated here.
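The combination step, including the weighted-sum variant the application also describes for the determining unit, can be sketched as follows; the values of X, Y, and the weights are hypothetical placeholders.

```python
# Sketch of step 4: combine the age-group prediction X with the
# lesion-based age offset Y to obtain the physiological age.

X = 59.5  # hypothetical first result (age-group prediction)
Y = 3.4   # hypothetical second result (lesion-based age offset)

# Simple sum, as in this example.
physiological_age = X + Y

# Weighted-sum variant: each result is scaled before adding.
w1, w2 = 1.0, 0.5  # hypothetical weights
weighted_age = w1 * X + w2 * Y
```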
In summary, the physiological age prediction method provided by the present application acquires a fundus image, performs age-group prediction on the fundus features in the fundus image, and then determines an age offset according to the lesion features in the fundus image; the physiological age of the user is then obtained from the age-group prediction result and the age offset.
In order to solve the problems that predicting the physiological age is difficult, costly, and time-consuming, the present application provides a physiological age prediction apparatus. As shown in fig. 4, the physiological age prediction apparatus 400 includes: an acquisition unit 410 and a determination unit 420.
The acquisition unit 410 is used to acquire a fundus image of the user.
The determination unit 420 is configured to determine, based on the fundus image, a first result determined based on a probability that the fundus image belongs to each age group and a second result determined based on the number of lesions and a type of the lesion in the fundus image; the determining unit 420 is further configured to determine the physiological age of the user according to the first result and the second result.
In some embodiments, the determination unit 420 is further configured to determine the probability that the fundus image belongs to each age group based on a similarity between the fundus features in the fundus image and the fundus features of each age group, the fundus features including one or more of optic disc size, arteriovenous vessel diameter, and blood color.
In some embodiments, the obtaining unit 410 is further configured to obtain, according to the fundus features, the probability that the fundus image belongs to each age group from a database in which the probability of each fundus feature appearing in each age group is stored.
In some embodiments, the determining unit 420 is further configured to obtain the first result by multiplying the probability that the fundus image belongs to each age group by a representative value corresponding to that age group and summing the products, the representative value of an age group being a numerical value within the age group's range, such as the maximum, minimum, mean, or median value of the range.
In some embodiments, the determination unit 420 is further configured to adjust the resolution of the fundus image to obtain a first image and a second image, wherein the resolution of the first image is less than the resolution of the second image; determine the first result from the first image; and determine the second result from the second image.
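Producing two working copies at different resolutions can be sketched as follows, here with plain nested lists and nearest-neighbour resampling; a real pipeline would use an imaging library, and all sizes are hypothetical.

```python
# Sketch of the resolution-adjustment step: derive a low-resolution first
# image (for age-group prediction) and a high-resolution second image
# (for lesion detection) from one fundus image.

def resize(img, out_h, out_w):
    """Nearest-neighbour resample of a 2D list of pixel values."""
    in_h, in_w = len(img), len(img[0])
    return [[img[r * in_h // out_h][c * in_w // out_w] for c in range(out_w)]
            for r in range(out_h)]

# Toy 4x4 "fundus image" with distinct pixel values.
fundus = [[r * 4 + c for c in range(4)] for r in range(4)]

first_image = resize(fundus, 2, 2)   # low resolution: coarse fundus features
second_image = resize(fundus, 4, 4)  # high resolution: fine lesion detail
```

Running age-group prediction on the small image and lesion detection on the large one matches the trade-off the application describes: global features survive downscaling, while lesions need full detail.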
In some embodiments, the obtaining unit 410 is further configured to obtain a plurality of lesion samples from the database, the lesion samples including a lesion image, a lesion type, and a weight of the lesion. The determining unit 420 is further configured to determine a similarity between the second image and each lesion sample according to the second image and the lesion image of each lesion sample; and determining a second result according to the similarity of the second image and each focus sample and the focus weight corresponding to each focus sample.
In some embodiments, the determining unit 420 is further configured to add the first result and the second result to obtain the physiological age, or the determining unit 420 is further configured to weight the first result and the second result respectively and then add the first result and the second result to obtain the physiological age.
In some embodiments, the determining unit 420 is further configured to perform a normalization process on the first image, so as to obtain a normalized first image, where each pixel value in the normalized first image is represented by a value between 0 and 1; determining a first result according to the first image after normalization processing; the determining unit 420 is further configured to perform normalization processing on the second image to obtain a normalized second image; and determining a second result according to the normalized second image.
It should be understood that the first image and the second image may also be normalized, CLAHE-processed, standardized, or any combination thereof. When the first image is standardized, the resolution of the standard image is adjusted to the same low resolution as the first image to obtain a first standard image, and the first image is standardized using the first standard image. When the second image is standardized, the resolution of the standard image is adjusted to the same high resolution as the second image to obtain a second standard image, and the second image is standardized using the second standard image.
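The normalization step mentioned for both images, mapping every pixel value into [0, 1], can be sketched with min-max scaling; the pixel values below are hypothetical, and the CLAHE and standardization steps are omitted for brevity.

```python
# Sketch of min-max normalization: rescale all pixel values of an image
# (a 2D list here) so that they lie between 0 and 1.

def normalize(img):
    lo = min(min(row) for row in img)
    hi = max(max(row) for row in img)
    return [[(v - lo) / (hi - lo) for v in row] for row in img]

# Toy 2x2 "image" with 8-bit pixel values.
image = [[0, 64], [128, 255]]
norm = normalize(image)  # all values now in [0, 1]
```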
In summary, the physiological age prediction apparatus 400 provided by the present application acquires a fundus image, performs age-group prediction on the fundus features in the fundus image, and then determines an age offset according to the lesion features in the fundus image; the physiological age of the user is obtained from the age-group prediction result and the age offset. In addition, the resolution of the fundus image can be adjusted so that age-group prediction is performed on a low-resolution image and lesion detection on a high-resolution image. The physiological age can thus be predicted from the fundus image alone, saving prediction cost and shortening prediction time.
Referring to fig. 5, fig. 5 is a schematic structural diagram of an electronic device according to an embodiment of the present application. The electronic device 500 may be the physiological age prediction apparatus 400 described above. As shown in fig. 5, the electronic device 500 includes: a processor 510, a communication interface 520, and a memory 530, which are interconnected by an internal bus 540.
The processor 510, the communication interface 520, and the memory 530 may be connected by a bus, or may communicate by other means such as wireless transmission. The present embodiment is exemplified by connection via a bus 540, wherein the bus 540 may be a Peripheral Component Interconnect (PCI) bus, an Extended Industry Standard Architecture (EISA) bus, or the like. The bus 540 may be divided into an address bus, a data bus, a control bus, and the like. For ease of illustration, only one thick line is shown in FIG. 5, but this is not intended to represent only one bus or type of bus.
The processor 510 may be formed by one or more general-purpose processors, such as a Central Processing Unit (CPU), or a combination of a CPU and a hardware chip. The hardware chip may be an Application-Specific integrated Circuit (ASIC), a Programmable Logic Device (PLD), or a combination thereof. The PLD may be a Complex Programmable Logic Device (CPLD), a Field-Programmable Gate Array (FPGA), General Array Logic (GAL), or any combination thereof. The processor 510 executes various types of digitally stored instructions, such as software or firmware programs stored in the memory 530, which enable the electronic device 500 to provide a wide variety of services.
The memory 530 may include Volatile Memory, such as Random Access Memory (RAM); the memory 530 may also include Non-Volatile Memory, such as Read-Only Memory (ROM), Flash Memory, a Hard Disk Drive (HDD), or a Solid-State Drive (SSD); the memory 530 may also include combinations of the above. The memory 530 may store application program code and program data. The program code may acquire a fundus image, perform age-group prediction on the fundus features in the fundus image, determine an age offset according to the lesion features in the fundus image, and obtain the physiological age of the user from the age-group prediction result and the age offset; it may also be used to perform the other steps described in the embodiment of fig. 2, which are not repeated here. The code in the memory 530 may include code implementing the functions of an acquisition unit and a determination unit. The functions of the acquisition unit include those of the acquisition unit 410 in fig. 4, for example acquiring a fundus image of a user; specifically, it may be used to perform step S210 of the foregoing method and its optional steps, which are not detailed here. The functions of the determination unit include those of the determination unit 420 in fig. 4, for example determining, from the fundus image, a first result determined based on the probability that the fundus image belongs to each age group, and a second result determined based on the number of lesions in the fundus image and the lesion type of each lesion; and determining the physiological age of the user according to the first result and the second result. Specifically, it may be used to perform steps S220 to S240 of the foregoing method and their optional steps, which are not repeated here.
The communication interface 520 may be an internal interface (e.g., a Peripheral Component Interconnect express (PCIe) bus interface), a wired interface (e.g., an Ethernet interface), or a wireless interface (e.g., a cellular network interface or a wireless LAN interface), for communicating with other devices or modules.
It should be noted that fig. 5 is only one possible implementation manner of the embodiment of the present application, and in practical applications, the electronic device may further include more or less components, which is not limited herein. For the content that is not shown or described in the embodiment of the present application, reference may be made to the related explanation in the embodiment described in fig. 2, and details are not described here. The electronic device shown in fig. 5 may also be a computer cluster formed by a plurality of computing nodes, and the present application is not limited in particular.
Embodiments of the present application also provide a computer-readable storage medium, which stores instructions that, when executed on a processor, implement the method flow illustrated in fig. 2.
Embodiments of the present application also provide a computer program product which, when run on a processor, implements the method flow shown in fig. 2.
The above embodiments may be implemented in whole or in part by software, hardware, firmware, or any combination thereof. When implemented in software, the above embodiments may be implemented in whole or in part in the form of a computer program product. The computer program product includes one or more computer instructions. When the computer instructions are loaded and executed on a computer, the flows or functions according to the embodiments of the invention are produced in whole or in part. The computer may be a general-purpose computer, a special-purpose computer, a computer network, or another programmable device. The computer instructions may be stored in a computer-readable storage medium, or transmitted from one computer-readable storage medium to another, for example from one website, computer, server, or data center to another over a wired (e.g., coaxial cable, optical fiber, Digital Subscriber Line (DSL)) or wireless (e.g., infrared, radio, microwave) network. The computer-readable storage medium may be any available medium accessible by a computer, or a data storage device such as a server or data center containing one or more available media. The available media may be magnetic media (e.g., floppy disks, hard disks, magnetic tape), optical media (e.g., Digital Video Disc (DVD)), or semiconductor media. The semiconductor medium may be an SSD.
While the invention has been described with reference to specific embodiments, it will be understood by those skilled in the art that various changes in form and details may be made therein without departing from the spirit and scope of the invention. Therefore, the protection scope of the present invention shall be subject to the protection scope of the claims.

Claims (10)

1. A method of predicting a physiological age, comprising:
acquiring a fundus image of a user;
determining a first result and a second result from the fundus image, the first result being determined from a probability that the fundus image belongs to each age group, the second result being determined from the number of lesions and the type of lesion in the fundus image;
determining a physiological age of the user based on the first result and the second result.
2. The method of claim 1,
the probability that the fundus image belongs to each age group is determined according to the similarity between the fundus features in the fundus image and the fundus features of each age group, wherein the fundus features comprise one or more of optic disc size, arteriovenous vessel diameter and blood color.
3. The method of claim 1,
the probability that the fundus image belongs to each age group is acquired from a database in which the probability that each fundus feature appears in each age group is stored, based on the fundus feature.
4. The method according to claim 2 or 3, wherein said determining a first result and a second result from said fundus image comprises:
adjusting the resolution of the fundus image to obtain a first image and a second image, wherein the resolution of the first image is smaller than that of the second image;
determining the first result according to the first image;
determining the second result according to the second image.
5. The method of claim 4, further comprising, prior to said determining the second result from the second image:
obtaining a plurality of lesion samples from the database, wherein the lesion samples comprise lesion images, lesion types and weights of lesions;
said determining the second result from the second image comprises:
determining the similarity of the second image and each focus sample according to the second image and the focus image of each focus sample;
and determining the second result according to the similarity of the second image and each focus sample and the focus weight corresponding to each focus sample.
6. The method of claim 5,
the physiological age is determined from a sum of the first result and the second result, or the physiological age is determined from a weighted sum of the first result and the second result.
7. The method of claim 6, wherein determining the first result from the first image comprises:
normalizing the first image to obtain a normalized first image, wherein each pixel value in the normalized first image is represented by a numerical value between 0 and 1;
determining the first result according to the first image after the normalization processing;
determining the second result from the second image, including:
carrying out normalization processing on the second image to obtain a normalized second image;
and determining the second result according to the normalized second image.
8. A physiological age prediction device, comprising: the acquisition unit, the determination unit,
the acquisition unit is used for acquiring a fundus image of a user;
the determination unit is configured to determine, based on the fundus image, a first result determined based on a probability that the fundus image belongs to each age group and a second result determined based on the number of lesions and a type of the lesion in the fundus image;
the determining unit is used for determining the physiological age of the user according to the first result and the second result.
9. A computer device, comprising: a processor and a memory, the memory storing a computer program, the processor executing the computer program in the memory to implement the method of any of claims 1 to 7.
10. A computer-readable storage medium, having stored thereon a computer program, for causing a computer to perform the method of any of claims 1 to 7 when the computer program is run on the computer.
CN202210163247.XA 2022-02-22 2022-02-22 Physiological age prediction method, apparatus, device and medium Pending CN114557670A (en)

Priority Applications (1)

Application Number Priority Date Filing Date Title
CN202210163247.XA CN114557670A (en) 2022-02-22 2022-02-22 Physiological age prediction method, apparatus, device and medium

Publications (1)

Publication Number Publication Date
CN114557670A true CN114557670A (en) 2022-05-31

Family

ID=81713143

Family Applications (1)

Application Number Title Priority Date Filing Date
CN202210163247.XA Pending CN114557670A (en) 2022-02-22 2022-02-22 Physiological age prediction method, apparatus, device and medium

Country Status (1)

Country Link
CN (1) CN114557670A (en)

Cited By (2)

* Cited by examiner, † Cited by third party
Publication number Priority date Publication date Assignee Title
CN115171204A (en) * 2022-09-06 2022-10-11 北京鹰瞳科技发展股份有限公司 Method for training prediction model for predicting retinal age and related product
CN115171204B (en) * 2022-09-06 2023-02-21 北京鹰瞳科技发展股份有限公司 Method for training prediction model for predicting retinal age and related product

Similar Documents

Publication Publication Date Title
Elangovan et al. Glaucoma assessment from color fundus images using convolutional neural network
CN110232383B (en) Focus image recognition method and focus image recognition system based on deep learning model
Wu et al. Cascaded fully convolutional networks for automatic prenatal ultrasound image segmentation
WO2020260936A1 (en) Medical image segmentation using an integrated edge guidance module and object segmentation network
Kou et al. Microaneurysms segmentation with a U-Net based on recurrent residual convolutional neural network
CN112016626B (en) Uncertainty-based diabetic retinopathy classification system
CN111681219A (en) New coronary pneumonia CT image classification method, system and equipment based on deep learning
CN107766874B (en) Measuring method and measuring system for ultrasonic volume biological parameters
Cheng et al. Retinal blood vessel segmentation based on Densely Connected U-Net
CN111080643A (en) Method and device for classifying diabetes and related diseases based on fundus images
CN109146891B (en) Hippocampus segmentation method and device applied to MRI and electronic equipment
CN114565620B (en) Fundus image blood vessel segmentation method based on skeleton prior and contrast loss
US20230108389A1 (en) Data processing method and apparatus, device and medium
CN113012093B (en) Training method and training system for glaucoma image feature extraction
CN111028232A (en) Diabetes classification method and equipment based on fundus images
JP2023551899A (en) Automated screening of diabetic retinopathy severity using color fundus image data
CN111047590A (en) Hypertension classification method and device based on fundus images
CN114557670A (en) Physiological age prediction method, apparatus, device and medium
Gulati et al. Comparative analysis of deep learning approaches for the diagnosis of diabetic retinopathy
US11475561B2 (en) Automated identification of acute aortic syndromes in computed tomography images
US20220172370A1 (en) Method for detecting white matter lesions based on medical image
CN115424093A (en) Method and device for identifying cells in fundus image
CN115035133A (en) Model training method, image segmentation method and related device
CN115206494A (en) Film reading system and method based on fundus image classification
Munira et al. Multi-Classification of Brain MRI Tumor Using ConVGXNet, ConResXNet, and ConIncXNet

Legal Events

Date Code Title Description
PB01 Publication
SE01 Entry into force of request for substantive examination