CN114463319B - Data prediction method and device, electronic equipment and storage medium - Google Patents


Info

Publication number
CN114463319B
CN114463319B (application CN202210137171.3A)
Authority
CN
China
Prior art keywords
image
target
predicted
parameter
intraocular lens
Prior art date
Legal status
Active
Application number
CN202210137171.3A
Other languages
Chinese (zh)
Other versions
CN114463319A (en)
Inventor
方慧卉
许言午
Current Assignee
Beijing Baidu Netcom Science and Technology Co Ltd
Original Assignee
Beijing Baidu Netcom Science and Technology Co Ltd
Priority date
Filing date
Publication date
Application filed by Beijing Baidu Netcom Science and Technology Co Ltd filed Critical Beijing Baidu Netcom Science and Technology Co Ltd
Priority to CN202210137171.3A priority Critical patent/CN114463319B/en
Publication of CN114463319A publication Critical patent/CN114463319A/en
Priority to PCT/CN2022/132040 priority patent/WO2023155509A1/en
Application granted granted Critical
Publication of CN114463319B publication Critical patent/CN114463319B/en


Classifications

    • G06T 7/0012 — Image analysis; inspection of images; biomedical image inspection
    • G16H 20/40 — ICT for therapies or health-improving plans, relating to mechanical, radiation or invasive therapies, e.g. surgery
    • G16H 50/50 — ICT for medical diagnosis, simulation or data mining; for simulation or modelling of medical disorders
    • G06T 2207/10101 — Image acquisition modality: optical tomography; optical coherence tomography [OCT]
    • G06T 2207/20081 — Special algorithmic details: training; learning
    • G06T 2207/30041 — Subject of image: biomedical; eye; retina; ophthalmic

Abstract

The present disclosure provides a data prediction method, apparatus, electronic device, readable storage medium and computer program product, relating to the AI medical field. The scheme is as follows: determining target image features corresponding to an image to be predicted, wherein the image to be predicted includes the anterior segment of a target eye; and predicting the post-operative camber (vault) of the target eye after a target intraocular lens is implanted, by using the target image features together with intraocular lens features, wherein the intraocular lens features are features preset for the target intraocular lens. Once the target image features are obtained, the post-operative camber of the target eye after implantation of the target intraocular lens can be predicted from the target image features and the preset intraocular lens features. Doctors and other related personnel therefore no longer need to set and manually measure the parameters conventionally used to predict post-operative camber, nor to compute the camber from those parameters, so both the efficiency and the accuracy of post-operative camber prediction can be improved.

Description

Data prediction method and device, electronic equipment and storage medium
Technical Field
The present disclosure relates to the field of artificial intelligence (AI), in particular to computer vision and AI medical technology, and can be applied in computer vision, AI medical and similar scenarios.
Background
Implantable Collamer Lens (ICL) technology, a safe, high-end refractive correction technique, has gradually become a new trend in myopia correction. It implants an artificial (intraocular) lens into the patient's eye through minimally invasive surgery and can correct myopia without damaging the cornea.
Clinically, after an intraocular lens is implanted into a patient's eye, the post-operative camber must fall within a suitable range. It is therefore often desirable to predict the post-operative camber before surgery and to select the intraocular lens to be implanted based on that prediction.
Disclosure of Invention
The present disclosure provides a data prediction method, apparatus, electronic device, readable storage medium, and computer program product to improve the prediction efficiency of post-operative camber and the prediction accuracy of post-operative camber.
According to an aspect of the present disclosure, there is provided a data prediction method, which may include the steps of:
Determining target image features corresponding to an image to be predicted, wherein the image to be predicted includes the anterior segment of a target eye;
and predicting the post-operative camber of the target eye after a target intraocular lens is implanted, by using the target image features and intraocular lens features, wherein the intraocular lens features are features preset for the target intraocular lens.
According to a second aspect of the present disclosure, there is provided a data prediction apparatus, the apparatus may comprise:
a target image feature determining unit, configured to determine target image features corresponding to an image to be predicted, wherein the image to be predicted includes the anterior segment of a target eye;
a post-operative camber prediction unit, configured to predict the post-operative camber of the target eye after a target intraocular lens is implanted, using the target image features and intraocular lens features, wherein the intraocular lens features are features preset for the target intraocular lens.
According to another aspect of the present disclosure, there is provided an electronic device including:
at least one processor; and
a memory communicatively coupled to the at least one processor; wherein,
the memory stores instructions executable by the at least one processor to enable the at least one processor to perform the method of any one of the embodiments of the present disclosure.
According to another aspect of the present disclosure, there is provided a non-transitory computer-readable storage medium storing computer instructions for causing a computer to perform the method of any of the embodiments of the present disclosure.
According to another aspect of the present disclosure, there is provided a computer program product comprising computer programs/instructions which, when executed by a processor, implement the method in any of the embodiments of the present disclosure.
According to the disclosed technology, once the target image features corresponding to the image to be predicted are obtained, the post-operative camber of the target eye after implantation of the target intraocular lens can be predicted using the target image features and the intraocular lens features preset for the target intraocular lens. Doctors and other related personnel therefore do not need to set and manually measure the parameters used to predict post-operative camber, nor to compute the camber from those parameters, so both the prediction efficiency and the prediction accuracy of the post-operative camber can be improved.
It should be understood that this section is not intended to identify key or critical features of the embodiments of the disclosure, nor to limit the scope of the disclosure. Other features of the present disclosure will become apparent from the following description.
Drawings
The drawings are for a better understanding of the present solution and are not to be construed as limiting the present disclosure. Wherein:
FIG. 1 is a flow chart of a method for data prediction according to an embodiment of the present disclosure;
FIG. 2 is a flow chart of a target image feature determination method provided in an embodiment of the present disclosure;
FIG. 3 is a flow chart of another object image feature determination method provided in an embodiment of the present disclosure;
FIG. 4 is a schematic illustration of an anterior ocular segment provided in an embodiment of the present disclosure;
FIG. 5 is a schematic view of another anterior ocular segment provided in an embodiment of the present disclosure;
FIG. 6 is a schematic illustration of an image carrying attention weights provided in an embodiment of the present disclosure;
FIG. 7 is a flow chart of a post-operative camber prediction method provided in an embodiment of the present disclosure;
FIG. 8 is a schematic diagram of a data prediction process provided in an embodiment of the present disclosure;
FIG. 9 is a schematic diagram of a data prediction apparatus according to an embodiment of the present disclosure;
fig. 10 is a schematic diagram of an electronic device provided in an embodiment of the present disclosure.
Detailed Description
Exemplary embodiments of the present disclosure are described below with reference to the accompanying drawings, in which various details of the embodiments are included to facilitate understanding and should be considered merely exemplary. Accordingly, those of ordinary skill in the art will recognize that various changes and modifications can be made to the embodiments described herein without departing from the scope and spirit of the present disclosure. Descriptions of well-known functions and constructions are likewise omitted for clarity and conciseness.
Clinically, after an intraocular lens is implanted into a patient's eye by minimally invasive surgery, the post-operative camber is often used to judge whether the implanted lens is appropriate. If the post-operative camber is too low, mechanical friction between the Implantable Collamer Lens (ICL) and the crystalline lens, together with obstruction of aqueous humor circulation at the anterior surface of the lens, can result and thereby induce cataract. If the post-operative camber is too high, problems such as pigment dispersion syndrome, iris atrophy and acute angle-closure glaucoma can arise. The post-operative camber generally refers to the height from the center of the posterior surface of the ICL optical zone to the anterior surface of the crystalline lens; in practice it may equivalently be described as the height from the center of the lower surface of the ICL optical zone to the upper (anterior) surface of the lens.
To ensure a proper post-operative camber after an intraocular lens is implanted into a patient's eye, it is often necessary to predict the post-operative camber before surgery. After a suitable target post-operative camber is predicted, an intraocular lens corresponding to it is selected and implanted into the patient's eye.
In order to predict post-operative camber after implantation of a target intraocular lens into a target eye, the present disclosure provides a data prediction method. Referring to fig. 1 in particular, a flowchart of a data prediction method is provided in an embodiment of the disclosure. The method may comprise the steps of:
Step S101: determining target image features corresponding to the image to be predicted, wherein the image to be predicted includes the anterior segment of the target eye.
Step S102: predicting the post-operative camber of the target eye after a target intraocular lens is implanted, using the target image features and intraocular lens features, wherein the intraocular lens features are features preset for the target intraocular lens.
According to the data prediction method provided by the embodiment of the disclosure, after the target image characteristics corresponding to the image to be predicted are obtained, the post-operation camber of the target eye after the target intraocular lens is implanted can be predicted by utilizing the target image characteristics and the intraocular lens characteristics preset for the target intraocular lens. Therefore, the related parameters for predicting the postoperative arch height do not need to be set by doctors and other related personnel and manually measured, and the postoperative arch height is calculated based on the related parameters, so that the prediction efficiency of the postoperative arch height and the prediction accuracy of the postoperative arch height can be improved.
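The two steps above can be sketched as a minimal pipeline. The feature extractor and the linear stand-in predictor below are illustrative assumptions; the disclosure does not fix a concrete model architecture or feature set.

```python
# Minimal sketch of the two-step method (S101/S102). The feature
# extractor and the linear regressor are hypothetical stand-ins.

def extract_target_image_features(image):
    """Step S101: map an anterior-segment image to a feature vector.
    Here: a trivial placeholder (mean intensity and pixel count)."""
    flat = [p for row in image for p in row]
    return [sum(flat) / len(flat), float(len(flat))]

def predict_postoperative_vault(image_features, iol_features, weights, bias=0.0):
    """Step S102: concatenate the image features with the preset
    intraocular-lens (IOL) features and regress a single camber value."""
    x = image_features + iol_features          # feature concatenation
    return sum(w * v for w, v in zip(weights, x)) + bias

image = [[0.1, 0.2], [0.3, 0.4]]               # toy stand-in for an AS-OCT image
iol = [12.6, 5.5]                              # assumed total/optic diameter (mm)
feats = extract_target_image_features(image)
vault_um = predict_postoperative_vault(feats, iol, weights=[10.0, 1.0, 30.0, 20.0])
```

In practice both functions would be learned models; the point is only that the camber prediction consumes the stitched image-plus-lens feature vector.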
The image to be predicted generally refers to an anterior segment optical coherence tomography (AS-OCT) image acquired of the anterior segment of the target eye. It may also be an image of the anterior segment obtained by other image acquisition means; the embodiments of the present disclosure place no particular limitation on how the image to be predicted is acquired.
The target eye may be a human eye, or the eye of a pet or other animal. The embodiments of the present disclosure take a human eye as the target eye by way of example when describing the data prediction method in detail.
The target intraocular lens is the intraocular lens whose post-operative camber is currently to be predicted. The features preset for the target intraocular lens generally refer to its model features; that is, the intraocular lens features are typically the model features of the target intraocular lens. Specifically, the model features include at least the optic-portion/total diameter of the target intraocular lens.
In practice, a common ICL technique is phakic posterior chamber intraocular lens implantation, in which the intraocular lens currently used is mainly a central-hole refractive lens. That is, when the target intraocular lens is implanted into the patient's eye by phakic posterior chamber implantation, the target intraocular lens is a central-hole refractive lens, and the intraocular lens features are the model features of a central-hole refractive lens.
In an embodiment of the present disclosure, an implementation manner of determining a target image feature is shown in fig. 2, and fig. 2 is a flowchart of a target image feature determining method provided in an embodiment of the present disclosure, where the method includes the following steps:
step S201: obtaining the anterior-segment image features corresponding to the anterior segment.
Step S202: target image features are determined using the anterior ocular segment image features.
The anterior segment includes the anterior chamber, posterior chamber, zonules, chamber angles, part of the lens, the peripheral vitreous, the attachment points of the retina and extraocular muscles, the conjunctiva, and so on. Determining the target image features from the anterior-segment image features therefore ensures that they fully characterize the features relevant to post-operative camber prediction, so that the camber can then be predicted from the target image features and the intraocular lens features.
In an embodiment of the present disclosure, a specific implementation manner of determining a target image feature using an anterior segment image feature may be: the anterior ocular segment image features are determined as target image features.
In an embodiment of the present disclosure, a specific implementation manner of determining a target image feature by using an anterior segment image feature may also be shown in fig. 3, where fig. 3 is a flowchart of another target image feature determining method provided in an embodiment of the present disclosure, and the method includes the following steps:
Step S301: obtaining a first image feature corresponding to a first image area, wherein the first image area is the image area corresponding to the nasal-side chamber angle in the anterior segment.
Step S302: obtaining a second image feature corresponding to a second image area, wherein the second image area is the image area corresponding to the temporal-side chamber angle in the anterior segment.
Step S303: and performing feature stitching on the anterior ocular segment image features, the first image features and the second image features to obtain target image features.
The chamber-angle features of the nasal-side and temporal-side chamber angles in the anterior segment influence which intraocular lens model is suitable for the target eye, and the lens model in turn correlates directly with the post-operative camber. In other words, these chamber-angle features affect the prediction of post-operative camber. Therefore, on top of the anterior-segment image features, the first and second image features are additionally obtained, and the three are stitched into the target image features; this makes post-operative camber prediction possible and further improves its accuracy.
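Feature stitching in steps S301–S303 amounts to plain concatenation of the three feature vectors. The sketch below assumes each feature is a flat list of floats; the disclosure does not specify the feature representation.

```python
def stitch_features(anterior_segment, nasal_angle, temporal_angle):
    """Concatenate the anterior-segment feature with the nasal-side and
    temporal-side chamber-angle features (steps S301-S303)."""
    return list(anterior_segment) + list(nasal_angle) + list(temporal_angle)

# Toy feature vectors; real ones would come from a feature extractor.
target = stitch_features([0.1, 0.2], [0.3], [0.4])
```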
In general, the image to be predicted contains content other than the anterior segment, such as the image background, and this other content often adversely affects the prediction of post-operative camber.
In addition, according to prior knowledge, there are about nine structural parameters of the anterior segment that strongly influence the post-operative camber. These nine parameters are located near the chamber angles, near the iris, and near the central vertical line of the anterior segment.
Therefore, to improve the prediction accuracy, the attention weight given to the image regions near the chamber angles, the iris and the central vertical line can be increased when determining the target image features. The specific process is as follows: first, a third image region corresponding to the image to be predicted is determined, the third image region being an image region determined using the scleral spurs in the anterior segment; then, feature extraction is performed on the image to be predicted with increased attention weight on the third image region, thereby determining the target image features.
It should be noted that the nine structural parameters of the anterior segment that strongly influence the post-operative camber are shown in fig. 4, a schematic diagram of an anterior segment provided in an embodiment of the present disclosure. The nine parameters in fig. 4 are: 1. the distance from the nasal-side chamber angle to the temporal-side chamber angle (hereinafter, angle-to-angle distance); 2. anterior chamber depth; 3. lens rise; 4. nasal-side chamber angle; 5. nasal-side angle opening distance at 500 μm from the scleral spur (hereinafter, nasal AOD500); 6. nasal-side trabecular-iris space area at 500 μm from the scleral spur (hereinafter, nasal TISA500); 7. temporal-side chamber angle; 8. temporal-side angle opening distance at 500 μm from the scleral spur (hereinafter, temporal AOD500); 9. temporal-side trabecular-iris space area at 500 μm from the scleral spur (hereinafter, temporal TISA500).
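For reference, the nine parameters can be collected into a single structure; the key names and units below are abbreviations chosen for this sketch, and the `None` values are placeholders to be filled by measurement or prediction.

```python
# The nine anterior-segment structural parameters listed above.
# Key names/units are illustrative; values are placeholders.
structural_params = {
    "angle_to_angle_distance_um": None,
    "anterior_chamber_depth_um": None,
    "lens_rise_um": None,
    "nasal_chamber_angle_deg": None,
    "nasal_AOD500_um": None,
    "nasal_TISA500_um2": None,
    "temporal_chamber_angle_deg": None,
    "temporal_AOD500_um": None,
    "temporal_TISA500_um2": None,
}
```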
In addition, post-operative camber (Vault) is further shown in FIG. 4. I.e., the height from the center of the lower surface of the implant to the center of the upper surface of the lens.
In the embodiments of the present disclosure, the third image region is the image region in the image to be predicted that corresponds to the vicinity of the chamber angles, the iris and the central vertical line in the anterior segment. It is determined from the scleral spurs as follows:
first, the position of the Scleral Spur (SS) is determined in the image to be predicted.
In practical applications, the scleral spur position can be obtained by inputting the image to be predicted into a trained scleral spur determination model and taking its output. The scleral spur determination model is trained on preset sample images with labeled scleral spur positions; it takes an image to be processed as input and outputs the scleral spur positions in that image.
Additionally, in embodiments of the present disclosure, other ways of determining scleral spur position may also be employed. That is, in the embodiments of the present disclosure, the manner of determining the position of the scleral spur is not particularly limited.
Second, in the image to be predicted, image areas are determined with the nasal-side and temporal-side scleral spur positions as centers and R as radius, and these areas are taken as the chamber-angle image areas (the areas near the chamber angles). The nasal-side and temporal-side scleral spur positions are shown in fig. 5, another schematic diagram of an anterior segment provided in the embodiments of the present disclosure, where P denotes the nasal-side scleral spur position and Q denotes the temporal-side scleral spur position.
Third, P and Q are connected in the image to be predicted to construct a line segment PQ, and the perpendicular bisector of PQ is constructed. The image area within a preset range of the line segment PQ and its perpendicular bisector is taken as the central image area.
Fourth, the corner image area and the center image area are determined as a third image area.
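The four steps above can be sketched as a mask over a pixel grid. The concrete values of P, Q, R and the band half-width are illustrative assumptions (the patent leaves them as presets), and for simplicity P and Q are assumed to lie on the same image row.

```python
import math

def third_image_region(h, w, P, Q, R, band_halfwidth):
    """Boolean mask: True inside the two chamber-angle discs (radius R
    around the nasal/temporal scleral spurs P and Q), or inside a band
    around segment PQ and its perpendicular bisector."""
    (px, py), (qx, qy) = P, Q
    mx = (px + qx) / 2.0                       # x of the PQ midpoint
    mask = [[False] * w for _ in range(h)]
    for i in range(h):                         # i = row (y), j = column (x)
        for j in range(w):
            near_p = math.hypot(j - px, i - py) <= R
            near_q = math.hypot(j - qx, i - qy) <= R
            # P and Q are assumed to share a row, so the PQ band is horizontal
            near_pq = abs(i - py) <= band_halfwidth and min(px, qx) <= j <= max(px, qx)
            near_bisector = abs(j - mx) <= band_halfwidth
            mask[i][j] = near_p or near_q or near_pq or near_bisector
    return mask

mask = third_image_region(h=40, w=60, P=(10, 20), Q=(50, 20), R=5, band_halfwidth=2)
```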
In an embodiment of the present disclosure, the manner of increasing the attention weight to the third image region may be as follows:
First, a first attention weight value corresponding to the image to be predicted is calculated, wherein (i, j) denotes the coordinates of a pixel in the image to be predicted, i the abscissa and j the ordinate; w(i, j) denotes the attention weight value of the pixel at (i, j); d(SS, (i, j)) denotes the Euclidean distance from the pixel to the scleral spur; λ is a predetermined parameter controlling the smoothness of the attention-weight change; σ is a predetermined parameter; and R denotes the radius preset when determining the chamber-angle image areas.
Second, a second attention weight value corresponding to the image to be predicted is calculated, wherein (i, j) denotes the coordinates of a pixel in the image to be predicted, i the abscissa and j the ordinate; w(i, j) denotes the attention weight value of the pixel at (i, j); dt(i, j) denotes the distance from the pixel to the central intersection point, i.e. the intersection of the line segment PQ with its perpendicular bisector; γ is a predetermined parameter; and R denotes the radius preset when determining the chamber-angle image areas.
Third, the first and second attention weight values are added pixel-wise to obtain the target attention weight value corresponding to the image to be predicted.
In addition, as can be seen from the definitions of the first and second attention weight values, the attention weight value corresponding to the third image region is higher than that of the other image regions in the image to be predicted.
Finally, the target attention weight values are assigned to the corresponding pixels of the image to be predicted.
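The two weight formulas themselves are not reproduced in this text, so the Gaussian-style forms below are assumptions that merely satisfy the stated constraints: the first weight decays with the Euclidean distance to the nearest scleral spur (shaped by λ, σ and R), the second decays with the distance to the central intersection point (shaped by γ), and the two maps are summed pixel-wise.

```python
import math

def attention_weight_map(h, w, spurs, center, lam=1.0, sigma=10.0, gamma=10.0, R=5.0):
    """Per-pixel target attention weight. The exact formulas in the
    patent are not reproduced here; these Gaussian forms are
    illustrative stand-ins consistent with the described behavior."""
    weights = [[0.0] * w for _ in range(h)]
    for i in range(h):                         # i = row (y), j = column (x)
        for j in range(w):
            # first weight: decays outside radius R of the nearest scleral spur
            d_ss = min(math.hypot(j - x, i - y) for x, y in spurs)
            w1 = math.exp(-lam * max(0.0, d_ss - R) ** 2 / (2 * sigma ** 2))
            # second weight: decays with distance to the central intersection
            d_c = math.hypot(j - center[0], i - center[1])
            w2 = math.exp(-d_c ** 2 / (2 * gamma ** 2))
            weights[i][j] = w1 + w2            # pixel-wise sum of the two maps
    return weights

wmap = attention_weight_map(40, 60, spurs=[(10, 20), (50, 20)], center=(30, 20))
```

By construction, pixels near a scleral spur or near the central intersection receive higher weight than the background, which is the property the third image region relies on.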
In the embodiment of the present disclosure, the attention-weighted image obtained by assigning the target attention weight values to the image to be predicted is shown in fig. 6, a schematic diagram of an image carrying attention weights provided in an embodiment of the present disclosure. There, the attention weight value is represented by the gray value of the image: the larger the gray value, the larger the attention weight it represents.
In addition, in the embodiment of the present disclosure, the expression mode of the attention weight value is not particularly limited, and for example, the attention weight may be directly expressed by a numerical mode.
Because the attention weight value corresponding to the third image area is higher than those of the other image areas, the attention paid to the third image region during feature extraction is higher than that paid to the rest of the image to be predicted.
In embodiments of the present disclosure, the attention weight on the third image region may also be increased in other ways; the manner of determining the attention weight values is not particularly limited, as long as the attention weight value of the third image area remains higher than those of the other image areas in the image to be predicted.
For example, the image to be predicted can be input into a trained attention-weight determination model, and the target attention weight values output by that model can be used.
The attention-weight determination model is trained in advance on designated sample images with labeled attention weight values; it takes an image to be processed as input and outputs the corresponding target attention weight values.
In an embodiment of the present disclosure, a specific implementation manner of predicting a post-operation camber may be shown in fig. 7, where fig. 7 is a flowchart of a post-operation camber prediction method provided in an embodiment of the present disclosure, and the method includes the following steps:
step S701: obtaining a trained post-operation arch height prediction model, wherein the post-operation arch height prediction model is obtained by performing model training using first sample images, sample intraocular lens features, and the corresponding labeled post-operation arch height results.
Step S702: inputting the target image features and the intraocular lens features into the post-operation arch height prediction model to obtain the post-operation arch height.
The post-operation arch height is obtained through the trained post-operation arch height prediction model, so that the prediction efficiency of the post-operation arch height and the prediction accuracy of the post-operation arch height can be further improved.
In addition, the embodiments of the present disclosure do not particularly limit the method of predicting the post-operation camber; other means may be employed using the target image features and the intraocular lens features. For example: first, a relation list storing the correspondence between image features plus intraocular lens features and candidate post-operation cambers is obtained; then the candidate post-operation camber corresponding to the target image features and the intraocular lens features is looked up in the relation list and taken as the post-operation camber.
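The relation-list alternative amounts to a lookup table keyed on the image features and the lens model. Continuous features must be discretized to serve as keys; the bucketing step, the lens model names and the stored vault values below are all illustrative assumptions.

```python
def bucket(feature_vector, step=0.5):
    """Discretize continuous features so they can serve as a dict key."""
    return tuple(round(v / step) * step for v in feature_vector)

# Hypothetical relation list:
# (bucketed image features, IOL model) -> candidate post-operative vault (um)
relation_list = {
    (bucket([1.2, 3.1]), "ICL-12.6"): 550.0,
    (bucket([0.4, 2.8]), "ICL-13.2"): 480.0,
}

def lookup_vault(image_features, iol_model):
    """Return the candidate vault for this feature/lens pair, or None."""
    return relation_list.get((bucket(image_features), iol_model))

vault = lookup_vault([1.21, 3.05], "ICL-12.6")   # falls in the same bucket
```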
In the embodiment of the disclosure, after the postoperative vault is obtained, it can be further verified to increase its reliability. The postoperative vault may be verified as follows:
First, at least nine anterior segment structural parameters with a strong influence on the postoperative vault may be preset as parameters for assisting in verifying the postoperative vault. Then, after the postoperative vault is obtained, the parameter values corresponding to these auxiliary verification parameters are predicted using the target image feature. Finally, the postoperative vault is verified with a preset verification strategy based on the parameter values.
The preset verification strategy may be: inputting the parameter values into a predetermined postoperative vault calculation formula to obtain a reference value of the postoperative vault, and comparing the reference value with the predicted postoperative vault to verify it.
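A sketch of the preset verification strategy. The formula below (a weighted combination of three global parameters) is a hypothetical placeholder — the patent only states that some predetermined postoperative vault calculation formula is used — and the coefficients, tolerance, and parameter values are illustrative.

```python
# Hypothetical placeholder for the predetermined postoperative vault formula,
# here over the three global parameters (all values in millimetres).

def reference_vault(ata_mm, acd_mm, lens_rise_mm):
    """Reference vault from angle-to-angle distance, anterior chamber depth,
    and crystalline lens rise. The coefficients are illustrative assumptions."""
    return 0.05 * ata_mm + 0.1 * acd_mm - 0.5 * lens_rise_mm

def verify_vault(predicted_mm, params, tolerance_mm=0.25):
    """Accept the predicted vault if it lies within a tolerance of the reference."""
    ref = reference_vault(*params)
    return abs(predicted_mm - ref) <= tolerance_mm

params = (11.8, 3.1, 0.6)          # ATA, ACD, lens rise (illustrative values)
print(verify_vault(0.55, params))  # reference ≈ 0.60 mm → True
```

The comparison step could equally flag the prediction for manual review instead of returning a boolean; the patent leaves the consequence of a failed check open.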
In the embodiment of the disclosure, a specific implementation of predicting the parameter values corresponding to the parameters for assisting in verifying the postoperative vault is as follows:
First, global parameters for assisting in verifying the postoperative vault are determined, where the global parameters are parameters preset for the anterior segment; then, the global parameter values corresponding to the global parameters are predicted using the target image feature.
Since the global parameters are auxiliary verification parameters for the anterior segment, and the target image feature includes at least the anterior segment image feature, predicting the global parameter values from the target image feature ensures that the prediction of the global parameter values proceeds reliably.
The global parameter values include at least: the angle-to-angle distance, the anterior chamber depth, and the crystalline lens rise.
Specifically, using the target image feature, the global parameter values may be predicted as follows: a trained global parameter prediction model is obtained, the target image feature is input into the global parameter prediction model, and the global parameter values output by the model are obtained.
The global parameter prediction model is trained on the first sample images and their annotated global parameter values, and is used to extract image features from an image to be processed and obtain, based on those image features, the global parameter values corresponding to the global parameters.
Second, nasal parameters for assisting in verifying the postoperative vault are determined, where the nasal parameters are parameters preset for the nasal chamber angle; then, the nasal parameter values corresponding to the nasal parameters are predicted using the first image feature.
Since the nasal parameters are auxiliary verification parameters for the nasal chamber angle, and the first image region is the image region corresponding to the nasal chamber angle in the anterior segment, predicting the nasal parameter values from the first image feature corresponding to the first image region ensures the accuracy of the nasal parameter values.
The nasal parameter values include at least: the nasal chamber angle, the nasal AOD500, and the nasal TISA500.
Specifically, using the first image feature, the nasal parameter values may be predicted as follows: a trained nasal parameter prediction model is obtained, the first image feature is input into the nasal parameter prediction model, and the nasal parameter values output by the model are obtained.
The nasal parameter prediction model is trained on the second sample images and their annotated nasal parameter values, and is used to extract image features from an image to be processed and obtain, based on those image features, the nasal parameter values corresponding to the nasal parameters.
Third, temporal parameters for assisting in verifying the postoperative vault are determined, where the temporal parameters are parameters preset for the temporal chamber angle; then, the temporal parameter values corresponding to the temporal parameters are predicted using the second image feature.
Since the temporal parameters are auxiliary verification parameters for the temporal chamber angle, and the second image region is the image region corresponding to the temporal chamber angle in the anterior segment, predicting the temporal parameter values from the second image feature corresponding to the second image region ensures the accuracy of the temporal parameter values.
The temporal parameter values include at least: the temporal chamber angle, the temporal AOD500, and the temporal TISA500.
Specifically, using the second image feature, the temporal parameter values may be predicted as follows: a trained temporal parameter prediction model is obtained, the second image feature is input into the temporal parameter prediction model, and the temporal parameter values output by the model are obtained.
The temporal parameter prediction model is trained on the third sample images and their annotated temporal parameter values, and is used to extract image features from an image to be processed and obtain, based on those image features, the temporal parameter values corresponding to the temporal parameters.
In an embodiment of the present disclosure, a plurality of neural network models may be used to implement data prediction; see fig. 8, which is a schematic diagram of a data prediction process provided in an embodiment of the present disclosure. The process is as follows:
First, an anterior segment optical coherence tomography image is input into the attention weight value determination model, and the target attention weight values output by the model are obtained.
Second, after a first matrix corresponding to the anterior segment optical coherence tomography image is obtained, a second matrix corresponding to the image is further obtained based on the first matrix. The second matrix is then convolved by the trained anterior segment feature extraction model, and the convolution result is determined as the anterior segment image feature.
The elements of the first matrix are obtained by multiplying, element-wise in the pixel dimension, the normalized pixel values by the target attention weight values; the elements of the second matrix are obtained by adding, element-wise, the pixel values of the anterior segment optical coherence tomography image to the elements of the first matrix.
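The construction of the two matrices can be sketched on a tiny example. The pixel and weight values below are illustrative; in the described process the weights come from the attention weight value determination model.

```python
import numpy as np

# Tiny 2x2 "image" standing in for an AS-OCT scan, plus attention weights
# as the attention model might output them (higher near the scleral spur).
pixels = np.array([[0.0, 64.0],
                   [128.0, 255.0]])
weights = np.array([[0.1, 0.2],
                    [0.9, 0.4]])

normalized = pixels / 255.0            # normalize pixel values to [0, 1]
first_matrix = normalized * weights    # element-wise product in the pixel dimension
second_matrix = pixels + first_matrix  # element-wise sum with the raw pixel values

# second_matrix is what the anterior segment feature extraction model (a
# convolutional network) would consume to produce the anterior segment feature.
print(second_matrix)
```

Adding the weighted matrix back onto the raw pixels acts like a residual connection: the attention re-emphasizes the scleral spur region without discarding the rest of the image.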
Third, the first image region is obtained and input into a trained first image feature extraction model to obtain the first image feature. Correspondingly, the second image region is obtained and input into a trained second image feature extraction model to obtain the second image feature.
Fourth, the anterior segment image feature, the first image feature, and the second image feature are stitched together to obtain the target image feature.
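The feature stitching step amounts to concatenation along the feature dimension. The vector lengths below are illustrative assumptions.

```python
import numpy as np

# Illustrative feature vectors from the three extraction models.
anterior_segment_feature = np.array([0.3, 0.7, 0.1])  # full AS-OCT image
first_image_feature = np.array([0.5, 0.2])            # nasal chamber angle region
second_image_feature = np.array([0.6, 0.4])           # temporal chamber angle region

# Feature stitching: concatenate along the feature dimension.
target_image_feature = np.concatenate(
    [anterior_segment_feature, first_image_feature, second_image_feature]
)
print(target_image_feature.shape)  # (7,) — the lengths simply add up
```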
Fifth, the target image feature and the intraocular lens feature are input into the postoperative vault prediction model to obtain the postoperative vault.
Sixth, the target image feature is input into the global parameter prediction model to obtain the global parameter values output by that model; the first image feature is input into the nasal parameter prediction model to obtain the nasal parameter values output by that model; and the second image feature is input into the temporal parameter prediction model to obtain the temporal parameter values output by that model.
With the data prediction provided by the embodiments of the present disclosure, after the postoperative vault is obtained, whether the target intraocular lens meets the implantation requirement of the target eye can further be determined according to whether the postoperative vault falls within a preset postoperative vault range.
As shown in fig. 9, an embodiment of the present disclosure provides a data prediction apparatus, including:
a target image feature determining unit 901, configured to determine a target image feature corresponding to an image to be predicted, where the image to be predicted includes an anterior segment of a target eye; and
a postoperative vault prediction unit 902, configured to predict, using the target image feature and an intraocular lens feature, the postoperative vault of the target eye after a target intraocular lens is implanted, where the intraocular lens feature is a feature preset for the target intraocular lens.
In one embodiment, the target image feature determining unit 901 may include:
an anterior segment image feature determining unit, configured to obtain an anterior segment image feature corresponding to the anterior segment; and
a target image feature determining subunit, configured to determine the target image feature using the anterior segment image feature.
In one embodiment, the target image feature determining subunit may include:
a first image feature obtaining subunit, configured to obtain a first image feature corresponding to a first image region, where the first image region is the image region corresponding to the nasal chamber angle in the anterior segment;
a second image feature obtaining subunit, configured to obtain a second image feature corresponding to a second image region, where the second image region is the image region corresponding to the temporal chamber angle in the anterior segment; and
a feature stitching subunit, configured to stitch the anterior segment image feature, the first image feature, and the second image feature to obtain the target image feature.
In one embodiment, the postoperative vault prediction unit 902 may include:
a postoperative vault prediction model obtaining subunit, configured to obtain a trained postoperative vault prediction model, where the postoperative vault prediction model is obtained through model training using a first sample image, sample intraocular lens features, and correspondingly annotated postoperative vault results; and
a postoperative vault obtaining subunit, configured to input the target image feature and the intraocular lens feature into the postoperative vault prediction model to obtain the postoperative vault.
In one embodiment, the data prediction apparatus may further include:
a global parameter determining unit, configured to determine global parameters for assisting in verifying the postoperative vault, where the global parameters are parameters preset for the anterior segment; and
a global parameter value prediction unit, configured to predict the global parameter values corresponding to the global parameters using the target image feature.
In one embodiment, the data prediction apparatus may further include:
a nasal parameter determining unit, configured to determine nasal parameters for assisting in verifying the postoperative vault, where the nasal parameters are parameters preset for the nasal chamber angle; and
a nasal parameter value prediction unit, configured to predict the nasal parameter values corresponding to the nasal parameters using the first image feature.
In one embodiment, the data prediction apparatus may further include:
a temporal parameter determining unit, configured to determine temporal parameters for assisting in verifying the postoperative vault, where the temporal parameters are parameters preset for the temporal chamber angle; and
a temporal parameter value prediction unit, configured to predict the temporal parameter values corresponding to the temporal parameters using the second image feature.
In one embodiment, the target image feature determining unit 901 may include:
a third image region determining subunit, configured to determine a third image region corresponding to the image to be predicted, where the third image region is an image region determined using the scleral spur in the anterior segment; and
an attention weight increasing subunit, configured to perform feature extraction on the image to be predicted and increase the attention weight of the third image region during feature extraction, so as to determine the target image feature.
In the technical solutions of the present disclosure, the acquisition, storage, and application of the user personal information involved all comply with the provisions of relevant laws and regulations and do not violate public order and good customs.
According to embodiments of the present disclosure, the present disclosure also provides an electronic device and a readable storage medium.
Fig. 10 shows a schematic block diagram of an example electronic device 1000 that may be used to implement embodiments of the present disclosure. Electronic devices are intended to represent various forms of digital computers, such as laptops, desktops, workstations, personal digital assistants, servers, blade servers, mainframes, and other appropriate computers. The electronic device may also represent various forms of mobile devices, such as personal digital assistants, cellular telephones, smartphones, wearable devices, and other similar computing devices. The components shown herein, their connections and relationships, and their functions are meant to be exemplary only, and are not meant to limit implementations of the disclosure described and/or claimed herein.
As shown in fig. 10, the apparatus 1000 includes a computing unit 1001 that can perform various appropriate actions and processes according to a computer program stored in a Read Only Memory (ROM) 1002 or a computer program loaded from a storage unit 1008 into a Random Access Memory (RAM) 1003. In the RAM 1003, various programs and data required for the operation of the device 1000 can also be stored. The computing unit 1001, the ROM 1002, and the RAM 1003 are connected to each other by a bus 1004. An input/output (I/O) interface 1005 is also connected to bus 1004.
Various components in device 1000 are connected to I/O interface 1005, including: an input unit 1006 such as a keyboard, a mouse, and the like; an output unit 1007 such as various types of displays, speakers, and the like; a storage unit 1008 such as a magnetic disk, an optical disk, or the like; and communication unit 1009 such as a network card, modem, wireless communication transceiver, etc. Communication unit 1009 allows device 1000 to exchange information/data with other devices via a computer network, such as the internet, and/or various telecommunications networks.
The computing unit 1001 may be a variety of general and/or special purpose processing components having processing and computing capabilities. Some examples of computing unit 1001 include, but are not limited to, a Central Processing Unit (CPU), a Graphics Processing Unit (GPU), various specialized Artificial Intelligence (AI) computing chips, various computing units running machine learning model algorithms, a Digital Signal Processor (DSP), and any suitable processor, controller, microcontroller, etc. The computing unit 1001 performs the respective methods and processes described above, such as a data prediction method. For example, in some embodiments, the data prediction method may be implemented as a computer software program tangibly embodied on a machine-readable medium, such as the storage unit 1008. In some embodiments, part or all of the computer program may be loaded and/or installed onto device 1000 via ROM 1002 and/or communication unit 1009. When a computer program is loaded into RAM 1003 and executed by computing unit 1001, one or more steps of the data prediction method described above may be performed. Alternatively, in other embodiments, the computing unit 1001 may be configured to perform the data prediction method in any other suitable way (e.g., by means of firmware).
Various implementations of the systems and techniques described above may be implemented in digital electronic circuitry, integrated circuit systems, Field Programmable Gate Arrays (FPGAs), Application Specific Integrated Circuits (ASICs), Application Specific Standard Products (ASSPs), Systems on Chip (SOCs), Complex Programmable Logic Devices (CPLDs), computer hardware, firmware, software, and/or combinations thereof. These various embodiments may include: implementation in one or more computer programs that may be executed and/or interpreted on a programmable system including at least one programmable processor, which may be a special purpose or general-purpose programmable processor, and that may receive data and instructions from, and transmit data and instructions to, a storage system, at least one input device, and at least one output device.
Program code for carrying out methods of the present disclosure may be written in any combination of one or more programming languages. These program code may be provided to a processor or controller of a general purpose computer, special purpose computer, or other programmable data prediction apparatus, such that the program code, when executed by the processor or controller, causes the functions/operations specified in the flowchart and/or block diagram block or blocks to be implemented. The program code may execute entirely on the machine, partly on the machine, as a stand-alone software package, partly on the machine and partly on a remote machine or entirely on the remote machine or server.
In the context of this disclosure, a machine-readable medium may be a tangible medium that can contain, or store a program for use by or in connection with an instruction execution system, apparatus, or device. The machine-readable medium may be a machine-readable signal medium or a machine-readable storage medium. The machine-readable medium may include, but is not limited to, an electronic, magnetic, optical, electromagnetic, infrared, or semiconductor system, apparatus, or device, or any suitable combination of the foregoing. More specific examples of a machine-readable storage medium would include an electrical connection based on one or more wires, a portable computer diskette, a hard disk, a Random Access Memory (RAM), a read-only memory (ROM), an erasable programmable read-only memory (EPROM or flash memory), an optical fiber, a portable compact disc read-only memory (CD-ROM), an optical storage device, a magnetic storage device, or any suitable combination of the foregoing.
To provide for interaction with a user, the systems and techniques described here can be implemented on a computer having: a display device (e.g., a CRT (cathode ray tube) or LCD (liquid crystal display) monitor) for displaying information to a user; and a keyboard and pointing device (e.g., a mouse or trackball) by which a user can provide input to the computer. Other kinds of devices may also be used to provide for interaction with a user; for example, feedback provided to the user may be any form of sensory feedback (e.g., visual feedback, auditory feedback, or tactile feedback); and input from the user may be received in any form, including acoustic input, speech input, or tactile input.
The systems and techniques described here can be implemented in a computing system that includes a background component (e.g., as a data server), or that includes a middleware component (e.g., an application server), or that includes a front-end component (e.g., a user computer having a graphical user interface or a web browser through which a user can interact with an implementation of the systems and techniques described here), or any combination of such background, middleware, or front-end components. The components of the system can be interconnected by any form or medium of digital data communication (e.g., a communication network). Examples of communication networks include: local Area Networks (LANs), wide Area Networks (WANs), and the internet.
The computer system may include a client and a server. The client and server are typically remote from each other and typically interact through a communication network. The relationship of client and server arises by virtue of computer programs running on the respective computers and having a client-server relationship to each other. The server may be a cloud server, a server of a distributed system, or a server incorporating a blockchain.
It should be appreciated that various forms of the flows shown above may be used to reorder, add, or delete steps. For example, the steps recited in the present disclosure may be performed in parallel or sequentially or in a different order, provided that the desired results of the technical solutions of the present disclosure are achieved, and are not limited herein.
The above detailed description should not be taken as limiting the scope of the present disclosure. It will be apparent to those skilled in the art that various modifications, combinations, sub-combinations and alternatives are possible, depending on design requirements and other factors. Any modifications, equivalent substitutions and improvements made within the spirit and principles of the present disclosure are intended to be included within the scope of the present disclosure.

Claims (16)

1. A data prediction method, comprising:
determining a target image feature corresponding to an image to be predicted, wherein the image to be predicted includes an anterior segment of a target eye;
predicting, using the target image feature and an intraocular lens feature, a postoperative vault of the target eye after a target intraocular lens is implanted, wherein the intraocular lens feature is a feature preset for the target intraocular lens;
wherein the target image feature is determined by: determining a third image region corresponding to the image to be predicted, the third image region being an image region determined using a scleral spur in the anterior segment; performing feature extraction on the image to be predicted and increasing an attention weight of the third image region during feature extraction to determine an anterior segment image feature, wherein the attention weight value corresponding to the third image region of the image to be predicted is higher than the attention weight values corresponding to other image regions in the image to be predicted; and stitching the anterior segment image feature, a first image feature, and a second image feature to obtain the target image feature, wherein the first image feature is a feature of the image region corresponding to a nasal chamber angle in the anterior segment, and the second image feature is a feature of the image region corresponding to a temporal chamber angle in the anterior segment;
wherein said increasing the attention weight of the third image region comprises: calculating a first attention weight value corresponding to the image to be predicted based on the Euclidean distance from each pixel in the image to be predicted to the scleral spur; calculating a second attention weight value corresponding to the image to be predicted based on the distance from each pixel in the image to be predicted to a central intersection point, the central intersection point being the intersection of a line connecting the nasal scleral spur position and the temporal scleral spur position with the perpendicular bisector of that line; and adding the first attention weight value and the second attention weight value in the pixel dimension to obtain the attention weight value of the third image region corresponding to the image to be predicted.
2. The method of claim 1, wherein the determining the target image feature corresponding to the image to be predicted comprises:
obtaining the anterior segment image feature corresponding to the anterior segment; and
determining the target image feature using the anterior segment image feature.
3. The method of claim 2, wherein the first image feature is obtained by: obtaining a first image feature corresponding to a first image region, the first image region being the image region corresponding to the nasal chamber angle in the anterior segment;
and the second image feature is obtained by: obtaining a second image feature corresponding to a second image region, the second image region being the image region corresponding to the temporal chamber angle in the anterior segment.
4. The method of claim 2 or 3, wherein the predicting, using the target image feature and the intraocular lens feature, the postoperative vault of the target eye after the target intraocular lens is implanted comprises:
obtaining a trained postoperative vault prediction model, the postoperative vault prediction model being obtained through model training using a first sample image, sample intraocular lens features, and correspondingly annotated postoperative vault results; and
inputting the target image feature and the intraocular lens feature into the postoperative vault prediction model to obtain the postoperative vault.
5. The method of claim 3, further comprising:
determining global parameters for assisting in verifying the postoperative vault, the global parameters being parameters preset for the anterior segment; and
predicting global parameter values corresponding to the global parameters using the target image feature.
6. The method of claim 3 or 5, further comprising:
determining nasal parameters for assisting in verifying the postoperative vault, the nasal parameters being parameters preset for the nasal chamber angle; and
predicting nasal parameter values corresponding to the nasal parameters using the first image feature.
7. The method of claim 6, further comprising:
determining temporal parameters for assisting in verifying the postoperative vault, the temporal parameters being parameters preset for the temporal chamber angle; and
predicting temporal parameter values corresponding to the temporal parameters using the second image feature.
8. A data prediction apparatus comprising:
the target image feature determining unit is used for determining target image features corresponding to an image to be predicted, wherein the image to be predicted comprises a front eye section of a target eye;
the post-operation camber prediction unit is used for predicting post-operation camber of the target eye after the target intraocular lens is implanted by utilizing the target image characteristics and the intraocular lens characteristics, wherein the intraocular lens characteristics are preset characteristics aiming at the target intraocular lens;
wherein the target image feature determination unit includes: a third image region determining subunit, configured to determine a third image region corresponding to the image to be predicted, where the third image region is an image region determined by using a scleral spur in the anterior segment of the eye; an attention weight increasing subunit, configured to perform feature extraction on the image to be predicted, and increase an attention weight on the third image area during feature extraction to determine an anterior segment image feature, where an attention weight value corresponding to the third image area of the image to be predicted is higher than attention weight values corresponding to other image areas in the image to be predicted; wherein the target image feature determination unit includes: a target image feature determination subunit, the target image feature determination subunit comprising: the characteristic stitching subunit is configured to perform characteristic stitching on the anterior ocular segment image characteristic, a first image characteristic and a second image characteristic, so as to obtain the target image characteristic, where the first image characteristic is a characteristic of an image area corresponding to a nasal side atrial angle in an anterior ocular segment, and the second image characteristic is a characteristic of an image area corresponding to a temporal side atrial angle in the anterior ocular segment;
The attention weight increasing subunit is used for calculating a first attention weight value corresponding to the image to be predicted based on the Euclidean distance from the pixel point in the image to be predicted to the scleral spur; calculating a second attention weight value corresponding to the image to be predicted based on the distance between the pixel point in the image to be predicted and a central intersection point, wherein the central intersection point is an intersection point between a connecting line of a scleral spur position on the nasal side and a scleral spur position on the temporal side and a perpendicular bisector of the connecting line; and adding the first attention weight value and the second attention weight value in a pixel dimension to obtain the attention weight value of the third image area corresponding to the image to be predicted.
9. The apparatus according to claim 8, wherein the target image feature determination unit includes:
the anterior segment image feature determining unit is used for obtaining anterior segment image features corresponding to the anterior segments;
and the target image characteristic determining subunit is used for determining the target image characteristic by utilizing the anterior ocular segment image characteristic.
10. The apparatus of claim 9, wherein the target image feature determination subunit comprises:
A first image feature obtaining subunit, configured to obtain a first image feature corresponding to a first image region, where the first image region is an image region corresponding to a nasal side atrial angle in the anterior segment of the eye;
and the second image feature obtaining subunit is used for obtaining a second image feature corresponding to a second image region, wherein the second image region is an image region corresponding to a temporal atrial angle in the anterior ocular segment.
11. The apparatus of claim 9 or 10, wherein the postoperative vault prediction unit comprises:
a postoperative vault prediction model obtaining subunit, configured to obtain a trained postoperative vault prediction model, wherein the postoperative vault prediction model is obtained by performing model training with a first sample image, sample intraocular lens features, and a corresponding labeled postoperative vault result;
and a postoperative vault obtaining subunit, configured to input the target image features and the intraocular lens features into the postoperative vault prediction model to obtain the postoperative vault.
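The prediction step in claim 11 is, in essence, a learned mapping from the concatenation of the target image features and the intraocular lens features to a single vault value. A minimal sketch follows; the single linear layer, the feature sizes, and the random weights are illustrative assumptions — the claim does not fix any model architecture.

```python
import numpy as np

class VaultPredictor:
    """Stand-in for the trained postoperative-vault prediction model of
    claim 11; the linear head and its random weights are illustrative."""

    def __init__(self, n_img_feat: int, n_iol_feat: int, seed: int = 0):
        rng = np.random.default_rng(seed)
        # In the patent this would be learned from first sample images,
        # sample IOL features, and labeled postoperative vault results.
        self.w = rng.normal(size=n_img_feat + n_iol_feat)
        self.b = 0.0

    def predict(self, img_feat: np.ndarray, iol_feat: np.ndarray) -> float:
        # Claim 11: feed the target image features together with the
        # intraocular lens features into the model to obtain the vault.
        x = np.concatenate([img_feat, iol_feat])
        return float(self.w @ x + self.b)

# Hypothetical feature sizes, for illustration only.
model = VaultPredictor(n_img_feat=128, n_iol_feat=4)
vault = model.predict(np.zeros(128), np.ones(4))
```

The design point the claim captures is only the interface: image-derived features and lens-parameter features enter one model, and a single scalar vault estimate comes out.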
12. The apparatus of claim 10, further comprising:
a global parameter determining unit, configured to determine a global parameter for assisting in verifying the postoperative vault, wherein the global parameter is a parameter preset for the anterior segment;
and a global parameter value prediction unit, configured to predict the global parameter value corresponding to the global parameter by using the target image features.
13. The apparatus of claim 10 or 12, further comprising:
a nasal-side parameter determining unit, configured to determine a nasal-side parameter for assisting in verifying the postoperative vault, the nasal-side parameter being a parameter preset for the nasal-side anterior chamber angle;
and a nasal-side parameter value prediction unit, configured to predict a nasal-side parameter value corresponding to the nasal-side parameter by using the first image feature.
14. The apparatus of claim 13, further comprising:
a temporal-side parameter determining unit, configured to determine a temporal-side parameter for assisting in verifying the postoperative vault prediction, the temporal-side parameter being a parameter preset for the temporal-side anterior chamber angle;
and a temporal-side parameter value prediction unit, configured to predict a temporal-side parameter value corresponding to the temporal-side parameter by using the second image feature.
15. An electronic device, comprising:
at least one processor; and
a memory communicatively coupled to the at least one processor; wherein,
the memory stores instructions executable by the at least one processor to enable the at least one processor to perform the method of any one of claims 1 to 7.
16. A non-transitory computer readable storage medium storing computer instructions for causing a computer to perform the method of any one of claims 1 to 7.
CN202210137171.3A 2022-02-15 2022-02-15 Data prediction method and device, electronic equipment and storage medium Active CN114463319B (en)

Priority Applications (2)

Application Number Priority Date Filing Date Title
CN202210137171.3A CN114463319B (en) 2022-02-15 2022-02-15 Data prediction method and device, electronic equipment and storage medium
PCT/CN2022/132040 WO2023155509A1 (en) 2022-02-15 2022-11-15 Data prediction method and apparatus, electronic device, and storage medium

Applications Claiming Priority (1)

Application Number Priority Date Filing Date Title
CN202210137171.3A CN114463319B (en) 2022-02-15 2022-02-15 Data prediction method and device, electronic equipment and storage medium

Publications (2)

Publication Number Publication Date
CN114463319A (en) 2022-05-10
CN114463319B (en) 2024-01-02

Family

ID=81413319

Family Applications (1)

Application Number Title Priority Date Filing Date
CN202210137171.3A Active CN114463319B (en) 2022-02-15 2022-02-15 Data prediction method and device, electronic equipment and storage medium

Country Status (2)

Country Link
CN (1) CN114463319B (en)
WO (1) WO2023155509A1 (en)

Families Citing this family (1)

* Cited by examiner, † Cited by third party
Publication number Priority date Publication date Assignee Title
CN114463319B (en) * 2022-02-15 2024-01-02 北京百度网讯科技有限公司 Data prediction method and device, electronic equipment and storage medium

Citations (5)

* Cited by examiner, † Cited by third party
Publication number Priority date Publication date Assignee Title
CN113017831A (en) * 2021-02-26 2021-06-25 上海鹰瞳医疗科技有限公司 Method and equipment for predicting arch height after artificial lens implantation
CN113499033A (en) * 2021-05-20 2021-10-15 北京鹰瞳科技发展股份有限公司 Medical data method and system
CN113642431A (en) * 2021-07-29 2021-11-12 北京百度网讯科技有限公司 Training method and device of target detection model, electronic equipment and storage medium
CN113850762A (en) * 2021-09-02 2021-12-28 南方科技大学 Eye disease identification method, device, equipment and storage medium based on anterior segment image
CN113886996A (en) * 2021-11-10 2022-01-04 杭州明视康眼科医院有限公司 Postoperative arch height prediction method for intraocular lens implantation with lens and electronic equipment

Family Cites Families (3)

* Cited by examiner, † Cited by third party
Publication number Priority date Publication date Assignee Title
DE102011075149A1 (en) * 2011-05-03 2011-12-08 *Acri.Tec Gmbh Method for preoperative prediction of postoperative deep horizontal position of intraocular lens in patient's eye, involves connecting main reference points by straight edge-connecting lines to form forecasting network in coordinate system
WO2019026862A1 (en) * 2017-07-31 2019-02-07 株式会社ニデック Intraocular lens power determination device and intraocular lens power determination program
CN114463319B (en) * 2022-02-15 2024-01-02 北京百度网讯科技有限公司 Data prediction method and device, electronic equipment and storage medium

Also Published As

Publication number Publication date
WO2023155509A1 (en) 2023-08-24
CN114463319A (en) 2022-05-10

Similar Documents

Publication Publication Date Title
US11766293B2 (en) Systems and methods for intraocular lens selection
US10426551B2 (en) Personalized refractive surgery recommendations for eye patients
JP2019034218A (en) Surgical guidance and surgical planning software for astigmatism treatment
CN114463319B (en) Data prediction method and device, electronic equipment and storage medium
US20180296320A1 (en) Forecasting cataract surgery effectiveness
WO2020019286A1 (en) Blepharoptosis detection method and system
US20230078161A1 (en) Machine learning-supported pipeline for dimensioning an intraocular lens
US20230037772A1 (en) Method for determining lens and apparatus using the method
EP3962424A1 (en) Cloud based system cataract treatment database and algorithm system
US10357154B2 (en) Systems and methods for providing astigmatism correction
US20200163727A1 (en) Cloud based system cataract treatment database and algorithm system
CN113330522A (en) System and method for selecting intraocular lens using frontal view zone prediction
KR102542015B1 (en) Method for determining lens and apparatus using the method
Bullimore et al. Correction of low levels of astigmatism
CN115272152A (en) Method, device, equipment and storage medium for generating confrontation medical image
US11488725B2 (en) Automated intraocular lens selection process
CN117238514B (en) Intraocular lens refractive power prediction method, system, equipment and medium
TWI673034B (en) Methods and system for detecting blepharoptosis
KR102323355B1 (en) Method for predecting a vaulting value and apparatus using the method
TWI761931B (en) Method for determining lens and apparatus using the method
CN113012281B (en) Determination method and device for human body model, electronic equipment and storage medium
CN115100380B (en) Automatic medical image identification method based on eye body surface feature points
CN115908300B (en) Method, device, equipment and storage medium for heart valve calcification segmentation
WO2022023882A1 (en) Systems and methods for eye cataract removal
CN117038070A (en) Myopia diopter development prediction method

Legal Events

Date Code Title Description
PB01 Publication
SE01 Entry into force of request for substantive examination
GR01 Patent grant