CN115099158A - Construction method of parameter prediction model - Google Patents

Construction method of parameter prediction model

Info

Publication number
CN115099158A
CN115099158A
Authority
CN
China
Prior art keywords
neural network
output
network model
input
training
Prior art date
Legal status (The legal status is an assumption and is not a legal conclusion. Google has not performed a legal analysis and makes no representation as to the accuracy of the status listed.)
Pending
Application number
CN202210848932.6A
Other languages
Chinese (zh)
Inventor
袁冬青
汤福南
杨春花
张晖
汪缨
刘庆淮
Current Assignee (The listed assignees may be inaccurate. Google has not performed a legal analysis and makes no representation or warranty as to the accuracy of the list.)
Jiangsu Province Hospital First Affiliated Hospital Of Nanjing Medical University
Original Assignee
Jiangsu Province Hospital First Affiliated Hospital Of Nanjing Medical University
Priority date (The priority date is an assumption and is not a legal conclusion. Google has not performed a legal analysis and makes no representation as to the accuracy of the date listed.)
Filing date
Publication date
Application filed by Jiangsu Province Hospital First Affiliated Hospital Of Nanjing Medical University filed Critical Jiangsu Province Hospital First Affiliated Hospital Of Nanjing Medical University
Priority to CN202210848932.6A
Publication of CN115099158A
Legal status: Pending

Links

Images

Classifications

    • G PHYSICS
    • G16 INFORMATION AND COMMUNICATION TECHNOLOGY [ICT] SPECIALLY ADAPTED FOR SPECIFIC APPLICATION FIELDS
    • G16H HEALTHCARE INFORMATICS, i.e. INFORMATION AND COMMUNICATION TECHNOLOGY [ICT] SPECIALLY ADAPTED FOR THE HANDLING OR PROCESSING OF MEDICAL OR HEALTHCARE DATA
    • G16H 20/00 ICT specially adapted for therapies or health-improving plans, e.g. for handling prescriptions, for steering therapy or for monitoring patient compliance
    • G16H 20/40 ICT specially adapted for therapies or health-improving plans, e.g. for handling prescriptions, for steering therapy or for monitoring patient compliance relating to mechanical, radiation or invasive therapies, e.g. surgery, laser therapy, dialysis or acupuncture
    • G PHYSICS
    • G06 COMPUTING; CALCULATING OR COUNTING
    • G06F ELECTRIC DIGITAL DATA PROCESSING
    • G06F 30/00 Computer-aided design [CAD]
    • G06F 30/20 Design optimisation, verification or simulation
    • G06F 30/27 Design optimisation, verification or simulation using machine learning, e.g. artificial intelligence, neural networks, support vector machines [SVM] or training a model
    • G PHYSICS
    • G06 COMPUTING; CALCULATING OR COUNTING
    • G06N COMPUTING ARRANGEMENTS BASED ON SPECIFIC COMPUTATIONAL MODELS
    • G06N 3/00 Computing arrangements based on biological models
    • G06N 3/02 Neural networks
    • G06N 3/08 Learning methods
    • G06N 3/084 Backpropagation, e.g. using gradient descent
    • Y GENERAL TAGGING OF NEW TECHNOLOGICAL DEVELOPMENTS; GENERAL TAGGING OF CROSS-SECTIONAL TECHNOLOGIES SPANNING OVER SEVERAL SECTIONS OF THE IPC; TECHNICAL SUBJECTS COVERED BY FORMER USPC CROSS-REFERENCE ART COLLECTIONS [XRACs] AND DIGESTS
    • Y02 TECHNOLOGIES OR APPLICATIONS FOR MITIGATION OR ADAPTATION AGAINST CLIMATE CHANGE
    • Y02P CLIMATE CHANGE MITIGATION TECHNOLOGIES IN THE PRODUCTION OR PROCESSING OF GOODS
    • Y02P 90/00 Enabling technologies with a potential contribution to greenhouse gas [GHG] emissions mitigation
    • Y02P 90/30 Computing systems specially adapted for manufacturing

Landscapes

  • Engineering & Computer Science (AREA)
  • Physics & Mathematics (AREA)
  • Theoretical Computer Science (AREA)
  • Health & Medical Sciences (AREA)
  • Evolutionary Computation (AREA)
  • General Physics & Mathematics (AREA)
  • Artificial Intelligence (AREA)
  • Medical Informatics (AREA)
  • Software Systems (AREA)
  • General Engineering & Computer Science (AREA)
  • General Health & Medical Sciences (AREA)
  • Molecular Biology (AREA)
  • Public Health (AREA)
  • Biomedical Technology (AREA)
  • Biophysics (AREA)
  • Computational Linguistics (AREA)
  • Data Mining & Analysis (AREA)
  • Urology & Nephrology (AREA)
  • Epidemiology (AREA)
  • Computing Systems (AREA)
  • Life Sciences & Earth Sciences (AREA)
  • Surgery (AREA)
  • Mathematical Physics (AREA)
  • Primary Health Care (AREA)
  • Nuclear Medicine, Radiotherapy & Molecular Imaging (AREA)
  • Computer Vision & Pattern Recognition (AREA)
  • Computer Hardware Design (AREA)
  • Geometry (AREA)
  • Eye Examination Apparatus (AREA)
  • Management, Administration, Business Operations System, And Electronic Commerce (AREA)
  • Prostheses (AREA)

Abstract

The invention discloses a method for constructing a parameter prediction model, which comprises the following steps: step 1: selecting input values and expected output values for the model training stage; step 2: constructing a BP neural network model; step 3: training the model to obtain a BP neural network model suitable for SMILE surgical cutting thickness prediction. The advantage is that the BP neural network performs deep learning on existing mass data and can realize the prediction function independently of the SMILE surgical machine. Therefore, on the basis of ensuring prediction accuracy, the invention can fundamentally achieve scientific and rapid screening of full-femtosecond surgery indications and save clinical time.

Description

Construction method of parameter prediction model
Technical Field
The invention relates to a construction method of a parameter prediction model.
Background
At present, femtosecond lasers are used ever more widely in corneal refractive surgery for myopia; in particular, femtosecond laser small incision lenticule extraction (SMILE) has shown good safety, effectiveness, stability and predictability. However, postoperative clinical outcomes still differ between patients, so how to achieve true individualization remains an open question, and many unknown factors remain in the design of surgical parameters. In recent years, refractive surgeons have continuously explored and optimized the design of SMILE surgical parameters on the premise of following the basic theory of visual optics, in order to ensure the accuracy of the surgery and improve patients' postoperative visual quality.
SMILE is a relatively new form of corneal refractive surgery and is still in an exploratory stage of development; most importantly, experience in setting its surgical parameters is limited. In order to reduce damage to corneal tissue, achieve individualized surgical planning and obtain optimal surgical results, the surgical parameter settings and the associated surgical planning must be carefully analysed, optimized and, where necessary, adjusted.
The BP neural network (back-propagation neural network) is a multi-layer feedforward neural network based on the error back-propagation algorithm; it has good nonlinear function-approximation capability and can achieve accurate prediction.
The key parameters of SMILE surgery are mainly the cut thickness and the residual basal thickness. The cut thickness is influenced by the patient's sphere power (SPH), cylinder power (CYL), corneal curvature radius (K) and stromal lens diameter (Diameter), while the residual basal thickness is influenced by the corneal thickness, the cut thickness, the basal thickness and the cap thickness.
Because the key parameters of SMILE surgery are influenced by the above factors, clinical judgments such as the cut thickness, the residual basal thickness and whether the patient meets the surgical conditions are difficult to obtain at a glance.
Usually, the cut thickness and the residual basal thickness are obtained by entering the various parameters into the SMILE surgical machine itself, but this occupies the machine's time and does not allow rapid calculation and judgment during preoperative screening.
Therefore, if partial data provided by the manufacturer and patient data accumulated over a long period are used as prior conditions to establish a scientific SMILE surgical parameter prediction method that rapidly predicts the cut thickness and the residual basal thickness, clinicians can be helped to judge whether a patient meets the surgical conditions, time on the SMILE surgical machine can be saved, and better economic and social benefits can be obtained.
Disclosure of Invention
In view of the above defects, the invention provides a method for constructing a parameter prediction model, which simulates and predicts the corneal ablation formula of SMILE surgery based on a BP neural network machine-learning approach; by inputting a patient's current surgical data, it calculates the corneal thickness to be ablated in SMILE surgery, so as to predict the feasibility of surgery for the patient, assist in evaluating the surgical risk and guide the surgeon in selecting a personalized surgical plan.
The technical scheme adopted by the invention is as follows:
a construction method of a parameter prediction model comprises the following steps:
step 1: according to official data and historical patient data provided by a manufacturer, 4 indexes of the Sphere Power (SPH), the cylinder power (CYL), the corneal curvature radius (K) and the Diameter (Diameter) of a stromal lens of a patient are selected as input values in a model training stage, and cutting thickness values Y corresponding to the input quantities one by one are selected as expected output values in the model training stage;
step 2: constructing a BP neural network model according to the input quantity and the expected output quantity in the step 1, wherein the BP neural network model comprises three layers of feedforward neural network structures, namely an input layer, a hidden layer and an output layer, the input index of the input layer is the input quantity selected in the step 1, and the output index of the output layer is the expected output quantity; setting an expected error W according to the actual prediction precision requirement;
the excitation functions of the hidden layer and the output layer are selected as tansig, the network training function is trainlm, and the network performance function is mse;
the BP neural network model adopts an S-shaped transfer function logsig with the expression of
Figure BDA0003752528010000021
The network weights and thresholds are continuously adjusted by back-propagating the error function
E = (1/2) * Σ_i (T_i − Q_i)^2
until the error function E becomes smaller than the expected error W, where T_i is the expected output and Q_i is the output computed by the network;
the number of neurons of the BP neural network model, the number of neurons L of the hidden layer, is determined by referring to the following formula:
Figure BDA0003752528010000023
c is the number of nodes of the input layer, b is the number of nodes of the output layer, and a is a constant between [1 and 10 ];
Step 3: training the BP neural network model in the step 2; the specific training steps are as follows:
step 31, generating an input vector P1 by using the data of the 4 indexes selected in the step 1; generating an output vector T1 with the data of the desired output values selected in step 1, the output vector T1 being the desired output vector;
P1 = [S1, S2, S3, ..., Sn; C1, C2, C3, ..., Cn; K1, K2, K3, ..., Kn; D1, D2, D3, ..., Dn]; T1 = [Y1, Y2, Y3, ..., Yn];
where n is the number of historical sample data; the input vector P1 consists of the patient's sphere power (SPH), cylinder power (CYL), corneal curvature radius (K) and stromal lens diameter (Diameter), abbreviated as S, C, K and D respectively;
step 32, inputting the input vector P1 into a BP neural network model to obtain an actual output vector, wherein the actual output vector is a predicted value of the cutting thickness value Y;
step 33, inputting the output vector T1 into a BP neural network model, and calculating the root mean square error between the predicted value of the cutting thickness value Y and the expected value;
step 34, taking the root mean square error as input data of a BP neural network error back propagation algorithm, and performing cyclic reciprocating training on a BP neural network model until the error between the predicted value and the expected value of the output cutting thickness value Y is smaller than a set expected error W, completing model training, and storing the BP neural network model;
and step 35, obtaining a BP neural network model suitable for SMILE surgery cutting thickness prediction.
The invention is beneficial for screening SMILE surgical indications; in particular, for doctors in the early stage of performing SMILE surgery, the surgical parameters of patients can be obtained conveniently and effectively without resorting to verification on a Zeiss VISUMAX machine, assisting doctors in selecting a safe and reasonable surgical approach.
Preferably, in step 2, the hidden layer of the BP neural network model may have one or more layers.
Compared with the prior art, the invention has the following beneficial effects:
in the prior art, the operation parameters of a patient can be verified and judged only by a manufacturer machine, so that operation time is possibly occupied, and the efficiency is low. In addition, the table look-up method has certain feasibility, but the table look-up method is only effective on discrete input data and has low execution efficiency under the condition of multi-dimensional parameter input, and the network after deep learning training reaches expected errors can realize rapid and accurate data prediction on any input parameter in a reasonable range, rapidly judge whether a patient meets SMILE surgery conditions, and save clinical time.
The invention performs deep learning on existing mass data based on the BP neural network and can realize the prediction function independently of the SMILE surgical machine. Therefore, on the basis of ensuring prediction accuracy, the invention can fundamentally achieve scientific and rapid screening of full-femtosecond surgery indications and save clinical time.
Drawings
Fig. 1 is a flowchart of a method for constructing a parameter prediction model according to the present invention.
Fig. 2 is a flowchart of a method for predicting SMILE surgical parameters based on a BP neural network model according to embodiment 1 of the present invention.
Fig. 3 is a schematic diagram of a BP neural network configuration for predicting the cutting thickness of the SMILE surgery according to embodiment 1 of the present invention.
Fig. 4 is a training result of the fitting degree of the training samples.
Fig. 5 is a training result of the fitting degree of the validation samples.
Fig. 6 is a training result of the degree of fitting of the test samples.
Fig. 7 is a training result of the fitting degree of all samples.
Fig. 8 is a schematic diagram of the training effect of the BP neural network model of step 2 as a function of the number of iterations.
Detailed Description
In order to make the objects, technical solutions and advantages of the present invention more apparent, the present invention is further described in detail below with reference to fig. 1 to 8 and embodiments. It should be understood that the specific embodiments described herein are merely illustrative of the invention and are not intended to limit the invention. In addition, the technical features involved in the respective embodiments of the present invention described below may be combined with each other as long as they do not conflict with each other.
The present invention is further described in detail below; tansig, logsig and trainlm used herein are conventional calculation formulas or functions.
As shown in fig. 1, a method for constructing a parameter prediction model specifically includes the following steps:
step 1: according to official data and historical patient data provided by a manufacturer, 4 indexes of the Sphere Power (SPH), the cylinder power (CYL), the corneal curvature radius (K) and the Diameter (Diameter) of a stromal lens of a patient are selected as input values in a model training stage, and cutting thickness values Y corresponding to the input quantities one by one are selected as expected output values in the model training stage;
Step 2: constructing a BP neural network model according to the input quantity and the expected output quantity in the step 1, wherein the BP neural network model comprises three layers of feedforward neural network structures, namely an input layer, a hidden layer and an output layer, the input index of the input layer is the input quantity selected in the step 1, and the output index of the output layer is the expected output quantity; setting an expected error W according to the actual prediction precision requirement;
the hidden layer and the output layer excitation functions are selected to be tansig, the network training function is trainlm, and the network performance function is mse;
the BP neural network model adopts an S-shaped transfer function logsig with the expression of
Figure BDA0003752528010000051
The network weights and thresholds are continuously adjusted by back-propagating the error function
E = (1/2) * Σ_i (T_i − Q_i)^2
until the error function E becomes smaller than the expected error W, where T_i is the expected output and Q_i is the output computed by the network;
the number of neurons of the BP neural network model, the number of neurons L of the hidden layer, is determined by referring to the following formula:
Figure BDA0003752528010000053
c is the number of nodes of the input layer, b is the number of nodes of the output layer, and a is a constant between [1 and 10 ];
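For readability, the three formulas above can be written out in a short Python/NumPy sketch (illustrative only and not part of the patent; the function names logsig, tansig, error_function and hidden_neurons are chosen here just for this sketch):

```python
import numpy as np

def logsig(x):
    """S-shaped (sigmoid) transfer function: logsig(x) = 1 / (1 + e^(-x))."""
    return 1.0 / (1.0 + np.exp(-x))

def tansig(x):
    """Hyperbolic-tangent sigmoid used as the hidden/output excitation function."""
    return np.tanh(x)

def error_function(T, Q):
    """Back-propagated error E = 1/2 * sum_i (T_i - Q_i)^2."""
    T, Q = np.asarray(T), np.asarray(Q)
    return 0.5 * np.sum((T - Q) ** 2)

def hidden_neurons(c, b, a):
    """Empirical sizing rule L = sqrt(c + b) + a, with a a constant in [1, 10]."""
    return int(round(np.sqrt(c + b) + a))

# With c = 4 input nodes, b = 1 output node and a = 3.764 (the value used in
# embodiment 1 below), the rule gives L = 6 hidden neurons.
print(hidden_neurons(c=4, b=1, a=3.764))  # -> 6
```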
Step 3: training the BP neural network model in the step 2; the specific training steps are as follows:
step 31, generating an input vector P1 by using the data of the 4 indexes selected in the step 1; generating an output vector T1 with the data of the desired output values selected in step 1, the output vector T1 being the desired output vector;
P1 = [S1, S2, S3, ..., Sn; C1, C2, C3, ..., Cn; K1, K2, K3, ..., Kn; D1, D2, D3, ..., Dn]; T1 = [Y1, Y2, Y3, ..., Yn];
where n is the number of historical sample data, and n is 13188; the input vector P1 consists of the patient's sphere power (SPH), cylinder power (CYL), corneal curvature radius (K) and stromal lens diameter (Diameter), abbreviated as S, C, K and D respectively;
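The layout of P1 and T1 can be illustrated with a minimal NumPy sketch (the numeric values below are made-up placeholders, not patient data; in the patent n = 13188 historical samples are used):

```python
import numpy as np

S = [-4.00, -5.25, -3.50]   # sphere power (SPH), dioptres (placeholder values)
C = [-0.50, -1.00, -0.75]   # cylinder power (CYL), dioptres
K = [7.80, 7.65, 7.90]      # corneal curvature radius, mm
D = [6.5, 6.5, 6.6]         # stromal lens diameter, mm
Y = [85.0, 104.0, 78.0]     # cut thickness, micrometres (placeholder values)

P1 = np.array([S, C, K, D])          # shape (4, n): one column per patient
T1 = np.array(Y).reshape(1, -1)      # shape (1, n): desired output vector

print(P1.shape, T1.shape)            # (4, 3) (1, 3) for this toy example
```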
step 32, inputting the input vector P1 into a BP neural network model to obtain an actual output vector, wherein the actual output vector is a predicted value of the cutting thickness value Y;
step 33, inputting the output vector T1 into a BP neural network model, and calculating the root mean square error between the predicted value and the expected value of the cutting thickness value Y;
step 34, taking the root mean square error as input data of a BP neural network error back propagation algorithm, carrying out circular reciprocating training on a BP neural network model until the error between the predicted value and the expected value of the output cutting thickness value Y is smaller than a set expected error W, finishing model training, and storing the BP neural network model;
and step 35, obtaining a BP neural network model suitable for SMILE operation cutting thickness prediction.
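The training loop of steps 31-35 can be sketched as follows. This is not the patented implementation: the patent relies on MATLAB's tansig/trainlm/mse toolbox functions, whereas this self-contained Python/NumPy sketch uses plain gradient-descent back-propagation, synthetic stand-in data, and an assumed scaling of inputs and targets (needed because a tansig output layer is used):

```python
import numpy as np

rng = np.random.default_rng(0)

# Synthetic stand-in for the historical samples (rows of P1: S, C, K, D, already scaled).
n = 2000
P1 = rng.uniform(-1.0, 1.0, size=(4, n))
T1 = np.array([[30.0, 8.0, -5.0, 12.0]]) @ P1 + 60.0          # stand-in cut thickness Y (um)

# Scale the targets to about [-0.8, 0.8] so the tansig output layer can represent them.
t_min, t_max = T1.min(), T1.max()
T1s = 1.6 * (T1 - t_min) / (t_max - t_min) - 0.8

# Three-layer network: 4 inputs, L hidden tansig neurons, 1 tansig output neuron.
L = 6
W1 = rng.normal(0, 0.5, (L, 4)); b1 = np.zeros((L, 1))
W2 = rng.normal(0, 0.5, (1, L)); b2 = np.zeros((1, 1))

def forward(X):
    H = np.tanh(W1 @ X + b1)          # hidden layer (tansig)
    return np.tanh(W2 @ H + b2), H    # output layer (tansig)

W_goal = 1e-3                         # expected error W (mse goal)
lr = 0.05
for epoch in range(5000):
    Q, H = forward(P1)                # step 32: actual (scaled) output
    err = Q - T1s
    mse = np.mean(err ** 2)           # step 33: error between prediction and target
    if mse < W_goal:                  # step 34: stop once the error is below W
        break
    # Back-propagate the error through both tansig layers (d tanh = 1 - tanh^2).
    d_out = (2.0 / n) * err * (1.0 - Q ** 2)
    dW2 = d_out @ H.T;  db2 = d_out.sum(axis=1, keepdims=True)
    d_hid = (W2.T @ d_out) * (1.0 - H ** 2)
    dW1 = d_hid @ P1.T; db1 = d_hid.sum(axis=1, keepdims=True)
    W2 -= lr * dW2; b2 -= lr * db2
    W1 -= lr * dW1; b1 -= lr * db1

# Undo the target scaling to express predictions in micrometres again (step 35).
Y_pred = (forward(P1)[0] + 0.8) / 1.6 * (t_max - t_min) + t_min
print(epoch, mse, float(np.sqrt(np.mean((Y_pred - T1) ** 2))))
```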
The application example of the parameter prediction model of the present embodiment is as follows:
example 1
As shown in fig. 2, the present embodiment provides a method for predicting SMILE surgical parameters based on a BP neural network model, which specifically includes the following steps:
step 1: according to official data and historical patient data provided by a manufacturer, 4 indexes of the Sphere Power (SPH), the cylinder power (CYL), the corneal curvature radius (K) and the Diameter (Diameter) of a stromal lens of a patient are selected as input values in a model training stage, and cutting thickness values Y corresponding to the input quantities one by one are selected as expected output values in the model training stage;
Step 2: constructing a BP neural network model according to the input quantity and the expected output quantity in the step 1, wherein the BP neural network model comprises three layers of feedforward neural network structures, namely an input layer, a hidden layer and an output layer, the input index of the input layer is the input quantity selected in the step 1, and the output index of the output layer is the expected output quantity; setting an expected error W according to the actual prediction precision requirement;
the hidden layer and the output layer excitation functions are selected to be tansig, the network training function is trainlm, and the network performance function is mse;
the BP neural network model adopts an S-shaped transfer function logsig with the expression of
Figure BDA0003752528010000061
The network weights and thresholds are continuously adjusted by back-propagating the error function
E = (1/2) * Σ_i (T_i − Q_i)^2
until the error function E becomes smaller than the expected error W, where T_i is the expected output and Q_i is the output computed by the network;
the number of neurons of the BP neural network model, the number of neurons L of the hidden layer, is determined by referring to the following formula:
Figure BDA0003752528010000063
c is the number of nodes of the input layer, b is the number of nodes of the output layer, and a is a constant between [1 and 10 ];
Step 3: training the BP neural network model in the step 2; the specific training steps are as follows:
step 31, generating an input vector P1 by using the data of the 4 indexes selected in the step 1; generating an output vector T1 with the data of the desired output values selected in step 1, the output vector T1 being the desired output vector;
P1 = [S1, S2, S3, ..., Sn; C1, C2, C3, ..., Cn; K1, K2, K3, ..., Kn; D1, D2, D3, ..., Dn];
T1 = [Y1, Y2, Y3, ..., Yn];
where n is the number of historical sample data, and n is 13188; the input vector P1 consists of the patient's sphere power (SPH), cylinder power (CYL), corneal curvature radius (K) and stromal lens diameter (Diameter), abbreviated as S, C, K and D respectively;
step 32, inputting the input vector P1 into a BP neural network model to obtain an actual output vector, wherein the actual output vector is a predicted value of the cutting thickness value Y;
step 33, inputting the output vector T1 into a BP neural network model, and calculating the root mean square error between the predicted value of the cutting thickness value Y and the expected value;
step 34, taking the root mean square error as input data of a BP neural network error back propagation algorithm, and performing cyclic reciprocating training on a BP neural network model until the error between the predicted value and the expected value of the output cutting thickness value Y is smaller than a set expected error W, completing model training, and storing the BP neural network model;
and step 35, obtaining a BP neural network model suitable for SMILE operation cutting thickness prediction.
Step 4, according to the current surgical data of the patient to be predicted, selecting the 4 indexes S, C, K and D as the input values for the model application stage, and generating an input vector P2 from the data of the selected input values,
P2 = [S1; C1; K1; D1];
step 5, model application, namely inputting the input vector P2 in the step 4 into the BP neural network model in the step 35 to obtain an output vector T2, wherein the output vector T2 is a predicted value of the patient cutting thickness value Y;
step 6, calculating the predicted value of the residual basal thickness according to the following formula:
residual basal thickness = corneal thickness − cap thickness − cut thickness value Y,
where the cut thickness value Y in the formula is the predicted value obtained for the patient in step 5, and the corneal thickness, basal thickness and cap thickness are taken from the current patient's surgical data;
step 7, if the residual basal thickness is greater than or equal to 300 μm, the surgical risk is indicated to be small and the operation may be considered; if the residual basal thickness is less than 280 μm, the surgical risk is high and the operation is not recommended; if the residual basal thickness is between 280 μm and 300 μm, the operation may be performed under certain conditions.
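Once the cut thickness has been predicted, steps 6 and 7 reduce to simple arithmetic and a threshold rule, as in the following sketch (the function names and numeric example are assumptions made for illustration; in practice the predicted cut thickness comes from the BP neural network model of step 5):

```python
def residual_basal_thickness(corneal_thickness_um, cap_thickness_um, cut_thickness_um):
    """Step 6: residual basal thickness = corneal thickness - cap thickness - cut thickness Y."""
    return corneal_thickness_um - cap_thickness_um - cut_thickness_um

def surgery_risk(residual_um):
    """Step 7: threshold rule on the residual basal thickness (in micrometres)."""
    if residual_um >= 300.0:
        return "low risk: surgery may be considered"
    if residual_um < 280.0:
        return "high risk: surgery not recommended"
    return "borderline (280-300 um): surgery only under certain conditions"

# Toy example: predicted cut thickness 95 um, corneal thickness 540 um, cap thickness 120 um.
y_pred = 95.0
residual = residual_basal_thickness(540.0, 120.0, y_pred)
print(residual, "->", surgery_risk(residual))   # 325.0 -> low risk: surgery may be considered
```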
In embodiment 1 of the present invention, when the residual basal thickness is between 280 μm and 300 μm, surgery can still be performed under certain conditions. In this case, the remaining corneal stromal thickness can be brought into the operable range by appropriately adjusting the preset Diameter, base thickness, cap diameter and cap thickness. However, these preset parameters can only be adjusted within a certain range; for example, if the Diameter is too small, the introduced higher-order aberrations may be too large and impair visual quality. The patient's preset parameters therefore need to be chosen reasonably so that a safer, more effective and more reasonable surgical plan can be selected.
As shown in fig. 3, in step 2 of this embodiment the BP neural network model may have one or more hidden layers; the excitation functions of the hidden layer and the output layer are selected as tansig, the network training function is trainlm, the network performance function is mse, the number of training epochs is set to 5000, and the expected error W is set to 10^-3 according to the actual prediction accuracy requirement.
In this embodiment, the BP neural network model takes the four parameter indexes as input and the cut thickness value as output; the number of input-layer nodes c is 4, the number of output-layer nodes b is 1, and the number of hidden-layer neurons L is determined with reference to the formula
L = sqrt(c + b) + a,
where a is a constant in [1, 10]; the number of hidden-layer neurons is therefore ultimately determined by the effect of the network training.
For example, a is taken as 3.764, so the number of hidden-layer neurons L is set to 6. The root mean square error between the predicted and expected cut thickness values is used as the input of the BP neural network error back-propagation algorithm, and the BP neural network model is trained iteratively until the error between the predicted output and the actual value is less than the set threshold of 10^-3, at which point training ends.
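Since the final choice of L depends on how well the trained network performs, one practical way to settle it is to train a small network for each candidate size and compare the validation error. The sketch below uses scikit-learn's MLPRegressor and synthetic data purely as a stand-in (an assumption; the patent itself uses the MATLAB tansig/trainlm network), scanning part of the range suggested by L = sqrt(c + b) + a with a in [1, 10]:

```python
import numpy as np
from sklearn.neural_network import MLPRegressor
from sklearn.model_selection import cross_val_score

rng = np.random.default_rng(0)
X = rng.uniform(-1, 1, size=(500, 4))                  # synthetic S, C, K, D (already scaled)
y = X @ np.array([30.0, 8.0, -5.0, 12.0]) + 60.0       # synthetic stand-in cut thickness

for L in range(3, 9):                                  # candidate hidden-layer sizes
    model = MLPRegressor(hidden_layer_sizes=(L,), activation='tanh',
                         solver='lbfgs', max_iter=5000, random_state=0)
    mse = -cross_val_score(model, X, y, cv=3,
                           scoring='neg_mean_squared_error').mean()
    print(L, mse)                                      # pick the L with the lowest validation MSE
```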
In this embodiment, the BP neural network model in step 2 is trained in step 3, and after the training is completed, the fitting degree between the neural network and the corresponding data is measured by drawing a regression line, as shown in fig. 4, 5, 6, and 7.
Fig. 4 shows the training result for the fitting degree of the training samples, fig. 5 for the validation samples, fig. 6 for the test samples, and fig. 7 for all samples; R refers to the regression coefficient, and the closer R is to 1, the better the fit.
In this embodiment, the data for the BP neural network model of step 2 are randomly grouped: the input vector P1 and the output vector T1 are divided into three data sets, namely a training set, a validation set and a test set.
In step 3, while the BP neural network model of step 2 is being trained, its prediction effectiveness is verified with the validation-set data and tested with the test-set data; if the root mean square error of the validation and test results is larger than the expected error, training continues on the training set until a result smaller than the expected error is achieved or the preset number of iterations is reached. If the regression analysis of the results on the three data sets reaches the expected outcome, the BP neural network model is saved in .mat format.
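A minimal sketch of the random three-way split is given below. The 70/15/15 ratios are an assumption (they are common defaults); the patent only states that the data are randomly divided into a training set, a validation set and a test set:

```python
import numpy as np

def split_indices(n, train=0.70, val=0.15, seed=0):
    """Randomly split n sample indices into training, validation and test sets."""
    idx = np.random.default_rng(seed).permutation(n)
    n_train, n_val = int(train * n), int(val * n)
    return (idx[:n_train],                     # training set
            idx[n_train:n_train + n_val],      # validation set (used to stop training)
            idx[n_train + n_val:])             # test set

train_idx, val_idx, test_idx = split_indices(13188)
print(len(train_idx), len(val_idx), len(test_idx))     # 9231 1978 1979
```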
As shown in fig. 8, in this embodiment, the BP neural network model in step 2 is trained in step 3, and the best verification performance is achieved after the iteration number reaches 2313.
In this embodiment, the BP neural network model trained in step 3 is saved after training, giving a BP neural network model suitable for predicting the SMILE surgical cutting thickness; the trained model is stored in a .mat file. By supplying an input vector to this file, an output vector, that is, a predicted value of the cut thickness value Y, can be obtained.
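Persisting and reusing the trained network can be sketched with SciPy's MAT-file reader and writer. The stored variable names (W1, b1, W2, b2) and the two tansig layers follow the training sketch above and are an assumption, not the patent's actual .mat layout:

```python
import numpy as np
from scipy.io import savemat, loadmat

def save_model(path, W1, b1, W2, b2):
    savemat(path, {"W1": W1, "b1": b1, "W2": W2, "b2": b2})

def predict_from_mat(path, P):
    m = loadmat(path)
    H = np.tanh(m["W1"] @ P + m["b1"])           # hidden tansig layer
    return np.tanh(m["W2"] @ H + m["b2"])        # (scaled) cut-thickness prediction

# Toy round trip with random weights, just to show the .mat workflow.
rng = np.random.default_rng(0)
save_model("bp_model.mat", rng.normal(size=(6, 4)), np.zeros((6, 1)),
           rng.normal(size=(1, 6)), np.zeros((1, 1)))
P2 = np.array([[-4.0], [-0.5], [7.8], [6.5]])    # one patient's S, C, K, D (toy values)
print(predict_from_mat("bp_model.mat", P2))
```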
After the 13188 historical sample data are fed back into the stored network model, a back-prediction result is obtained. Because the predicted cut thickness value is an integer in actual use, the residuals are rounded; the distribution after rounding is shown in table 1, and the final error accuracy is (-0.00083 ± 0.561255) μm for the 13188 machine-provided data.
Table 1. Rounded residual distribution of back-predictions on the machine-provided data
(Table 1 is provided as an image in the original publication and is not reproduced here.)
In the prediction method of this embodiment, the BP neural network model trained and saved in step 3 is suitable for predicting the SMILE surgical cutting thickness. To verify the accuracy of the stored model, 4840 historical patient samples were input into it, and the prediction accuracy was checked by comparing the residuals between the predicted and actual cut thickness values; the final error accuracy is (-0.003791 ± 0.42211) μm for the 4840 actual patient data, as shown in table 2.
Table 2. Rounded residual distribution of back-predictions on actual patient data
(Table 2 is provided as an image in the original publication and is not reproduced here.)
Analysis of the results shows that most errors fall within ±1 μm, and the accuracy of the method of the invention will keep improving as the data volume grows and the BP neural network model is further optimized through training.
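The residual analysis reported in tables 1 and 2 amounts to rounding the prediction residuals to whole micrometres, tabulating the rounded values, and reporting the mean and standard deviation. The sketch below uses synthetic placeholder arrays; the patent's actual figures (for example -0.00083 ± 0.561255 μm on the 13188 machine-provided samples) come from the stored model:

```python
import numpy as np
from collections import Counter

rng = np.random.default_rng(0)
y_true = rng.uniform(60, 130, size=13188)                # placeholder cut thicknesses (um)
y_pred = y_true + rng.normal(0.0, 0.55, size=13188)      # placeholder predictions

residual = y_pred - y_true
rounded = np.rint(residual).astype(int)
print(Counter(rounded.tolist()))                         # distribution after rounding
print(f"{residual.mean():.5f} +/- {residual.std():.5f} um")
```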
In step 4 of this embodiment, the BP neural network model suited to SMILE surgical cutting thickness prediction is used to calculate the predicted cut thickness value Y from the current surgical data of the patient to be predicted: a group of the patient's surgical data is input into the BP neural network model to obtain the predicted cut thickness value Y.
The above embodiments are only for illustrating the technical idea of the present invention, and the protection scope of the present invention is not limited thereby, and any modifications made on the basis of the technical scheme according to the technical idea of the present invention fall within the protection scope of the present invention.

Claims (2)

1. A construction method of a parameter prediction model is characterized by comprising the following steps:
step 1: according to official data and historical patient data provided by a manufacturer, 4 indexes of the Sphere Power (SPH), the cylinder power (CYL), the corneal curvature radius (K) and the Diameter (Diameter) of a stromal lens of a patient are selected as input values in a model training stage, and cutting thickness values Y corresponding to the input quantities one by one are selected as expected output values in the model training stage;
Step 2: constructing a BP neural network model according to the input quantity and the expected output quantity in the step 1, wherein the BP neural network model comprises three layers of feedforward neural network structures, namely an input layer, a hidden layer and an output layer, the input index of the input layer is the input quantity selected in the step 1, and the output index of the output layer is the expected output quantity; setting an expected error W according to the actual prediction precision requirement;
the excitation functions of the hidden layer and the output layer are selected as tansig, the network training function is trainlm, and the network performance function is mse;
the BP neural network model adopts an S-shaped transfer function logsig with the expression of
Figure FDA0003752527000000011
The network weights and thresholds are continuously adjusted by back-propagating the error function
E = (1/2) * Σ_i (T_i − Q_i)^2
until the error function E becomes smaller than the expected error W, where T_i is the expected output and Q_i is the output computed by the network;
the number of neurons of the BP neural network model, the number of neurons L of the hidden layer, is determined by referring to the following formula:
Figure FDA0003752527000000013
c is the number of nodes of the input layer, b is the number of nodes of the output layer, and a is a constant between [1, 10 ];
Step 3: training the BP neural network model in the step 2; the specific training steps are as follows:
step 31, generating an input vector P1 by using the data of the 4 indexes selected in the step 1; generating an output vector T1 with the data of the desired output values selected in step 1, the output vector T1 being the desired output vector;
P1 = [S1, S2, S3, ..., Sn; C1, C2, C3, ..., Cn; K1, K2, K3, ..., Kn; D1, D2, D3, ..., Dn];
T1 = [Y1, Y2, Y3, ..., Yn];
where n is the number of historical sample data; the input vector P1 consists of the patient's sphere power (SPH), cylinder power (CYL), corneal curvature radius (K) and stromal lens diameter (Diameter), abbreviated as S, C, K and D respectively;
step 32, inputting the input vector P1 into a BP neural network model to obtain an actual output vector, wherein the actual output vector is a predicted value of the cutting thickness value Y;
step 33, inputting the output vector T1 into a BP neural network model, and calculating the root mean square error between the predicted value of the cutting thickness value Y and the expected value;
step 34, taking the root mean square error as input data of a BP neural network error back propagation algorithm, and performing cyclic reciprocating training on a BP neural network model until the error between the predicted value and the expected value of the output cutting thickness value Y is smaller than a set expected error W, completing model training, and storing the BP neural network model;
and step 35, obtaining a BP neural network model suitable for SMILE surgery cutting thickness prediction.
2. The method of claim 1, wherein in step 2, the hidden layer of the BP neural network model may have one or more layers.
CN202210848932.6A 2020-12-31 2020-12-31 Construction method of parameter prediction model Pending CN115099158A (en)

Priority Applications (1)

Application Number Priority Date Filing Date Title
CN202210848932.6A CN115099158A (en) 2020-12-31 2020-12-31 Construction method of parameter prediction model

Applications Claiming Priority (2)

Application Number Priority Date Filing Date Title
CN202210848932.6A CN115099158A (en) 2020-12-31 2020-12-31 Construction method of parameter prediction model
CN202011627434.6A CN112656507B (en) 2020-12-31 2020-12-31 Method for constructing BP neural network model suitable for SMILE surgical cutting thickness prediction

Related Parent Applications (1)

Application Number Title Priority Date Filing Date
CN202011627434.6A Division CN112656507B (en) 2020-12-31 2020-12-31 Method for constructing BP neural network model suitable for SMILE surgical cutting thickness prediction

Publications (1)

Publication Number Publication Date
CN115099158A true CN115099158A (en) 2022-09-23

Family

ID=75412529

Family Applications (2)

Application Number Title Priority Date Filing Date
CN202011627434.6A Active CN112656507B (en) 2020-12-31 2020-12-31 Method for constructing BP neural network model suitable for SMILE surgical cutting thickness prediction
CN202210848932.6A Pending CN115099158A (en) 2020-12-31 2020-12-31 Construction method of parameter prediction model

Family Applications Before (1)

Application Number Title Priority Date Filing Date
CN202011627434.6A Active CN112656507B (en) 2020-12-31 2020-12-31 Method for constructing BP neural network model suitable for SMILE surgical cutting thickness prediction

Country Status (1)

Country Link
CN (2) CN112656507B (en)

Family Cites Families (5)

* Cited by examiner, † Cited by third party
Publication number Priority date Publication date Assignee Title
CA2712321C (en) * 1999-10-21 2013-07-30 Technolas Gmbh Ophthalmologische Systeme Iris recognition and tracking for optical treatment
SG11201909096WA (en) * 2017-03-31 2019-10-30 Annmarie Hipsley Systems and methods for ocular laser surgery and therapeutic treatments
CN109994195B (en) * 2019-03-22 2020-12-29 清华大学深圳研究生院 Artificial intelligence guide system for corneal crosslinking
CN114072111A (en) * 2019-05-04 2022-02-18 爱视视觉集团公司 System and method for laser surgery and therapeutic treatment of the eye
CN110338906B (en) * 2019-07-10 2020-10-30 清华大学深圳研究生院 Intelligent treatment system for photocrosslinking operation and establishment method

Also Published As

Publication number Publication date
CN112656507B (en) 2022-08-26
CN112656507A (en) 2021-04-16

Similar Documents

Publication Publication Date Title
CA2480197C (en) System and method for predictive ophthalmic correction
JP4307851B2 (en) Adaptive wavefront adjustment system and method for ophthalmic surgery
EP1613253B1 (en) Method and system related to treatment planning for vision correction
Canovas et al. Customized eye models for determining optimized intraocular lenses power
CN103970965B (en) Test run method for accelerated life test of gas turbine engine
US20070162265A1 (en) Method and apparatus for automated simulation and design of corneal refractive procedures
EP2361068B1 (en) Apparatus for providing a laser shot file
JP2006510455A (en) System and method for ablation based parametric models
JP4125606B2 (en) Adaptive wavefront adjustment system and method for laser refractive surgery
CN108538389A (en) A kind of method and system for predicting diopter adjusted value in SMILE refractive surgeries
CN109300548A (en) A kind of optimization method and system for predicting diopter adjusted value in SMILE refractive surgery
CN114269302A (en) Cloud-based system cataract treatment database and algorithm system
Vahdati et al. Computational biomechanical analysis of asymmetric ectasia risk in unilateral post-LASIK ectasia
CN112656507B (en) Method for constructing BP neural network model suitable for SMILE surgical cutting thickness prediction
US7844425B2 (en) Finite element modeling of the cornea
EP4258972A1 (en) Selection of a preferred intraocular lens based on ray tracing
Fraldi et al. The role of viscoelasticity and stress gradients on the outcome of conductive keratoplasty
Arrowsmith et al. Four-year update on predictability of radial keratotomy
US7987077B2 (en) System and method for simulating an LIOB protocol to establish a treatment plan for a patient
Valdés-Mas et al. Machine learning for predicting astigmatism in patients with keratoconus after intracorneal ring implantation
Businaro et al. Gaussian process prediction of the stress-free configuration of pre-deformed soft tissues: Application to the human cornea
CN113171172B (en) Method for simulating postoperative condition of cornea
KR101341772B1 (en) Method for Calculating Smart Integrated Prediction Management Value for Orthokeratology
US20200397283A1 (en) A method to quantify the corneal parameters to improve biomechanical modeling
US8388610B2 (en) Treatment pattern monitor

Legal Events

Date Code Title Description
PB01 Publication
SE01 Entry into force of request for substantive examination