CN115634044A - Operation planning and model training method and device and electronic equipment - Google Patents

Operation planning and model training method and device and electronic equipment

Info

Publication number
CN115634044A
CN115634044A (application CN202211246286.2A)
Authority
CN
China
Prior art keywords
network model
image data
training
target image
marking information
Prior art date
Legal status (The legal status is an assumption and is not a legal conclusion. Google has not performed a legal analysis and makes no representation as to the accuracy of the status listed.)
Pending
Application number
CN202211246286.2A
Other languages
Chinese (zh)
Inventor
Name withheld at the inventor's request
Current Assignee (The listed assignees may be inaccurate. Google has not performed a legal analysis and makes no representation or warranty as to the accuracy of the list.)
Suzhou Xiaowei Changxing Robot Co ltd
Original Assignee
Suzhou Xiaowei Changxing Robot Co ltd
Priority date (The priority date is an assumption and is not a legal conclusion. Google has not performed a legal analysis and makes no representation as to the accuracy of the date listed.)
Filing date
Publication date
Application filed by Suzhou Xiaowei Changxing Robot Co ltd filed Critical Suzhou Xiaowei Changxing Robot Co ltd
Priority to CN202211246286.2A
Publication of CN115634044A
Legal status: Pending

Classifications

  • Medical Treatment And Welfare Office Work (AREA)

Abstract

The invention provides a surgical planning method, a model training method, corresponding apparatuses, and an electronic device. The surgical planning method comprises the following steps: inputting preoperative image data of a target patient into a target network model, wherein the target network model is used for generating target image data with surgical marking information from the preoperative image data and is obtained by training on sample image data with surgical marking information; and determining surgical planning information according to the surgical marking information in the target image data output by the target network model. In this scheme, the preoperative image data is input into the first network model to obtain target image data with surgical markers, and the surgical planning information can be determined from the markers in the target image data, which improves the efficiency and simplifies the process of generating surgical planning information.

Description

Operation planning and model training method and device and electronic equipment
Technical Field
The present application relates to the field of medical technology, and in particular, to a method, an apparatus, and an electronic device for surgical planning and model training.
Background
Surgical planning refers to obtaining information such as the surgery type and instrument selection from preoperative images of the patient's lesion site and other preoperative examination data. In the existing approach, a doctor produces an optimal surgical plan based on personal experience with the help of a computer: for example, the computer segments, detects, and classifies a medical image to locate the lesion, and the doctor then formulates the specific surgical plan and workflow according to the lesion information.
As a result, existing surgical planning methods yield different plans depending on each doctor's experience and therefore depend heavily on that experience. In addition, the examination data must first be processed by a computer before the doctor can form a surgical plan from the computer's output, so the plan is generated inefficiently and through a complex process.
Disclosure of Invention
The aim of the present application is to provide a surgical planning and model training method, apparatus, and electronic device, so as to solve the problems of low generation efficiency and complex workflow in existing surgical planning schemes.
To solve the above technical problem, a first aspect of the present specification provides a surgical planning method, including: inputting preoperative image data of a target patient into a target network model; the target network model is used for generating target image data with operation marking information according to preoperative image data, and is obtained by training sample image data with operation marking information; and determining operation planning information according to the operation marking information in the target image data output by the target network model.
In some embodiments, the target network model comprises a first network model trained using the following model training method: acquiring a training data set, wherein each training data in the training data set comprises preoperative image data, target image data with operation marking information and a label, and the label is used for indicating whether the operation marking information in the target image data is operation marking information of a real operation; alternately training a first network model and a second network model, wherein when the first network model is trained, the second network model is fixed, the first network model is adopted to generate target image data with operation marking information according to preoperative image data in training data, the second network model is used for judging the closeness degree between the operation marking information in the generated target image data and the operation marking information of a real operation, and the parameter of the first network model is adjusted according to the judgment result; when the second network model is trained, the first network model is fixed, the first network model is adopted to generate target image data with operation marking information according to preoperative image data in training data, the degree of proximity between the operation marking information in the generated target image data and the operation marking information of a real operation is judged through the second network model, and parameters of the second network model are adjusted according to a judgment result and labels in the training data.
In some embodiments, in training the first network model, the following steps are performed in a loop until a first cutoff condition is reached: fixing a second network model, generating target image data with operation marking information according to preoperative image data in the training data by adopting the first network model, judging the closeness degree between the operation marking information in the generated target image data and the operation marking information of a real operation through the second network model, and adjusting the parameters of the first network model according to the judgment result; in training the second network model, the following steps are executed in a loop until a second cutoff condition is reached: fixing the first network model, generating target image data with operation marking information according to preoperative image data in the training data by adopting the first network model, judging the closeness degree between the operation marking information in the generated target image data and the operation marking information of a real operation through the second network model, and adjusting the parameters of the second network model according to a judgment result and a label in the training data.
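The alternating scheme above is the standard adversarial (GAN) recipe: hold the second network model fixed while the first is updated, then swap. A minimal runnable sketch with scalar stand-ins for the two models — the parameter names, toy gradient formulas, and learning rate here are illustrative assumptions, not the patent's implementation:

```python
import math
import random

def sigmoid(s):
    return 1.0 / (1.0 + math.exp(-s))

random.seed(0)

# Toy stand-ins: generator G(z) = a*z (first network model),
# discriminator D(x) = sigmoid(w*x + b) (second network model).
a = 0.1          # generator parameter
w, b = 1.0, 0.0  # discriminator parameters
lr = 0.05
REAL_MEAN = 2.0  # "real surgery" samples cluster around this value

for step in range(500):
    z = random.gauss(1.0, 0.1)          # stand-in "preoperative" input
    x_fake = a * z                      # generated "target image"
    if step % 2 == 0:
        # Train the first network model: second model fixed.
        # Minimize log(1 - D(G(z))) with respect to a.
        d_fake = sigmoid(w * x_fake + b)
        grad_a = -d_fake * w * z        # d/da of log(1 - sigmoid(w*a*z + b))
        a -= lr * grad_a
    else:
        # Train the second network model: first model fixed.
        # Maximize log D(x_real) + log(1 - D(G(z))) with respect to w, b.
        x_real = random.gauss(REAL_MEAN, 0.1)
        d_real = sigmoid(w * x_real + b)
        d_fake = sigmoid(w * x_fake + b)
        w += lr * ((1.0 - d_real) * x_real - d_fake * x_fake)
        b += lr * ((1.0 - d_real) - d_fake)
```

The key structural point matches the text: each branch updates only one model's parameters while the other model is used solely for the forward pass ("judgment").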
In some embodiments, before alternately training the first network model and the second network model, further comprising: and circularly executing the following steps until a second cutoff condition is reached: and fixing the first network model, judging the degree of closeness between the operation marking information in the target image data in the training data set and the operation marking information of the real operation through the second network model, and adjusting the parameters of the second network model according to the judgment result and the label in the training data.
In some embodiments, obtaining a training data set comprises: acquiring first surgical data from a historical surgery database, wherein the first surgical data comprises first preoperative image data, first target image data and a label, and the difference between the first target image data and the first preoperative image data is the added surgical marking information; performing random image flipping and/or scaling operations on the first preoperative image data and the first target image data in the first surgical data, and determining a plurality of different second surgical data according to the operation results; and using the plurality of different second surgical data as training data in the training data set.
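The flip-based augmentation can be illustrated as follows; crucially, the same random transform must be applied to the preoperative image and its target image so the surgical marking stays aligned with the anatomy. This is a sketch with tiny nested-list "images" and assumed function names (the patent also mentions scaling, omitted here for brevity):

```python
import random

def augment(preop, target, n_augments=4, seed=0):
    """Create extra (preoperative, target) pairs by random horizontal
    and/or vertical flips, applying the identical transform to both
    images of a pair."""
    rng = random.Random(seed)
    out = []
    for _ in range(n_augments):
        p = [row[:] for row in preop]
        t = [row[:] for row in target]
        if rng.random() < 0.5:              # horizontal flip
            p = [row[::-1] for row in p]
            t = [row[::-1] for row in t]
        if rng.random() < 0.5:              # vertical flip
            p = p[::-1]
            t = t[::-1]
        out.append((p, t))
    return out

preop  = [[1, 2], [3, 4]]
target = [[1, 2], [3, 9]]   # 9 marks a hypothetical surgical target point
pairs = augment(preop, target)
```

Because both images of each second-surgical-data pair receive the same flips, the marked point (9) stays at the same grid position as the underlying anatomy (4).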
In some embodiments, obtaining a training data set comprises: acquiring third operation data, wherein the third operation data comprises third preoperative image data, third target image data and a label; randomly adding noise to the third preoperative image data to obtain fourth preoperative image data; combining the fourth pre-operative image data with the third target image data to form fourth surgical data; and taking the third operation data and the fourth operation data as data in a training data set respectively.
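The noise-augmentation embodiment admits a direct sketch: Gaussian pixel noise is added to the third preoperative image data only, while the paired target image is reused unchanged to form the fourth surgical data (the sigma value and function name are assumptions):

```python
import random

def add_noise(image, sigma=0.05, seed=0):
    """Return a copy of a 2-D image with independent Gaussian noise
    added to every pixel; the paired target image is left untouched."""
    rng = random.Random(seed)
    return [[pixel + rng.gauss(0.0, sigma) for pixel in row] for row in image]

third_preop = [[0.1, 0.2], [0.3, 0.4]]
fourth_preop = add_noise(third_preop)      # fourth preoperative image data
# fourth surgical data = (fourth_preop, third_target, label), per the text
```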
In some embodiments, before alternately training the first network model and the second network model, further comprising: and acquiring an objective function, wherein the objective function comprises an output value of the first network model and an output value of the second network model, and the objective function is used for training the first network model and the second network model.
In some embodiments, the objective function is:
V(D,G) = E_{x∼pdata(x)}[log D(x)] + E_{z∼pdata(z)}[log(1 − D(G(z)))]
where x denotes preoperative image data whose corresponding target image data carries the surgical marking information of a real surgery, z denotes preoperative image data whose corresponding target image data does not carry the surgical marking information of a real surgery, D(t) denotes the output value of the second network model, G(t) denotes the output value of the first network model, t denotes preoperative image data, E_{x∼pdata(x)} denotes the expected value over target image data whose surgical marking information is that of a real surgery, and E_{z∼pdata(z)} denotes the expected value over target image data whose surgical marking information is not that of a real surgery.
In some embodiments, when training the first network model, the parameters of the first network model are adjusted in the direction that makes the objective function smaller; and/or, when training the second network model, the parameters of the second network model are adjusted in the direction that makes the objective function larger.
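This is the usual minimax reading of the objective: the first network model performs gradient descent on V(D, G) while the second performs gradient ascent. A small numeric check with illustrative probability values (not from the patent) confirms both directions:

```python
import math

def v_terms(d_real, d_fake):
    """Single-sample estimate of V(D,G) = log D(x) + log(1 - D(G(z)))."""
    return math.log(d_real) + math.log(1.0 - d_fake)

# A better generator drives D(G(z)) up, which makes V smaller ...
assert v_terms(0.9, 0.8) < v_terms(0.9, 0.2)
# ... while a better discriminator (high on real, low on fake) makes V larger.
assert v_terms(0.9, 0.1) > v_terms(0.6, 0.4)
```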
In some embodiments, the loss function for training the first network model is:
[Equation image BDA0003886774350000031 — loss function of the first network model; not reproduced in this text]
where L_i is the value of the loss function, i is the index of a training datum, N is the total number of training data, y_i denotes the target image data in the i-th training datum, and P_i denotes the target image data generated by the first network model from the preoperative image data in the i-th training datum; and/or the loss function for training the second network model is:
[Equation image BDA0003886774350000032 — loss function of the second network model; not reproduced in this text]
where L_i is the value of the loss function, i is the index of a training datum, N is the total number of training data, y_i denotes the label in the i-th training datum, and P_i denotes the probability, judged by the second network model, that the surgical marking information in the target image data of the i-th training datum is the surgical marking information of a real surgery.
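The loss-function images themselves are not reproduced in this text, but the variables described for the second network model are exactly those of a mean binary cross-entropy; the following is an illustrative reconstruction under that assumption:

```python
import math

def discriminator_loss(labels, probs):
    """Mean binary cross-entropy over N training data: y_i is the label
    (1 = real-surgery marking, 0 = not), P_i the probability the second
    network model assigns to 'real'. Illustrative reconstruction; the
    patent gives its exact formula only as an image."""
    n = len(labels)
    return -sum(y * math.log(p) + (1 - y) * math.log(1 - p)
                for y, p in zip(labels, probs)) / n

loss = discriminator_loss([1, 0, 1], [0.9, 0.2, 0.8])
```

Confident predictions that match the labels drive this loss toward zero, which is consistent with "adjusting the parameters of the second network model according to the judgment result and the labels".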
A second aspect of the present specification provides a model training method, including: acquiring a training data set, wherein each training data in the training data set comprises preoperative image data, target image data with operation marking information and a label, and the label is used for indicating whether the operation marking information in the target image data is operation marking information of a real operation; alternately training a first network model and a second network model until a first preset condition is reached, taking the first network model as a trained target network model to generate target image data with operation marking information through the target network model, and determining operation planning information according to the generated target image data with the operation marking information; when the first network model is trained, fixing the second network model, generating target image data with operation marking information according to preoperative image data in training data by adopting the first network model, judging the proximity degree between the operation marking information in the generated target image data and the operation marking information of a real operation through the second network model, and adjusting the parameters of the first network model according to a judgment result; when the second network model is trained, the first network model is fixed, the first network model is adopted to generate target image data with operation marking information according to preoperative image data in training data, the degree of proximity between the operation marking information in the generated target image data and the operation marking information of a real operation is judged through the second network model, and parameters of the second network model are adjusted according to a judgment result and labels in the training data.
A third aspect of the present description provides a surgical planning apparatus comprising: the processing unit is used for inputting preoperative image data of a target patient into the target network model; the target network model is used for generating target image data with operation marking information according to preoperative image data, and is obtained by training sample image data with operation marking information; and the determining unit is used for determining operation planning information according to the operation marking information in the target image data output by the target network model.
In some embodiments, the apparatus further comprises: the system comprises a first acquisition unit, a second acquisition unit and a third acquisition unit, wherein the first acquisition unit is used for acquiring a training data set, and each training data in the training data set comprises preoperative image data, target image data with operation marking information and a label, and the label is used for indicating whether the operation marking information in the target image data is operation marking information of a real operation; the training unit is used for alternately training a first network model and a second network model, wherein when the first network model is trained, the second network model is fixed, the first network model is adopted to generate target image data with operation marking information according to preoperative image data in training data, the second network model is used for judging the closeness degree between the operation marking information in the generated target image data and the operation marking information of a real operation, and the parameter of the first network model is adjusted according to the judgment result; when the second network model is trained, the first network model is fixed, the first network model is adopted to generate target image data with operation marking information according to preoperative image data in training data, the degree of proximity between the operation marking information in the generated target image data and the operation marking information of a real operation is judged through the second network model, and parameters of the second network model are adjusted according to a judgment result and labels in the training data.
In some embodiments, in training the first network model, the following steps are performed in a loop until a first cutoff condition is reached: fixing a second network model, generating target image data with operation marking information according to preoperative image data in the training data by adopting the first network model, judging the closeness degree between the operation marking information in the generated target image data and the operation marking information of a real operation through the second network model, and adjusting the parameters of the first network model according to the judgment result; in training the second network model, the following steps are executed in a loop until a second cutoff condition is reached: fixing the first network model, generating target image data with operation marking information according to preoperative image data in the training data by adopting the first network model, judging the proximity degree between the operation marking information in the generated target image data and the operation marking information of a real operation through the second network model, and adjusting the parameters of the second network model according to a judgment result and a label in the training data.
In some embodiments, the training unit is further configured to: and circularly executing the following steps until a second cutoff condition is reached: and fixing the first network model, judging the degree of closeness between the operation marking information in the target image data in the training data set and the operation marking information of the real operation through the second network model, and adjusting the parameters of the second network model according to the judgment result and the label in the training data.
In some embodiments, the first obtaining unit includes: a first acquiring subunit, configured to acquire first surgical data from a historical surgery database, where the first surgical data includes first preoperative image data, first target image data and a label, and the difference between the first target image data and the first preoperative image data is the added surgical marking information; a first operation subunit, configured to perform random image flipping and/or scaling operations on the first preoperative image data and the first target image data in the first surgical data, and to determine a plurality of different second surgical data according to the operation results; and a first determining subunit, configured to use the plurality of different second surgical data as training data in the training data set.
In some embodiments, the first obtaining unit includes: the second acquiring subunit is configured to acquire third surgical data, where the third surgical data includes third preoperative image data, third target image data, and a tag; a second operation subunit, configured to randomly add noise to the third preoperative image data to obtain fourth preoperative image data; a combination subunit, configured to combine the fourth pre-operation image data with the third target image data to form fourth operation data; and the second determining subunit is used for respectively using the third operation data and the fourth operation data as data in a training data set.
In some embodiments, the apparatus further comprises: a second obtaining unit, configured to obtain an objective function before alternately training the first network model and the second network model, where the objective function includes an output value of the first network model and an output value of the second network model, and the objective function is used for training the first network model and the second network model.
In some embodiments, the objective function is:
V(D,G) = E_{x∼pdata(x)}[log D(x)] + E_{z∼pdata(z)}[log(1 − D(G(z)))]
where x denotes preoperative image data whose corresponding target image data carries the surgical marking information of a real surgery, z denotes preoperative image data whose corresponding target image data does not carry the surgical marking information of a real surgery, D(t) denotes the output value of the second network model, G(t) denotes the output value of the first network model, t denotes preoperative image data, E_{x∼pdata(x)} denotes the expected value over target image data whose surgical marking information is that of a real surgery, and E_{z∼pdata(z)} denotes the expected value over target image data whose surgical marking information is not that of a real surgery.
In some embodiments, when training the first network model, the parameters of the first network model are adjusted in the direction that makes the objective function smaller; and/or, when training the second network model, the parameters of the second network model are adjusted in the direction that makes the objective function larger.
In some embodiments, the loss function for training the first network model is:
[Equation image BDA0003886774350000051 — loss function of the first network model; not reproduced in this text]
where L_i is the value of the loss function, i is the index of a training datum, N is the total number of training data, y_i denotes the target image data in the i-th training datum, and P_i denotes the target image data generated by the first network model from the preoperative image data in the i-th training datum; and/or the loss function for training the second network model is:
[Equation image BDA0003886774350000052 — loss function of the second network model; not reproduced in this text]
where L_i is the value of the loss function, i is the index of a training datum, N is the total number of training data, y_i denotes the label in the i-th training datum, and P_i denotes the probability, judged by the second network model, that the surgical marking information in the target image data of the i-th training datum is the surgical marking information of a real surgery.
A fourth aspect of the present specification provides a model training apparatus comprising: the system comprises a first acquisition unit, a second acquisition unit and a third acquisition unit, wherein the first acquisition unit is used for acquiring a training data set, and each training data in the training data set comprises preoperative image data, target image data with operation marking information and a label, and the label is used for indicating whether the operation marking information in the target image data is operation marking information of a real operation; the training unit is used for alternately training the first network model and the second network model until a first preset condition is reached, taking the first network model as a target network model obtained by training, generating target image data with operation marking information through the target network model, and determining operation planning information according to the generated target image data with the operation marking information; when the first network model is trained, fixing the second network model, generating target image data with operation marking information according to preoperative image data in training data by adopting the first network model, judging the proximity degree between the operation marking information in the generated target image data and the operation marking information of a real operation through the second network model, and adjusting the parameters of the first network model according to a judgment result; when the second network model is trained, the first network model is fixed, the first network model is adopted to generate target image data with operation marking information according to preoperative image data in training data, the degree of proximity between the operation marking information in the generated target image data and the operation marking information of a real operation is judged through the second network model, and 
parameters of the second network model are adjusted according to a judgment result and labels in the training data.
A fifth aspect of the present specification provides an electronic apparatus comprising: a memory and a processor, wherein the processor and the memory are communicatively connected to each other, the memory stores computer instructions, and the processor implements the steps of the method according to any one of the first aspect or the second aspect by executing the computer instructions.
A sixth aspect of the present description provides a computer storage medium storing computer program instructions which, when executed by a processor, implement the steps of the method of any one of the first or second aspects.
According to the operation planning and model training method, device and electronic equipment provided by the specification, the preoperative image data is input into the first network model, so that the target image data with the operation markers can be obtained, the operation planning information can be determined according to the operation markers in the target image data, the generation efficiency of the operation planning information is improved, and the generation process of the operation planning information is simplified.
Drawings
To describe the technical solutions in the embodiments of the present application or in the prior art more clearly, the drawings needed in that description are briefly introduced below. Obviously, the drawings in the following description are only some embodiments described in the present application, and those skilled in the art can derive other drawings from them without creative effort.
Fig. 1 shows a schematic diagram of a surgical planning method provided herein;
FIG. 2 illustrates a flow chart of a model training method provided herein;
FIG. 3 illustrates a flow diagram of a method of training a first network model;
FIG. 4 illustrates a flow diagram of a method of training a second network model;
FIG. 5 illustrates a flow chart of another model training method provided herein;
FIG. 6 shows a schematic diagram of the relationship between a first network model and a second network model;
FIG. 7 is a schematic diagram illustrating a zoom-out operation, a zoom-in operation, and a random mirror inversion operation performed on original image data;
FIG. 8 is a schematic diagram showing the random addition of noise to image data;
FIG. 9 shows a schematic diagram of a method of pre-treatment;
FIG. 10 shows a schematic of an architecture of a first network model;
fig. 11 shows a schematic diagram of the internal structure of an up-sampling module in the first network model;
FIG. 12 shows a schematic of a second network model;
FIG. 13 is a diagram showing the internal structure of a convolution module in the second network model;
fig. 14 shows a schematic diagram of an electronic device provided by the present specification.
Detailed Description
To help those skilled in the art better understand the technical solutions in the present application, the technical solutions in the embodiments of the present application are described below clearly and completely with reference to the drawings in those embodiments. Obviously, the described embodiments are only some, not all, of the embodiments of the present application. All other embodiments obtained by a person of ordinary skill in the art based on the embodiments in the present application without inventive work fall within the scope of protection of the present application.
The present specification provides a surgical planning method that may be used in a physician client or a medical service platform. As shown in fig. 1, preoperative image data of a target patient is input into a target network model, the target network model outputs target image data, and surgical planning information is determined according to the surgical marker information in the target image data generated by the target network model.
The "preoperative image data" in the present specification may be image data such as B-mode ultrasound, magnetic resonance (MRI), or CT images captured before surgery.
The "surgical marker information" in the present specification refers to the position information of the target points that the surgical robot must reach and act on with the surgical instrument during surgery. For example, the surgical marker information may be the position of the osteotomy point in an osteotomy, or the position of a lung nodule to be resected in a bronchial operation.
The target network model is used for generating target image data with operation marking information according to the preoperative image data, namely preoperative image data of a target patient are input into the target network model, and the target network model can output the target image data with the operation marking information. The target image data is obtained by adding operation mark information of an operation on the basis of preoperative image data.
In this surgical planning method, the target network model is critical. The target network model can be obtained through conventional network model training. To improve its accuracy, this specification first provides a training method for the target network model based on a generative adversarial network; the training method can run on any electronic device with computing capability. As shown in fig. 2, the training method of the target network model includes the following steps:
s10: the method comprises the steps of obtaining a training data set, wherein each training data in the training data set comprises preoperative image data, target image data with operation marking information and a label, and the label is used for indicating whether the operation marking information in the target image data is operation marking information of a real operation or not. For example, the tag may be set to 0 (no) or 1 (yes), where 0 indicates that the surgical marker information in the target image data is not the surgical marker information of the actual surgery, and 1 indicates that the surgical marker information in the target image data is the surgical marker information of the actual surgery.
In this specification, the phrase "the operation marker information in the target image data is the operation marker information of the real operation", means that the operation marker information in the target image data can be actually used in the real operation to solve the disease condition represented by the preoperative image data. The "operation marker information of the real operation" refers to the position information of the marker point collected from the operation information that has been actually performed, that is, the operation marker information of the real operation is verified by the operation practice, and the verification result is positive. The "positive verification result" means that the condition represented by the preoperative image data can be solved by performing the operation according to the operation marking information.
In some embodiments, the data in the training data set may be determined from the output of the first network model in the previous round of training. For example, the first network model generates target image data from target preoperative image data; since the operation marker information in that target image data is not the operation marker information of an actually performed operation, its label may be determined as "no", and the preoperative image data, the target image data, and the label may be combined to form one training datum, which is then put into the training data set.
In some embodiments, the training data may also be obtained by artificially modifying the surgical marker information in the real target image data.
The training data set may include a first type of data and a second type of data, wherein the first type of data is labeled "yes" and the second type of data is labeled "no".
S20: fixing a second network model, generating target image data with operation marking information according to preoperative image data in the training data by adopting the first network model, judging the closeness degree between the operation marking information in the generated target image data and the operation marking information of the real operation through the second network model, and adjusting the parameters of the first network model according to the judgment result.
As shown in fig. 3, when training the first network model, the preoperative image data may be input into the first network model to obtain generated target image data with operation marking information; the generated target image data is input into the second network model to obtain a judgment result; the loss function is then calculated from the judgment result and a gradient update is performed to adjust the parameters of the first network model. Before adjusting the parameters of the first network model, it may be determined whether the first cutoff condition is reached; if so, the training of the first network model ends; otherwise, the parameters of the first network model are adjusted and training continues. In some embodiments, the target image data generated by the first network model may also be input into the second network model together with the preoperative image data.
When the second network model judges that the degree of proximity between the operation marker information in the target image data generated by the first network model and the operation marker information of a real operation is low, the first network model can adjust its parameters in a targeted manner according to the judgment result. For example, suppose the first network model generates target image data B1 from preoperative image data A, and the second network model judges that the degree of proximity between the operation marker information in B1 and that of a real operation is 0.7; the first network model then increases parameter X, generates target image data B2 from A, and the second network model's judgment drops to 0.4. This may indicate that increasing X was the wrong direction, and that X should instead be decreased or left unchanged. This is only one illustrative example of the first network model adjusting its parameters according to the judgment of the second network model; in actual training, the parameters of the first network model may also be adjusted according to the judgment of the second network model in other ways.
In some embodiments, the first cutoff condition may be at least one of the following: a preset number of training iterations is reached; the difference between the value of the loss function in the current training iteration and that in the previous iteration is smaller than a preset value. One "training iteration" in this specification refers to one update of the parameters of the network model.
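The cutoff condition above can be sketched as a small check (a hedged illustration; the actual threshold values are implementation choices, not fixed by the specification):

```python
def reached_cutoff(loss_history, max_updates, eps):
    """Return True if training should stop: either the preset number of
    parameter updates has been reached, or the loss changed by less than
    the preset difference eps since the previous update."""
    if len(loss_history) >= max_updates:
        return True
    if len(loss_history) >= 2 and abs(loss_history[-1] - loss_history[-2]) < eps:
        return True
    return False
```

The same shape of check serves for both the first and the second cutoff condition, since both are described identically.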
Through the training in step S20, the operation marker information in the target image data generated by the first network model becomes closer to the operation marker information of a real operation. However, a gap still remains between them, so the generated operation marker information may still be recognized as not being operation marker information of a real operation; that is, the generated target image data is not yet able to "pass as real".
S30: fixing the first network model, generating target image data with operation marking information according to preoperative image data in the training data by adopting the first network model, judging the degree of proximity between the operation marking information in the generated target image data and the operation marking information of a real operation through the second network model, and adjusting the parameters of the second network model according to the judgment result and the label in the training data.
As shown in fig. 4, when training the second network model, the target image data output by the first network model and the preoperative image data input to the first network model may be input into the second network model to obtain a judgment result; the loss function is then calculated from the judgment result and the first preset label, and a gradient update is performed to adjust the parameters of the second network model. Before adjusting the parameters of the second network model, it may be determined whether the second cutoff condition is reached; if so, the training of the second network model ends; otherwise, the parameters of the second network model are adjusted and training continues.
After the operation marker information in the target image data generated by the first network model has reached a certain similarity to the operation marker information of a real operation, the target image data generated by the first network model may be used to train the second network model. In some embodiments, taking the output of the first network model as the input of the second network model comprises: forming one training datum from the target image data generated by the first network model and the first preset label, and training the second network model with that datum; the target image data serves as the input of the second network model and the first preset label as its expected output, where the first preset label indicates that the operation marking information in the input target image data is not operation marking information of a real operation, or that its degree of proximity to the operation marking information of a real operation is low.
For example, after the first network model generates target image data B from preoperative image data A, the target image data B, the preoperative image data A, and the first preset label C may be combined into one training datum with which the second network model is trained. Since the target image data B is generated by the first network model, its operation marker information is not the operation marker information of a real operation; therefore, to ensure the reliability of the operation marking information in the target image data output by the finally trained first network model, the first preset label C may be set to a label representing "no". In this way, through the adversarial interaction of the first and second network models during the cyclic training, the gap between the operation marking information generated by the first network model and that of a real operation can be reduced.
In some embodiments, during the alternating training the input of the second network model is the target image data generated by the first network model, and before the alternating training the training data set may be used to pre-train the second network model so that it has some judgment capability. Accordingly, before S20 and S30, the following step may be executed in a loop until the second cutoff condition is reached: fixing the first network model, judging the degree of proximity between the operation marking information of the target image data in the training data and the operation marking information of a real operation through the second network model, and adjusting the parameters of the second network model according to the judgment result and the label in the training data.
In other embodiments, during the alternating training, the input of the second network model may include both target image data generated by the first network model and training data obtained from a historical surgery database; that is, the training data input to the second network model includes both first-type data (label "yes") and second-type data (label "no"), and the training data set is arranged so that the judgments of the second network model are biased toward neither type. In this case, the second network model is trained before the first network model in each alternation round, so that the second network model already has some judgment capability when the first network model is trained. There may then be no need to pre-train the second network model before the alternating training.
Step S20 gives a training mode of the first network model, and step S30 gives a training mode of the second network model.
In some embodiments, in the alternating training, S20 and S30 may each be performed once per round, round after round, so that the two steps alternate continuously.
In other embodiments, a training cutoff condition may be set for each of the first network model and the second network model. S20 may then specifically be S21: executing the following in a loop until the first cutoff condition is reached: fixing the second network model, generating target image data with operation marking information according to preoperative image data in the training data by adopting the first network model, judging the degree of proximity between the operation marking information in the generated target image data and the operation marking information of a real operation through the second network model, and adjusting the parameters of the first network model according to the judgment result. S30 may specifically be S31: executing the following in a loop until the second cutoff condition is reached: fixing the first network model, generating target image data with operation marking information according to preoperative image data in the training data by adopting the first network model, judging the degree of proximity between the operation marking information in the generated target image data and the operation marking information of a real operation through the second network model, and adjusting the parameters of the second network model according to the judgment result and the label in the training data. In the alternating training, S21 and S31 may each be performed once per round, so that the first network model and the second network model are trained alternately in a loop.
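The alternating scheme of S21 and S31 can be sketched in Python pseudocode (the update functions and cutoff predicates are stand-ins; the real models, losses, and gradient updates are not shown):

```python
# Schematic of the alternating training loop: repeatedly train the first
# (generator-like) model until its cutoff, then the second (discriminator-
# like) model until its cutoff, until an overall condition holds.
def train_alternately(update_first, update_second,
                      first_cutoff, second_cutoff, overall_cutoff):
    """Each update_* call performs one parameter update of the respective
    model while the other model is held fixed."""
    rounds = 0
    while not overall_cutoff(rounds):
        while not first_cutoff():
            update_first()      # S21: second network model fixed
        while not second_cutoff():
            update_second()     # S31: first network model fixed
        rounds += 1
    return rounds
```

The order of the two inner loops can be swapped, matching the remark below that S30 may also be executed before S20.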
S40: it is determined whether a first predetermined condition is reached. Under the condition that a first preset condition is reached, finishing the training of the first network model and the second network model; otherwise, jumping to S20 to continue execution.
As shown in fig. 5, in some embodiments, S40 may also be performed once after S20 and before S30. Under the condition that a preset condition is reached, finishing the training of the first network model and the second network model; otherwise, S30 is continuously performed.
It should be noted that, during the alternate training, S30 may be executed first and then S20 is executed, that is, the order of executing the first network model and the second network model is not limited in this specification. Accordingly, fig. 5 may also be modified to perform S30 first, and S40 may also be performed once after S30 and before S20.
The relationship between the first network model and the second network model is shown in fig. 6: the two models are interdependent and are trained in turn, each helping the other. First, the second network model is trained so that it has some judgment capability. Then the trained second network model judges the degree of proximity between the operation marking information in the target image data of the first network model and the operation marking information of a real operation, and the parameters of the first network model are adjusted based on the judgment result. The second network model is then trained again: the output of the first network model is taken as its input, and its parameters are adjusted according to the difference between its output and the label corresponding to its input data. The training alternates cyclically in this way. Ideally, after training ends, the second network model can hardly determine whether the operation marking information in target image data generated by the first network model is that of a real operation, which means that the generated operation marking information is highly close to the operation marking information of a real operation. The first network model obtained from the final round of training may therefore be used as the target network model for generating surgical planning information.
The training target of the first network model is to generate "target image data in which the surgical marker information is regarded by the second network model as being highly similar to the surgical marker information of the actual surgery and difficult to distinguish", and the training target of the second network model is to "judge that the surgical marker information in the target image data generated by the first network model is not the surgical marker information of the actual surgery", whereby it can be seen that the training targets of the first network model and the second network model are antagonistic to each other. Therefore, the training network shown in fig. 6 is composed of a first network model and a second network model, and may also be referred to as a countermeasure network or a generation countermeasure network, where the first network model corresponds to a generator and the second network model corresponds to a discriminator.
In the operation planning method provided by the specification, the preoperative image data is input into the first network model to obtain the target image data with the operation mark, and the operation planning information can be determined according to the operation mark in the target image data, so that the generation efficiency of the operation planning information is improved, and the generation process of the operation planning information is simplified.
According to the model training method provided by the specification, when a first network model is trained, a second network model capable of judging the degree of closeness between operation marking information in target image data and operation marking information of a real operation is introduced, and when the model is trained, the first network model and the second network model alternately and circularly train, mutually depend and mutually confront, and the operation marking information in the target image data generated by the first network model can be closer to the operation marking information of the real operation through the training mode, so that the operation marking information output through the first network model is more reliable and accurate.
In some embodiments, during the alternating training of the first network model and the second network model, an overall objective function of the cyclic training may be determined from the output of the first network model and the output of the second network model. That is, the objective function is V = f (m, n), where m is an output value of the first network model, n is an output value of the second network model, and f represents processing of m, n.
For example, the objective function may be: V(D, G) = E_{x~pdata(x)}[log D(x)] + E_{z~pdata(z)}[log(1 - D(G(z)))], where V(D, G) denotes the value of the objective function; x denotes preoperative image data whose corresponding target image data carries operation marking information of a real operation; z denotes preoperative image data whose corresponding target image data does not; D(t) denotes the output value of the second network model, i.e., n; G(t) denotes the output value of the first network model, i.e., m; t denotes preoperative image data; E_{x~pdata(x)} denotes the expected value over target image data whose operation marking information is that of a real operation, i.e., over the first type of data; and E_{z~pdata(z)} denotes the expected value over target image data whose operation marking information is not that of a real operation, i.e., over the second type of data.
When the objective function is V (D, G) as described above, the first network model may be trained in a direction in which the objective function becomes smaller when the first network model is trained; in training the second network model, the second network model may be trained toward a direction in which the objective function increases.
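The two training directions can be illustrated numerically (the discriminator outputs below are hypothetical values, not taken from the specification):

```python
import math

# Numeric sketch of V(D, G) = log D(x) + log(1 - D(G(z))) on one
# real/generated sample pair.
def objective_v(d_real, d_generated):
    """d_real: second model's output on real operation marking data;
    d_generated: its output on data generated by the first model."""
    return math.log(d_real) + math.log(1.0 - d_generated)

# Training the first model pushes d_generated upward, which makes V
# smaller; training the second model pushes V larger.
v_easy_to_detect = objective_v(0.9, 0.1)   # generated data easily detected
v_hard_to_detect = objective_v(0.9, 0.6)   # generated data harder to detect
```

Here v_hard_to_detect is smaller than v_easy_to_detect, matching the statement that the first network model is trained toward a smaller objective.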
In some embodiments, the second cutoff condition may be at least one of the following: a preset number of training iterations is reached; the difference between the value of the loss function in the current training iteration and that in the previous iteration is smaller than a preset value. One "training iteration" in this specification refers to one update of the parameters of the network model.
In some embodiments, the loss function in the training of the first network model may be:

L = (1/N) Σ_{i=1}^{N} |y_i - P_i|

where L is the value of the loss function, i is the index of a training datum, N is the total number of training data, y_i denotes the target image data in the i-th training datum, and P_i denotes the target image data generated by the first network model from the preoperative image data in the i-th training datum.
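Assuming an L1 (mean absolute error) form consistent with the variables y_i and P_i described above, this loss can be sketched as follows (each datum is reduced to a scalar intensity for illustration; real target image data would be whole volumes):

```python
def first_model_loss(targets, generated):
    """Mean absolute difference between real target image values y_i and
    generated values P_i over N training data. The L1 form is an assumed
    reconstruction-style loss matching the variable descriptions."""
    n = len(targets)
    return sum(abs(y - p) for y, p in zip(targets, generated)) / n
```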
In some embodiments, the loss function when the second network model is trained may be:

L = -(1/N) Σ_{i=1}^{N} [y_i log(P_i) + (1 - y_i) log(1 - P_i)]

where L is the value of the loss function, i is the index of a training datum, N is the total number of training data, y_i denotes the label in the i-th training datum, and P_i denotes the probability, output by the second network model, that the operation marking information in the target image data of the i-th training datum is operation marking information of a real operation.
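The variables described above match a binary cross-entropy form, which can be sketched as follows (each datum reduced to a scalar label/probability pair; probabilities must lie strictly between 0 and 1):

```python
import math

def second_model_loss(labels, probs):
    """Binary cross-entropy: labels holds the 0/1 labels y_i of each
    training datum, probs holds the second model's predicted probability
    P_i that the marking information is from a real operation."""
    n = len(labels)
    return -sum(y * math.log(p) + (1 - y) * math.log(1 - p)
                for y, p in zip(labels, probs)) / n
```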
In some embodiments, the loss function when the first network model is trained may also be:

-log(D(G(x)))

and the loss function when the second network model is trained may also be:

-[log(D(x)) + log(1 - D(G(x)))]

where x denotes the preoperative image data, D(t) denotes the output value of the second network model, G(t) denotes the output value of the first network model, and t denotes preoperative image data.
In some embodiments, S10 may include the steps of:
s11: first surgical data is obtained from a historical surgical database, and the target surgical data comprises first preoperative image data, first target image data and a label.
The difference between the first target image data and the first pre-operative image data includes surgical marker information that is augmented with a surgery.
The label is used for indicating whether the operation mark information in the target image data is operation mark information of a real operation.
S12: performing random image flipping and/or scaling on the first preoperative image data and the first target image data in the first surgical data, and determining a plurality of different second surgical data according to the operation result.
The operations performed on the first preoperative image data and the first target image data within the same surgical datum are the same, while the operations performed on image data in different surgical data may differ.
"Determining a plurality of different second surgical data according to the operation result" means that the result of applying the operation to the first preoperative image data and the first target image data of one surgical datum is combined with that datum's label to form one second surgical datum; that is, each first surgical datum yields one second surgical datum, and a plurality of first surgical data yield a plurality of second surgical data.
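The per-datum augmentation described above, applying the same operation to both images of one surgical datum, can be sketched as follows (a horizontal flip on tiny nested-list "images" stands in for the random flip/zoom operations; the rng parameter is an assumed injection point for randomness):

```python
def augment_pair(preop, target, rng):
    """Apply the same random horizontal flip to the preoperative image and
    the target image of one surgical datum; the label stays unchanged, so
    one first surgical datum yields one second surgical datum."""
    if rng.random() < 0.5:
        preop = [row[::-1] for row in preop]
        target = [row[::-1] for row in target]
    return preop, target
```

In practice an instance of `random.Random` would be passed as rng; any object with a `random()` method works.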
Fig. 7 is a schematic diagram illustrating a reduction operation, an enlargement operation, and a random mirror inversion operation performed on original image data.
S13: the plurality of different second surgical data are used as training data in a training data set.
Through the above steps S11 to S13, when first-type data are scarce or training data are otherwise difficult to obtain, the number of training data in the training data set is expanded so that enough training data are available for training the first network model and the second network model.
In some embodiments, S10 may include the steps of:
S15: acquiring third surgical data, wherein the third surgical data comprises third preoperative image data, third target image data and a label.
S16: randomly adding noise to the third preoperative image data to obtain fourth preoperative image data.
S17: and combining the fourth preoperative image data with the third target image data and the label to form fourth operation data.
S18: and taking the third operation data and the fourth operation data as data in a training data set respectively.
Fig. 8 is a schematic diagram of the image data before and after noise is randomly added, wherein the left diagram is a schematic diagram before noise is added, and the right diagram is a schematic diagram after noise is added.
In the above steps S15 to S18, noise is added to the image data, so that the number of training data in the training data set can be increased, and the stability of the model can be increased.
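The noise-augmentation steps S15 to S18 can be sketched as follows (Gaussian noise and the sigma value are assumptions; the specification does not fix the noise distribution):

```python
import random

def add_noise(image, sigma=0.05, seed=42):
    """Return a copy of the (flattened) preoperative image with zero-mean
    noise added per voxel, as in step S16. The fixed seed makes the sketch
    reproducible; in real augmentation the noise would vary per sample."""
    rng = random.Random(seed)
    return [v + rng.gauss(0.0, sigma) for v in image]
```

The noisy fourth preoperative image would then be combined with the unchanged third target image data and label to form the fourth surgical datum.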
In some embodiments, each type of image data in the training dataset needs to be preprocessed before being used for model training, as shown in fig. 9, the preprocessing may include at least one of the following: standardizing resolution, adjusting window width and level, and standardizing gray value.
The resolution standardization means that the image resolution is unified through three-dimensional linear interpolation operation, so that the network model is easy to converge during training. Adjusting the window width and level refers to filtering out redundant image information and enhancing the image contrast. And the gray value standardization means that the gray value of the image is normalized to be within the [0,1] interval so as to enable the training of the target network model to be more stable.
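The window-width/window-level adjustment and the gray-value normalization can be sketched together (the window values in the example are hypothetical; resolution normalization via three-dimensional interpolation is omitted):

```python
def window_and_normalize(voxels, window_level, window_width):
    """Clip voxel values to the window [level - width/2, level + width/2]
    (filtering out information outside the window and enhancing contrast),
    then normalize the gray values into the [0, 1] interval."""
    lo = window_level - window_width / 2.0
    hi = window_level + window_width / 2.0
    clipped = [min(max(v, lo), hi) for v in voxels]
    return [(v - lo) / (hi - lo) for v in clipped]
```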
In some embodiments, the first network model and/or the second network model is at least one of: a convolutional neural network, a recurrent neural network, a fully-connected network.
Fig. 10 shows a schematic structural diagram of the first network model, and fig. 11 shows the internal structure of an up-sampling module in the first network model. The three-dimensional transposed convolution layer is used for feature up-scaling and dimension recovery; a three-dimensional transposed convolution with a kernel size of 3 x 3 and a stride of 2 may be used. The batch normalization layer accelerates model convergence and makes the training of the network model more stable. The activation layer contains an activation function, which is in essence a nonlinear function that activates according to the characteristics of the input values.
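For the up-sampling module described above, the spatial output size of a stride-2 transposed convolution follows the standard formula, sketched here along one dimension (the padding and output-padding defaults are assumptions, since the specification gives only kernel size and stride):

```python
def transposed_conv_out_size(in_size, kernel=3, stride=2,
                             padding=0, output_padding=0):
    """Output size of a transposed convolution along one spatial dimension;
    kernel 3 and stride 2 match the up-sampling module described above."""
    return (in_size - 1) * stride - 2 * padding + kernel + output_padding
```

With padding 1 and output padding 1, each such step exactly doubles the spatial size, which is the usual choice for dimension recovery.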
Fig. 12 shows a schematic structural diagram of the second network model, and fig. 13 shows the internal structure of a convolution module in the second network model. The three-dimensional convolution layer is used for feature dimension reduction; a three-dimensional convolution with a kernel size of 3 x 3 and a stride of 2 may be used. The batch normalization layer accelerates model convergence and makes the training of the network model more stable. The activation layer contains an activation function, which is in essence a nonlinear function that activates according to the characteristics of the input values.
During a surgery performed on the target patient according to the surgical planning information, the doctor may adjust the plan based on his or her own experience, so the actual intraoperative information may not match the surgical planning information output by the target network model. In some embodiments, after inputting the preoperative image data of the target patient into the target network model and taking the operation marking information in the output target image data as the surgical planning information, the method may further include the following steps: after the surgery is performed according to the surgical planning information, adjusting the target image data according to how the surgery was actually performed; combining the adjusted target image data, the preoperative image data, and a second preset label into one training datum; and putting it into the training data set, so that the target network model can be update-trained once a second preset condition is met. The second preset label indicates that the operation marking information in the target image data is marking information of a real operation. The second preset condition may be reaching a predetermined time threshold; for example, the target network model may be trained once per week, the threshold being one week. Alternatively, it may be that the number of surgeries performed since the last training of the target network model reaches a predetermined value; for example, the value may be 10, so the model is trained after every 10 surgeries.
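The second preset condition can be sketched as a small predicate (the one-week and ten-surgery thresholds come from the examples above; combining them with "or" is an assumption, as the specification presents them as alternatives):

```python
def should_retrain(days_since_last, surgeries_since_last,
                   day_threshold=7, surgery_threshold=10):
    """Trigger an update-training of the target network model once a
    predetermined time (e.g. one week) has elapsed or a predetermined
    number of surgeries (e.g. 10) has been performed since last training."""
    return (days_since_last >= day_threshold
            or surgeries_since_last >= surgery_threshold)
```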
A fourth aspect of the present description provides a surgical planning apparatus comprising: a processing unit and a determination unit.
The processing unit is used for inputting preoperative image data of a target patient into the target network model; the target network model is used for generating target image data with operation marking information according to the preoperative image data and is obtained by training sample image data with operation marking information.
The determining unit is used for determining operation planning information according to operation marking information in the target image data output by the target network model.
In some embodiments, the apparatus further comprises: a first acquisition unit configured to acquire a training data set, where each training datum in the training data set includes preoperative image data, target image data with operation marking information, and a label indicating whether the operation marking information in the target image data is operation marking information of a real operation; and a training unit configured to alternately train a first network model and a second network model. When the first network model is trained, the second network model is fixed; the first network model generates target image data with operation marking information from the preoperative image data in the training data; the second network model judges the degree of proximity between the operation marking information in the generated target image data and the operation marking information of a real operation; and the parameters of the first network model are adjusted according to the judgment result. When the second network model is trained, the first network model is fixed; the first network model generates target image data with operation marking information from the preoperative image data in the training data; the second network model judges the degree of proximity between the operation marking information in the generated target image data and the operation marking information of a real operation; and the parameters of the second network model are adjusted according to the judgment result and the label in the training data.
In some embodiments, in training the first network model, the following steps are performed in a loop until a first cutoff condition is reached: fixing a second network model, generating target image data with operation marking information according to preoperative image data in the training data by adopting the first network model, judging the closeness degree between the operation marking information in the generated target image data and the operation marking information of a real operation through the second network model, and adjusting the parameters of the first network model according to the judgment result; in training the second network model, the following steps are executed in a loop until a second cutoff condition is reached: fixing the first network model, generating target image data with operation marking information according to preoperative image data in the training data by adopting the first network model, judging the closeness degree between the operation marking information in the generated target image data and the operation marking information of a real operation through the second network model, and adjusting the parameters of the second network model according to a judgment result and a label in the training data.
In some embodiments, the training unit is further configured to: the following steps are executed in a loop until a second cutoff condition is reached: and fixing the first network model, judging the degree of closeness between the operation marking information in the target image data in the training data set and the operation marking information of the real operation through the second network model, and adjusting the parameters of the second network model according to the judgment result and the label in the training data.
In some embodiments, the first obtaining unit includes a first obtaining subunit, a first operation subunit, and a first determining subunit. The first obtaining subunit is used for obtaining first surgical data from a historical surgical database, the first surgical data comprising first preoperative image data, first target image data, and a label, wherein the difference between the first target image data and the first preoperative image data comprises the operation marking information added by the surgery. The first operation subunit is used for performing random image flipping and/or scaling operations on the first preoperative image data and the first target image data in the first surgical data, and for determining a plurality of different second surgical data according to the operation results. The first determining subunit is used for taking the plurality of different second surgical data as training data in the training data set.
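The flip/scale augmentation can be illustrated with a minimal sketch. Nested 2x2 lists stand in for the image volumes, and the same random transform is applied to both images so the operation marking stays aligned with the anatomy; the function names are illustrative assumptions, not from the patent.

```python
import random

def flip_horizontal(img):
    # Mirror each row. The identical transform must be applied to the
    # preoperative image and to the target image carrying the markings.
    return [row[::-1] for row in img]

def augment_pair(pre_img, target_img, rng):
    # Randomly flip both images together; random scaling would be
    # handled analogously with the same factor for both images.
    if rng.random() < 0.5:
        return flip_horizontal(pre_img), flip_horizontal(target_img)
    return pre_img, target_img

rng = random.Random(1)          # seeded for a reproducible example
pre = [[1, 2], [3, 4]]
tgt = [[1, 9], [3, 4]]          # the 9 stands in for an added operation marking
new_pre, new_tgt = augment_pair(pre, tgt, rng)
print(new_pre, new_tgt)
```

Repeating the call with fresh randomness yields the "plurality of different second surgical data" described above, each paired image keeping its label.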
In some embodiments, the first obtaining unit includes: a second obtaining subunit, configured to acquire third surgical data, where the third surgical data includes third preoperative image data, third target image data, and a label; a second operation subunit, configured to randomly add noise to the third preoperative image data to obtain fourth preoperative image data; a combination subunit, configured to combine the fourth preoperative image data and the third target image data to form fourth surgical data; and a second determining subunit, configured to take the third surgical data and the fourth surgical data, respectively, as data in the training data set.
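A hedged sketch of the noise step: noise (here Gaussian, as one plausible choice) is added only to the preoperative image, while the target image and label are reused unchanged, so every original sample yields a second, noisier one.

```python
import random

def add_noise(image, sigma, rng):
    # Add independent Gaussian noise to every pixel/voxel value.
    return [[v + rng.gauss(0.0, sigma) for v in row] for row in image]

rng = random.Random(42)         # seeded for a reproducible example
third_pre = [[10.0, 20.0], [30.0, 40.0]]
fourth_pre = add_noise(third_pre, sigma=0.5, rng=rng)
# The fourth surgical data reuses the original target image and label:
# (fourth_pre, third_target, third_label)
print(fourth_pre)
```

Because the markings live in the target image, perturbing only the preoperative input teaches the first model to be robust to imaging noise without corrupting the supervision signal.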
In some embodiments, the apparatus further comprises: and the second acquisition unit is used for acquiring an objective function before alternately training the first network model and the second network model, wherein the objective function comprises an output value of the first network model and an output value of the second network model, and the objective function is used for training the first network model and the second network model.
In some embodiments, the objective function is:

V(D, G) = E_{x~p_data(x)}[log D(x)] + E_{z~p_data(z)}[log(1 - D(G(z)))]

wherein x denotes preoperative image data whose corresponding target image data carries operation marking information of a real operation; z denotes preoperative image data whose corresponding target image data does not carry operation marking information of a real operation; D(t) denotes the output value of the second network model and G(t) denotes the output value of the first network model, with t denoting preoperative image data; E_{x~p_data(x)} denotes the expected value over target image data whose operation marking information is operation marking information of a real operation; and E_{z~p_data(z)} denotes the expected value over target image data whose operation marking information is not operation marking information of a real operation.
In some embodiments, when the first network model is trained, the parameters of the first network model are adjusted in the direction in which the objective function decreases; and/or, when the second network model is trained, the parameters of the second network model are adjusted in the direction in which the objective function increases.
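The opposing update directions can be checked numerically. The sketch below evaluates the minimax objective V(D, G) = E[log D(x)] + E[log(1 - D(G(z)))] on illustrative probabilities (assumed values, not outputs of a real model): a second model that separates real from generated samples well yields a larger V, which is why it is trained to increase the objective while the first model is trained to decrease it.

```python
import math

def objective(d_real, d_fake):
    """V(D, G): d_real are D's outputs on real-operation samples,
    d_fake are D(G(z)) outputs on generated samples."""
    term_real = sum(math.log(p) for p in d_real) / len(d_real)
    term_fake = sum(math.log(1.0 - p) for p in d_fake) / len(d_fake)
    return term_real + term_fake

# A discriminator that scores real samples high and generated ones low...
strong_d = objective(d_real=[0.9, 0.8], d_fake=[0.1, 0.2])
# ...achieves a larger V than an undecided one.
weak_d = objective(d_real=[0.6, 0.5], d_fake=[0.5, 0.4])
print(strong_d > weak_d)  # True
```

Conversely, when the first model improves, D(G(z)) rises, the log(1 - D(G(z))) term falls, and V decreases, which is exactly the direction in which the first model's parameters are pushed.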
In some embodiments, the loss function used when training the first network model is:

L = (1/N) * Σ_{i=1}^{N} (y_i - P_i)²

wherein L is the value of the loss function, i indexes the training data, N is the total number of training data, y_i denotes the target image data in the i-th training data, and P_i denotes the target image data generated by the first network model from the preoperative image data in the i-th training data; and/or the loss function used when training the second network model is:

L = -(1/N) * Σ_{i=1}^{N} [y_i * log(P_i) + (1 - y_i) * log(1 - P_i)]

wherein L is the value of the loss function, i indexes the training data, N is the total number of training data, y_i denotes the label in the i-th training data, and P_i denotes the probability, given by the second network model, that the operation marking information in the target image data of the i-th training data is operation marking information of a real operation.
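Taking the first model's loss as a mean-squared error over images and the second model's loss as a binary cross-entropy over labels (an assumption consistent with the variable descriptions above; the patent's rendered formulas are not reproduced in this extraction), the two losses can be sketched as:

```python
import math

def generator_loss(targets, generated):
    # Mean-squared error between target image data y_i and the image
    # data P_i generated by the first network model (scalars stand in
    # for whole images here).
    n = len(targets)
    return sum((y - p) ** 2 for y, p in zip(targets, generated)) / n

def discriminator_loss(labels, probs):
    # Binary cross-entropy: y_i is the real/not-real label, P_i the
    # probability assigned by the second network model.
    n = len(labels)
    return -sum(y * math.log(p) + (1 - y) * math.log(1 - p)
                for y, p in zip(labels, probs)) / n

print(generator_loss([1.0, 0.0], [1.0, 0.0]))            # 0.0 for a perfect match
print(round(discriminator_loss([1, 0], [0.9, 0.1]), 4))  # small for confident, correct D
```

A perfect generator drives the first loss to zero, while a confident, correct discriminator drives the second loss toward zero, matching the opposing roles of the two models.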
For the description and beneficial effects of the surgical planning apparatus, reference may be made to the corresponding description of the method; details are not repeated here.
The present specification provides a model training apparatus including a first acquisition unit and a training unit.
The first acquisition unit is used for acquiring a training data set, wherein each training data item in the training data set comprises preoperative image data, target image data with operation marking information, and a label, the label indicating whether the operation marking information in the target image data is operation marking information of a real operation.
The training unit is used for alternately training the first network model and the second network model until a first preset condition is reached, and then taking the first network model as the trained target network model, so that target image data with operation marking information can be generated by the target network model and operation planning information can be determined according to the generated target image data with operation marking information. When the first network model is trained, the second network model is fixed: the first network model generates target image data with operation marking information from the preoperative image data in the training data, the second network model judges the degree of closeness between the operation marking information in the generated target image data and the operation marking information of a real operation, and the parameters of the first network model are adjusted according to the judgment result. When the second network model is trained, the first network model is fixed: the first network model generates target image data with operation marking information from the preoperative image data in the training data, the second network model judges the degree of closeness between the operation marking information in the generated target image data and the operation marking information of a real operation, and the parameters of the second network model are adjusted according to the judgment result and the labels in the training data.
For the description and beneficial effects of the model training apparatus, reference may be made to the corresponding description of the method; details are not repeated here.
An embodiment of the present invention further provides an electronic device. As shown in fig. 14, the electronic device may include a processor 1401 and a memory 1402, which may be connected by a bus or in another manner; fig. 14 takes connection by a bus as an example.
The processor 1401 may be a central processing unit (CPU), another general-purpose processor, a digital signal processor (DSP), an application-specific integrated circuit (ASIC), a field-programmable gate array (FPGA) or other programmable logic device, a discrete gate or transistor logic device, discrete hardware components, or any combination thereof.
The memory 1402, as a non-transitory computer-readable storage medium, can be used to store non-transitory software programs, non-transitory computer-executable programs, and modules, such as the program instructions/modules corresponding to the model training method or the surgical planning method in the embodiments of the present invention. The processor 1401 executes the non-transitory software programs, instructions, and modules stored in the memory 1402 to implement the model training method or the surgical planning method in the above method embodiments, thereby executing the various functional applications and data processing of the processor.
The memory 1402 may include a program storage area and a data storage area, wherein the program storage area may store an operating system, an application program required for at least one function; the storage data area may store data created by the processor 1401, and the like. Further, the memory 1402 may include high-speed random access memory, and may also include non-transitory memory, such as at least one magnetic disk storage device, flash memory device, or other non-transitory solid state storage device. In some embodiments, memory 1402 may optionally include memory located remotely from processor 1401, which may be connected to processor 1401 via a network. Examples of such networks include, but are not limited to, the internet, intranets, local area networks, mobile communication networks, and combinations thereof.
The one or more modules are stored in the memory 1402 and, when executed by the processor 1401, perform the model training method or the surgical planning method described above.
The specific details of the electronic device may be understood by referring to the relevant description and effects in the corresponding embodiments, which are not described herein again.
The present specification provides a computer storage medium having computer program instructions stored thereon that, when executed by a processor, implement the steps of the model training method or the surgical planning method described above.
Those skilled in the art will appreciate that all or part of the processes of the methods of the embodiments described above can be implemented by a computer program, which can be stored in a computer-readable storage medium and can include the processes of the embodiments of the methods described above when executed. The storage medium may be a magnetic Disk, an optical Disk, a Read-Only Memory (ROM), a Random Access Memory (RAM), a Flash Memory (Flash Memory), a Hard Disk (Hard Disk Drive, abbreviated as HDD), a Solid State Drive (SSD), or the like; the storage medium may also comprise a combination of memories of the kind described above.
The embodiments in the present specification are described in a progressive manner, and the same and similar parts among the embodiments are referred to each other, and each embodiment focuses on differences from other embodiments.
The systems, devices, modules or units described in the above embodiments may be implemented by a computer chip or an entity, or by a product with certain functions.
For convenience of description, the above devices are described as being divided into various units by function, and are described separately. Of course, the functionality of the various elements may be implemented in the same one or more pieces of software and/or hardware in the practice of the present application.
From the above description of the embodiments, it is clear to those skilled in the art that the present application can be implemented by software plus a necessary general hardware platform. Based on such understanding, the technical solutions of the present application may be essentially or partially implemented in the form of software products, which may be stored in a storage medium, such as ROM/RAM, magnetic disk, optical disk, etc., and include instructions for causing a computer device (which may be a personal computer, a server, or a network device, etc.) to execute the methods of some parts of the embodiments of the present application.
The application is operational with numerous general purpose or special purpose computing system environments or configurations. For example: personal computers, server computers, hand-held or portable devices, tablet-type devices, multiprocessor systems, microprocessor-based systems, set top boxes, programmable consumer electronics, network PCs, minicomputers, mainframe computers, distributed computing environments that include any of the above systems or devices, and the like.
The application may be described in the general context of computer-executable instructions, such as program modules, being executed by a computer. Generally, program modules include routines, programs, objects, components, data structures, etc. that perform particular tasks or implement particular abstract data types. The application may also be practiced in distributed computing environments where tasks are performed by remote processing devices that are linked through a communications network. In a distributed computing environment, program modules may be located in both local and remote computer storage media including memory storage devices.
Although the present application has been described by way of embodiments, those of ordinary skill in the art will recognize that numerous variations and modifications are possible without departing from the spirit of the application, and it is intended that the appended claims encompass such variations and modifications.

Claims (15)

1. A surgical planning method, comprising:
inputting preoperative image data of a target patient into a target network model; the target network model is used for generating target image data with operation marking information according to preoperative image data, and is obtained by training sample image data with operation marking information;
and determining operation planning information according to the operation marking information in the target image data output by the target network model.
2. The method of claim 1, wherein the target network model comprises a first network model trained using the following model training method:
acquiring a training data set, wherein each training data in the training data set comprises preoperative image data, target image data with operation marking information and a label, and the label is used for indicating whether the operation marking information in the target image data is operation marking information of a real operation;
Alternately training a first network model and a second network model, wherein when the first network model is trained, the second network model is fixed, the first network model is adopted to generate target image data with operation marking information according to preoperative image data in training data, the second network model is used for judging the closeness degree between the operation marking information in the generated target image data and the operation marking information of a real operation, and the parameter of the first network model is adjusted according to the judgment result; when the second network model is trained, the first network model is fixed, the first network model is adopted to generate target image data with operation marking information according to preoperative image data in training data, the second network model is used for judging the proximity degree between the operation marking information in the generated target image data and the operation marking information of a real operation, and parameters of the second network model are adjusted according to a judgment result and labels in the training data.
3. The method of claim 2, wherein in training the first network model, the following steps are performed in a loop until a first cutoff condition is reached: fixing a second network model, generating target image data with operation marking information according to preoperative image data in the training data by adopting the first network model, judging the closeness degree between the operation marking information in the generated target image data and the operation marking information of a real operation through the second network model, and adjusting the parameters of the first network model according to the judgment result;
In training the second network model, the following steps are executed in a loop until a second cutoff condition is reached: fixing the first network model, generating target image data with operation marking information according to preoperative image data in the training data by adopting the first network model, judging the proximity degree between the operation marking information in the generated target image data and the operation marking information of a real operation through the second network model, and adjusting the parameters of the second network model according to a judgment result and a label in the training data.
4. The method of claim 2, further comprising, prior to training the first network model and the second network model alternately:
the following steps are executed in a loop until a second cutoff condition is reached: fixing the first network model, judging the closeness degree between the operation marking information in the target image data in the training data set and the operation marking information of the real operation through the second network model, and adjusting the parameters of the second network model according to the judgment result and the label in the training data.
5. The method of claim 2, wherein obtaining a training data set comprises:
acquiring first surgical data from a historical surgical database, wherein the first surgical data comprises first preoperative image data, first target image data and a label, and the difference between the first target image data and the first preoperative image data comprises surgical marking information added with a surgery;
Performing random image turning and/or scaling operation on first preoperative image data and first target image data in the first operation data, and determining a plurality of different second operation data according to operation results;
the plurality of different second surgical data are used as training data in a training data set.
6. The method of claim 2, wherein obtaining a training data set comprises:
acquiring third operation data, wherein the third operation data comprises third preoperative image data, third target image data and a label;
randomly adding noise to the third preoperative image data to obtain fourth preoperative image data;
combining the fourth pre-operative image data with the third target image data to form fourth surgical data;
and taking the third operation data and the fourth operation data as data in a training data set respectively.
7. The method of claim 2, further comprising, prior to training the first network model and the second network model alternately: and acquiring an objective function, wherein the objective function comprises an output value of the first network model and an output value of the second network model, and the objective function is used for training the first network model and the second network model.
8. The method of claim 7, wherein the objective function is:
V(D, G) = E_{x~p_data(x)}[log D(x)] + E_{z~p_data(z)}[log(1 - D(G(z)))],

wherein x represents preoperative image data whose corresponding target image data carries operation marking information of a real operation; z represents preoperative image data whose corresponding target image data does not carry operation marking information of a real operation; D(t) represents the output value of the second network model and G(t) represents the output value of the first network model, with t representing preoperative image data; E_{x~p_data(x)} represents the expected value over target image data whose operation marking information is operation marking information of a real operation; and E_{z~p_data(z)} represents the expected value over target image data whose operation marking information is not operation marking information of a real operation.
9. The method of claim 8, wherein in training the first network model, parameters of the first network model are adjusted toward a direction in which the objective function becomes smaller; and/or the presence of a gas in the gas,
when training the second network model, the parameters of the second network model are adjusted in the direction of increasing objective function.
10. The method of claim 2, wherein the loss function used when training the first network model is:

L = (1/N) * Σ_{i=1}^{N} (y_i - P_i)²

wherein L is the value of the loss function, i indexes the training data, N is the total number of training data, y_i represents the target image data in the i-th training data, and P_i represents the target image data generated by the first network model from the preoperative image data in the i-th training data;

and/or,

the loss function used when training the second network model is:

L = -(1/N) * Σ_{i=1}^{N} [y_i * log(P_i) + (1 - y_i) * log(1 - P_i)]

wherein L is the value of the loss function, i indexes the training data, N is the total number of training data, y_i represents the label in the i-th training data, and P_i represents the probability, given by the second network model, that the operation marking information in the target image data of the i-th training data is operation marking information of a real operation.
11. A method of model training, comprising:
acquiring a training data set, wherein each training data in the training data set comprises preoperative image data, target image data with operation marking information and a label, and the label is used for indicating whether the operation marking information in the target image data is operation marking information of a real operation;
alternately training a first network model and a second network model until a first preset condition is reached, taking the first network model as a trained target network model to generate target image data with operation marking information through the target network model, and determining operation planning information according to the generated target image data with the operation marking information;
When the first network model is trained, fixing the second network model, generating target image data with operation marking information according to preoperative image data in training data by adopting the first network model, judging the proximity degree between the operation marking information in the generated target image data and the operation marking information of a real operation through the second network model, and adjusting the parameters of the first network model according to a judgment result;
when the second network model is trained, the first network model is fixed, the first network model is adopted to generate target image data with operation marking information according to preoperative image data in training data, the degree of proximity between the operation marking information in the generated target image data and the operation marking information of a real operation is judged through the second network model, and parameters of the second network model are adjusted according to a judgment result and labels in the training data.
12. A surgical planning apparatus, comprising:
the processing unit is used for inputting preoperative image data of a target patient into the target network model; the target network model is used for generating target image data with operation marking information according to preoperative image data, and is obtained by training sample image data with operation marking information;
And the determining unit is used for determining operation planning information according to the operation marking information in the target image data output by the target network model.
13. A model training apparatus, comprising:
the system comprises a first acquisition unit, a second acquisition unit and a third acquisition unit, wherein the first acquisition unit is used for acquiring a training data set, and each training data in the training data set comprises preoperative image data, target image data with operation marking information and a label, and the label is used for indicating whether the operation marking information in the target image data is operation marking information of a real operation;
the training unit is used for alternately training the first network model and the second network model until a first preset condition is reached, taking the first network model as a target network model obtained by training, generating target image data with operation marking information through the target network model, and determining operation planning information according to the generated target image data with the operation marking information;
when the first network model is trained, fixing the second network model, generating target image data with operation marking information according to preoperative image data in training data by adopting the first network model, judging the proximity degree between the operation marking information in the generated target image data and the operation marking information of a real operation through the second network model, and adjusting parameters of the first network model according to a judgment result;
When the second network model is trained, the first network model is fixed, the first network model is adopted to generate target image data with operation marking information according to preoperative image data in training data, the degree of proximity between the operation marking information in the generated target image data and the operation marking information of a real operation is judged through the second network model, and parameters of the second network model are adjusted according to a judgment result and labels in the training data.
14. An electronic device, comprising:
a memory and a processor, the processor and the memory being communicatively connected to each other, the memory having stored therein computer instructions, the processor implementing the steps of the method of any one of claims 1 to 11 by executing the computer instructions.
15. A computer storage medium characterized in that it stores computer program instructions which, when executed by a processor, implement the steps of the method of any one of claims 1 to 11.
CN202211246286.2A 2022-10-12 2022-10-12 Operation planning and model training method and device and electronic equipment Pending CN115634044A (en)

Priority Applications (1)

Application Number Priority Date Filing Date Title
CN202211246286.2A CN115634044A (en) 2022-10-12 2022-10-12 Operation planning and model training method and device and electronic equipment


Publications (1)

Publication Number Publication Date
CN115634044A true CN115634044A (en) 2023-01-24

Family

ID=84944504

Family Applications (1)

Application Number Title Priority Date Filing Date
CN202211246286.2A Pending CN115634044A (en) 2022-10-12 2022-10-12 Operation planning and model training method and device and electronic equipment

Country Status (1)

Country Link
CN (1) CN115634044A (en)

Cited By (1)

* Cited by examiner, † Cited by third party
Publication number Priority date Publication date Assignee Title
CN118000908A (en) * 2024-04-09 2024-05-10 北京天智航医疗科技股份有限公司 Total knee replacement planning method, device, equipment and storage medium


Similar Documents

Publication Publication Date Title
CN110309706B (en) Face key point detection method and device, computer equipment and storage medium
US7403634B2 (en) Object tracking apparatus and method
JP7221421B2 (en) Vertebral localization method, device, device and medium for CT images
US11017210B2 (en) Image processing apparatus and method
EP2579210A1 (en) Face feature-point position correction device, face feature-point position correction method, and face feature-point position correction program
CN110222641B (en) Method and apparatus for recognizing image
CN112085056B (en) Target detection model generation method, device, equipment and storage medium
CN113132633B (en) Image processing method, device, equipment and computer readable storage medium
CN115634044A (en) Operation planning and model training method and device and electronic equipment
US20080019568A1 (en) Object tracking apparatus and method
CN114495241A (en) Image identification method and device, electronic equipment and storage medium
CN111161153B (en) Wide view splicing method, device and storage medium
CN114022480B (en) Medical image key point detection method and device based on statistics and shape topological graph
JP5704909B2 (en) Attention area detection method, attention area detection apparatus, and program
JP6937782B2 (en) Image processing method and device
CN113724328A (en) Hip joint key point detection method and system
US11983844B2 (en) Panoramic stitching method, apparatus, and storage medium
US20180176108A1 (en) State information completion using context graphs
CN112258647A (en) Map reconstruction method and device, computer readable medium and electronic device
CN111145152A (en) Image detection method, computer device, and storage medium
CN113723515B (en) Moire pattern recognition method, device, equipment and medium based on image recognition
JP2023167320A (en) Learning model generation device, joint point detection device, learning model generation method, joint point detection method, and program
EP2889724B1 (en) System and method for selecting features for identifying human activities in a human-computer interacting environment
JPWO2009151002A1 (en) Pattern identification method, apparatus and program
US20190205613A1 (en) Distorted fingerprint matching using pose and minutia grouping

Legal Events

Date Code Title Description
PB01 Publication
SE01 Entry into force of request for substantive examination