CN116844734A - Method and device for generating dose prediction model, electronic equipment and storage medium - Google Patents


Info

Publication number
CN116844734A
CN116844734A
Authority
CN
China
Prior art keywords
encoder
dose
reference object
decoder
actual
Prior art date
Legal status
Granted
Application number
CN202311123150.7A
Other languages
Chinese (zh)
Other versions
CN116844734B (en)
Inventor
周琦超
冷子轩
Current Assignee
Manteia Data Technology Co ltd In Xiamen Area Of Fujian Pilot Free Trade Zone
Original Assignee
Manteia Data Technology Co ltd In Xiamen Area Of Fujian Pilot Free Trade Zone
Priority date
Filing date
Publication date
Application filed by Manteia Data Technology Co ltd In Xiamen Area Of Fujian Pilot Free Trade Zone filed Critical Manteia Data Technology Co ltd In Xiamen Area Of Fujian Pilot Free Trade Zone
Priority to CN202311123150.7A priority Critical patent/CN116844734B/en
Publication of CN116844734A publication Critical patent/CN116844734A/en
Application granted granted Critical
Publication of CN116844734B publication Critical patent/CN116844734B/en
Active legal-status Critical Current
Anticipated expiration legal-status Critical

Links

Classifications

    • G PHYSICS
    • G16 INFORMATION AND COMMUNICATION TECHNOLOGY [ICT] SPECIALLY ADAPTED FOR SPECIFIC APPLICATION FIELDS
    • G16H HEALTHCARE INFORMATICS, i.e. INFORMATION AND COMMUNICATION TECHNOLOGY [ICT] SPECIALLY ADAPTED FOR THE HANDLING OR PROCESSING OF MEDICAL OR HEALTHCARE DATA
    • G16H50/00 ICT specially adapted for medical diagnosis, medical simulation or medical data mining; ICT specially adapted for detecting, monitoring or modelling epidemics or pandemics
    • G16H50/70 ICT specially adapted for medical diagnosis, medical simulation or medical data mining; ICT specially adapted for detecting, monitoring or modelling epidemics or pandemics for mining of medical data, e.g. analysing previous cases of other patients
    • G PHYSICS
    • G06 COMPUTING; CALCULATING OR COUNTING
    • G06N COMPUTING ARRANGEMENTS BASED ON SPECIFIC COMPUTATIONAL MODELS
    • G06N3/00 Computing arrangements based on biological models
    • G06N3/02 Neural networks
    • G06N3/04 Architecture, e.g. interconnection topology
    • G06N3/045 Combinations of networks
    • G06N3/0455 Auto-encoder networks; Encoder-decoder networks
    • G PHYSICS
    • G06 COMPUTING; CALCULATING OR COUNTING
    • G06N COMPUTING ARRANGEMENTS BASED ON SPECIFIC COMPUTATIONAL MODELS
    • G06N3/00 Computing arrangements based on biological models
    • G06N3/02 Neural networks
    • G06N3/08 Learning methods
    • G PHYSICS
    • G06 COMPUTING; CALCULATING OR COUNTING
    • G06V IMAGE OR VIDEO RECOGNITION OR UNDERSTANDING
    • G06V10/00 Arrangements for image or video recognition or understanding
    • G06V10/70 Arrangements for image or video recognition or understanding using pattern recognition or machine learning
    • G06V10/77 Processing image or video features in feature spaces; using data integration or data reduction, e.g. principal component analysis [PCA] or independent component analysis [ICA] or self-organising maps [SOM]; Blind source separation
    • G06V10/774 Generating sets of training patterns; Bootstrap methods, e.g. bagging or boosting
    • G PHYSICS
    • G06 COMPUTING; CALCULATING OR COUNTING
    • G06V IMAGE OR VIDEO RECOGNITION OR UNDERSTANDING
    • G06V10/00 Arrangements for image or video recognition or understanding
    • G06V10/70 Arrangements for image or video recognition or understanding using pattern recognition or machine learning
    • G06V10/82 Arrangements for image or video recognition or understanding using pattern recognition or machine learning using neural networks
    • Y GENERAL TAGGING OF NEW TECHNOLOGICAL DEVELOPMENTS; GENERAL TAGGING OF CROSS-SECTIONAL TECHNOLOGIES SPANNING OVER SEVERAL SECTIONS OF THE IPC; TECHNICAL SUBJECTS COVERED BY FORMER USPC CROSS-REFERENCE ART COLLECTIONS [XRACs] AND DIGESTS
    • Y04 INFORMATION OR COMMUNICATION TECHNOLOGIES HAVING AN IMPACT ON OTHER TECHNOLOGY AREAS
    • Y04S SYSTEMS INTEGRATING TECHNOLOGIES RELATED TO POWER NETWORK OPERATION, COMMUNICATION OR INFORMATION TECHNOLOGIES FOR IMPROVING THE ELECTRICAL POWER GENERATION, TRANSMISSION, DISTRIBUTION, MANAGEMENT OR USAGE, i.e. SMART GRIDS
    • Y04S10/00 Systems supporting electrical power generation, transmission or distribution
    • Y04S10/50 Systems or methods supporting the power network operation or management, involving a certain degree of interaction with the load-side end user applications

Landscapes

  • Engineering & Computer Science (AREA)
  • Theoretical Computer Science (AREA)
  • Health & Medical Sciences (AREA)
  • Physics & Mathematics (AREA)
  • Evolutionary Computation (AREA)
  • General Health & Medical Sciences (AREA)
  • General Physics & Mathematics (AREA)
  • Medical Informatics (AREA)
  • Data Mining & Analysis (AREA)
  • Artificial Intelligence (AREA)
  • Software Systems (AREA)
  • Computing Systems (AREA)
  • Databases & Information Systems (AREA)
  • Biomedical Technology (AREA)
  • Public Health (AREA)
  • General Engineering & Computer Science (AREA)
  • Mathematical Physics (AREA)
  • Computer Vision & Pattern Recognition (AREA)
  • Molecular Biology (AREA)
  • Multimedia (AREA)
  • Life Sciences & Earth Sciences (AREA)
  • Biophysics (AREA)
  • Computational Linguistics (AREA)
  • Pathology (AREA)
  • Epidemiology (AREA)
  • Primary Health Care (AREA)
  • Image Analysis (AREA)
  • Image Processing (AREA)

Abstract

The application discloses a method and a device for generating a dose prediction model, an electronic device and a storage medium. The method comprises the following steps: acquiring a small sample data set, wherein the small sample data set comprises at least medical images and actual dose distribution maps corresponding to M first reference objects; determining an actual isodose line region map and an actual dose change gradient map corresponding to the first reference object based on the actual dose distribution map corresponding to the first reference object; performing iterative training on a first encoder and a first decoder according to the small sample data set and the actual isodose line region map and actual dose change gradient map corresponding to the first reference object to obtain a target encoder and a target decoder; and using the target encoder and the target decoder as the dose prediction model. The method solves the technical problem that existing training methods for dose prediction models require a large data volume in the data set and therefore train poorly in most scenarios.

Description

Method and device for generating dose prediction model, electronic equipment and storage medium
Technical Field
The application relates to the field of medical science and technology, in particular to a method and device for generating a dose prediction model, electronic equipment and a storage medium.
Background
Currently, with the development of artificial intelligence technology, dose distribution information in the field of medical science and technology, and especially in the field of radiotherapy, is usually predicted from medical images by training a deep learning model such as a dose prediction model.
However, existing training methods for dose prediction models place a high requirement on the data volume of the data set, and because of factors such as patient privacy it is difficult to acquire large amounts of sample data; as a result, existing dose prediction models train poorly in most scenarios.
In view of the above problems, no effective solution has been proposed at present.
Disclosure of Invention
The application provides a method, a device, an electronic device and a storage medium for generating a dose prediction model, which at least solve the technical problem that dose prediction models train poorly in most scenarios because existing training methods place a high requirement on the data volume of the data set.
According to an aspect of the present application, there is provided a method of generating a dose prediction model, comprising: acquiring a small sample data set, wherein the small sample data set comprises at least medical images and actual dose distribution maps corresponding to M first reference objects, and M is an integer greater than 1; determining an actual isodose line region map and an actual dose change gradient map corresponding to the first reference object based on the actual dose distribution map corresponding to the first reference object, wherein the robustness corresponding to multi-granularity information in the actual isodose line region map is higher than that in the actual dose distribution map, and the accuracy corresponding to contour information in the actual dose change gradient map is higher than that in the actual dose distribution map; performing iterative training on a first encoder and a first decoder according to the small sample data set and the actual isodose line region map and actual dose change gradient map corresponding to the first reference object to obtain a target encoder and a target decoder, wherein the first encoder and the first decoder are an encoder and a decoder obtained through training according to the medical images and dose distribution maps corresponding to N second reference objects, N is an integer greater than 1, and N is greater than M; and using the target encoder and the target decoder as the dose prediction model.
Optionally, the method for generating a dose prediction model further includes: before iteratively training the first encoder and the first decoder according to the small sample data set and the actual isodose line region map and actual dose change gradient map corresponding to the first reference object to obtain the target encoder and the target decoder, performing a pre-training operation on a preset initial encoder and initial decoder according to the medical images and dose distribution maps corresponding to the N second reference objects, wherein the pre-training operation is used for extracting image features of the medical image corresponding to the second reference object through the initial encoder, generating a predicted dose distribution map corresponding to the second reference object through the initial decoder according to the image features extracted by the initial encoder, and updating the weights of the initial encoder and of the initial decoder according to the predicted dose distribution map corresponding to the second reference object and the dose distribution map corresponding to the second reference object; the first encoder and the first decoder are obtained by performing the pre-training operation on the initial encoder and the initial decoder a plurality of times.
Optionally, the method for generating a dose prediction model further includes: extracting, by the first encoder, image features of a medical image corresponding to a first reference object in the small sample data set; executing a first task operation, wherein the first task operation is used for updating the weight of the first encoder and the weight of the first decoder according to the image features extracted by the first encoder and the actual dose distribution map corresponding to the first reference object; executing a second task operation, wherein the second task operation is used for updating the weight of the first encoder and the weight of a second decoder according to the image features extracted by the first encoder and the actual isodose line region map corresponding to the first reference object; executing a third task operation, wherein the third task operation is used for updating the weights of the first encoder and a third decoder according to the image features extracted by the first encoder and the actual dose change gradient map corresponding to the first reference object, and the first decoder, the second decoder and the third decoder are mutually independent decoders; and executing the first task operation, the second task operation and the third task operation a plurality of times in parallel, taking the first encoder after the final weight update as the target encoder, and taking the first decoder after the final weight update as the target decoder.
Optionally, the first task operation includes the steps of: generating a first predicted dose distribution map corresponding to a first reference object according to the image features extracted by the first encoder through the first decoder; performing isodose line processing on a first predicted dose distribution map corresponding to a first reference object to obtain a first predicted isodose line region map corresponding to the first reference object; performing dose change gradient processing on a first predicted dose distribution map corresponding to a first reference object to obtain a first predicted dose change gradient map corresponding to the first reference object; the weight of the first encoder and the weight of the first decoder are updated according to the first predicted dose distribution map, the first predicted isodose line region map, the first predicted dose change gradient map, and the actual dose distribution map corresponding to the first reference object.
Optionally, the method for generating a dose prediction model further includes: taking pixel level difference information between a first predicted dose distribution map corresponding to a first reference object and an actual dose distribution map corresponding to the first reference object as a first parameter; taking pixel level difference information between a first predicted isodose line region map corresponding to a first reference object and an actual isodose line region map corresponding to the first reference object as a second parameter; taking pixel level difference information between a first predicted dose change gradient map corresponding to the first reference object and an actual dose change gradient map corresponding to the first reference object as a third parameter; and updating the weight of the first encoder and the weight of the first decoder according to the first parameter, the second parameter and the third parameter.
Optionally, the second task operation includes the steps of: generating a second predicted isodose line region map corresponding to the first reference object according to the image features extracted by the first encoder by a second decoder; updating the weight of the second decoder and the weight of the first encoder according to pixel-level difference information between a second predicted isodose line region map corresponding to the first reference object and an actual isodose line region map corresponding to the first reference object; and/or the third task operation comprises the steps of: generating a second predicted dose change gradient map corresponding to the first reference object according to the image features extracted by the first encoder through a third decoder; and updating the weight of the third decoder and the weight of the first encoder according to pixel-level difference information between the second predicted dose change gradient map corresponding to the first reference object and the actual dose change gradient map corresponding to the first reference object.
Optionally, the method for generating a dose prediction model further includes: after taking the target encoder and the target decoder as a dose prediction model, acquiring a medical image corresponding to the target object; and inputting the medical image corresponding to the target object into the dose prediction model to obtain a dose distribution diagram corresponding to the target object output by the dose prediction model.
Optionally, the medical image corresponding to the first reference object comprises at least a contour image corresponding to a radiotherapy target zone of the first reference object, a contour image corresponding to an organ at risk of the first reference object, and a multi-modal medical image corresponding to the first reference object.
According to another aspect of the present application, there is also provided a device for generating a dose prediction model, including: the acquisition unit is used for acquiring a small sample data set, wherein the small sample data set at least comprises medical images and actual dose distribution diagrams corresponding to M first reference objects, and M is an integer larger than 1; a determining unit, configured to determine an actual isodose line area map and an actual dose change gradient map corresponding to the first reference object based on an actual dose distribution map corresponding to the first reference object, where robustness corresponding to multi-granularity information in the actual isodose line area map is higher than robustness corresponding to multi-granularity information in the actual dose distribution map, and accuracy corresponding to profile information in the actual dose change gradient map is higher than accuracy corresponding to profile information in the actual dose distribution map; the training unit is used for carrying out iterative training on the first encoder and the first decoder according to the small sample data set, the actual isodose line region graph corresponding to the first reference object and the actual dose change gradient graph to obtain a target encoder and a target decoder, wherein the first encoder and the first decoder are the encoder and the decoder which are obtained through training according to the medical images and the dose distribution graphs corresponding to the N second reference objects, N is an integer larger than 1, and N is larger than M; and the processing unit is used for taking the target encoder and the target decoder as a dose prediction model.
According to another aspect of the present application, there is also provided a computer readable storage medium, wherein the computer readable storage medium comprises a stored computer program, wherein the computer program when run controls a device in which the computer readable storage medium is located to perform the method of generating a dose prediction model according to any one of the above.
According to another aspect of the present application, there is also provided an electronic device, wherein the electronic device comprises one or more processors and a memory for storing one or more programs, wherein the one or more programs, when executed by the one or more processors, cause the one or more processors to implement the method for generating a dose prediction model of any of the above.
In the application, a first encoder and a first decoder are obtained by pre-training according to the medical images and dose distribution maps corresponding to N second reference objects, so that suitable initial weights are set for the model. On this basis, a small sample data set is first obtained, wherein the small sample data set comprises at least medical images and actual dose distribution maps corresponding to M first reference objects, and M is an integer greater than 1. Then, an actual isodose line region map and an actual dose change gradient map corresponding to the first reference object are determined based on the actual dose distribution map corresponding to the first reference object, wherein the robustness corresponding to the multi-granularity information in the actual isodose line region map is higher than that in the actual dose distribution map, and the accuracy corresponding to the contour information in the actual dose change gradient map is higher than that in the actual dose distribution map. Then, the first encoder and the first decoder are iteratively trained according to the small sample data set and the actual isodose line region map and actual dose change gradient map corresponding to the first reference object to obtain a target encoder and a target decoder, wherein the first encoder and the first decoder are an encoder and a decoder obtained through training according to the medical images and dose distribution maps corresponding to the N second reference objects, N is an integer greater than 1, and N is greater than M. Finally, the target encoder and the target decoder are used as the dose prediction model.
From the above, it can be seen that the first encoder and the first decoder are obtained by pre-training according to the medical images and dose distribution maps corresponding to the N second reference objects, so that relatively suitable initial weights are set for the model; on this basis, the dose prediction model can be trained more efficiently with a small sample data set of smaller data volume, shortening the training time of the model. Secondly, the actual isodose line region map and the actual dose change gradient map corresponding to the first reference object are added as training constraints in the training process of the dose prediction model. Because the robustness corresponding to the multi-granularity information in the actual isodose line region map is higher than that in the actual dose distribution map, and the accuracy corresponding to the contour information in the actual dose change gradient map is higher than that in the actual dose distribution map, this is equivalent to using the actual dose change gradient map to provide edge prior knowledge during model training and using the actual isodose line region map to provide prior knowledge of the influence of the radiation field on the dose distribution, which helps improve the prediction accuracy of the model.
Therefore, the technical scheme of the application achieves the aim of obtaining the dose prediction model based on the training of the small sample data set with less data volume, thereby realizing the technical effect of reducing the training cost of the dose prediction model, and further solving the technical problem that the training effect of the dose prediction model in most scenes is poor due to the higher requirement of the existing training method of the dose prediction model on the data volume of the data set.
Drawings
The accompanying drawings, which are included to provide a further understanding of the application and are incorporated in and constitute a part of this specification, illustrate embodiments of the application and together with the description serve to explain the application and do not constitute a limitation on the application. In the drawings:
FIG. 1 is a flow chart of an alternative method of generating a dose prediction model according to an embodiment of the present application;
FIG. 2 is a schematic illustration of a training process of an alternative dose prediction model according to an embodiment of the present application;
FIG. 3 is a schematic illustration of the use of an alternative dose prediction model according to an embodiment of the present application;
fig. 4 is a schematic diagram of an alternative dose prediction model generation device according to an embodiment of the present application.
Detailed Description
In order that those skilled in the art will better understand the present application, the technical solutions in the embodiments of the present application will be clearly and completely described below with reference to the accompanying drawings. It is apparent that the described embodiments are only some, not all, of the embodiments of the present application. All other embodiments obtained by those skilled in the art based on the embodiments of the present application without making any inventive effort shall fall within the scope of protection of the present application.
It should be noted that the terms "first," "second," and the like in the description and the claims of the present application and the above figures are used for distinguishing between similar objects and not necessarily for describing a particular sequential or chronological order. It is to be understood that the data so used may be interchanged where appropriate such that the embodiments of the application described herein may be implemented in sequences other than those illustrated or otherwise described herein. Furthermore, the terms "comprises," "comprising," and "having," and any variations thereof, are intended to cover a non-exclusive inclusion, such that a process, method, system, article, or apparatus that comprises a list of steps or elements is not necessarily limited to those steps or elements expressly listed but may include other steps or elements not expressly listed or inherent to such process, method, article, or apparatus.
It should be noted that, the related information (including but not limited to user equipment information, user personal information, etc.) and data (including but not limited to data for presentation, analyzed data, etc.) related to the present application are information and data authorized by the user or sufficiently authorized by each party. For example, an interface is provided between the system and the relevant user or institution, before acquiring the relevant information, the system needs to send an acquisition request to the user or institution through the interface, and acquire the relevant information after receiving the consent information fed back by the user or institution.
The application is further illustrated below in conjunction with the examples.
Example 1
Currently, with the continuous development of medical technology, there are various methods for tumor treatment, and intensity modulated radiation therapy is an important one. In intensity modulated radiotherapy, the central requirement is to reduce the irradiation dose to surrounding normal tissue as much as possible while guaranteeing the coverage and intensity of the radiotherapy target region. On this basis, in the design of a radiotherapy plan, the initial steps are to set the radiotherapy target and the optimization conditions. In a conventional treatment planning process, the radiotherapy target is designed by the attending physician, and the optimization conditions are specified by the physicist based on comprehensive consideration of the cancer type and the patient's condition. If the initial optimization conditions are not designed well, the radiotherapy plan optimization process can be significantly lengthened.
In a practical use scenario, even though the cancer type is fixed, it is difficult to quickly specify appropriate initial optimization conditions because of variations in target position, patient condition and the physicist's level of experience, so other approaches are often adopted to achieve a more refined initial optimization condition design. With the development of artificial intelligence technology, training a dose prediction model on a sufficient data set by deep learning and using the dose distribution information output by the dose prediction model as the initial optimization conditions is a reasonable and efficient approach. However, in existing training methods for dose prediction models, the accuracy of the model's predictions depends heavily on the quality and quantity of data in the data set. In practical use, because of factors such as patient privacy, it is often difficult to acquire a sufficiently large data set to complete model training, so existing dose prediction models train poorly in most scenarios.
In order to solve the above problems, according to an embodiment of the present application, there is provided an embodiment of a method of generating a dose prediction model. It should be noted that the steps illustrated in the flowchart of the drawings may be performed in a computer system, such as one executing a set of computer-executable instructions, and that, although a logical order is illustrated in the flowchart, in some cases the steps illustrated or described may be performed in an order different from that herein.
FIG. 1 is a flowchart of an alternative method of generating a dose prediction model according to an embodiment of the present application, as shown in FIG. 1, the method comprising the steps of:
step S101, a small sample data set is acquired.
In step S101, the small sample data set includes at least medical images and actual dose distribution maps corresponding to M first reference objects, M being an integer greater than 1.
Optionally, the first reference object is a case that has undergone radiation therapy, wherein the medical image corresponding to the first reference object includes at least a contour image corresponding to the radiotherapy target zone of the first reference object, a contour image corresponding to an organ at risk of the first reference object, and a multi-modal medical image corresponding to the first reference object. The contour image corresponding to the radiotherapy target zone of the first reference object and the contour image corresponding to the organ at risk of the first reference object may be referred to as ROI (region of interest) delineation images corresponding to the first reference object, and the multi-modal medical images corresponding to the first reference object include, but are not limited to, CT images, MR images, CBCT images, and other images capable of characterizing pathological features of the first reference object.
In addition, the actual dose distribution map corresponding to the first reference object is used to characterize the dose distribution information corresponding to the first reference object during actual radiotherapy, for example a DVH (dose-volume histogram) corresponding to the first reference object during actual radiotherapy; in the present application, the actual dose distribution map corresponding to the first reference object may be denoted the GT-dose map.
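For illustration, a minimal sketch of how one sample in such a small sample data set might be organized is given below; the field names, array shapes and channel counts are assumptions made for this sketch and are not taken from the patent.

```python
import numpy as np

# One sample for a first reference object (2-D slices for brevity; 3-D volumes work analogously).
sample = {
    "ct": np.zeros((1, 256, 256), dtype=np.float32),         # multi-modal medical image (CT assumed)
    "roi_masks": np.zeros((8, 256, 256), dtype=np.float32),  # ROI delineations: target zone + organs at risk
    "gt_dose": np.zeros((1, 256, 256), dtype=np.float32),    # actual dose distribution map (GT-dose map, Gy)
}

# Model input: channel-wise concatenation of the medical images.
model_input = np.concatenate([sample["ct"], sample["roi_masks"]], axis=0)

# The small sample data set holds M such samples (M > 1, M much smaller than N).
small_sample_dataset = [sample]
```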
Step S102, determining an actual isodose line region map and an actual dose change gradient map corresponding to the first reference object based on the actual dose distribution map corresponding to the first reference object.
In step S102, the robustness corresponding to the multi-granularity information in the actual isodose line region map is higher than the robustness corresponding to the multi-granularity information in the actual dose distribution map, and the accuracy corresponding to the contour information in the actual dose change gradient map is higher than the accuracy corresponding to the contour information in the actual dose distribution map.
Alternatively, in the present application, a device for generating a dose prediction model (hereinafter referred to as a generating device) may be used as an execution subject of the method for generating a dose prediction model in the present application, where the generating device may be a software system or an embedded system combining software and hardware.
Alternatively, the generating means may perform an isodose-line operation and a gradient operation, respectively, based on the actual dose distribution map corresponding to each first reference object, where the isodose-line operation is used to generate an actual isodose line region map (which may be denoted the GT-isodose line region map) corresponding to each first reference object from the actual dose distribution map corresponding to that first reference object, and the gradient operation is used to generate an actual dose change gradient map (which may be denoted the GT-dose change gradient map) corresponding to each first reference object from the actual dose distribution map corresponding to that first reference object.
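A minimal sketch of the two derivations follows, assuming the actual dose distribution map is a 2-D array in Gy, the isodose levels are chosen as fractions of the prescription dose, and the gradient is a simple finite-difference magnitude; none of these specific choices are fixed by the patent.

```python
import numpy as np

def gt_isodose_region_map(gt_dose, prescription_dose, levels=(0.2, 0.5, 0.8, 0.95)):
    """GT-isodose line region map: label each pixel by the isodose band it falls in."""
    thresholds = np.asarray(levels) * prescription_dose
    return np.digitize(gt_dose, thresholds).astype(np.float32)   # values 0..len(levels)

def gt_dose_gradient_map(gt_dose, spacing=(1.0, 1.0)):
    """GT-dose change gradient map: magnitude of the spatial dose gradient."""
    gy, gx = np.gradient(gt_dose, *spacing)
    return np.sqrt(gx ** 2 + gy ** 2)
```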
And step S103, performing iterative training on the first encoder and the first decoder according to the small sample data set, the actual isodose line region diagram corresponding to the first reference object and the actual dose change gradient diagram to obtain a target encoder and a target decoder.
In step S103, the first encoder and the first decoder are the encoder and the decoder trained according to the medical images and the dose distribution maps corresponding to the N second reference objects, where N is an integer greater than 1 and N is greater than M.
The first encoder and the first decoder are a pre-trained encoder and decoder, the first reference objects and the second reference objects are historical cases corresponding to the same cancer type, and a first reference object and a second reference object may be the same object or different objects.
In addition, it should be noted that N being greater than M means that the data volume of the data set used when pre-training the first encoder and the first decoder is larger than the data volume of the small sample data set; in other words, a data set with a larger data volume is used in the pre-training process. Pre-training the first encoder and the first decoder in this way achieves the purpose of setting relatively suitable initial weights for the model, so that, on this basis, the dose prediction model can be trained more efficiently with a small sample data set of smaller data volume, which shortens the training time of the model and reduces its training cost. The first encoder and the first decoder can be reused, i.e. they only need to be trained once and can be multiplexed into the training of multiple dose prediction models; for example, the first encoder and the first decoder are iteratively trained using small sample data set A to obtain dose prediction model A, and the first encoder and the first decoder are iteratively trained using small sample data set B to obtain dose prediction model B.
In addition, unlike conventional dose prediction model training, the application also adds the actual isodose line region map (i.e. the GT-isodose line region map) and the actual dose change gradient map (i.e. the GT-dose change gradient map) corresponding to the first reference object as constraint conditions of model training. Because the robustness corresponding to the multi-granularity information in the actual isodose line region map is higher than that in the actual dose distribution map, and the accuracy corresponding to the contour information in the actual dose change gradient map is higher than that in the actual dose distribution map, this is equivalent to providing edge prior knowledge in the model training process by using the GT-dose change gradient map and providing prior knowledge of the influence of the radiation field on the dose distribution by using the GT-isodose line region map, which helps improve the prediction accuracy of the dose prediction model.
Step S104, the target encoder and the target decoder are used as a dose prediction model.
Optionally, the generating means finally connects the target encoder and the target decoder together, resulting in a dose prediction model.
Based on the foregoing contents of steps S101 to S104, the present application adopts a mode in which a first encoder and a first decoder are obtained by pre-training according to the medical images and dose distribution maps corresponding to N second reference objects, so as to set relatively suitable initial weights for the model. A small sample data set is first obtained, wherein the small sample data set comprises at least medical images and actual dose distribution maps corresponding to M first reference objects, and M is an integer greater than 1. Then, an actual isodose line region map and an actual dose change gradient map corresponding to the first reference object are determined based on the actual dose distribution map corresponding to the first reference object, wherein the robustness corresponding to the multi-granularity information in the actual isodose line region map is higher than that in the actual dose distribution map, and the accuracy corresponding to the contour information in the actual dose change gradient map is higher than that in the actual dose distribution map. Then, the first encoder and the first decoder are iteratively trained according to the small sample data set and the actual isodose line region map and actual dose change gradient map corresponding to the first reference object to obtain a target encoder and a target decoder, wherein the first encoder and the first decoder are an encoder and a decoder obtained through training according to the medical images and dose distribution maps corresponding to the N second reference objects, N is an integer greater than 1, and N is greater than M. Finally, the target encoder and the target decoder are used as the dose prediction model.
From the above, it can be seen that the first encoder and the first decoder are obtained by pre-training according to the medical images and dose distribution maps corresponding to the N second reference objects, so that relatively suitable initial weights are set for the model; on this basis, the dose prediction model can be trained more efficiently with a small sample data set of smaller data volume, shortening the training time of the model. Secondly, the actual isodose line region map and the actual dose change gradient map corresponding to the first reference object are added as training constraints in the training process of the dose prediction model. Because the robustness corresponding to the multi-granularity information in the actual isodose line region map is higher than that in the actual dose distribution map, and the accuracy corresponding to the contour information in the actual dose change gradient map is higher than that in the actual dose distribution map, this is equivalent to using the actual dose change gradient map to provide edge prior knowledge during model training and using the actual isodose line region map to provide prior knowledge of the influence of the radiation field on the dose distribution, which helps improve the prediction accuracy of the model.
Therefore, the technical scheme of the application achieves the aim of obtaining the dose prediction model based on the training of the small sample data set with less data volume, thereby realizing the technical effect of reducing the training cost of the dose prediction model, and further solving the technical problem that the training effect of the dose prediction model in most scenes is poor due to the higher requirement of the existing training method of the dose prediction model on the data volume of the data set.
In an alternative embodiment, before iteratively training the first encoder and the first decoder according to the small sample data set and the actual isodose line region map and actual dose change gradient map corresponding to the first reference object to obtain the target encoder and the target decoder, the generating device performs a pre-training operation on a preset initial encoder and initial decoder according to the medical images and dose distribution maps corresponding to the N second reference objects, where the pre-training operation is used to extract image features of the medical image corresponding to the second reference object through the initial encoder, generate a predicted dose distribution map corresponding to the second reference object through the initial decoder according to the image features extracted by the initial encoder, and update the weights of the initial encoder and of the initial decoder according to the predicted dose distribution map corresponding to the second reference object and the dose distribution map corresponding to the second reference object. The first encoder and the first decoder are obtained by performing the pre-training operation on the initial encoder and the initial decoder a plurality of times.
Optionally, fig. 2 is a schematic diagram of a training process of an optional dose prediction model according to an embodiment of the present application. As shown in fig. 2, a pre-training data set is first obtained, where the pre-training data set includes the medical images and dose distribution maps corresponding to the N second reference objects; the medical image corresponding to the second reference object includes at least a multi-modal medical image corresponding to the second reference object and an ROI delineation image corresponding to the second reference object, and the dose distribution map corresponding to the second reference object represents the actual dose information corresponding to the second reference object during actual radiotherapy. The initial encoder and initial decoder are then trained on the pre-training data set.
Optionally, in the pre-training stage, the dose distribution map corresponding to the second reference object is used as the training label, and the preset initial encoder and initial decoder are iteratively trained a preset number of times (for example, 200 times) in combination with the medical image corresponding to the second reference object. In each training pass, the initial encoder extracts the image features of the medical image corresponding to the second reference object, the initial decoder generates a predicted dose distribution map corresponding to the second reference object according to the image features extracted by the initial encoder, and the weights of the initial encoder and the initial decoder are updated according to the predicted dose distribution map and the dose distribution map corresponding to the second reference object.
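A minimal sketch of such a pre-training stage follows; the encoder/decoder architectures, the optimizer, the learning rate and the pixel-wise MSE loss are assumptions for this sketch (the patent only specifies that the predicted and actual dose distribution maps drive the weight updates).

```python
import torch
import torch.nn.functional as F

def pretrain(initial_encoder, initial_decoder, pretrain_loader, iterations=200, lr=1e-4):
    """Pre-training operation: predict the dose maps of the second reference objects and
    update the weights of the initial encoder and initial decoder."""
    params = list(initial_encoder.parameters()) + list(initial_decoder.parameters())
    optimizer = torch.optim.Adam(params, lr=lr)
    for _ in range(iterations):
        for medical_image, gt_dose in pretrain_loader:
            features = initial_encoder(medical_image)      # image features of the second reference object
            pred_dose = initial_decoder(features)          # predicted dose distribution map
            loss = F.mse_loss(pred_dose, gt_dose)          # assumed pixel-wise loss
            optimizer.zero_grad()
            loss.backward()
            optimizer.step()
    return initial_encoder, initial_decoder                # now used as the first encoder / first decoder
```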
In an alternative embodiment, in order to train the target encoder and the target decoder, the generating device first extracts, through the first encoder, the image features of the medical image corresponding to the first reference object in the small sample data set, and then respectively executes a first task operation, a second task operation and a third task operation, wherein the first task operation is used for updating the weight of the first encoder and the weight of the first decoder according to the image features extracted by the first encoder and the actual dose distribution map corresponding to the first reference object; the second task operation is used for updating the weight of the first encoder and the weight of a second decoder according to the image features extracted by the first encoder and the actual isodose line region map corresponding to the first reference object; and the third task operation is used for updating the weights of the first encoder and a third decoder according to the image features extracted by the first encoder and the actual dose change gradient map corresponding to the first reference object, where the first decoder, the second decoder and the third decoder are mutually independent decoders.
It should be noted that the present application executes the first task operation, the second task operation and the third task operation multiple times in parallel, uses the first encoder after the final weight update as the target encoder, and uses the first decoder after the final weight update as the target decoder.
Alternatively, as shown in fig. 2, the first task operation corresponds to task 1 in fig. 2, the second task operation corresponds to task 2 in fig. 2, and the third task operation corresponds to task 3 in fig. 2. Further, the numbers of executions of task 1, task 2 and task 3 may be set as desired, for example 300 times, 200 times, 100 times, and so on.
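A compact sketch of how the three task operations could be interleaved around the shared first encoder is given below; the alternation pattern, the per-task optimizers and the step counts are assumptions (the patent only states that the tasks are executed a plurality of times in parallel). The `run_task1`, `run_task2` and `run_task3` helpers are sketched after the task descriptions that follow.

```python
import torch

def small_sample_training(encoder, dec1, dec2, dec3, loader, steps=200, lr=1e-4):
    """Three-task learning on the small sample data set; all three tasks update the shared encoder."""
    opt1 = torch.optim.Adam(list(encoder.parameters()) + list(dec1.parameters()), lr=lr)
    opt2 = torch.optim.Adam(list(encoder.parameters()) + list(dec2.parameters()), lr=lr)
    opt3 = torch.optim.Adam(list(encoder.parameters()) + list(dec3.parameters()), lr=lr)
    for _ in range(steps):
        for batch in loader:                        # batch: (medical_image, gt_dose, gt_isodose, gt_gradient)
            run_task1(encoder, dec1, opt1, batch)   # task 1: dose distribution map
            run_task2(encoder, dec2, opt2, batch)   # task 2: isodose line region map
            run_task3(encoder, dec3, opt3, batch)   # task 3: dose change gradient map
    return encoder, dec1                            # target encoder and target decoder
```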
In order to further explain the specific contents of the first task operation, the second task operation, and the third task operation, the specific execution procedure of each task operation will be described below.
Optionally, the first task operation includes the steps of: generating a first predicted dose distribution map corresponding to a first reference object according to the image features extracted by the first encoder through the first decoder; performing isodose line processing on a first predicted dose distribution map corresponding to a first reference object to obtain a first predicted isodose line region map corresponding to the first reference object; performing dose change gradient processing on a first predicted dose distribution map corresponding to a first reference object to obtain a first predicted dose change gradient map corresponding to the first reference object; the weight of the first encoder and the weight of the first decoder are updated according to the first predicted dose distribution map, the first predicted isodose line region map, the first predicted dose change gradient map, and the actual dose distribution map corresponding to the first reference object.
Optionally, as shown in fig. 2, each time the first task operation is executed, the generating device generates, through the first decoder, a first predicted dose distribution map corresponding to the first reference object according to the image features extracted by the first encoder; the generating device then performs isodose-line processing on the first predicted dose distribution map to obtain a first predicted isodose line region map corresponding to the first reference object, and performs dose change gradient processing on the first predicted dose distribution map to obtain a first predicted dose change gradient map corresponding to the first reference object. Then, the generating device updates the weight of the first encoder and the weight of the first decoder based on the first predicted dose distribution map, the first predicted isodose line region map, the first predicted dose change gradient map, and the GT-dose map, GT-dose change gradient map and GT-isodose line region map corresponding to the first reference object.
Alternatively, the generating means may take pixel level difference information between the first predicted dose distribution map corresponding to the first reference object and the actual dose distribution map corresponding to the first reference object as the first parameter; taking pixel level difference information between a first predicted isodose line region map corresponding to a first reference object and an actual isodose line region map corresponding to the first reference object as a second parameter; and taking pixel level difference information between the first predicted dose change gradient map corresponding to the first reference object and the actual dose change gradient map corresponding to the first reference object as a third parameter. Finally, the generating device updates the weight of the first encoder and the weight of the first decoder according to the first parameter, the second parameter and the third parameter.
Optionally, the present application defines, for task 1, a second loss function, a third loss function and a fourth loss function, wherein the second loss function is used to calculate the loss function value (i.e. the first parameter) between the first predicted dose distribution map and the GT-dose map, the third loss function is used to calculate the loss function value (i.e. the second parameter) between the first predicted isodose line region map and the GT-isodose line region map, and the fourth loss function is used to calculate the loss function value (i.e. the third parameter) between the first predicted dose change gradient map and the GT-dose change gradient map.
Each time task 1 in fig. 2 is performed, the generating device updates the weight of the first encoder and the weight of the first decoder according to the first parameter, the second parameter, and the third parameter.
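A sketch of one execution of the first task operation follows, assuming L1 pixel-level losses with equal weights, a soft (sigmoid-based) stand-in for the isodose-line processing so that gradients can flow through it, and finite differences for the dose change gradient processing; none of these specifics are given by the patent.

```python
import torch
import torch.nn.functional as F

def run_task1(encoder, first_decoder, optimizer, batch, thresholds=(10.0, 25.0, 40.0, 47.5)):
    """Task 1: one weight update of the first encoder and first decoder driven jointly by the
    first, second and third parameters (the three pixel-level differences)."""
    medical_image, gt_dose, gt_isodose, gt_gradient = batch

    pred_dose = first_decoder(encoder(medical_image))           # first predicted dose distribution map

    # Isodose-line processing of the prediction: soft count of exceeded thresholds (levels in Gy, assumed).
    pred_isodose = torch.stack(
        [torch.sigmoid((pred_dose - t) * 10.0) for t in thresholds], dim=0).sum(dim=0)

    # Dose change gradient processing of the prediction: finite-difference gradient magnitude.
    dy = F.pad(pred_dose[..., 1:, :] - pred_dose[..., :-1, :], (0, 0, 0, 1))
    dx = F.pad(pred_dose[..., :, 1:] - pred_dose[..., :, :-1], (0, 1, 0, 0))
    pred_gradient = torch.sqrt(dx ** 2 + dy ** 2 + 1e-8)

    loss = (F.l1_loss(pred_dose, gt_dose)                       # first parameter
            + F.l1_loss(pred_isodose, gt_isodose)               # second parameter
            + F.l1_loss(pred_gradient, gt_gradient))            # third parameter (equal weights assumed)
    optimizer.zero_grad()
    loss.backward()
    optimizer.step()
```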
In an alternative embodiment, the second task operation includes the steps of: generating a second predicted isodose line region map corresponding to the first reference object according to the image features extracted by the first encoder by a second decoder; the weight of the second decoder and the weight of the first encoder are updated according to pixel-level difference information between the second predicted isodose line region map corresponding to the first reference object and the actual isodose line region map corresponding to the first reference object.
Optionally, the present application further provides a separate second decoder connected to the first encoder, where the second decoder is used to generate the second predicted isodose line region map corresponding to the first reference object according to the image features extracted by the first encoder. Furthermore, the present application defines, for task 2 in fig. 2, a fifth loss function, which is used to calculate the loss function value between the second predicted isodose line region map and the GT-isodose line region map (i.e. the pixel-level difference information between the second predicted isodose line region map and the GT-isodose line region map). Finally, the present application updates the weight of the first encoder and the weight of the second decoder according to the loss function value corresponding to the fifth loss function.
In an alternative embodiment, the third task operation includes the steps of: generating a second predicted dose change gradient map corresponding to the first reference object according to the image features extracted by the first encoder through a third decoder; and updating the weight of the third decoder and the weight of the first encoder according to pixel-level difference information between the second predicted dose change gradient map corresponding to the first reference object and the actual dose change gradient map corresponding to the first reference object.
Optionally, the present application further provides a separate third decoder connected to the first encoder, where the third decoder is configured to generate the second predicted dose change gradient map corresponding to the first reference object according to the image features extracted by the first encoder. Furthermore, the present application defines, for task 3 in fig. 2, a sixth loss function, which is used to calculate the loss function value between the second predicted dose change gradient map and the GT-dose change gradient map (i.e. the pixel-level difference information between the second predicted dose change gradient map and the GT-dose change gradient map). Finally, the present application updates the weight of the first encoder and the weight of the third decoder according to the loss function value corresponding to the sixth loss function.
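Matching sketches for one execution of the second and third task operations are given below, with the fifth and sixth loss functions assumed to be pixel-level L1 losses; the batch layout follows the earlier sketches.

```python
import torch.nn.functional as F

def run_task2(encoder, second_decoder, optimizer, batch):
    """Task 2: the fifth loss (second predicted isodose line region map vs. GT-isodose line
    region map) updates the first encoder and the second decoder."""
    medical_image, _, gt_isodose, _ = batch
    pred_isodose = second_decoder(encoder(medical_image))   # second predicted isodose line region map
    loss = F.l1_loss(pred_isodose, gt_isodose)              # pixel-level difference (L1 assumed)
    optimizer.zero_grad()
    loss.backward()
    optimizer.step()

def run_task3(encoder, third_decoder, optimizer, batch):
    """Task 3: the sixth loss (second predicted dose change gradient map vs. GT-dose change
    gradient map) updates the first encoder and the third decoder."""
    medical_image, _, _, gt_gradient = batch
    pred_gradient = third_decoder(encoder(medical_image))   # second predicted dose change gradient map
    loss = F.l1_loss(pred_gradient, gt_gradient)            # pixel-level difference (L1 assumed)
    optimizer.zero_grad()
    loss.backward()
    optimizer.step()
```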
It should be noted that the numbers of executions of task 1, task 2 and task 3 in fig. 2 may each be set as desired, for example task 1 is executed 200 times while task 2 and task 3 are executed 100 times each, and the starting order of the tasks may also be set as desired, for example task 1 is executed 100 times first, and then task 1, task 2 and task 3 are executed 100 times in parallel.
From the above, in the present application, the training process of the dose prediction model can be divided into two stages: the first stage is a pre-training stage, and the second stage is a small sample training stage.
In the pre-training stage, the first encoder and the first decoder are obtained through pre-training of medical images and dose distribution diagrams corresponding to N second reference objects in the pre-training data set, so that a proper initial weight is set for the model, and on the basis, a dose prediction model can be obtained through subsequent training by using a small sample data set with smaller data quantity more efficiently, and the training time of the model is shortened.
In the small sample training stage, based on the small sample data set, the first encoder and the first decoder obtained in the pre-training stage are used, a randomly initialized dose gradient map decoder (corresponding to the third decoder) and a randomly initialized isodose line map decoder (corresponding to the second decoder) are added, and three-task learning is performed. In the three-task learning process, the pixel-level difference information between the second predicted dose change gradient map output by the third decoder and the GT-dose change gradient map, and the pixel-level difference information between the second predicted isodose line region map output by the second decoder and the GT-isodose line region map, are added as additional training constraints. Because the robustness corresponding to the multi-granularity information in the GT-isodose line region map is higher than that in the GT-dose map, and the accuracy corresponding to the contour information in the GT-dose change gradient map is higher than that in the GT-dose map, this is equivalent to providing edge prior knowledge through the GT-dose change gradient map and prior knowledge of the influence of the radiation field on the dose distribution through the GT-isodose line region map during training, which helps improve the prediction accuracy of the dose prediction model.
In an alternative embodiment, after the target encoder and the target decoder are used as the dose prediction model, the generating device further acquires a medical image corresponding to the target object, and inputs the medical image corresponding to the target object into the dose prediction model to obtain a dose distribution map corresponding to the target object output by the dose prediction model.
Optionally, the medical image corresponding to the target object includes at least a contour image corresponding to the radiotherapy target region of the target object, a contour image corresponding to an organ at risk of the target object, and a multi-modal medical image corresponding to the target object, where the target object is an object to be subjected to radiotherapy and corresponds to the same cancer type as the first reference object and the second reference object.
Optionally, fig. 3 is a schematic view of a usage process of a dose prediction model according to an embodiment of the present application, as shown in fig. 3, during the usage process of the dose prediction model, a medical image corresponding to a target object may be input into a target encoder of the dose prediction model, an image feature of the medical image corresponding to the target object is extracted by the target encoder, then the target encoder transmits the extracted image feature to a target decoder, and the target decoder generates a dose distribution map corresponding to the target object based on the received image feature.
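A minimal sketch of this inference step is shown below, assuming the target encoder and target decoder are PyTorch modules and the medical image is already a batched tensor:

```python
import torch

def predict_dose(target_encoder, target_decoder, medical_image):
    """Dose prediction model at use time: extract image features with the target encoder,
    then generate the dose distribution map with the target decoder."""
    target_encoder.eval()
    target_decoder.eval()
    with torch.no_grad():
        features = target_encoder(medical_image)     # image features of the target object
        return target_decoder(features)              # predicted dose distribution map
```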
In an alternative embodiment, the application further provides a method for evaluating the performance of a trained dose prediction model, which is specifically as follows:
Prepare a test sample, wherein the test sample comprises a medical image corresponding to a test object and an actual dose distribution map of the test object. A predicted dose distribution map corresponding to the test object is obtained from the medical image corresponding to the test object through the dose prediction model, and the mean absolute error (MAE, over each ROI and over the whole image) is calculated between the predicted dose distribution map corresponding to the test object and the actual dose distribution map corresponding to the test object. Let X be the ratio of the mean absolute errors before and after; the smaller X is, the better the prediction performance of the dose prediction model.
In addition, a visual comparison can be performed between the predicted dose distribution diagram corresponding to the test object and the actual dose distribution diagram corresponding to the test object, so that the performance of the prediction model is measured according to the comparison result: isodose lines are added to the predicted dose distribution diagram corresponding to the test object, and whether the fineness of the dose distribution is improved is observed from the bending details of the isodose lines. If a macroscopically visible improvement in detail exists, a count is increased by one. Let Y be the ratio of the count to the number of actual dose distribution diagrams in the test sample; the larger Y is, the better the prediction performance of the dose prediction model.
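The visual comparison described above could be realized, for example, by the following sketch, assuming matplotlib and 2D dose slices; the chosen isodose levels are illustrative only:

```python
import matplotlib.pyplot as plt
import numpy as np

def overlay_isodose(pred_slice: np.ndarray, actual_slice: np.ndarray,
                    levels=(10, 20, 30, 40, 50, 60)) -> None:
    """Draw isodose lines on one predicted and one actual dose slice for side-by-side comparison."""
    fig, axes = plt.subplots(1, 2, figsize=(10, 5))
    for ax, dose, title in zip(axes, (pred_slice, actual_slice), ("predicted", "actual")):
        ax.imshow(dose, cmap="jet")
        ax.contour(dose, levels=levels, colors="white", linewidths=0.8)  # isodose lines
        ax.set_title(f"{title} dose with isodose lines")
        ax.axis("off")
    plt.show()
```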
From the above, it can be seen that the first encoder and the first decoder are obtained by pre-training according to the medical images and the dose distribution diagrams corresponding to the N second reference objects, so that a relatively suitable initial weight is set for the model; on this basis, the dose prediction model can be trained more efficiently with a small sample data set of smaller data volume, thereby shortening the training time of the model. Secondly, the actual isodose line region diagram and the actual dose change gradient diagram corresponding to the first reference object are added as training constraint conditions in the training process of the dose prediction model. Because the robustness corresponding to the multi-granularity information in the actual isodose line region diagram is higher than the robustness corresponding to the multi-granularity information in the actual dose distribution diagram, and the accuracy corresponding to the contour information in the actual dose change gradient diagram is higher than the accuracy corresponding to the contour information in the actual dose distribution diagram, this is equivalent to providing edge prior knowledge in the model training process by utilizing the actual dose change gradient diagram and providing prior knowledge of the influence of the radiation field on the dose distribution by utilizing the actual isodose line region diagram, which is beneficial to improving the prediction accuracy of the model.
Example 2
According to an embodiment of the present application, an embodiment of a generation device of a dose prediction model is provided. Fig. 4 is a schematic diagram of an alternative apparatus for generating a dose prediction model according to an embodiment of the present application, and as shown in fig. 4, the apparatus for generating a dose prediction model includes: an acquisition unit 401, a determination unit 402, a training unit 403, and a processing unit 404.
Optionally, the acquiring unit 401 is configured to acquire a small sample data set, where the small sample data set includes at least M medical images and actual dose distribution diagrams corresponding to the first reference objects, and M is an integer greater than 1; a determining unit 402, configured to determine an actual isodose line area map and an actual dose change gradient map corresponding to the first reference object based on an actual dose distribution map corresponding to the first reference object, where robustness corresponding to multi-granularity information in the actual isodose line area map is higher than robustness corresponding to multi-granularity information in the actual dose distribution map, and accuracy corresponding to contour information in the actual dose change gradient map is higher than accuracy corresponding to contour information in the actual dose distribution map; a training unit 403, configured to perform iterative training on the first encoder and the first decoder according to the small sample data set, the actual isodose line region map and the actual dose change gradient map corresponding to the first reference object, so as to obtain a target encoder and a target decoder, where the first encoder and the first decoder are the encoder and the decoder obtained by training according to the medical images and the dose distribution maps corresponding to the N second reference objects, where N is an integer greater than 1, and N is greater than M; a processing unit 404 for taking the target encoder and the target decoder as dose prediction models.
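The determining unit 402 has to derive the two auxiliary maps from an actual dose distribution map. One possible derivation is sketched below; the isodose thresholds and the use of a simple finite-difference gradient magnitude are assumptions of this sketch, since the embodiment does not specify how the maps are computed:

```python
import numpy as np

def isodose_region_map(dose: np.ndarray, thresholds=(10, 20, 30, 40, 50, 60)) -> np.ndarray:
    """Quantize the dose map into isodose line regions (one integer label per dose band)."""
    return np.digitize(dose, bins=np.asarray(thresholds)).astype(np.int32)

def dose_change_gradient_map(dose: np.ndarray) -> np.ndarray:
    """Magnitude of the spatial dose change gradient, emphasizing contour/edge information."""
    grads = np.gradient(dose.astype(np.float32))
    return np.sqrt(sum(g ** 2 for g in grads))
```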
Optionally, the generating device of the dose prediction model includes: a first pre-training unit and a second pre-training unit. The first pre-training unit is used for performing pre-training operation on a preset initial encoder and an initial decoder according to N medical images and dose distribution diagrams corresponding to second reference objects, wherein the pre-training operation is used for extracting image features of the medical images corresponding to the second reference objects through the initial encoder, generating a predicted dose distribution diagram corresponding to the second reference objects through the initial decoder according to the image features extracted by the initial encoder, and updating the weight of the initial encoder and the weight of the initial decoder according to the predicted dose distribution diagram corresponding to the second reference objects and the dose distribution diagram corresponding to the second reference objects; and the second pre-training unit is used for performing multiple pre-training operations on the initial encoder and the initial decoder to obtain a first encoder and a first decoder.
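A minimal sketch of one pre-training operation carried out by these units is given below, assuming PyTorch, an L1 pixel level loss and generic encoder/decoder modules; none of these choices are mandated by the embodiment:

```python
import torch
import torch.nn as nn

def pretrain_step(initial_encoder: nn.Module, initial_decoder: nn.Module,
                  optimizer: torch.optim.Optimizer,
                  medical_image: torch.Tensor, dose_map: torch.Tensor) -> float:
    """One pre-training operation on a second reference object (illustrative)."""
    features = initial_encoder(medical_image)        # extract image features
    predicted_dose = initial_decoder(features)       # predicted dose distribution map
    loss = nn.functional.l1_loss(predicted_dose, dose_map)
    optimizer.zero_grad()
    loss.backward()                                   # update encoder and decoder weights
    optimizer.step()
    return loss.item()

# Repeating this operation over the N second reference objects yields the
# first encoder and first decoder used for small sample fine-tuning.
```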
Optionally, the training unit 403 includes: an extraction subunit, a first execution subunit, a second execution subunit, a third execution subunit and a fourth execution subunit. The extraction subunit is used for extracting the image features of the medical image corresponding to the first reference object in the small sample data set through the first encoder; the first execution subunit is used for executing a first task operation, where the first task operation is used for updating the weight of the first encoder and the weight of the first decoder according to the image features extracted by the first encoder and the actual dose distribution map corresponding to the first reference object; the second execution subunit is used for executing a second task operation, where the second task operation is used for updating the weight of the first encoder and the weight of the second decoder according to the image features extracted by the first encoder and the actual isodose line region map corresponding to the first reference object; the third execution subunit is used for executing a third task operation, where the third task operation is used for updating the weights of the first encoder and the third decoder according to the image features extracted by the first encoder and the actual dose change gradient map corresponding to the first reference object, and the first decoder, the second decoder and the third decoder are mutually independent decoders; and the fourth execution subunit is used for executing the first task operation, the second task operation and the third task operation multiple times in a parallel mode, taking the first encoder after the last weight update as the target encoder and the first decoder after the last weight update as the target decoder.
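The cooperation of these subunits can be sketched as one shared-encoder, three-decoder training step. In this sketch the three task losses are simply summed, the pixel level differences are taken as L1 losses, and the first task is reduced to its dose term; the embodiment additionally derives a predicted isodose line region map and a predicted dose change gradient map from the first predicted dose distribution map, as described below:

```python
import torch
import torch.nn as nn

def three_task_step(first_encoder: nn.Module,
                    first_decoder: nn.Module, second_decoder: nn.Module, third_decoder: nn.Module,
                    optimizer: torch.optim.Optimizer,
                    medical_image: torch.Tensor,
                    actual_dose: torch.Tensor, actual_isodose: torch.Tensor,
                    actual_gradient: torch.Tensor) -> float:
    """One parallel execution of the first, second and third task operations (illustrative)."""
    features = first_encoder(medical_image)
    # All target maps are assumed to be float tensors of the same spatial size.
    loss_dose = nn.functional.l1_loss(first_decoder(features), actual_dose)      # first task
    loss_iso = nn.functional.l1_loss(second_decoder(features), actual_isodose)   # second task
    loss_grad = nn.functional.l1_loss(third_decoder(features), actual_gradient)  # third task
    loss = loss_dose + loss_iso + loss_grad
    optimizer.zero_grad()
    loss.backward()   # the shared first encoder receives gradients from all three decoders
    optimizer.step()
    return loss.item()
```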
Optionally, the first execution subunit includes: a first generation module, a first processing module, a second processing module and a first updating module. The first generation module is used for generating, through the first decoder, a first predicted dose distribution map corresponding to the first reference object according to the image features extracted by the first encoder; the first processing module is used for carrying out isodose line processing on the first predicted dose distribution map corresponding to the first reference object to obtain a first predicted isodose line area map corresponding to the first reference object; the second processing module is used for carrying out dose change gradient processing on the first predicted dose distribution map corresponding to the first reference object to obtain a first predicted dose change gradient map corresponding to the first reference object; and the first updating module is used for updating the weight of the first encoder and the weight of the first decoder according to the first predicted dose distribution diagram, the first predicted isodose line area diagram and the first predicted dose change gradient diagram corresponding to the first reference object and the actual dose distribution diagram corresponding to the first reference object.
Optionally, the first updating module includes: a first parameter determination sub-module, a second parameter determination sub-module, a third parameter determination sub-module and a first updating sub-module. The first parameter determination sub-module is used for taking pixel level difference information between the first predicted dose distribution map corresponding to the first reference object and the actual dose distribution map corresponding to the first reference object as a first parameter; the second parameter determination sub-module is used for taking pixel-level difference information between the first predicted isodose line region map corresponding to the first reference object and the actual isodose line region map corresponding to the first reference object as a second parameter; the third parameter determination sub-module is used for taking pixel level difference information between the first predicted dose change gradient map corresponding to the first reference object and the actual dose change gradient map corresponding to the first reference object as a third parameter; and the first updating sub-module is used for updating the weight of the first encoder and the weight of the first decoder according to the first parameter, the second parameter and the third parameter.
Optionally, the second execution subunit includes: a second generation module and a second updating module; and the third execution subunit includes: a third generation module and a third updating module. The second generation module is used for generating, through the second decoder, a second predicted isodose line region map corresponding to the first reference object according to the image features extracted by the first encoder; and the second updating module is used for updating the weight of the second decoder and the weight of the first encoder according to pixel-level difference information between the second predicted isodose line region map corresponding to the first reference object and the actual isodose line region map corresponding to the first reference object. The third generation module is used for generating, through the third decoder, a second predicted dose change gradient map corresponding to the first reference object according to the image features extracted by the first encoder; and the third updating module is used for updating the weight of the third decoder and the weight of the first encoder according to pixel level difference information between the second predicted dose change gradient map corresponding to the first reference object and the actual dose change gradient map corresponding to the first reference object.
Optionally, the generating device of the dose prediction model includes: a second acquisition unit and a first input unit, wherein the second acquisition unit is used for acquiring a medical image corresponding to a target object; and the first input unit is used for inputting the medical image corresponding to the target object into the dose prediction model to obtain a dose distribution diagram corresponding to the target object output by the dose prediction model.
Optionally, the medical image corresponding to the first reference object comprises at least a contour image corresponding to a radiotherapy target zone of the first reference object, a contour image corresponding to an organ at risk of the first reference object, and a multi-modal medical image corresponding to the first reference object.
Example 3
According to another aspect of the embodiments of the present application, there is also provided a computer readable storage medium, including a stored computer program, where the computer program, when executed, controls a device in which the computer readable storage medium is located to perform the method for generating a dose prediction model according to any one of the embodiments in Example 1 above.
Example 4
According to another aspect of the embodiments of the present application, there is also provided an electronic device, including: a processor; and a memory for storing executable instructions of the processor; wherein the processor is configured to perform, via execution of the executable instructions, the method for generating a dose prediction model according to any one of the embodiments in Example 1 above.
The foregoing embodiment numbers of the present application are merely for the purpose of description, and do not represent the advantages or disadvantages of the embodiments.
In the foregoing embodiments of the present application, each embodiment is described with its own emphasis; for portions that are not described in detail in one embodiment, reference may be made to the related descriptions of other embodiments.
In the several embodiments provided in the present application, it should be understood that the disclosed technology may be implemented in other manners. The above-described embodiments of the apparatus are merely exemplary; for example, the division of units may be a logical function division, and there may be other division manners in actual implementation, for example, multiple units or components may be combined or integrated into another system, or some features may be omitted or not executed. In addition, the couplings or direct couplings or communication connections shown or discussed between the components may be implemented through some interfaces, units or modules, and may be in electrical or other forms.
The units described as separate parts may or may not be physically separate, and parts displayed as units may or may not be physical units, may be located in one place, or may be distributed over a plurality of units. Some or all of the units may be selected according to actual needs to achieve the purpose of the solution of this embodiment.
In addition, each functional unit in the embodiments of the present application may be integrated in one processing unit, or each unit may exist alone physically, or two or more units may be integrated in one unit. The integrated units may be implemented in hardware or in software functional units.
The integrated units, if implemented in the form of software functional units and sold or used as stand-alone products, may be stored in a computer readable storage medium. Based on such understanding, the technical solution of the present application, in essence, or the part thereof contributing to the prior art, or all or part of the technical solution, may be embodied in the form of a software product stored in a storage medium, which includes several instructions for causing a computer device (which may be a personal computer, a server, a network device, or the like) to perform all or part of the steps of the methods of the various embodiments of the present application. The aforementioned storage medium includes various media capable of storing program code, such as a USB flash drive, a Read-Only Memory (ROM), a Random Access Memory (RAM), a removable hard disk, a magnetic disk, or an optical disk.
The foregoing is merely a preferred embodiment of the present application. It should be noted that those skilled in the art may make several modifications and adaptations without departing from the principles of the present application, and such modifications and adaptations shall also be regarded as falling within the scope of protection of the present application.

Claims (11)

1. A method of generating a dose prediction model, comprising:
acquiring a small sample data set, wherein the small sample data set at least comprises medical images and actual dose distribution diagrams corresponding to M first reference objects, and M is an integer larger than 1;
determining an actual isodose line region map and an actual dose change gradient map corresponding to the first reference object based on an actual dose distribution map corresponding to the first reference object, wherein the robustness corresponding to multi-granularity information in the actual isodose line region map is higher than that in the actual dose distribution map, and the accuracy corresponding to profile information in the actual dose change gradient map is higher than that in the actual dose distribution map;
performing iterative training on a first encoder and a first decoder according to the small sample data set, the actual isodose line region map and the actual dose change gradient map corresponding to the first reference object to obtain a target encoder and a target decoder, wherein the first encoder and the first decoder are encoders and decoders obtained through training according to medical images and dose distribution maps corresponding to N second reference objects, N is an integer greater than 1, and N is greater than M;
The target encoder and the target decoder are used as a dose prediction model.
2. The method of generating a dose prediction model according to claim 1, wherein before iteratively training a first encoder and a first decoder according to the small sample dataset, an actual isodose line region map and an actual dose change gradient map corresponding to the first reference object, to obtain a target encoder and a target decoder, the method of generating a dose prediction model further comprises:
pre-training an initial encoder and an initial decoder according to N medical images and dose distribution diagrams corresponding to the second reference objects, wherein the pre-training operation is used for extracting image features of the medical images corresponding to the second reference objects through the initial encoder, generating a predicted dose distribution diagram corresponding to the second reference objects through the initial decoder according to the image features extracted by the initial encoder, and updating weights of the initial encoder and the initial decoder according to the predicted dose distribution diagram corresponding to the second reference objects and the dose distribution diagram corresponding to the second reference objects;
And performing the pre-training operation on the initial encoder and the initial decoder for a plurality of times to obtain the first encoder and the first decoder.
3. The method of generating a dose prediction model according to claim 1, wherein iteratively training a first encoder and a first decoder according to the small sample dataset, an actual isodose line region map and an actual dose change gradient map corresponding to the first reference object to obtain a target encoder and a target decoder, comprises:
extracting image features of a medical image corresponding to a first reference object in the small sample dataset by the first encoder;
performing a first task operation, wherein the first task operation is used for updating the weight of the first encoder and the weight of the first decoder according to the image features extracted by the first encoder and the actual dose distribution diagram corresponding to the first reference object;
executing a second task operation, wherein the second task operation is used for updating the weight of the first encoder and the weight of the second decoder according to the image features extracted by the first encoder and the actual isodose line region map corresponding to the first reference object;
Performing a third task operation, wherein the third task operation is used for updating weights of the first encoder and a third decoder according to the image features extracted by the first encoder and an actual dose change gradient map corresponding to the first reference object, and the first decoder, the second decoder and the third decoder are mutually independent decoders;
and executing the first task operation, the second task operation and the third task operation for a plurality of times in a parallel mode, taking the first encoder after the last updating of the weight as the target encoder, and taking the first decoder after the last updating of the weight as the target decoder.
4. A method of generating a dose predictive model as claimed in claim 3, wherein the first task operation comprises the steps of:
generating a first predicted dose distribution map corresponding to the first reference object according to the image features extracted by the first encoder through the first decoder;
performing isodose line processing on a first predicted dose distribution map corresponding to the first reference object to obtain a first predicted isodose line region map corresponding to the first reference object;
Performing dose change gradient processing on a first predicted dose distribution map corresponding to the first reference object to obtain a first predicted dose change gradient map corresponding to the first reference object;
and updating the weight of the first encoder and the weight of the first decoder according to the first predicted dose distribution diagram, the first predicted isodose line region diagram, the first predicted dose change gradient diagram and the actual dose distribution diagram corresponding to the first reference object.
5. The method of generating a dose prediction model according to claim 4, wherein updating the weights of the first encoder and the weights of the first decoder based on the first predicted dose distribution map, the first predicted isodose line region map, the first predicted dose change gradient map, and the actual dose distribution map corresponding to the first reference object comprises:
taking pixel level difference information between a first predicted dose distribution map corresponding to the first reference object and an actual dose distribution map corresponding to the first reference object as a first parameter;
taking pixel-level difference information between a first predicted isodose line region map corresponding to the first reference object and an actual isodose line region map corresponding to the first reference object as a second parameter;
Taking pixel level difference information between a first predicted dose change gradient map corresponding to the first reference object and an actual dose change gradient map corresponding to the first reference object as a third parameter;
and updating the weight of the first encoder and the weight of the first decoder according to the first parameter, the second parameter and the third parameter.
6. A method of generating a dose predictive model as claimed in claim 3, wherein,
the second task operation includes the steps of:
generating a second predicted isodose line region map corresponding to the first reference object according to the image features extracted by the first encoder through the second decoder;
updating the weight of the second decoder and the weight of the first encoder according to pixel-level difference information between a second predicted isodose line region map corresponding to the first reference object and an actual isodose line region map corresponding to the first reference object;
and/or the number of the groups of groups,
the third task operation includes the steps of:
generating a second predicted dose change gradient map corresponding to the first reference object according to the image features extracted by the first encoder through the third decoder;
And updating the weight of the third decoder and the weight of the first encoder according to pixel level difference information between a second predicted dose change gradient map corresponding to the first reference object and an actual dose change gradient map corresponding to the first reference object.
7. The method of generating a dose prediction model according to claim 1, characterized in that after taking the target encoder and the target decoder as a dose prediction model, the method of generating a dose prediction model further comprises:
acquiring a medical image corresponding to a target object;
and inputting the medical image corresponding to the target object into the dose prediction model to obtain a dose distribution diagram corresponding to the target object output by the dose prediction model.
8. The method of claim 1, wherein the medical image corresponding to the first reference object includes at least a contour image corresponding to a radiotherapy target zone of the first reference object, a contour image corresponding to an organ at risk of the first reference object, and a multi-modal medical image corresponding to the first reference object.
9. A device for generating a dose prediction model, comprising:
An acquisition unit, configured to acquire a small sample data set, where the small sample data set includes at least M medical images and actual dose distribution graphs corresponding to first reference objects, and M is an integer greater than 1;
a determining unit, configured to determine an actual isodose line region map and an actual dose change gradient map corresponding to the first reference object based on an actual dose distribution map corresponding to the first reference object, where robustness corresponding to multi-granularity information in the actual isodose line region map is higher than robustness corresponding to multi-granularity information in the actual dose distribution map, and accuracy corresponding to profile information in the actual dose change gradient map is higher than accuracy corresponding to profile information in the actual dose distribution map;
the training unit is used for carrying out iterative training on the first encoder and the first decoder according to the small sample data set, the actual isodose line region map and the actual dose change gradient map corresponding to the first reference object to obtain a target encoder and a target decoder, wherein the first encoder and the first decoder are the encoders and the decoders which are obtained through training according to N medical images and dose distribution diagrams corresponding to the second reference objects, N is an integer greater than 1, and N is greater than M;
And the processing unit is used for taking the target encoder and the target decoder as a dose prediction model.
10. A computer readable storage medium, characterized in that the computer readable storage medium comprises a stored computer program, wherein the computer program, when run, controls a device in which the computer readable storage medium is located to perform the method of generating a dose prediction model according to any one of claims 1 to 8.
11. An electronic device comprising one or more processors and a memory for storing one or more programs, wherein the one or more programs, when executed by the one or more processors, cause the one or more processors to implement the method of generating a dose prediction model of any of claims 1-8.
CN202311123150.7A 2023-09-01 2023-09-01 Method and device for generating dose prediction model, electronic equipment and storage medium Active CN116844734B (en)

Priority Applications (1)

Application Number Priority Date Filing Date Title
CN202311123150.7A CN116844734B (en) 2023-09-01 2023-09-01 Method and device for generating dose prediction model, electronic equipment and storage medium

Publications (2)

Publication Number Publication Date
CN116844734A true CN116844734A (en) 2023-10-03
CN116844734B CN116844734B (en) 2024-01-16

Family

ID=88165611

Family Applications (1)

Application Number Title Priority Date Filing Date
CN202311123150.7A Active CN116844734B (en) 2023-09-01 2023-09-01 Method and device for generating dose prediction model, electronic equipment and storage medium

Country Status (1)

Country Link
CN (1) CN116844734B (en)

Patent Citations (16)

* Cited by examiner, † Cited by third party
Publication number Priority date Publication date Assignee Title
US20180122082A1 (en) * 2016-11-02 2018-05-03 General Electric Company Automated segmentation using deep learned priors
CN108717866A (en) * 2018-04-03 2018-10-30 陈辛元 A kind of method, apparatus, equipment and the storage medium of the distribution of prediction radiotherapy planning dosage
CN109621228A (en) * 2018-12-12 2019-04-16 上海联影医疗科技有限公司 The calculating unit and storage medium of radiological dose
CN111833988A (en) * 2020-07-14 2020-10-27 北京安德医智科技有限公司 Radiation parameter determination method and device, electronic equipment and storage medium
CN111898324A (en) * 2020-08-13 2020-11-06 四川大学华西医院 Segmentation task assistance-based nasopharyngeal carcinoma three-dimensional dose distribution prediction method
CN112233200A (en) * 2020-11-05 2021-01-15 福建自贸试验区厦门片区Manteia数据科技有限公司 Dose determination method and device
WO2022095167A1 (en) * 2020-11-05 2022-05-12 福建自贸试验区厦门片区Manteia数据科技有限公司 Dose determination method and device
CN112635023A (en) * 2020-12-16 2021-04-09 福建医科大学附属第一医院 Generation method of dose prediction model of nasopharyngeal carcinoma, dose prediction method and device
CN114681813A (en) * 2020-12-28 2022-07-01 北京医智影科技有限公司 Automatic planning system, automatic planning method and computer program product for radiation therapy
CN112820377A (en) * 2021-02-02 2021-05-18 中国科学技术大学 Radiotherapy plan automatic generation method based on deep learning
CN114236589A (en) * 2021-12-17 2022-03-25 深圳市联影高端医疗装备创新研究院 Method and device for joint prediction of radiotherapy dose distribution and dose volume histogram
CN116152373A (en) * 2023-02-21 2023-05-23 中北大学 Low-dose CT image reconstruction method combining neural network and convolutional dictionary learning
CN115938591A (en) * 2023-02-23 2023-04-07 福建自贸试验区厦门片区Manteia数据科技有限公司 Radiotherapy-based dose distribution interval determination device and electronic equipment
CN116072263A (en) * 2023-03-06 2023-05-05 福建自贸试验区厦门片区Manteia数据科技有限公司 Planning parameter prediction device based on radiotherapy
CN116071660A (en) * 2023-03-10 2023-05-05 广西师范大学 Target detection method based on small sample
CN116030938A (en) * 2023-03-29 2023-04-28 福建自贸试验区厦门片区Manteia数据科技有限公司 Determination device for radiotherapy dosage distribution interval and electronic equipment

Non-Patent Citations (1)

* Cited by examiner, † Cited by third party
Title
CHEN Xinyuan; YI Junlin; DAI Jianrong: "Dose prediction for radiotherapy plans with convolutional neural networks: comparison of two decoders", Chinese Journal of Medical Physics, no. 02 *

Cited By (2)

* Cited by examiner, † Cited by third party
Publication number Priority date Publication date Assignee Title
CN117456315A (en) * 2023-12-26 2024-01-26 福建自贸试验区厦门片区Manteia数据科技有限公司 Training method and device for dose prediction model and computer readable storage medium
CN117456315B (en) * 2023-12-26 2024-04-19 福建自贸试验区厦门片区Manteia数据科技有限公司 Training method and device for dose prediction model and computer readable storage medium

Also Published As

Publication number Publication date
CN116844734B (en) 2024-01-16

Legal Events

Date Code Title Description
PB01 Publication
SE01 Entry into force of request for substantive examination
GR01 Patent grant
GR01 Patent grant