CN116843994A - Model training method and device, storage medium and electronic equipment - Google Patents

Model training method and device, storage medium and electronic equipment

Info

Publication number
CN116843994A
CN116843994A
Authority
CN
China
Prior art keywords
prediction model
lesion
image
transfer
medical image
Prior art date
Legal status (The legal status is an assumption and is not a legal conclusion. Google has not performed a legal analysis and makes no representation as to the accuracy of the status listed.)
Pending
Application number
CN202310761866.3A
Other languages
Chinese (zh)
Inventor
石峰
曹泽红
隗英
周翔
Current Assignee (The listed assignees may be inaccurate. Google has not performed a legal analysis and makes no representation or warranty as to the accuracy of the list.)
Shanghai United Imaging Intelligent Healthcare Co Ltd
Original Assignee
Shanghai United Imaging Intelligent Healthcare Co Ltd
Priority date (The priority date is an assumption and is not a legal conclusion. Google has not performed a legal analysis and makes no representation as to the accuracy of the date listed.)
Filing date
Publication date
Application filed by Shanghai United Imaging Intelligent Healthcare Co Ltd filed Critical Shanghai United Imaging Intelligent Healthcare Co Ltd
Priority to CN202310761866.3A
Publication of CN116843994A
Legal status: Pending

Classifications

    • G: PHYSICS
    • G06: COMPUTING; CALCULATING OR COUNTING
    • G06V: IMAGE OR VIDEO RECOGNITION OR UNDERSTANDING
    • G06V 10/00: Arrangements for image or video recognition or understanding
    • G06V 10/70: Arrangements for image or video recognition or understanding using pattern recognition or machine learning
    • G06V 10/77: Processing image or video features in feature spaces; using data integration or data reduction, e.g. principal component analysis [PCA] or independent component analysis [ICA] or self-organising maps [SOM]; Blind source separation
    • G06V 10/774: Generating sets of training patterns; Bootstrap methods, e.g. bagging or boosting
    • G: PHYSICS
    • G06: COMPUTING; CALCULATING OR COUNTING
    • G06V: IMAGE OR VIDEO RECOGNITION OR UNDERSTANDING
    • G06V 10/00: Arrangements for image or video recognition or understanding
    • G06V 10/20: Image preprocessing
    • G06V 10/25: Determination of region of interest [ROI] or a volume of interest [VOI]
    • G: PHYSICS
    • G06: COMPUTING; CALCULATING OR COUNTING
    • G06V: IMAGE OR VIDEO RECOGNITION OR UNDERSTANDING
    • G06V 2201/00: Indexing scheme relating to image or video recognition or understanding
    • G06V 2201/03: Recognition of patterns in medical or anatomical images

Abstract

The specification discloses a model training method and apparatus, a storage medium, and an electronic device. In the model training method provided by this specification, a sample medical image containing lesion tissue at different growth stages is acquired; annotated metastasis features are determined according to the differences between the lesion tissue at different growth stages in each sample medical image; the sample medical image is input into a lesion development prediction model to be trained, and image features of the sample medical image are extracted through an extraction layer of the model; predicted metastasis features of the lesion tissue are output according to the image features through an output layer of the model; and the model is trained with minimizing the difference between the predicted metastasis features and the annotated metastasis features as the optimization objective.

Description

Model training method and device, storage medium and electronic equipment
Technical Field
The present specification relates to the field of computer technology, and in particular, to a model training method and apparatus, a storage medium, and an electronic device.
Background
A tumor is a very dangerous type of pathological tissue: once it metastasizes, the patient faces a high risk of death unless treated promptly. Assessing and predicting whether a tumor will metastasize is therefore very important.
Currently, clinical practice mostly evaluates tumor malignancy and predicts metastasis risk from immunohistochemical examination indexes such as Ki67, P53, CEA, and CYFRA21-1. However, these examinations require a local tissue biopsy, which is an invasive procedure. In addition, most immunohistochemical examination items only characterize the likelihood of tumor metastasis and provide little prognostic value.
To address these problems, this specification provides a model training method for training a model that can safely and effectively predict the metastasis risk of lesion tissue.
Disclosure of Invention
The present specification provides a model training method and apparatus, a storage medium, and an electronic device, so as to at least partially solve the above problems in the prior art.
The technical solutions adopted in this specification are as follows:
the specification provides a model training method, comprising:
acquiring a sample medical image, the sample medical image containing lesion tissue at different growth stages;
determining annotated metastasis features according to the differences between the lesion tissue at different growth stages in each sample medical image;
inputting the sample medical image into a lesion development prediction model to be trained, and extracting image features of the sample medical image through an extraction layer of the lesion development prediction model;
outputting predicted metastasis features of the lesion tissue according to the image features through an output layer of the lesion development prediction model;
and training the lesion development prediction model with minimizing the difference between the predicted metastasis features and the annotated metastasis features as the optimization objective.
Optionally, the lesion development prediction model further comprises an attention layer;
before outputting the predicted metastasis features of the lesion tissue, the method further comprises:
acquiring a preset constraint image, the constraint image containing a region of interest in the sample medical image;
determining, through the attention layer and according to the image features, an attention image of the lesion development prediction model when processing the sample medical image;
and adjusting parameters of the extraction layer and the attention layer in the lesion development prediction model with minimizing the difference between the constraint image and the attention image as the optimization objective.
Optionally, the constraint image is predetermined by:
determining the lesion tissue region contained in the sample medical image;
expanding the edge of the lesion tissue region outward by a specified length to obtain a region of interest, so that the region of interest contains both the lesion tissue region and the surrounding risk regions;
and taking the sample medical image containing the region of interest as the constraint image.
Optionally, the annotated metastasis features comprise an annotated metastasis direction, and the predicted metastasis features comprise a predicted metastasis direction;
training the lesion development prediction model with minimizing the difference between the predicted and annotated metastasis features as the optimization objective specifically comprises:
training the lesion development prediction model with minimizing the difference between the predicted metastasis direction and the annotated metastasis direction as the optimization objective.
Optionally, the predicted metastasis direction of the lesion tissue comprises a predicted metastasis direction of the next growth stage of the lesion tissue relative to the current growth stage and/or a predicted metastasis direction of the final growth stage of the lesion tissue relative to the current growth stage.
Optionally, the annotated metastasis features comprise an annotated metastasis mode, and the predicted metastasis features comprise a predicted metastasis mode;
training the lesion development prediction model with minimizing the difference between the predicted and annotated metastasis features as the optimization objective specifically comprises:
training the lesion development prediction model with minimizing the difference between the predicted metastasis mode and the annotated metastasis mode as the optimization objective.
Optionally, the annotated metastasis features comprise an annotated growth stage, and the predicted metastasis features comprise a predicted growth stage;
training the lesion development prediction model with minimizing the difference between the predicted and annotated metastasis features as the optimization objective specifically comprises:
training the lesion development prediction model with minimizing the difference between the predicted growth stage and the annotated growth stage as the optimization objective.
This specification provides a risk prediction method using a lesion development prediction model pre-trained by the method shown in fig. 1, the method comprising:
acquiring a medical image containing lesion tissue;
inputting the medical image into the pre-trained lesion development prediction model, and extracting image features of the medical image through an extraction layer of the lesion development prediction model;
and outputting predicted metastasis features of the lesion tissue according to the image features through an output layer of the lesion development prediction model.
The present specification provides a model training apparatus, comprising:
an acquisition module, configured to acquire a sample medical image containing lesion tissue at different growth stages;
an annotation module, configured to determine annotated metastasis features according to the differences between the lesion tissue at different growth stages in each sample medical image;
an input module, configured to input the sample medical image into a lesion development prediction model to be trained, and extract image features of the sample medical image through an extraction layer of the lesion development prediction model;
an output module, configured to output predicted metastasis features of the lesion tissue according to the image features through an output layer of the lesion development prediction model;
and a training module, configured to train the lesion development prediction model with minimizing the difference between the predicted metastasis features and the annotated metastasis features as the optimization objective.
The present specification provides a computer-readable storage medium storing a computer program which, when executed by a processor, implements the above model training method.
The present specification provides an electronic device comprising a memory, a processor, and a computer program stored in the memory and executable on the processor, the processor implementing the above model training method when executing the program.
The above technical solutions adopted in this specification can achieve at least the following beneficial effects:
In the model training method provided by this specification, a sample medical image containing lesion tissue at different growth stages is acquired; annotated metastasis features are determined according to the differences between the lesion tissue at different growth stages in each sample medical image; the sample medical image is input into a lesion development prediction model to be trained, and image features of the sample medical image are extracted through an extraction layer of the model; predicted metastasis features of the lesion tissue are output according to the image features through an output layer of the model; and the model is trained with minimizing the difference between the predicted metastasis features and the annotated metastasis features as the optimization objective.
When the lesion development prediction model is trained with this method, medical images of lesion tissue at different growth stages serve as the training samples, and the training labels are determined from the differences between the lesion tissue at different growth stages; the sample medical image serves as the input of the model, and the model is trained with minimizing the difference between the predicted metastasis features it outputs and the annotated metastasis features serving as labels as the optimization objective. In this way, a lesion development prediction model can be trained that predicts the metastasis risk of lesion tissue at each stage, and the training effect is good.
Drawings
The accompanying drawings, which are included to provide a further understanding of this specification, illustrate and explain its exemplary embodiments together with their description and are not intended to limit the specification unduly. In the drawings:
FIG. 1 is a schematic flow chart of a model training method in the present specification;
FIG. 2 is a schematic structural diagram of a model for predicting lesion development in the present specification;
FIG. 3 is a schematic illustration of a constraint image in the present specification;
FIG. 4 is a schematic flow chart of a risk prediction method in the present specification;
FIG. 5 is a schematic diagram of a model training apparatus provided in the present specification;
FIG. 6 is a schematic diagram of a risk prediction apparatus provided in the present disclosure;
fig. 7 is a schematic view of the electronic device corresponding to fig. 1 provided in the present specification.
Detailed Description
To make the objectives, technical solutions, and advantages of this specification clearer, the technical solutions of this specification will be described clearly and completely below with reference to specific embodiments and the corresponding drawings. Obviously, the described embodiments are only some, rather than all, of the embodiments of this specification. All other embodiments obtained by a person of ordinary skill in the art based on the embodiments herein without creative effort shall fall within the protection scope of this specification.
In addition, it should be noted that all actions of acquiring signals, information, or data in this specification are performed in compliance with the corresponding data protection regulations and policies of the country where they take place, and with the authorization of the owner of the corresponding device.
The following describes in detail the technical solutions provided by the embodiments of the present specification with reference to the accompanying drawings.
Fig. 1 is a schematic flow chart of a model training method provided in the present specification.
S100: acquiring a sample medical image, the sample medical image containing lesion tissue at different growth stages.
All steps of the model training method provided in this specification may be performed by any electronic device with computing capability, such as a terminal or a server.
In this method, the sample medical image may be obtained by any medical imaging modality, including but not limited to X-ray imaging, computed tomography (CT), magnetic resonance imaging (MRI), ultrasound, and radionuclide imaging. The lesion tissue contained in the medical image may be a lesion occurring in any part of the human body, including but not limited to glioma, lung cancer, bone cancer, breast cancer, stomach cancer, and lymphoma; lesion presentations include, but are not limited to, nodules, benign tumors, and malignant tumors.
In the medical field, most lesion tissues are academically assigned different growth stages to distinguish their duration and degree of risk, i.e. the "early" and "late" descriptions of symptoms common in clinical staging. Taking tumors produced in cancer as an example, tumor growth can generally be divided into five stages: the precancerous stage, carcinoma in situ, invasive carcinoma, the metastatic stage, and the disseminated stage; a tumor in the disseminated stage is what is colloquially called late-stage cancer. Since lesion tissue at different growth stages differs in tissue scale, growth speed, and metastasis risk, each growth stage needs to be considered separately.
Based on this, in this step, sample medical images containing lesion tissue at different growth stages can be acquired. The acquired sample medical images may cover all the growth stages of the lesion tissue for which predictions are needed.
S102: determining annotated metastasis features according to the differences between the lesion tissue at different growth stages in each sample medical image.
The model trained by the model training method provided in this specification is a lesion development prediction model. In actual application, its input is a medical image containing lesion tissue, and its output is the metastasis features of the lesion tissue in that image, where the metastasis features characterize the risk of metastasis of the lesion tissue. Understandably, obtaining the metastasis features of lesion tissue requires a comprehensive analysis of the same lesion tissue at different growth stages. Therefore, in this step, the annotated metastasis features can be determined from the differences between the lesion tissue at different growth stages in each sample medical image.
It is worth mentioning that the annotated metastasis features may be derived by the training party itself from the acquired sample medical images; alternatively, they may be determined in advance from medical images generated during actual clinical treatment, and provided to the training party together with those images as the sample medical images. This specification does not specifically limit this.
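The patent does not specify how the annotated metastasis features are computed from the inter-stage differences. As a hedged sketch, assuming two registered binary lesion masks of the same tissue at consecutive growth stages, an annotated metastasis direction could be taken as the unit vector between the lesion centroids; the function names here (`lesion_centroid`, `metastasis_direction`) are hypothetical, not from the patent.

```python
import math

def lesion_centroid(mask):
    """Centroid (row, col) of a binary lesion mask given as a list of rows."""
    pts = [(r, c) for r, row in enumerate(mask) for c, v in enumerate(row) if v]
    n = len(pts)
    return (sum(p[0] for p in pts) / n, sum(p[1] for p in pts) / n)

def metastasis_direction(mask_early, mask_late):
    """Unit vector from the early-stage centroid to the late-stage centroid."""
    (r0, c0), (r1, c1) = lesion_centroid(mask_early), lesion_centroid(mask_late)
    dr, dc = r1 - r0, c1 - c0
    norm = math.hypot(dr, dc)
    return (dr / norm, dc / norm) if norm else (0.0, 0.0)

# Toy 5x5 masks: the lesion shifts one pixel to the right between stages.
early = [[1 if (r, c) == (2, 1) else 0 for c in range(5)] for r in range(5)]
late  = [[1 if (r, c) == (2, 2) else 0 for c in range(5)] for r in range(5)]
print(metastasis_direction(early, late))  # (0.0, 1.0): rightward shift
```

In practice the direction label would be derived from 3D image volumes and clinical follow-up data rather than a toy mask, but the same centroid-difference idea applies.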
S104: inputting the sample medical image into a lesion development prediction model to be trained, and extracting image features of the sample medical image through an extraction layer of the lesion development prediction model.
Fig. 2 shows the structure of the lesion development prediction model used in the model training method provided in this specification. As shown in fig. 2, the model includes at least an extraction layer and an output layer.
The extraction layer extracts image features from the input image. During training, the sample medical image can be input into the lesion development prediction model to be trained, and its image features extracted through the extraction layer for use in the subsequent steps.
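The patent fixes no architecture for the extraction layer. Purely as an illustration of "extracting image features from the input image", the sketch below average-pools non-overlapping patches of a small 2D image into a flat feature vector; a real extraction layer would typically be a convolutional network, and `extract_features` is a hypothetical name.

```python
def extract_features(image, patch=2):
    """Toy 'extraction layer': average-pool non-overlapping patches of a 2D
    image into a flat feature vector."""
    rows, cols = len(image), len(image[0])
    feats = []
    for r in range(0, rows, patch):
        for c in range(0, cols, patch):
            block = [image[rr][cc]
                     for rr in range(r, min(r + patch, rows))
                     for cc in range(c, min(c + patch, cols))]
            feats.append(sum(block) / len(block))
    return feats

img = [[0, 0, 1, 1],
       [0, 0, 1, 1],
       [1, 1, 0, 0],
       [1, 1, 0, 0]]
print(extract_features(img))  # [0.0, 1.0, 1.0, 0.0]
```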
S106: outputting predicted metastasis features of the lesion tissue according to the image features through an output layer of the lesion development prediction model.
The output layer of the lesion development prediction model outputs the predicted metastasis features of the corresponding lesion tissue according to the image features extracted by the extraction layer. In this step, the output layer outputs the predicted metastasis features of the lesion tissue contained in the sample medical image according to the image features extracted in step S104.
S108: training the lesion development prediction model with minimizing the difference between the predicted metastasis features and the annotated metastasis features as the optimization objective.
The model training method provided in this specification trains the lesion development prediction model by supervised learning, with the annotated metastasis features determined in step S102 as the training labels. During training, the output of the model is expected to be as close to the labels as possible; accordingly, the model can be trained with minimizing the difference between the predicted metastasis features it outputs and the annotated metastasis features as the training objective.
When the lesion development prediction model is trained with this method, medical images of lesion tissue at different growth stages serve as the training samples, and the training labels are determined from the differences between the lesion tissue at different growth stages; the sample medical image serves as the input of the model, and the model is trained with minimizing the difference between the predicted metastasis features it outputs and the annotated metastasis features serving as labels as the optimization objective. In this way, a lesion development prediction model can be trained that predicts the metastasis risk of lesion tissue at each stage, and the training effect is good.
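The loss function and optimizer are not stated in the patent. The toy below illustrates only the S108 objective, minimizing the difference between predicted and annotated metastasis features, using squared error and plain gradient descent on a single hypothetical linear weight standing in for the output layer.

```python
# Minimal sketch of the S108 objective: drive the predicted metastasis
# feature toward the annotated one. Squared error and the single weight
# are assumptions; the patent does not specify either.

def train_step(w, image_feature, annotated_feature, lr=0.1):
    predicted = w * image_feature          # toy "output layer"
    error = predicted - annotated_feature  # difference to minimize
    grad = 2 * error * image_feature       # d(error^2)/dw
    return w - lr * grad

w = 0.0
for _ in range(100):
    w = train_step(w, image_feature=2.0, annotated_feature=3.0)
print(round(w, 4))  # converges to 1.5, where prediction matches the label
```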
Additionally, as shown in fig. 2, in the model training method provided in this specification, the lesion development prediction model may further include an attention layer, through which the extraction layer can be further optimized in combination with a preset constraint image. Specifically, a preset constraint image is acquired, the constraint image containing a region of interest in the sample medical image; the attention layer determines, according to the image features, an attention image of the lesion development prediction model when processing the sample medical image; and the parameters of the extraction layer and the attention layer are adjusted with minimizing the difference between the constraint image and the attention image as the optimization objective.
The constraint image is obtained by highlighting the region containing the lesion tissue in the sample medical image. Specifically, to predetermine a constraint image, the lesion tissue region contained in the sample medical image is first determined; the edge of the lesion tissue region is then expanded outward by a specified length to obtain a region of interest, so that the region of interest contains both the lesion tissue region and the surrounding risk regions; finally, the sample medical image containing the region of interest is taken as the constraint image. In the model training method provided herein, the region of interest may include, but is not limited to, the tumor, the peritumoral region, and the surrounding diseased or non-diseased organs, tissues, and air cavities. The surrounding risk regions may be, for example, organs such as lymph nodes or the pleura, which readily become carriers of lesion-tissue metastasis.
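The step of "expanding the edge of the lesion tissue region outward by a specified length" can be sketched as a morphological dilation of a binary lesion mask; the implementation below (hypothetical name `expand_region`, Chebyshev-distance neighborhood) is one simple way to do it, not the patent's stated algorithm.

```python
def expand_region(mask, length):
    """Expand a binary lesion mask outward by `length` pixels (Chebyshev
    distance) to form a region of interest covering surrounding risk areas."""
    rows, cols = len(mask), len(mask[0])
    roi = [[0] * cols for _ in range(rows)]
    for r in range(rows):
        for c in range(cols):
            if mask[r][c]:
                for rr in range(max(0, r - length), min(rows, r + length + 1)):
                    for cc in range(max(0, c - length), min(cols, c + length + 1)):
                        roi[rr][cc] = 1
    return roi

lesion = [[1 if (r, c) == (2, 2) else 0 for c in range(5)] for r in range(5)]
roi = expand_region(lesion, 1)   # 3x3 block around the single lesion pixel
print(sum(map(sum, roi)))        # 9 pixels in the region of interest
```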
Fig. 3 shows a schematic of a constraint image used in this specification: the constraint image corresponding to a sample medical image of a malignant lung nodule invading the visceral pleura. In fig. 3, the white area at the lower left is the visceral pleura, the irregular shape adjoining it is the lung nodule, i.e. the lesion tissue, and the dark area containing the lesion tissue is the determined region of interest. The region of interest extends outward beyond the lesion tissue region, which ensures that it fully covers the lesion tissue and is robust to labeling errors and model fluctuation during training. Moreover, by expanding outward, the region of interest also covers the surrounding risk regions that may be affected as the condition worsens and metastasizes, which allows the metastasis risk of the lesion tissue to be predicted and handled better.
During training, the region of interest is the region that the lesion development prediction model is expected to attend to when extracting image features from the sample image. Based on this, the attention layer can additionally determine an attention image of the sample medical image from the features extracted by the extraction layer. With the constraint image containing the region of interest as the reference, the parameters of the extraction layer and the attention layer are adjusted with minimizing the difference between the attention image and the constraint image as the optimization objective, thereby optimizing both layers. Note that the attention layer is used only for training the lesion development prediction model; it does not participate when the model is actually applied to predict the metastasis features of lesion tissue.
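The patent describes two objectives (feature difference in S108 and the attention-constraint difference) but does not say how they are combined. A common choice, assumed here, is a weighted sum; the weight `lam` and the use of mean squared error are both illustrative assumptions.

```python
def mse(a, b):
    """Mean squared error between two equal-length sequences."""
    return sum((x - y) ** 2 for x, y in zip(a, b)) / len(a)

def total_loss(pred_feat, annot_feat, attention_map, constraint_map, lam=0.5):
    """Weighted sum of the metastasis-feature loss (S108) and the
    attention-constraint loss; the weighting is an assumption."""
    return mse(pred_feat, annot_feat) + lam * mse(attention_map, constraint_map)

loss = total_loss([0.2, 0.8], [0.0, 1.0], [1, 0, 0, 1], [1, 1, 0, 1], lam=0.5)
print(round(loss, 3))  # 0.165: 0.04 feature loss + 0.5 * 0.25 attention loss
```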
Taking fig. 3 as the constraint image as an example: the lesion tissue region it contains is a lung nodule, surrounded by the pleura, a risk region that the nodule can easily erode and metastasize to. Correspondingly, when fig. 3 is used as the constraint image, the sample medical image should likewise contain a lung nodule as the lesion tissue region. After the attention image of the sample medical image is obtained through the extraction layer and the attention layer of the lesion development prediction model, the difference between the constraint image and the attention image can be computed and back-propagated to adjust the parameters of the extraction layer. In effect, the constraint image in fig. 3 guides the model's feature extraction when the lesion tissue is a lung nodule, so that the extracted image features accurately represent the nodule region and additionally represent the surrounding pleural region, i.e. the risk region. The model can then output more accurate and complete metastasis features from the image features of both the nodule region and the surrounding pleura, and better predict the subsequent development and metastasis of the nodule.
Of course, when the constraint image changes, its guiding effect on the lesion development prediction model changes accordingly. For example, if the risk region contained in the constraint image is a lymph node, the extraction layer will additionally extract features of the lymph nodes around the lesion tissue region.
Additionally, the lesion development prediction model can output more specific predicted metastasis features; correspondingly, more specific annotated metastasis features are then needed as labels.
For example, in one embodiment, the annotated metastasis features may comprise an annotated metastasis direction and the predicted metastasis features a predicted metastasis direction. The lesion development prediction model is then trained with minimizing the difference between the predicted metastasis direction and the annotated metastasis direction as the optimization objective.
The metastasis direction characterizes the specific direction in which the lesion tissue moves when metastasis occurs. Further, the predicted metastasis direction of the lesion tissue may be that of the next growth stage relative to the current one and/or that of the final growth stage relative to the current one. That is, when predicting the metastasis direction, the model may output the likely direction when the lesion tissue enters the next growth stage, the likely direction when it develops to the final growth stage, or both at once; this specification does not specifically limit which. The predetermined annotated direction differs accordingly: when the predicted direction refers to the next growth stage, the annotated direction can be the direction of the difference between the lesion tissue in the sample medical image and in the sample medical image of the next growth stage; likewise, when it refers to the final growth stage, the annotated direction can be the direction of the difference between the lesion tissue in the sample medical image and in the sample medical image of the final growth stage.
For another example, in a specific embodiment, it may be preferable that the annotation transfer feature comprises an annotation transfer mode and the predictive transfer feature comprises a predictive transfer mode. When the lesion development prediction model is trained, the lesion development prediction model can be trained by taking the minimum difference between the prediction transfer mode and the labeling transfer mode as an optimization target.
The transfer mode characterizes the way in which diseased tissue spreads when transfer occurs. In general, transfer modes may include, but are not limited to, direct spread, lymphatic transfer, hematogenous (blood-borne) transfer, and implantation (seeding); this specification does not specifically limit this. The annotation transfer mode is the transfer mode actually exhibited by the diseased tissue in the sample medical image when transfer occurs.
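To make this classification target concrete, the following is a small illustrative sketch (an assumption, not from the patent) that scores a predicted transfer-mode distribution against the labeled mode using cross-entropy over the four modes listed above:

```python
import math

# The four transfer modes named in the specification; the list order and the
# probability values below are hypothetical.
TRANSFER_MODES = ["direct_spread", "lymphatic", "hematogenous", "implantation"]

def cross_entropy(pred_probs, label_mode):
    """Difference between the predicted and the annotated transfer mode,
    expressed as cross-entropy over the four classes."""
    idx = TRANSFER_MODES.index(label_mode)
    return -math.log(pred_probs[idx])

pred = [0.1, 0.7, 0.1, 0.1]   # model's predicted transfer-mode distribution
loss = cross_entropy(pred, "lymphatic")
```

Minimizing this loss pushes the predicted distribution toward the annotated transfer mode, which is one common way to realize the "minimum difference" optimization target for a discrete label.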
For another example, in a specific embodiment, the annotation transfer feature may include an annotation growth stage, and the prediction transfer feature may include a prediction growth stage. When training the lesion development prediction model, the model can be trained with minimizing the difference between the prediction growth stage and the annotation growth stage as the optimization target.
The growth stage characterizes the period in which the diseased tissue is located. In medicine, different diseased tissues have different growth stages. Taking tumor-type diseased tissue produced in cancer as an example, tumor growth can generally be divided into five growth stages: the precancerous stage, the carcinoma in situ stage, the invasive carcinoma stage, the metastatic stage, and the disseminated stage. For other diseased tissues, growth stages can likewise be determined according to the professional divisions of the medical field. The annotation growth stage is the growth stage in which the diseased tissue in the sample medical image is actually located.
In combination with the above model training method, the present disclosure further provides a risk prediction method, as shown in fig. 4.
Fig. 4 is a flow chart of a risk prediction method provided in the present disclosure.
S200: a medical image is acquired that contains diseased tissue.
The risk prediction method provided by this specification is implemented using a lesion development prediction model trained with the model training method provided by this specification.
When performing risk prediction on diseased tissue, a medical image containing the diseased tissue whose risk is to be predicted can first be acquired.
S202: inputting the medical image into a pre-trained lesion development prediction model, and extracting image features of the medical image through an extraction layer in the lesion development prediction model.
In this step, the acquired medical image of the lesion tissue may be input into a lesion development prediction model that has been trained in advance, wherein the lesion development prediction model is trained using the model training method provided in the present specification. Through an extraction layer in the lesion development prediction model, image features of the medical image can be extracted.
S204: and outputting the predicted transfer characteristics of the lesion tissues according to the image characteristics through an output layer in the lesion development prediction model.
In this step, the image features of the medical image extracted in step S202 may be input into the output layer of the lesion development prediction model, which outputs the predicted transfer feature of the diseased tissue contained in the medical image. After training, the predicted transfer feature output by the lesion development prediction model better reflects the possible transfer behavior of the diseased tissue.
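The two-stage flow of S202 and S204 can be sketched as follows. The toy extraction layer, output layer, and threshold below are hypothetical stand-ins for the trained network, intended only to show how an image passes through an extraction layer into an output layer:

```python
def extraction_layer(image):
    """Collapse the image into crude features: mean intensity and pixel count.
    A real extraction layer would be a trained convolutional backbone."""
    flat = [v for row in image for v in row]
    return [sum(flat) / len(flat), float(len(flat))]

def output_layer(features):
    """Map features to a predicted transfer feature (here, a stage index
    chosen by a hypothetical intensity threshold)."""
    mean_intensity, _ = features
    return 1 if mean_intensity > 0.5 else 0

def predict_transfer_feature(image):
    """S202 (feature extraction) followed by S204 (feature output)."""
    return output_layer(extraction_layer(image))

medical_image = [[0.9, 0.8],
                 [0.7, 0.6]]
predicted = predict_transfer_feature(medical_image)
```

In a real deployment, the same two-call structure holds, but both layers are learned jointly during the training described above.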
Based on the same concept as the model training and risk prediction methods of one or more embodiments of the present disclosure, the present disclosure further provides corresponding model training and risk prediction devices, as shown in fig. 5 and 6.
Fig. 5 is a schematic diagram of a model training device provided in the present specification, specifically including:
an acquisition module 300 for acquiring a sample medical image containing lesion tissue at different growth stages;
the labeling module 302 is configured to determine a label transfer feature according to differences between lesion tissues in different growth phases in each sample medical image;
the input module 304 is configured to input the sample medical image into a lesion development prediction model to be trained, and extract image features of the sample medical image through an extraction layer in the lesion development prediction model;
An output module 306, configured to output, according to the image feature, a predicted transition feature of the lesion tissue through an output layer in the lesion development prediction model;
the training module 308 is configured to train the lesion development prediction model with a minimum difference between the predicted transition feature and the labeled transition feature as an optimization target.
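As an illustration of the optimization target used by the training module, the sketch below fits a hypothetical one-parameter "model" by gradient descent on the squared difference between the predicted and labeled transfer feature. The linear model, learning rate, and data are assumptions, not the patent's actual network:

```python
def train(samples, labels, lr=0.1, epochs=200):
    """Minimize (predicted transfer feature - labeled transfer feature)^2
    over a toy one-parameter linear model."""
    w = 0.0
    for _ in range(epochs):
        for x, y in zip(samples, labels):
            pred = w * x                 # predicted transfer feature
            grad = 2 * (pred - y) * x    # gradient of the squared difference
            w -= lr * grad               # step toward minimal difference
    return w

samples = [1.0, 2.0, 3.0]
labels = [2.0, 4.0, 6.0]   # labeled transfer features (here y = 2x)
w = train(samples, labels)
```

The loop converges to the parameter that makes the predicted and labeled features agree, which is exactly the "minimum difference" objective the module describes, scaled down to a single weight.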
Optionally, the lesion development prediction model further comprises an attention layer;
the device further includes a constraint module 310, specifically configured to obtain a preset constraint image, where the constraint image includes a region of interest in the sample medical image; determining, by the attention layer, an attention image of the lesion development prediction model when processing the sample medical image according to the image features; and adjusting parameters of an extraction layer and an attention layer in the lesion development prediction model by taking the minimum difference between the constraint image and the attention image as an optimization target.
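The constraint module's objective can be illustrated with the following hypothetical sketch, which measures the difference between the attention image and the constraint image as a mean squared pixel error (the patent does not specify the actual loss function):

```python
def attention_constraint_loss(attention, constraint):
    """Mean squared pixel difference between the model's attention image
    and the preset constraint image."""
    diffs = [(a - c) ** 2
             for arow, crow in zip(attention, constraint)
             for a, c in zip(arow, crow)]
    return sum(diffs) / len(diffs)

# Hypothetical 2x2 attention image vs. a binary constraint image marking
# the region of interest.
attn = [[0.2, 0.8],
        [0.9, 0.1]]
cons = [[0.0, 1.0],
        [1.0, 0.0]]
loss = attention_constraint_loss(attn, cons)
```

Minimizing this quantity adjusts the extraction and attention layers so the model attends to the region of interest marked in the constraint image.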
Optionally, the constraint module 310 is specifically configured to determine a lesion tissue area contained in the sample medical image; expanding the edge of the pathological tissue region to the outside of the pathological tissue region by a specified length to obtain a region of interest, so that the region of interest comprises the pathological tissue region and other risk regions around the pathological tissue region; the sample medical image containing the region of interest is taken as a constraint image.
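A minimal sketch of the region-of-interest construction described here: expand the lesion region outward by a specified length, so the ROI covers both the lesion region and the surrounding at-risk tissue. The implementation below is a plain binary-mask dilation under Chebyshev distance; the mask and length are illustrative assumptions:

```python
def expand_region(mask, length):
    """Dilate a binary lesion mask: a pixel joins the region of interest if
    any lesion pixel lies within `length` of it (Chebyshev distance)."""
    rows, cols = len(mask), len(mask[0])
    roi = [[0] * cols for _ in range(rows)]
    for r in range(rows):
        for c in range(cols):
            for dr in range(-length, length + 1):
                for dc in range(-length, length + 1):
                    rr, cc = r + dr, c + dc
                    if 0 <= rr < rows and 0 <= cc < cols and mask[rr][cc]:
                        roi[r][c] = 1
    return roi

# Single-pixel lesion region; expanding its edge by 1 pixel yields an ROI
# covering the lesion plus a one-pixel ring of surrounding tissue.
lesion = [[0, 0, 0, 0],
          [0, 1, 0, 0],
          [0, 0, 0, 0]]
roi = expand_region(lesion, 1)
```

The resulting ROI mask, overlaid on the sample medical image, plays the role of the constraint image described above.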
Optionally, the annotation transfer feature comprises an annotation transfer direction, and the predicted transfer feature comprises a predicted transfer direction;
the training module 308 is specifically configured to train the lesion development prediction model with a minimum difference between the predicted transition direction and the labeled transition direction as an optimization target.
Optionally, the predicted transfer direction of the diseased tissue includes a predicted transfer direction of a next growth stage of the diseased tissue relative to a current growth stage and/or a predicted transfer direction of a final growth stage of the diseased tissue relative to the current growth stage.
Optionally, the annotation transfer feature comprises an annotation transfer mode, and the prediction transfer feature comprises a prediction transfer mode;
the training module 308 is specifically configured to train the lesion development prediction model with the minimum difference between the prediction transfer mode and the labeling transfer mode as an optimization target.
Optionally, the annotation transfer feature comprises an annotation growth stage, and the predictive transfer feature comprises a predictive growth stage;
the training module 308 is specifically configured to train the lesion development prediction model with a minimum difference between the predicted growth stage and the labeling growth stage as an optimization target.
Fig. 6 is a schematic diagram of a risk prediction apparatus provided in the present disclosure, which specifically includes:
an image acquisition module 400 for acquiring a medical image containing a lesion tissue;
an image input module 402, configured to input the medical image into a pre-trained lesion development prediction model, and extract image features of the medical image through an extraction layer in the lesion development prediction model;
and the feature output module 404 is configured to output, through an output layer in the lesion development prediction model, a predicted transition feature of the lesion tissue according to the image feature.
The present specification also provides a computer readable storage medium storing a computer program operable to perform the model training method described above and shown in fig. 1.
The present specification also provides a schematic structural diagram of the electronic device shown in fig. 7. At the hardware level, as shown in fig. 7, the electronic device includes a processor, an internal bus, a network interface, a memory, and non-volatile storage, and may of course also include hardware required by other services. The processor reads the corresponding computer program from the non-volatile storage into the memory and then runs it to implement the model training method shown in fig. 1 described above. Of course, this specification does not exclude other implementations, such as logic devices or combinations of hardware and software; that is, the execution subject of the processing flows below is not limited to logic units, and may also be hardware or logic devices.
In the 1990s, it was clear whether an improvement to a technology was an improvement in hardware (for example, an improvement to a circuit structure such as a diode, transistor, or switch) or an improvement in software (an improvement to a method flow). With the development of technology, however, improvements to many of today's method flows can be regarded as direct improvements to hardware circuit structures. Designers almost always obtain the corresponding hardware circuit structure by programming an improved method flow into a hardware circuit. Therefore, it cannot be said that an improvement to a method flow cannot be realized with a hardware entity module. For example, a programmable logic device (Programmable Logic Device, PLD) (such as a field programmable gate array (Field Programmable Gate Array, FPGA)) is an integrated circuit whose logic function is determined by the user's programming of the device. A designer programs to "integrate" a digital system onto a single PLD, without needing a chip manufacturer to design and fabricate an application-specific integrated circuit chip. Moreover, today, instead of manually making integrated circuit chips, this programming is mostly implemented with "logic compiler" software, which is similar to the software compiler used in program development; the original code to be compiled must also be written in a specific programming language, called a hardware description language (Hardware Description Language, HDL). There is not just one HDL but many, such as ABEL (Advanced Boolean Expression Language), AHDL (Altera Hardware Description Language), Confluence, CUPL (Cornell University Programming Language), HDCal, JHDL (Java Hardware Description Language), Lava, Lola, MyHDL, PALASM, and RHDL (Ruby Hardware Description Language); VHDL (Very-High-Speed Integrated Circuit Hardware Description Language) and Verilog are currently the most commonly used.
It will also be apparent to those skilled in the art that a hardware circuit implementing the logic method flow can be readily obtained by merely slightly programming the method flow into an integrated circuit using several of the hardware description languages described above.
The controller may be implemented in any suitable manner. For example, the controller may take the form of a microprocessor or processor together with a computer-readable medium storing computer-readable program code (e.g., software or firmware) executable by the (micro)processor, logic gates, switches, an application-specific integrated circuit (Application Specific Integrated Circuit, ASIC), a programmable logic controller, or an embedded microcontroller. Examples of controllers include, but are not limited to, the following microcontrollers: ARC 625D, Atmel AT91SAM, Microchip PIC18F26K20, and Silicon Labs C8051F320; a memory controller may also be implemented as part of the control logic of a memory. Those skilled in the art also know that, in addition to implementing the controller purely in computer-readable program code, it is entirely possible to logically program the method steps so that the controller implements the same functionality in the form of logic gates, switches, application-specific integrated circuits, programmable logic controllers, embedded microcontrollers, and the like. Such a controller may therefore be regarded as a hardware component, and the means included therein for implementing various functions may also be regarded as structures within the hardware component. Or even, the means for implementing various functions may be regarded both as software modules implementing the method and as structures within the hardware component.
The system, apparatus, module or unit set forth in the above embodiments may be implemented in particular by a computer chip or entity, or by a product having a certain function. One typical implementation is a computer. In particular, the computer may be, for example, a personal computer, a laptop computer, a cellular telephone, a camera phone, a smart phone, a personal digital assistant, a media player, a navigation device, an email device, a game console, a tablet computer, a wearable device, or a combination of any of these devices.
For convenience of description, the above devices are described as being functionally divided into various units, respectively. Of course, the functions of each element may be implemented in one or more software and/or hardware elements when implemented in the present specification.
It will be appreciated by those skilled in the art that embodiments of the present invention may be provided as a method, system, or computer program product. Accordingly, the present invention may take the form of an entirely hardware embodiment, an entirely software embodiment or an embodiment combining software and hardware aspects. Furthermore, the present invention may take the form of a computer program product embodied on one or more computer-usable storage media (including, but not limited to, disk storage, CD-ROM, optical storage, and the like) having computer-usable program code embodied therein.
The present invention is described with reference to flowchart illustrations and/or block diagrams of methods, apparatus (systems) and computer program products according to embodiments of the invention. It will be understood that each flow and/or block of the flowchart illustrations and/or block diagrams, and combinations of flows and/or blocks in the flowchart illustrations and/or block diagrams, can be implemented by computer program instructions. These computer program instructions may be provided to a processor of a general purpose computer, special purpose computer, embedded processor, or other programmable data processing apparatus to produce a machine, such that the instructions, which execute via the processor of the computer or other programmable data processing apparatus, create means for implementing the functions specified in the flowchart flow or flows and/or block diagram block or blocks.
These computer program instructions may also be stored in a computer-readable memory that can direct a computer or other programmable data processing apparatus to function in a particular manner, such that the instructions stored in the computer-readable memory produce an article of manufacture including instruction means which implement the function specified in the flowchart flow or flows and/or block diagram block or blocks.
These computer program instructions may also be loaded onto a computer or other programmable data processing apparatus to cause a series of operational steps to be performed on the computer or other programmable apparatus to produce a computer implemented process such that the instructions which execute on the computer or other programmable apparatus provide steps for implementing the functions specified in the flowchart flow or flows and/or block diagram block or blocks.
In one typical configuration, a computing device includes one or more processors (CPUs), input/output interfaces, network interfaces, and memory.
The memory may include volatile memory in a computer-readable medium, such as random access memory (RAM), and/or non-volatile memory, such as read-only memory (ROM) or flash memory (flash RAM). Memory is an example of a computer-readable medium.
Computer-readable media include permanent and non-permanent, removable and non-removable media, and may implement information storage by any method or technology. The information may be computer-readable instructions, data structures, program modules, or other data. Examples of computer storage media include, but are not limited to, phase-change memory (PRAM), static random access memory (SRAM), dynamic random access memory (DRAM), other types of random access memory (RAM), read-only memory (ROM), electrically erasable programmable read-only memory (EEPROM), flash memory or other memory technology, compact disc read-only memory (CD-ROM), digital versatile discs (DVD) or other optical storage, magnetic cassettes, magnetic tape, magnetic disk storage or other magnetic storage devices, or any other non-transmission medium, which can be used to store information accessible by a computing device. As defined herein, computer-readable media do not include transitory computer-readable media (transitory media), such as modulated data signals and carrier waves.
It should also be noted that the terms "comprises," "comprising," or any other variation thereof are intended to cover a non-exclusive inclusion, such that a process, method, article, or apparatus that comprises a list of elements includes not only those elements but may also include other elements not expressly listed, or elements inherent to such a process, method, article, or apparatus. Without further limitation, an element defined by the phrase "comprising a …" does not exclude the presence of other identical elements in the process, method, article, or apparatus that comprises the element.
It will be appreciated by those skilled in the art that embodiments of the present description may be provided as a method, system, or computer program product. Accordingly, the present specification may take the form of an entirely hardware embodiment, an entirely software embodiment or an embodiment combining software and hardware aspects. Furthermore, the present description can take the form of a computer program product on one or more computer-usable storage media (including, but not limited to, disk storage, CD-ROM, optical storage, etc.) having computer-usable program code embodied therein.
The description may be described in the general context of computer-executable instructions, such as program modules, being executed by a computer. Generally, program modules include routines, programs, objects, components, data structures, etc. that perform particular tasks or implement particular abstract data types. The specification may also be practiced in distributed computing environments where tasks are performed by remote processing devices that are linked through a communications network. In a distributed computing environment, program modules may be located in both local and remote computer storage media including memory storage devices.
In this specification, the embodiments are described in a progressive manner; for identical or similar parts between the embodiments, reference may be made to one another, and each embodiment focuses on its differences from the others. In particular, the system embodiments are described relatively briefly because they are substantially similar to the method embodiments; for relevant details, see the corresponding parts of the method embodiment descriptions.
The foregoing is merely exemplary of the present disclosure and is not intended to limit the disclosure. Various modifications and alterations to this specification will become apparent to those skilled in the art. Any modifications, equivalent substitutions, improvements, or the like, which are within the spirit and principles of the present description, are intended to be included within the scope of the claims of the present description.

Claims (10)

1. A method of model training, the method comprising:
acquiring a sample medical image, the sample medical image comprising diseased tissue at different stages of growth;
determining annotation transfer characteristics according to differences among lesion tissues in different growth stages in each sample medical image;
inputting the sample medical image into a lesion development prediction model to be trained, and extracting image features of the sample medical image through an extraction layer in the lesion development prediction model;
Outputting the predicted transfer characteristics of the lesion tissues according to the image characteristics through an output layer in the lesion development prediction model;
and training the lesion development prediction model by taking the minimum difference between the predicted transfer characteristic and the marked transfer characteristic as an optimization target.
2. The method of claim 1, wherein the lesion development prediction model further comprises an attention layer;
before outputting the predicted transfer characteristic of the diseased tissue, the method further comprises:
acquiring a preset constraint image, wherein the constraint image is annotated with a region of interest in the sample medical image;
determining, by the attention layer, an attention image of the lesion development prediction model when processing the sample medical image according to the image features;
and adjusting parameters of an extraction layer and an attention layer in the lesion development prediction model by taking the minimum difference between the constraint image and the attention image as an optimization target.
3. The method according to claim 2, wherein presetting the constraint image specifically comprises:
determining a lesion tissue region contained in the sample medical image;
Expanding the edge of the pathological tissue region to the outside of the pathological tissue region by a specified length to obtain a region of interest, so that the region of interest comprises the pathological tissue region and other risk regions around the pathological tissue region;
the sample medical image containing the region of interest is taken as a constraint image.
4. The method of claim 1, wherein the annotation transfer feature comprises an annotation transfer direction and the predictive transfer feature comprises a predictive transfer direction;
training the lesion development prediction model by taking the minimum difference between the predicted transfer characteristic and the marked transfer characteristic as an optimization target, wherein the method specifically comprises the following steps of:
and training the lesion development prediction model by taking the minimum difference between the predicted transfer direction and the labeling transfer direction as an optimization target.
5. The method of claim 4, wherein the predicted transition direction of the diseased tissue comprises a predicted transition direction of a next stage of growth of the diseased tissue relative to a current stage of growth and/or a predicted transition direction of a final stage of growth of the diseased tissue relative to a current stage of growth.
6. The method of claim 1, wherein the annotation transfer feature comprises an annotation transfer mode and the predictive transfer feature comprises a predictive transfer mode;
training the lesion development prediction model by taking the minimum difference between the predicted transfer characteristic and the marked transfer characteristic as an optimization target, wherein the method specifically comprises the following steps of:
and training the lesion development prediction model by taking the minimum difference between the prediction transfer mode and the labeling transfer mode as an optimization target.
7. The method of claim 1, wherein the annotation transfer feature comprises an annotation growth stage and the predictive transfer feature comprises a predictive growth stage;
training the lesion development prediction model by taking the minimum difference between the predicted transfer characteristic and the marked transfer characteristic as an optimization target, wherein the method specifically comprises the following steps of:
and training the lesion development prediction model by taking the minimum difference between the prediction growth stage and the labeling growth stage as an optimization target.
8. A risk prediction method, characterized in that a lesion development prediction model is pre-trained by using the method according to any one of claims 1-7, the method comprising:
Acquiring a medical image containing a lesion tissue;
inputting the medical image into a pre-trained lesion development prediction model, and extracting image features of the medical image through an extraction layer in the lesion development prediction model;
and outputting the predicted transfer characteristics of the lesion tissues according to the image characteristics through an output layer in the lesion development prediction model.
9. A computer-readable storage medium, characterized in that the storage medium stores a computer program which, when executed by a processor, implements the method of any of the preceding claims 1-8.
10. An electronic device comprising a memory, a processor and a computer program stored on the memory and executable on the processor, characterized in that the processor implements the method of any of the preceding claims 1-8 when executing the program.
CN202310761866.3A 2023-06-26 2023-06-26 Model training method and device, storage medium and electronic equipment Pending CN116843994A (en)

Priority Applications (1)

Application Number Priority Date Filing Date Title
CN202310761866.3A CN116843994A (en) 2023-06-26 2023-06-26 Model training method and device, storage medium and electronic equipment

Applications Claiming Priority (1)

Application Number Priority Date Filing Date Title
CN202310761866.3A CN116843994A (en) 2023-06-26 2023-06-26 Model training method and device, storage medium and electronic equipment

Publications (1)

Publication Number Publication Date
CN116843994A true CN116843994A (en) 2023-10-03

Family

ID=88159221

Family Applications (1)

Application Number Title Priority Date Filing Date
CN202310761866.3A Pending CN116843994A (en) 2023-06-26 2023-06-26 Model training method and device, storage medium and electronic equipment

Country Status (1)

Country Link
CN (1) CN116843994A (en)


Cited By (2)

* Cited by examiner, † Cited by third party
Publication number Priority date Publication date Assignee Title
CN117036830A (en) * 2023-10-07 2023-11-10 之江实验室 Tumor classification model training method and device, storage medium and electronic equipment
CN117036830B (en) * 2023-10-07 2024-01-09 之江实验室 Tumor classification model training method and device, storage medium and electronic equipment

Similar Documents

Publication Publication Date Title
CA3047564C (en) Method and system for determining tumor burden in medical images
CN109448003A (en) A kind of entocranial artery blood-vessel image dividing method and system
CN116843994A (en) Model training method and device, storage medium and electronic equipment
CN110534193A (en) A kind of aneurysm rupture methods of risk assessment and system
CN109448004A (en) A kind of intercept method and system of the intracranial vessel image based on center line
Suh et al. MRI predictors of malignant transformation in patients with inverted papilloma: a decision tree analysis using conventional imaging features and histogram analysis of apparent diffusion coefficients
CN117333529A (en) Template matching-based vascular ultrasonic intima automatic measurement method and system
CN116030247B (en) Medical image sample generation method and device, storage medium and electronic equipment
Bliznakova et al. Computer aided preoperative evaluation of the residual liver volume using computed tomography images
CN116524295A (en) Image processing method, device, equipment and readable storage medium
Sun et al. Brain tumor image segmentation based on improved FPN
CN116258679A (en) Information recommendation method and device, storage medium and electronic equipment
CN114511599B (en) Model training method and device, medical image registration method and device
CN112927815B (en) Method, device and equipment for predicting intracranial aneurysm information
CN114299046A (en) Medical image registration method, device, equipment and storage medium
CN116912224A (en) Focus detection method, focus detection device, storage medium and electronic equipment
CN116152246B (en) Image recognition method, device, equipment and storage medium
CN117036830B (en) Tumor classification model training method and device, storage medium and electronic equipment
CN116188469A (en) Focus detection method, focus detection device, readable storage medium and electronic equipment
CN117457186A (en) Prediction method and prognosis method
CN116229218B (en) Model training and image registration method and device
CN116881725B (en) Cancer prognosis prediction model training device, medium and electronic equipment
CN116312981A (en) Image processing method, device and readable storage medium
CN117788955A (en) Image identification method and device, storage medium and electronic equipment
CN117252831A (en) Focus transfer prediction method, device, storage medium and electronic equipment

Legal Events

Date Code Title Description
PB01 Publication
SE01 Entry into force of request for substantive examination