CN116168097A - Method, device, equipment and medium for constructing CBCT sketching model and sketching CBCT image

Info

Publication number: CN116168097A
Application number: CN202211364596.4A
Authority: CN (China)
Prior art keywords: image, model, training, CBCT, sketching
Other languages: Chinese (zh)
Inventors: 杨碧凝, 刘宇翔, 门阔, 戴建荣
Current assignee: Cancer Hospital and Institute of CAMS and PUMC
Original assignee: Cancer Hospital and Institute of CAMS and PUMC
Application filed by: Cancer Hospital and Institute of CAMS and PUMC
Legal status: Pending

Classifications

    • G06T 11/00: 2D [Two Dimensional] image generation
    • G06T 7/11: Region-based segmentation
    • G06T 7/30: Determination of transform parameters for the alignment of images, i.e. image registration
    • G06V 10/25: Determination of region of interest [ROI] or a volume of interest [VOI]
    • G06V 10/75: Organisation of the matching processes, e.g. simultaneous or sequential comparisons of image or video features; coarse-fine approaches, e.g. multi-scale approaches; context analysis; selection of dictionaries
    • G06T 2207/10081: Computed x-ray tomography [CT]
    • G06T 2207/20081: Training; Learning
    • G06T 2207/20104: Interactive definition of region of interest [ROI]
    • G06T 2207/30096: Tumor; Lesion
    • G06T 2207/30204: Marker

Landscapes

  • Engineering & Computer Science (AREA)
  • Theoretical Computer Science (AREA)
  • Physics & Mathematics (AREA)
  • General Physics & Mathematics (AREA)
  • Computer Vision & Pattern Recognition (AREA)
  • Multimedia (AREA)
  • Health & Medical Sciences (AREA)
  • Artificial Intelligence (AREA)
  • Computing Systems (AREA)
  • Databases & Information Systems (AREA)
  • Evolutionary Computation (AREA)
  • General Health & Medical Sciences (AREA)
  • Medical Informatics (AREA)
  • Software Systems (AREA)
  • Apparatus For Radiation Diagnosis (AREA)

Abstract

The present disclosure relates to a method, apparatus, device and medium for constructing a CBCT sketching model and sketching a CBCT image. The method comprises: acquiring training data and training labels, wherein the training data comprise CBCT images and CT images of a plurality of training objects, and the training labels comprise the reference image quality of the CT images and reference sketching results obtained by sketching target regions on the CT images; inputting the CBCT images into a first model for training, the output of the first model being a pseudo CT image, the training ending when the difference between the image quality of the pseudo CT image and the reference image quality is smaller than a first set threshold, the trained first model being an image generation model; inputting the CT images into a second model for training, the output of the second model being a predicted sketching result, the training ending when the difference between the predicted sketching result and the reference sketching result is smaller than a second set threshold, the trained second model being a segmentation model; and generating a CBCT sketching model from the image generation model and the segmentation model.

Description

Method, device, equipment and medium for constructing CBCT sketching model and sketching CBCT image
Technical Field
The present disclosure relates to the fields of image processing, medical technology, and computer technology, and in particular, to a method, apparatus, device, and medium for constructing a CBCT sketch model and sketching a CBCT image.
Background
Cone-beam computed tomography (CBCT) is an image-guided technology in widespread use today. It can be used for positioning correction of patients before radiotherapy and for quantifying the effects of factors such as tumor motion and organ motion within a treatment fraction; it can also support online adaptive radiotherapy, and it contributes greatly to improving the precision of radiotherapy.
In each fraction of adaptive radiotherapy, a physician is required to delineate the tumor target region and organs at risk on the CBCT image.
Disclosure of Invention
In order to solve, or at least partially solve, the following technical problems: manually sketching a region of interest on a CBCT image takes a very long time, which greatly prolongs the total duration of adaptive radiotherapy, extends patient waiting time, and reduces clinical efficiency; moreover, CBCT images contain many artifacts and have poor image quality, so sketching directly on a CBCT image depends heavily on the experience and skill of the physician, and subjective differences between physicians lead to large variations in the sketching results; embodiments of the present disclosure provide a method, apparatus, device, and medium for constructing a CBCT sketching model and sketching a CBCT image.
In a first aspect, embodiments of the present disclosure provide a method of constructing a CBCT sketching model. The method comprises the following steps: acquiring training data and training labels, wherein the training data comprise CBCT images and CT images of a plurality of training objects, and the training labels comprise the reference image quality of the CT images and reference sketching results obtained by sketching target regions on the CT images; inputting the CBCT image into a first model for training, wherein the output of the first model is a pseudo CT image, training ends when the difference between the image quality of the pseudo CT image and the reference image quality is smaller than a first set threshold, and the trained first model is an image generation model; inputting the CT image into a second model for training, wherein the output of the second model is a predicted sketching result, training ends when the difference between the predicted sketching result and the reference sketching result is smaller than a second set threshold, and the trained second model is a segmentation model; and generating a CBCT sketching model from the image generation model and the segmentation model.
According to an embodiment of the present disclosure, generating a CBCT sketch model according to the image generation model and the segmentation model includes: taking the output of the image generation model as the input of the segmentation model to obtain a clustered CBCT sketch model comprising the image generation model and the segmentation model; performing parameter fine adjustment on the segmentation model according to the CT image of the first object and a reference sketching result of the CT image of the first object to obtain a personalized segmentation model suitable for the first object; and taking the output of the image generation model as the input of the personalized segmentation model to obtain a personalized CBCT sketch model comprising the image generation model and the personalized segmentation model.
According to an embodiment of the present disclosure, the performing parameter fine adjustment on the segmentation model according to the CT image of the first object and the reference sketching result of the CT image of the first object to obtain a personalized segmentation model adapted to the first object includes: inputting the CT image of the first object into the segmentation model, and outputting a prediction sketch result of the CT image of the first object; and fine-tuning parameters of the segmentation model so that the difference between the reference sketching result and the prediction sketching result of the CT image of the first object is smaller than a second set threshold value, wherein the segmentation model subjected to parameter fine-tuning is a personalized segmentation model suitable for the first object.
According to embodiments of the present disclosure, differences in image quality are analyzed along four dimensions: noise level, artifacts, tissue boundary sharpness, and gray values. The first set threshold accordingly includes a noise error threshold, an artifact error threshold, a sharpness error threshold, and a gray error threshold. When the difference between the image quality of the pseudo CT image and the reference image quality is smaller than the noise error threshold with respect to noise level, smaller than the artifact error threshold with respect to artifacts, smaller than the sharpness error threshold with respect to tissue boundary sharpness, and smaller than the gray error threshold with respect to gray values, the image quality of the pseudo CT image is considered consistent with the reference image quality, and the condition for ending training is reached.
In a second aspect, embodiments of the present disclosure provide a method of sketching a CBCT image. The method comprises the following steps: acquiring a CBCT image to be sketched of a target object; inputting the CBCT image to be sketched into a pre-trained target image generation model and obtaining a pseudo CT image as output; and inputting the pseudo CT image into a pre-trained target segmentation model and obtaining a CBCT sketching result of the target object as output. The target image generation model comprises first network parameters mapping from the CBCT image to the pseudo CT image, and the target segmentation model comprises second network parameters mapping from the CT image to the sketching result of the target region of the CT image. In the training stage of the target image generation model, the input is a CBCT image of a training object and the output is a pseudo CT image of the training object; the image quality of the pseudo CT image of the training object is consistent with the image quality of the CT image of the training object.
According to an embodiment of the present disclosure, the first network parameters of the target image generation model are obtained as follows: CBCT images of a plurality of training objects are used as training data of a first model, the reference image quality of the CT images of the plurality of training objects is used as the training label of the first model, the trained first model is the target image generation model, and the trained parameters of the first model are the first network parameters. The second network parameters of the target segmentation model are obtained in one of the following ways: the CT images of the plurality of training objects are used as training data of a second model, the reference sketching results obtained by sketching target regions on the CT images of the plurality of training objects are used as the training label of the second model, the trained second model is the target segmentation model, and the trained parameters of the second model are the second network parameters; or, the CT images of the plurality of training objects are used as training data of a second model, the reference sketching results obtained by sketching target regions on the CT images of the plurality of training objects are used as the training label of the second model, and the parameters obtained by training are used as intermediate parameters of the second model; the intermediate parameters of the second model are then fine-tuned according to the CT image of the target object and the reference sketching result of the CT image of the target object, the fine-tuned second model is the target segmentation model, and the fine-tuned parameters are the second network parameters.
In a third aspect, embodiments of the present disclosure provide an apparatus for constructing a CBCT sketching model. The apparatus comprises: a training data and label acquisition module, a first training module, a second training module, and a sketch model generation module. The training data and label acquisition module is used to acquire training data and training labels, where the training data comprises CBCT images and CT images of a plurality of training objects, and the training labels comprise the reference image quality of the CT images and reference sketching results obtained by sketching target regions on the CT images. The first training module is configured to input the CBCT image into a first model for training; the output of the first model is a pseudo CT image, training ends when the difference between the image quality of the pseudo CT image and the reference image quality is smaller than a first set threshold, and the trained first model is an image generation model. The second training module is configured to input the CT image into a second model for training; the output of the second model is a predicted sketching result, training ends when the difference between the predicted sketching result and the reference sketching result is smaller than a second set threshold, and the trained second model is a segmentation model. The sketch model generation module is used to generate a CBCT sketching model from the image generation model and the segmentation model.
In a fourth aspect, embodiments of the present disclosure provide an apparatus for sketching a CBCT image. The apparatus comprises: a data acquisition module, a first processing module, and a second processing module. The data acquisition module is used to acquire a CBCT image to be sketched of the target object. The first processing module is used to input the CBCT image to be sketched into a pre-trained target image generation model and obtain a pseudo CT image as output. The second processing module is used to input the pseudo CT image into a pre-trained target segmentation model and output a CBCT sketching result of the target object. The target image generation model comprises first network parameters mapping from the CBCT image to the pseudo CT image, and the target segmentation model comprises second network parameters mapping from the CT image to the sketching result of the target region of the CT image. In the training stage of the target image generation model, the input is a CBCT image of a training object and the output is a pseudo CT image of the training object; the image quality of the pseudo CT image of the training object is consistent with the image quality of the CT image of the training object.
In a fifth aspect, embodiments of the present disclosure provide an electronic device. The electronic equipment comprises a processor, a communication interface, a memory and a communication bus, wherein the processor, the communication interface and the memory are communicated with each other through the communication bus; a memory for storing a computer program; and the processor is used for realizing the method for constructing the CBCT sketching model or the method for sketching the CBCT image when executing the program stored in the memory.
In a sixth aspect, embodiments of the present disclosure provide a computer-readable storage medium. The computer readable storage medium has stored thereon a computer program which, when executed by a processor, implements a method of constructing a CBCT delineation model or a method of delineating a CBCT image as described above.
The technical scheme provided by the embodiment of the disclosure at least has part or all of the following advantages:
considering that a CBCT image and a CT image have similar image structures but different quality due to their different scanning times, and that CBCT images contain many artifacts, performing supervised training of CBCT image sketching directly with the sketching results of CT images would produce problems such as sketching deformation and inaccurate sketching. In the method for constructing a CBCT sketching model and the method for sketching a CBCT image provided by the embodiments of the present disclosure, an image generation model representing the mapping from CBCT images to pseudo CT images and a segmentation model representing the mapping from CT images to predicted sketching results are obtained by training the first model and the second model, and the CBCT sketching model is generated from the image generation model and the segmentation model. In the supervised training of the first model, the real image quality of the CT image serves as the reference image quality of the training label, so that the image quality of the pseudo CT image generated from the CBCT image by the image generation model is consistent with the reference image quality of the real CT image. Because the image quality of the pseudo CT image is consistent with that of a real CT image, the sketching results of CT images can be applied to the pseudo CT image; meanwhile, sketching on the pseudo CT image is equivalent to sketching the tissue structure of the corresponding CBCT image, so the sketching result based on real CT is relatively accurate and efficient.
Drawings
The accompanying drawings, which are incorporated in and constitute a part of this specification, illustrate embodiments consistent with the disclosure and together with the description, serve to explain the principles of the disclosure.
In order to more clearly illustrate the embodiments of the present disclosure or the technical solutions in the prior art, the drawings that are required to be used in the description of the embodiments or the related art will be briefly described below, and it will be apparent to those skilled in the art that other drawings can be obtained from these drawings without inventive effort.
FIG. 1 schematically illustrates a flow chart of a method of constructing a CBCT sketch model provided by an embodiment of the present disclosure;
FIG. 2 schematically illustrates a training process diagram of an image generation model according to an embodiment of the present disclosure;
FIG. 3 schematically illustrates a training process diagram of a segmentation model according to an embodiment of the present disclosure;
FIG. 4 schematically illustrates a process diagram of generating a CBCT sketch model according to an embodiment of the disclosure;
fig. 5 schematically shows a comparison of sketching effects when sketching the tumor clinical target volume (CTV) on CBCT images of nasopharyngeal carcinoma patients, illustrating: (a) the CBCT image of nasopharyngeal carcinoma patient X, with (a1) the real CTV sketching result, (a2) the CTV sketching result of the clustered CBCT sketching model, and (a3) the CTV sketching result of the personalized CBCT sketching model for that image; and (b) the CBCT image of nasopharyngeal carcinoma patient Y, with (b1) the real CTV sketching result, (b2) the CTV sketching result of the clustered CBCT sketching model, and (b3) the CTV sketching result of the personalized CBCT sketching model for that image;
Fig. 6 schematically shows a comparison of sketching effects when sketching the nasopharyngeal gross tumor volume (GTVnx) on CBCT images of nasopharyngeal carcinoma patients, illustrating: (a) the CBCT image of nasopharyngeal carcinoma patient X, with (a1) the real GTVnx sketching result, (a2) the GTVnx sketching result of the clustered CBCT sketching model, and (a3) the GTVnx sketching result of the personalized CBCT sketching model for that image; and (b) the CBCT image of nasopharyngeal carcinoma patient Y, with (b1) the real GTVnx sketching result, (b2) the GTVnx sketching result of the clustered CBCT sketching model, and (b3) the GTVnx sketching result of the personalized CBCT sketching model for that image;
FIG. 7 schematically illustrates a flow chart of a method of delineating CBCT images, in accordance with an embodiment of the present disclosure;
FIG. 8 schematically illustrates a block diagram of an apparatus for constructing a CBCT sketch model provided by an embodiment of the present disclosure;
FIG. 9 schematically illustrates a block diagram of an apparatus for delineating CBCT images provided by an embodiment of the present disclosure; and
fig. 10 schematically shows a block diagram of an electronic device provided by an embodiment of the present disclosure.
Detailed Description
For the purposes of making the objects, technical solutions and advantages of the embodiments of the present disclosure more apparent, the technical solutions of the embodiments of the present disclosure will be clearly and completely described below with reference to the accompanying drawings in the embodiments of the present disclosure, and it is apparent that the described embodiments are some, but not all, embodiments of the present disclosure. All other embodiments, which can be made by one of ordinary skill in the art without undue burden from the disclosure, are within the scope of the disclosure.
A first exemplary embodiment of the present disclosure provides a method of constructing a CBCT sketch model. The above-described method may be performed by an electronic device having computing capabilities.
Fig. 1 schematically illustrates a flowchart of a method of constructing a CBCT sketch model provided by an embodiment of the present disclosure.
Referring to fig. 1, a method for constructing a CBCT sketch model according to an embodiment of the present disclosure includes the following steps: s110, S120, S130, and S140.
In step S110, training data and training labels are acquired, where the training data comprises CBCT images and CT images of a plurality of training objects, and the training labels comprise the reference image quality of the CT images and reference sketching results obtained by sketching target regions on the CT images.
A training object is an object from which training-set data are acquired, for example a patient whose CBCT images and CT images exist in a medical system database; for the same patient, the same body part, organ, or tissue region has both a CBCT image and a CT image.
For example, a certain cancer patient P1 has, for a head region, stomach region, chest region, abdomen region, or the like, both a CBCT image and a CT image obtained by scanning; the CBCT images and CT images of all or some patients with the same type of cancer (e.g., nasopharyngeal carcinoma or lung cancer patients) in the medical system database may be used as training data. The CBCT image and the CT image are image data having the same pixel size and registered with each other (i.e., having a mapping/matching relationship).
The target region is, for example, a target area or target position in the CBCT image or CT image of the training object.
For each CT image, the real image quality of the CT image can be obtained; for descriptive distinction from the image quality of the pseudo CT image obtained subsequently via the first model output and for illustrating the effect of the labels, the true image quality of the CT image is described as the reference image quality.
For each CT image, a sketching result of the current CT image after sketching the target area can be obtained; the sketching result refers to a boundary line or contour line of the framed target area. In order to distinguish the description from the prediction sketching result obtained through the second model output and to illustrate the effect of the label, the sketching result of the target area sketching aiming at the CT image is described as a reference sketching result.
Because CT scanning is fast and produces clear images, and the medical system database stores abundant CT images with accurate reference sketching results, the reference sketching results of CT images can be considered as sketching references for CBCT images. Further, a CBCT image and a CT image have similar image structures but different quality due to their different scanning times, and CBCT images contain more artifacts; if the sketching results of CT images were used directly for supervised training of CBCT image sketching, problems such as sketching deformation and inaccurate sketching would arise. Therefore, in the embodiments of the present disclosure, a training process with two models is constructed, in which the output of the first model and the input of the second model are associated on the basis of image quality consistency. By training the first model and the second model, an image generation model representing the mapping from CBCT images to pseudo CT images and a segmentation model representing the mapping from CT images to predicted sketching results are obtained. The generated pseudo CT image has an anatomical structure (also described as an image structure) consistent with the CBCT image and an image quality consistent with the CT image, so accurate sketching of the CBCT image is achieved through the conversion by the image generation model followed by the sketching by the segmentation model.
In step S120, the CBCT image is input to a first model, training is performed, the output of the first model is a pseudo CT image, and when the difference between the image quality of the pseudo CT image and the reference image quality is smaller than a first set threshold, training is completed, and the trained first model is an image generation model.
Fig. 2 schematically illustrates a training process diagram of an image generation model according to an embodiment of the present disclosure.
Referring to fig. 2, the image generation model of the training stage is described as a first model, and the trained first model is the image generation model. In some embodiments, the first model may be a neural network model, for example, an image generation model may be trained based on a deep learning approach.
The input of the first model is a CBCT image in training data, and the output is a pseudo CT image. The pseudo CT image is a CT image generated by simulation of a computing device or a CT image generated by a neural network model, and is different from a CT image obtained by scanning by an instrument.
In an embodiment, training of the image generation model may be implemented based on part of the structure of a CycleGAN network. A CycleGAN network includes two generators and two discriminators, described here as generator A, generator B, discriminator C, and discriminator D. The data flow during training is: real CBCT image -> generator A -> pseudo CT image -> generator B -> reconstructed CBCT image. Discriminator C judges whether the image quality of the generated pseudo CT image is consistent with that of the real CT image, and discriminator D judges whether the reconstructed CBCT image is consistent with the real CBCT image. The sub-network comprising generator A and discriminator C can be selected for training to obtain the image generation model.
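To make the generator/discriminator branch concrete, below is a minimal PyTorch sketch of the generator-A-plus-discriminator-C sub-network described above. The toy architectures, the least-squares GAN loss, the learning rates, and the random stand-in batches are illustrative assumptions rather than the patent's concrete design; the cycle-consistency branch (generator B, discriminator D) is omitted because only the A/C part is selected for training.

```python
import torch
import torch.nn as nn

class GeneratorA(nn.Module):
    """Toy CBCT -> pseudo-CT generator (single-channel in, single-channel out)."""
    def __init__(self):
        super().__init__()
        self.net = nn.Sequential(
            nn.Conv2d(1, 32, 3, padding=1), nn.ReLU(),
            nn.Conv2d(32, 32, 3, padding=1), nn.ReLU(),
            nn.Conv2d(32, 1, 3, padding=1), nn.Tanh(),  # outputs in [-1, 1]
        )
    def forward(self, x):
        return self.net(x)

class DiscriminatorC(nn.Module):
    """Toy discriminator judging real CT vs. generated pseudo-CT."""
    def __init__(self):
        super().__init__()
        self.net = nn.Sequential(
            nn.Conv2d(1, 32, 4, stride=2, padding=1), nn.LeakyReLU(0.2),
            nn.Conv2d(32, 64, 4, stride=2, padding=1), nn.LeakyReLU(0.2),
            nn.Conv2d(64, 1, 4, stride=1, padding=1),  # patch-wise real/fake scores
        )
    def forward(self, x):
        return self.net(x)

gen_a, disc_c = GeneratorA(), DiscriminatorC()
opt_g = torch.optim.Adam(gen_a.parameters(), lr=2e-4)
opt_d = torch.optim.Adam(disc_c.parameters(), lr=2e-4)
adv_loss = nn.MSELoss()  # least-squares GAN objective (one common choice)

def train_step(cbct, ct):
    """One adversarial step on registered, normalized 2D slices (B, 1, H, W)."""
    # Discriminator C: distinguish real CT from generated pseudo-CT.
    pseudo_ct = gen_a(cbct)
    d_real, d_fake = disc_c(ct), disc_c(pseudo_ct.detach())
    loss_d = adv_loss(d_real, torch.ones_like(d_real)) + \
             adv_loss(d_fake, torch.zeros_like(d_fake))
    opt_d.zero_grad(); loss_d.backward(); opt_d.step()
    # Generator A: make the pseudo-CT indistinguishable from real CT in quality.
    d_fake = disc_c(gen_a(cbct))
    loss_g = adv_loss(d_fake, torch.ones_like(d_fake))
    opt_g.zero_grad(); loss_g.backward(); opt_g.step()
    return loss_d.item(), loss_g.item()

cbct = torch.rand(4, 1, 128, 128) * 2 - 1  # stand-in CBCT batch
ct = torch.rand(4, 1, 128, 128) * 2 - 1    # stand-in paired CT batch
print(train_step(cbct, ct))
```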
In the training process, the training is finished when the difference between the image quality of the pseudo CT image and the reference image quality is smaller than a first set threshold value.
In one embodiment, the difference in image quality is analyzed along four dimensions: noise level, artifacts, tissue boundary sharpness, and gray values. The first set threshold accordingly includes a noise error threshold, an artifact error threshold, a sharpness error threshold, and a gray error threshold. When the difference between the image quality of the pseudo CT image and the reference image quality is smaller than the noise error threshold with respect to noise level, smaller than the artifact error threshold with respect to artifacts, smaller than the sharpness error threshold with respect to tissue boundary sharpness, and smaller than the gray error threshold with respect to gray values, the image quality of the pseudo CT image is considered consistent with the reference image quality, and the condition for ending training is reached.
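The patent names the four quality dimensions and per-dimension thresholds but not the concrete estimators or threshold values, so the following sketch uses assumed proxies (high-frequency residual for noise, residual-outlier fraction for artifacts, mean gradient magnitude for tissue boundary sharpness, mean intensity for gray values) purely to illustrate the all-four-thresholds stopping condition.

```python
import numpy as np

# Illustrative threshold values; the patent does not publish concrete numbers.
THRESHOLDS = {"noise": 0.05, "artifact": 0.02, "sharpness": 0.10, "gray": 0.05}

def box_blur(img):
    """3x3 mean filter built from shifted views (numpy only)."""
    p = np.pad(img, 1, mode="edge")
    h, w = img.shape
    return sum(p[i:i + h, j:j + w] for i in range(3) for j in range(3)) / 9.0

def quality_metrics(img):
    """Crude proxies for the four quality dimensions (assumed estimators)."""
    img = img.astype(np.float64)
    residual = img - box_blur(img)  # high-frequency content
    gy, gx = np.gradient(img)
    return {
        "noise": float(np.std(residual)),                                   # noise level
        "artifact": float(np.mean(np.abs(residual) > 3 * residual.std())),  # outlier fraction
        "sharpness": float(np.mean(np.hypot(gx, gy))),                      # boundary sharpness
        "gray": float(np.mean(img)),                                        # gray values
    }

def quality_consistent(pseudo_ct, ct):
    """All four per-dimension differences must fall below their thresholds."""
    mp, mc = quality_metrics(pseudo_ct), quality_metrics(ct)
    return all(abs(mp[k] - mc[k]) < THRESHOLDS[k] for k in THRESHOLDS)

pseudo_ct = np.random.rand(128, 128) * 2 - 1  # stand-in images in [-1, 1]
ct = np.random.rand(128, 128) * 2 - 1
print(quality_consistent(pseudo_ct, ct))
```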
In step S130, the CT image is input to a second model for training, the output of the second model is a prediction sketch result, and if the difference between the prediction sketch result and the reference sketch result is smaller than a second set threshold, the training is ended, and the trained second model is a segmentation model.
Fig. 3 schematically illustrates a training process diagram of a segmentation model according to an embodiment of the present disclosure.
Referring to fig. 3, the segmentation model of the training phase is described as a second model, and the trained second model is the segmentation model. In some embodiments, the second model may be a neural network model, for example, the segmentation model may be trained based on a deep learning approach.
The input of the second model is a CT image in training data, and the output is a prediction sketch result; the training label is a reference sketching result for sketching the target area aiming at the CT image.
In some embodiments, the second model may be, but is not limited to, a DeepLabV3 network model.
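As one concrete (assumed) realization, the segmentation model can be trained with torchvision's DeepLabV3 implementation; the loss, the single-to-three-channel handling, and the toy data below are illustrative, and the constructor keywords may vary slightly across torchvision versions.

```python
import torch
import torch.nn as nn
from torchvision.models.segmentation import deeplabv3_resnet50

# weights=None / weights_backbone=None: train from scratch, no download needed.
model = deeplabv3_resnet50(weights=None, weights_backbone=None, num_classes=2)
criterion = nn.CrossEntropyLoss()
optimizer = torch.optim.Adam(model.parameters(), lr=1e-4)

def train_step(ct_slice, ref_mask):
    """ct_slice: (B, 1, H, W) in [-1, 1]; ref_mask: (B, H, W) with labels {0, 1}."""
    x = ct_slice.repeat(1, 3, 1, 1)      # the ResNet backbone expects 3 channels
    logits = model(x)["out"]             # (B, 2, H, W) predicted sketch logits
    loss = criterion(logits, ref_mask)   # supervised by the reference sketch
    optimizer.zero_grad(); loss.backward(); optimizer.step()
    return loss.item()

ct = torch.rand(2, 1, 128, 128) * 2 - 1    # stand-in CT slices
mask = torch.randint(0, 2, (2, 128, 128))  # stand-in reference sketches
print(train_step(ct, mask))
```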
In step S140, a CBCT delineation model is generated from the image generation model and the segmentation model.
Fig. 4 schematically illustrates a process diagram of generating a CBCT sketch model according to an embodiment of the present disclosure.
According to an embodiment of the present disclosure, referring to fig. 4, in the step S140, a CBCT delineation model is generated according to the image generation model and the segmentation model, including:
taking the output of the image generation model as the input of the segmentation model, obtaining a clustered CBCT sketch model 410 comprising the image generation model and the segmentation model;
performing parameter fine-tuning on the segmentation model according to the CT image of the first object and a reference sketching result of the CT image of the first object, to obtain a personalized segmentation model suitable for the first object;
and taking the output of the image generation model as the input of the personalized segmentation model to obtain a personalized CBCT sketch model 420 comprising the image generation model and the personalized segmentation model.
In some embodiments, the image generation model and the segmentation model may be combined to obtain a clustered CBCT delineation model, with the output of the image generation model being the input of the segmentation model.
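A minimal sketch of this composition follows; the class name is illustrative, and the dict-style segmenter output matches the DeepLabV3 sketch above.

```python
import torch
import torch.nn as nn

class ClusteredCBCTSketchModel(nn.Module):
    """Chains the two trained models: CBCT -> pseudo CT -> sketching result."""
    def __init__(self, image_generator: nn.Module, segmenter: nn.Module):
        super().__init__()
        self.image_generator = image_generator  # CBCT -> pseudo CT
        self.segmenter = segmenter              # pseudo CT -> sketch logits

    @torch.no_grad()
    def forward(self, cbct):
        pseudo_ct = self.image_generator(cbct)   # stage 1: image generation
        x = pseudo_ct.repeat(1, 3, 1, 1)         # 3-channel backbone input
        logits = self.segmenter(x)["out"]        # stage 2: segmentation
        return logits.argmax(dim=1)              # per-pixel sketch labels

# e.g. clustered = ClusteredCBCTSketchModel(gen_a, model), reusing the earlier
# sketches; swapping in a fine-tuned segmenter yields the personalized variant.
```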
The segmentation model trained from the second model is a generalized model adapted to the population: although the focal regions of all patients are similar in overall structure, the details of the target structures differ between individual patients. That is, with a clustered segmentation model trained on data of limited scale, it is difficult to accurately predict the sketched shape of the region of interest (target region) for individuals with large anatomical changes or atypical body types. Therefore, in other embodiments, the localization CT image of the first object (which may also be described as the target object) and its reference sketching result are used as personalized inputs for fine-tuning the parameters of the segmentation model; the fine-tuned segmentation model is then combined with the image generation model to obtain a personalized CBCT sketching model, with the output of the image generation model serving as the input of the fine-tuned segmentation model. By establishing the personalized segmentation model, an individual's region-of-interest structure can be segmented on the CBCT image rapidly, objectively, and with higher precision.
According to an embodiment of the present disclosure, the performing parameter fine adjustment on the segmentation model according to the CT image of the first object and the reference sketching result of the CT image of the first object to obtain a personalized segmentation model adapted to the first object includes: inputting the CT image of the first object into the segmentation model, and outputting a prediction sketch result of the CT image of the first object; and fine-tuning parameters of the segmentation model so that the difference between the reference sketching result and the prediction sketching result of the CT image of the first object is smaller than a second set threshold value, wherein the segmentation model subjected to parameter fine-tuning is a personalized segmentation model suitable for the first object.
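A hedged sketch of this personalization step: the clustered segmentation model is copied and fine-tuned on one patient's localization CT and reference sketch until the prediction/reference difference drops below the second set threshold. The Dice-based difference measure, the learning rate, and the threshold value are assumptions; the patent specifies only that the difference must fall below the second set threshold.

```python
import copy
import torch
import torch.nn as nn

SECOND_THRESHOLD = 0.05  # illustrative value for the second set threshold

def dice_difference(logits, ref_mask, eps=1e-6):
    """1 - Dice overlap between predicted and reference target regions."""
    pred = logits.argmax(dim=1).float()
    ref = ref_mask.float()
    inter = (pred * ref).sum()
    return float(1.0 - (2 * inter + eps) / (pred.sum() + ref.sum() + eps))

def personalize(segmenter, ct, ref_mask, max_steps=200):
    """Fine-tune a copy of the clustered segmenter on one patient's CT/sketch."""
    model = copy.deepcopy(segmenter)  # keep the clustered model intact
    opt = torch.optim.Adam(model.parameters(), lr=1e-5)  # small lr for fine-tuning
    loss_fn = nn.CrossEntropyLoss()
    for _ in range(max_steps):
        logits = model(ct.repeat(1, 3, 1, 1))["out"]
        if dice_difference(logits, ref_mask) < SECOND_THRESHOLD:
            break  # difference below the second set threshold: stop
        loss = loss_fn(logits, ref_mask)
        opt.zero_grad(); loss.backward(); opt.step()
    return model  # personalized segmentation model
```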
In the application scene, the automatic sketching of the CBCT image can be performed by using the clustered CBCT sketching model, and the automatic sketching of the CBCT image can also be performed by using the personalized CBCT sketching model.
In the embodiment comprising steps S110 to S140, by training the first model and the second model, an image generation model representing the mapping from CBCT images to pseudo CT images and a segmentation model representing the mapping from CT images to predicted sketching results are obtained, and the CBCT sketching model is generated from the image generation model and the segmentation model. In the supervised training of the first model, the real image quality of the CT image serves as the reference image quality of the training label, so that the image quality of the pseudo CT image generated from the CBCT image by the image generation model is consistent with the reference image quality of the real CT image. Because the image quality of the pseudo CT image is consistent with that of a real CT image, the sketching results of CT images can be applied to the pseudo CT image; meanwhile, sketching on the pseudo CT image is equivalent to sketching the tissue structure of the corresponding CBCT image, so the sketching result based on real CT is relatively accurate and efficient.
For example, when creating the image generation model from CBCT images to pseudo CT images, the localization CT image of each patient and the CBCT images acquired during treatment form the training dataset. The CT images of the patients are acquired, for example, on a Philips Brilliance CT Big Bore scanner or a SOMATOM Definition AS CT simulator. The CT image is registered (which can also be understood as map pairing) onto the CBCT image using medical image processing software (e.g., MIM software), forming CT-CBCT paired data with the same pixel size as the CBCT image. The outline of the target area is then automatically drawn using the MIM software, and the part within the outline is extracted for deep-learning training and testing. Before the CT images and CBCT images are input into the neural network, they can be normalized so that their values lie within the range [-1, 1]. A deep-learning network is then used to learn the mapping from CBCT images to pseudo CT images: the task takes a single-channel CBCT image as input and outputs a single-channel pseudo CT image, using supervised training in which the training label is the image quality of the CT image.
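A small sketch of the normalization step described above; the clipping window (a generic HU window here) and the random stand-in volumes are assumptions, and registration is presumed already done by the image-processing software.

```python
import numpy as np

def normalize_to_unit_range(volume, lo=-1000.0, hi=1000.0):
    """Clip to an intensity window and linearly rescale to [-1, 1]."""
    v = np.clip(volume.astype(np.float64), lo, hi)
    return 2.0 * (v - lo) / (hi - lo) - 1.0

ct = np.random.uniform(-1200, 1600, size=(64, 128, 128))   # stand-in HU volume
cbct = np.random.uniform(-1200, 1600, size=(64, 128, 128))
ct_n, cbct_n = normalize_to_unit_range(ct), normalize_to_unit_range(cbct)
print(ct_n.min(), ct_n.max())  # -1.0 1.0
```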
In application, the CBCT sketching model is constructed by means of deep learning; the localization CT image actually measured for each patient before radiotherapy and its corresponding sketching result are used to personalize the clustered automatic CBCT sketching model by fine-tuning. No manually sketched data on CBCT images need to be collected during training, and the automatic sketching precision of the region of interest can be effectively improved.
For example, after a new patient is admitted, a doctor first performs a CT scan of the target area to obtain a CT image and sketches the corresponding region of interest on it. Before the patient's first CBCT scan, the localization CT image and its sketch are input into the segmentation model, which is fine-tuned to obtain a personalized segmentation model (which may also be described as an individualized segmentation model). After the patient undergoes a CBCT scan on a given day, the CBCT image is input into the image generation model to generate the corresponding pseudo CT image, and the pseudo CT image is input into the fine-tuned personalized segmentation model to obtain a high-precision automatically sketched region-of-interest structure. Doctors and physicists can use the rapidly sketched structure to observe anatomical changes of the patient or to carry out adaptive radiotherapy; the precision is higher than that of the clustered model, and compared with manual sketching, the deep-learning method also greatly improves efficiency, saves patient waiting time, and improves clinical operation efficiency.
Fig. 5 schematically shows a comparison of sketching effects when sketching the tumor clinical target volume (CTV) on CBCT images of nasopharyngeal carcinoma patients, illustrating: (a) the CBCT image of nasopharyngeal carcinoma patient X, with (a1) the real CTV sketching result, (a2) the CTV sketching result of the clustered CBCT sketching model, and (a3) the CTV sketching result of the personalized CBCT sketching model for that image; and (b) the CBCT image of nasopharyngeal carcinoma patient Y, with (b1) the real CTV sketching result, (b2) the CTV sketching result of the clustered CBCT sketching model, and (b3) the CTV sketching result of the personalized CBCT sketching model for that image.
Take the images of nasopharyngeal carcinoma patients and the sketching of the tumor clinical target volume (CTV) as an example. Referring to (a) and (b) in fig. 5, which illustrate CBCT images of the two nasopharyngeal carcinoma patients X and Y, comparing (a1) with (a2), or (b1) with (b2), shows that the CTV sketching result of the clustered CBCT sketching model can basically capture the core part of the real clinical target volume, with acceptable accuracy. Automatic sketching based on the clustered CBCT sketching model saves the manpower and time cost of manually sketching CBCT and requires no CBCT sketching annotations in advance; only the large number of CT images and the corresponding CT sketching annotations (as reference sketching results) in the medical system database are needed. Comparing (a1) with (a3), or (b1) with (b3), shows that the CTV sketching result of the personalized CBCT sketching model is very close to the real sketching result; in addition to the advantages of the clustered CBCT sketching model, the accuracy of the sketching result is improved.
Fig. 6 schematically shows a comparison of sketching effects when sketching the nasopharyngeal gross tumor volume (GTVnx) on CBCT images of nasopharyngeal carcinoma patients, illustrating: (a) the CBCT image of nasopharyngeal carcinoma patient X, with (a1) the real GTVnx sketching result, (a2) the GTVnx sketching result of the clustered CBCT sketching model, and (a3) the GTVnx sketching result of the personalized CBCT sketching model for that image; and (b) the CBCT image of nasopharyngeal carcinoma patient Y, with (b1) the real GTVnx sketching result, (b2) the GTVnx sketching result of the clustered CBCT sketching model, and (b3) the GTVnx sketching result of the personalized CBCT sketching model for that image.
Take the images of nasopharyngeal carcinoma patients and the sketching of the nasopharyngeal gross tumor volume (GTVnx) as an example. Referring to (a) and (b) in fig. 6, which illustrate CBCT images of the two nasopharyngeal carcinoma patients X and Y, comparing (a1) with (a2), or (b1) with (b2), shows that the GTVnx sketching result of the clustered CBCT sketching model can basically capture the core part of the real nasopharyngeal tumor target region, with acceptable accuracy, while saving the manpower and time cost of manually sketching CBCT and requiring no CBCT sketching annotations in advance; only the large number of CT images and the corresponding CT sketching annotations (as reference sketching results) in the medical system database are needed. Comparing (a1) with (a3), or (b1) with (b3), shows that the GTVnx sketching result of the personalized CBCT sketching model is very close to the real sketching result; in addition to the advantages of the clustered CBCT sketching model, the accuracy of the sketching result is improved.
A second exemplary embodiment of the present disclosure provides a method of delineating a CBCT image. The above-described method may be performed by an electronic device having computing capabilities.
In some embodiments, the present embodiment may directly utilize the clustered CBCT sketch model or the personalized CBCT sketch model obtained in the first embodiment to perform data processing, input a CBCT image to be sketched of the target object into the clustered CBCT sketch model or the personalized CBCT sketch model, and output a CBCT sketch result corresponding to the target object.
Fig. 7 schematically illustrates a flowchart of a method of delineating CBCT images, in accordance with an embodiment of the present disclosure.
Referring to fig. 7, a method for outlining a CBCT image according to an embodiment of the present disclosure includes the following steps: s710, S720 and S730.
In step S710, a CBCT image of the target object to be delineated is acquired.
In step S720, the CBCT image to be sketched is input into a pre-trained target image generation model, and a pseudo CT image is output.
In step S730, the pseudo CT image is input to a pre-trained target segmentation model, and a CBCT delineation result of the target object is output.
Wherein the target image generation model comprises first network parameters mapping from the CBCT image to the pseudo CT image, and the target segmentation model comprises second network parameters mapping from the CT image to the sketching result of the target region of the CT image. In the training stage of the target image generation model, the input is a CBCT image of a training object and the output is a pseudo CT image of the training object; the image quality of the pseudo CT image of the training object is consistent with the image quality of the CT image of the training object.
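Putting steps S710 to S730 together, a minimal inference sketch follows; the function name, the checkpoint file names, and the dict-style segmenter output are illustrative assumptions.

```python
import torch

@torch.no_grad()
def delineate_cbct(cbct, image_gen, segmenter):
    """Steps S720-S730: CBCT -> pseudo CT -> CBCT sketching result.

    cbct: (B, 1, H, W) tensor normalized to [-1, 1].
    image_gen: target image generation model (holds the first network parameters).
    segmenter: target segmentation model (holds the second network parameters).
    """
    pseudo_ct = image_gen(cbct)                    # S720: generate the pseudo CT
    out = segmenter(pseudo_ct.repeat(1, 3, 1, 1))  # S730: sketch the target region
    logits = out["out"] if isinstance(out, dict) else out
    return logits.argmax(dim=1)                    # per-pixel sketching result

# The trained parameters would typically be restored from checkpoints first
# (file names are illustrative):
#   image_gen.load_state_dict(torch.load("first_network_params.pt"))
#   segmenter.load_state_dict(torch.load("second_network_params.pt"))
```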
According to an embodiment of the present disclosure, the first network parameters of the target image generation model are obtained as follows: CBCT images of a plurality of training objects are used as training data of a first model, the reference image quality of the CT images of the plurality of training objects is used as the training label of the first model, the trained first model is the target image generation model, and the trained parameters of the first model are the first network parameters.
In some embodiments, the second network parameter of the target segmentation model is obtained by one of:
taking CT images of the plurality of training objects as training data of a second model, and taking the reference sketching results obtained by sketching target regions on the CT images of the plurality of training objects as the training label of the second model, wherein the trained second model is the target segmentation model and the trained parameters of the second model are the second network parameters; or alternatively,
taking CT images of the plurality of training objects as training data of a second model, taking a reference sketching result of sketching a target area aiming at the CT images of the plurality of training objects as a training label of the second model, and taking a parameter obtained by training as an intermediate parameter of the second model; and fine-tuning the intermediate parameters of the second model according to the CT image of the target object and the reference sketching result of the CT image of the target object, wherein the fine-tuned second model is the target segmentation model, and the fine-tuned parameters are the second network parameters.
In an embodiment, generating a CBCT delineation model according to the image generation model and the segmentation model includes: taking the output of the image generation model as the input of the segmentation model to obtain a clustered CBCT sketch model comprising the image generation model and the segmentation model; performing parameter fine adjustment on the segmentation model according to the CT image of the first object and a reference sketching result of the CT image of the first object to obtain a personalized segmentation model suitable for the first object; taking the output of the image generation model as the input of the personalized segmentation model, a personalized CBCT sketch model comprising the image generation model and the personalized segmentation model is obtained, and reference is made to the detailed description of fig. 4 in the first embodiment.
The details of the second embodiment may also refer to the descriptions related to the first embodiment, which are not repeated here.
A third exemplary embodiment of the present disclosure provides an apparatus for constructing a CBCT sketch model.
Fig. 8 schematically shows a block diagram of an apparatus for constructing a CBCT sketch model provided by an embodiment of the present disclosure.
Referring to fig. 8, an apparatus 800 for constructing a CBCT sketch model according to an embodiment of the present disclosure includes: training data and label acquisition module 801, first training module 802, second training module 803, and sketch model generation module 804.
The training data and label acquisition module 801 is configured to acquire training data and training labels, where the training data comprises CBCT images and CT images of a plurality of training objects, and the training labels comprise the reference image quality of the CT images and reference sketching results obtained by sketching target regions on the CT images.
The first training module 802 is configured to input the CBCT image into a first model for training, wherein an output of the first model is a pseudo CT image, and the training is completed when a difference between an image quality of the pseudo CT image and a reference image quality is smaller than a first set threshold, and the trained first model is an image generation model.
The second training module 803 is configured to input the CT image to a second model for training, output of the second model is a prediction sketch result, and if a difference between the prediction sketch result and the reference sketch result is smaller than a second set threshold, training is completed, and the trained second model is a segmentation model.
The sketch model generating module 804 is configured to generate a CBCT sketch model according to the image generating model and the segmentation model.
The details of the implementation of the functional modules of this embodiment may refer to the descriptions related to the first embodiment, which are not repeated here.
A fourth exemplary embodiment of the present disclosure provides an apparatus for delineating CBCT images.
Fig. 9 schematically shows a block diagram of an apparatus for delineating CBCT images provided by an embodiment of the present disclosure.
Referring to fig. 9, an apparatus 900 for delineating a CBCT image according to an embodiment of the present disclosure includes: a data acquisition module 901, a first processing module 902 and a second processing module 903.
The data acquisition module 901 is configured to acquire a CBCT image to be sketched of a target object.
The first processing module 902 is configured to input the CBCT image to be sketched into a pre-trained target image generation model, and output the CBCT image to obtain a pseudo CT image. In one embodiment, the first processing module includes a target image generation model; in another embodiment, the first processing module is capable of communicating with a device storing a target image generation model, invoking the target image generation model to perform the processing steps to delineate the CBCT image.
The second processing module 903 is configured to input the pseudo CT image to a pre-trained target segmentation model, and output a CBCT delineation result of the target object. In an embodiment, the second processing module includes a target segmentation model. In another embodiment, the second processing module is capable of communicating with a device storing the object segmentation model, invoking the object segmentation model to perform the processing steps on the pseudo-CT image. In some embodiments, the target image generation model and the target segmentation model are integrated in the same device as a monolithic CBCT delineation model; in other embodiments, the object image generation model and the object segmentation model may be dispersed in different devices, with the training phases being related to each other based on the quality of the CT image.
Wherein the target image generation model comprises first network parameters mapped from the CBCT image to the pseudo CT image; the target segmentation model comprises a second network parameter mapped from the CT image to a sketching result of a target area of the CT image; in the training stage of the target image generation model, a CBCT image of a training object is input, and a pseudo CT image of the training object is output; the image quality of the pseudo CT image of the training object is identical to the image quality of the CT image of the training object.
Any of the functional modules included in the apparatus 800 or the apparatus 900 may be combined into one module, or any one of them may be split into multiple modules. Alternatively, at least some functionality of one or more of these modules may be combined with at least some functionality of other modules and implemented in one module. At least one of the functional modules included in the apparatus 800 or the apparatus 900 may be implemented at least in part as a hardware circuit, such as a field programmable gate array (FPGA), a programmable logic array (PLA), a system on a chip, a system on a substrate, a system in a package, or an application specific integrated circuit (ASIC); by hardware or firmware in any other reasonable manner of integrating or packaging circuits; or by any one of, or a suitable combination of, software, hardware, and firmware. Alternatively, at least one of the functional modules included in the apparatus 800 or the apparatus 900 may be at least partially implemented as a computer program module that, when executed, performs the corresponding functions.
A fifth exemplary embodiment of the present disclosure provides an electronic device.
Fig. 10 schematically shows a block diagram of an electronic device provided by an embodiment of the present disclosure.
Referring to fig. 10, an electronic device 1000 provided in an embodiment of the present disclosure includes a processor 1001, a communication interface 1002, a memory 1003, and a communication bus 1004, where the processor 1001, the communication interface 1002, and the memory 1003 complete communication with each other through the communication bus 1004; a memory 1003 for storing a computer program; the processor 1001 is configured to implement the method for constructing a CBCT sketch model or the method for sketching a CBCT image as described above when executing the program stored in the memory.
A sixth exemplary embodiment of the present disclosure further provides a computer-readable storage medium. The computer-readable storage medium has stored thereon a computer program which, when executed by a processor, implements the method of constructing a CBCT delineation model or the method of delineating a CBCT image as described above.
The computer-readable storage medium may be included in the apparatus/device described in the above embodiments, or it may exist alone without being assembled into that apparatus/device. The computer-readable storage medium carries one or more programs which, when executed, implement methods according to embodiments of the present disclosure.
According to embodiments of the present disclosure, the computer-readable storage medium may be a non-volatile computer-readable storage medium, which may include, but is not limited to: a portable computer diskette, a hard disk, a random access memory (RAM), a read-only memory (ROM), an erasable programmable read-only memory (EPROM or flash memory), a portable compact disc read-only memory (CD-ROM), an optical storage device, a magnetic storage device, or any suitable combination of the foregoing. In the context of this disclosure, a computer-readable storage medium may be any tangible medium that can contain or store a program for use by or in connection with an instruction execution system, apparatus, or device.
It should be noted that, in this document, relational terms such as "first" and "second" are used solely to distinguish one entity or action from another and do not necessarily require or imply any actual such relationship or order between those entities or actions. Moreover, the terms "comprises," "comprising," and any variation thereof are intended to cover a non-exclusive inclusion, such that a process, method, article, or apparatus that comprises a list of elements includes not only those elements but may also include other elements not expressly listed or inherent to it. Without further limitation, an element introduced by the phrase "comprising a(n) …" does not exclude the presence of other identical elements in the process, method, article, or apparatus that comprises that element.
The foregoing describes merely specific embodiments of the disclosure, provided so that those skilled in the art can understand or practice it. Various modifications to these embodiments will be readily apparent to those skilled in the art, and the generic principles defined herein may be applied to other embodiments without departing from the spirit or scope of the disclosure. Accordingly, the present disclosure is not limited to the embodiments shown herein but is to be accorded the widest scope consistent with the principles and novel features disclosed herein.

Claims (10)

1. A method of constructing a CBCT delineation model, comprising:
acquiring training data and training labels, wherein the training data comprises CBCT images and CT images of a plurality of training objects, and the training labels comprise a reference image quality of the CT images and a reference delineation result of target-area delineation performed on the CT images;
inputting the CBCT images into a first model for training, wherein the output of the first model is a pseudo-CT image, training ends when the difference between the image quality of the pseudo-CT image and the reference image quality is smaller than a first set threshold, and the trained first model is an image generation model;
inputting the CT images into a second model for training, wherein the output of the second model is a predicted delineation result, training ends when the difference between the predicted delineation result and the reference delineation result is smaller than a second set threshold, and the trained second model is a segmentation model; and
generating a CBCT delineation model from the image generation model and the segmentation model.
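As a concrete reading of claim 1, the sketch below shows one way the threshold-based stopping rule could be implemented; difference_fn, the optimizer, and the epoch budget are hypothetical choices, since the claim only requires the relevant difference to fall below a set threshold.

def train_until_threshold(model, loader, difference_fn, threshold,
                          optimizer, max_epochs=200):
    """Train until the mean difference over the data falls below the threshold."""
    for _ in range(max_epochs):
        total = 0.0
        for inputs, reference in loader:
            optimizer.zero_grad()
            diff = difference_fn(model(inputs), reference)  # scalar tensor
            diff.backward()
            optimizer.step()
            total += diff.item()
        if total / len(loader) < threshold:  # stopping rule from the claim
            break
    return model

The same routine would cover both models: the first with an image-quality difference measured against the first set threshold, the second with a delineation difference measured against the second set threshold.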
2. The method of claim 1, wherein generating a CBCT delineation model from the image generation model and the segmentation model comprises:
taking the output of the image generation model as the input of the segmentation model to obtain a population-level CBCT delineation model comprising the image generation model and the segmentation model;
performing parameter fine-tuning on the segmentation model according to the CT image of a first object and the reference delineation result of the CT image of the first object, to obtain a personalized segmentation model adapted to the first object; and
taking the output of the image generation model as the input of the personalized segmentation model to obtain a personalized CBCT delineation model comprising the image generation model and the personalized segmentation model.
3. The method according to claim 2, wherein performing parameter fine-tuning on the segmentation model according to the CT image of the first object and the reference delineation result of the CT image of the first object to obtain a personalized segmentation model adapted to the first object comprises:
inputting the CT image of the first object into the segmentation model, and outputting a predicted delineation result of the CT image of the first object; and
fine-tuning the parameters of the segmentation model so that the difference between the reference delineation result and the predicted delineation result of the CT image of the first object is smaller than the second set threshold, the fine-tuned segmentation model being the personalized segmentation model adapted to the first object.
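A hedged sketch of this per-object fine-tuning, assuming a PyTorch segmentation model; the loss function, learning rate, and step budget are illustrative stand-ins for the unspecified difference measure.

import torch

def personalize(segmenter, ct_image, reference_mask, second_threshold,
                lr=1e-5, max_steps=100):
    """Fine-tune the trained model on one object's CT and reference delineation."""
    optimizer = torch.optim.Adam(segmenter.parameters(), lr=lr)
    loss_fn = torch.nn.CrossEntropyLoss()  # stand-in for the delineation difference
    for _ in range(max_steps):
        optimizer.zero_grad()
        loss = loss_fn(segmenter(ct_image), reference_mask)
        if loss.item() < second_threshold:  # difference below the second set threshold
            break
        loss.backward()
        optimizer.step()
    return segmenter  # the personalized segmentation model adapted to the first object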
4. The method of claim 1, wherein the difference in image quality is analyzed along four dimensions: noise level, artifacts, tissue-boundary sharpness, and gray values; the first set threshold comprises a noise error threshold, an artifact error threshold, a sharpness error threshold, and a gray error threshold; and
the difference between the image quality of the pseudo-CT image and the reference image quality is deemed smaller than the first set threshold when the noise-level difference is smaller than the noise error threshold, the artifact difference is smaller than the artifact error threshold, the tissue-boundary sharpness difference is smaller than the sharpness error threshold, and the gray-value difference is smaller than the gray error threshold.
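To make the conjunction of the four conditions concrete, here is an illustrative check; how each dimension is measured (for example, noise as a standard deviation over a uniform region) is an assumption the claim leaves open.

def quality_within_threshold(pseudo_ct_stats, reference_stats, thresholds):
    """True only if all four per-dimension differences are below their thresholds."""
    dims = ("noise", "artifact", "sharpness", "gray")
    return all(abs(pseudo_ct_stats[d] - reference_stats[d]) < thresholds[d]
               for d in dims)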
5. A method of delineating a CBCT image, comprising:
acquiring a CBCT image to be delineated of a target object;
inputting the CBCT image to be delineated into a pre-trained target image generation model, and outputting a pseudo-CT image;
inputting the pseudo-CT image into a pre-trained target segmentation model, and outputting a CBCT delineation result of the target object;
wherein the target image generation model comprises first network parameters that map CBCT images to pseudo-CT images, and the target segmentation model comprises second network parameters that map a CT image to a delineation result of its target area; in the training stage of the target image generation model, a CBCT image of a training object is input and a pseudo-CT image of that training object is output, the image quality of the pseudo-CT image being consistent with the image quality of the CT image of the same training object.
6. The method of claim 5, wherein:
the first network parameters of the target image generation model are obtained by taking the CBCT images of a plurality of training objects as the training data of a first model and the reference image quality of the CT images of the plurality of training objects as the training label of the first model, the trained first model being the target image generation model and the trained parameters of the first model being the first network parameters; and
the second network parameters of the target segmentation model are obtained in one of the following ways:
taking the CT images of the plurality of training objects as the training data of a second model and the reference delineation result of target-area delineation of those CT images as the training label of the second model, the trained second model being the target segmentation model and the trained parameters of the second model being the second network parameters; or
taking the CT images of the plurality of training objects as the training data of a second model and the reference delineation result of target-area delineation of those CT images as the training label of the second model, the parameters obtained by this training being intermediate parameters of the second model, and then fine-tuning the intermediate parameters of the second model according to the CT image of the target object and the reference delineation result of that CT image, the fine-tuned second model being the target segmentation model and the fine-tuned parameters being the second network parameters.
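These two routes can be read as one optional extra step. The sketch below reuses the hypothetical train_until_threshold and personalize helpers sketched under claims 1 and 3, and is likewise only an assumed reading of the claim.

def second_network_parameters(second_model, population_loader, difference_fn,
                              second_threshold, optimizer,
                              target_ct=None, target_reference=None):
    """Route 1: population training alone; Route 2: plus target-object fine-tuning."""
    model = train_until_threshold(second_model, population_loader,
                                  difference_fn, second_threshold, optimizer)
    if target_ct is not None:  # Route 2: fine-tune the intermediate parameters
        model = personalize(model, target_ct, target_reference, second_threshold)
    return model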
7. An apparatus for constructing a CBCT delineation model, comprising:
a training data and label acquisition module, configured to acquire training data and training labels, wherein the training data comprises CBCT images and CT images of a plurality of training objects, and the training labels comprise a reference image quality of the CT images and a reference delineation result of target-area delineation performed on the CT images;
a first training module, configured to input the CBCT images into a first model for training, wherein the output of the first model is a pseudo-CT image, training ends when the difference between the image quality of the pseudo-CT image and the reference image quality is smaller than a first set threshold, and the trained first model is an image generation model;
a second training module, configured to input the CT images into a second model for training, wherein the output of the second model is a predicted delineation result, training ends when the difference between the predicted delineation result and the reference delineation result is smaller than a second set threshold, and the trained second model is a segmentation model; and
a delineation model generating module, configured to generate a CBCT delineation model from the image generation model and the segmentation model.
8. An apparatus for delineating CBCT images, comprising:
a data acquisition module, configured to acquire a CBCT image to be delineated of a target object;
a first processing module, configured to input the CBCT image to be delineated into a pre-trained target image generation model and to output a pseudo-CT image;
a second processing module, configured to input the pseudo-CT image into a pre-trained target segmentation model and to output a CBCT delineation result of the target object;
wherein the target image generation model comprises first network parameters that map CBCT images to pseudo-CT images, and the target segmentation model comprises second network parameters that map a CT image to a delineation result of its target area; in the training stage of the target image generation model, a CBCT image of a training object is input and a pseudo-CT image of that training object is output, the image quality of the pseudo-CT image being consistent with the image quality of the CT image of the same training object.
9. An electronic device, comprising a processor, a communication interface, a memory, and a communication bus, wherein the processor, the communication interface, and the memory communicate with one another via the communication bus;
the memory is configured to store a computer program; and
the processor is configured to implement the method of any one of claims 1-6 when executing the program stored in the memory.
10. A computer-readable storage medium having stored thereon a computer program, wherein the computer program, when executed by a processor, implements the method of any one of claims 1-6.
CN202211364596.4A 2022-11-02 2022-11-02 Method, device, equipment and medium for constructing CBCT sketching model and sketching CBCT image Pending CN116168097A (en)

Priority Applications (1)

Application Number Priority Date Filing Date Title
CN202211364596.4A CN116168097A (en) 2022-11-02 2022-11-02 Method, device, equipment and medium for constructing CBCT sketching model and sketching CBCT image

Applications Claiming Priority (1)

Application Number Priority Date Filing Date Title
CN202211364596.4A CN116168097A (en) 2022-11-02 2022-11-02 Method, device, equipment and medium for constructing CBCT sketching model and sketching CBCT image

Publications (1)

Publication Number Publication Date
CN116168097A (en) 2023-05-26

Family

ID=86420748

Family Applications (1)

Application Number Title Priority Date Filing Date
CN202211364596.4A Pending CN116168097A (en) 2022-11-02 2022-11-02 Method, device, equipment and medium for constructing CBCT sketching model and sketching CBCT image

Country Status (1)

Country Link
CN (1) CN116168097A (en)

Cited By (2)

* Cited by examiner, † Cited by third party
Publication number Priority date Publication date Assignee Title
CN117476219A (en) * 2023-12-27 2024-01-30 四川省肿瘤医院 Auxiliary method and auxiliary system for positioning CT (computed tomography) tomographic image based on big data analysis
CN117476219B (en) * 2023-12-27 2024-03-12 四川省肿瘤医院 Auxiliary method and auxiliary system for positioning CT (computed tomography) tomographic image based on big data analysis

Similar Documents

Publication Publication Date Title
JP6567179B2 (en) Pseudo CT generation from MR data using feature regression model
JP7030050B2 (en) Pseudo-CT generation from MR data using tissue parameter estimation
CN106133790B (en) Method and device for generating one or more computed tomography images based on magnetic resonance images with the aid of tissue type separation
CN107038728B (en) Contour automated determination based on iterative reconstruction
CN107909622B (en) Model generation method, medical imaging scanning planning method and medical imaging system
US8787648B2 (en) CT surrogate by auto-segmentation of magnetic resonance images
CN112508965B (en) Automatic outline sketching system for normal organs in medical image
KR102251830B1 (en) Systems and methods for registration of ultrasound and ct images
JP6626344B2 (en) Image processing apparatus, control method for image processing apparatus, and program
RU2589461C2 (en) Device for creation of assignments between areas of image and categories of elements
CN110246580B (en) Cranial image analysis method and system based on neural network and random forest
CN106462974B (en) Parameter optimization for segmenting images
Tiago et al. A data augmentation pipeline to generate synthetic labeled datasets of 3D echocardiography images using a GAN
CN113159040A (en) Method, device and system for generating medical image segmentation model
US9355454B2 (en) Automatic estimation of anatomical extents
US10453224B1 (en) Pseudo-CT generation with multi-variable regression of multiple MRI scans
CN116168097A (en) Method, device, equipment and medium for constructing CBCT sketching model and sketching CBCT image
CN112200780B (en) Bone tissue positioning method, device, computer equipment and storage medium
CN114820730B (en) CT and CBCT registration method based on pseudo CT
CN112967295B (en) Image processing method and system based on residual network and attention mechanism
CN116012526B (en) Three-dimensional CT image focus reconstruction method based on two-dimensional image
JP2019136444A (en) Information processing apparatus, information processing method, and program
Al-Dhamari et al. Automatic cochlear multimodal 3D image segmentation and analysis using atlas–model-based method
CN109272486A (en) Training method, device, equipment and the storage medium of MR image prediction model
CN117078612B (en) CBCT image-based rapid three-dimensional dose verification method and device

Legal Events

Date Code Title Description
PB01 Publication
SE01 Entry into force of request for substantive examination