WO2024092996A1 - Medical image processing method, apparatus, computer device and storage medium - Google Patents

Medical image processing method, apparatus, computer device and storage medium (医学图像处理方法、装置、计算机设备及存储介质) Download PDF

Info

Publication number
WO2024092996A1
WO2024092996A1 (PCT/CN2022/142427)
Authority
WO
WIPO (PCT)
Prior art keywords
image sequence
reconstructed image
error
image
reconstructed
Prior art date
Application number
PCT/CN2022/142427
Other languages
English (en)
French (fr)
Inventor
李印生
梁栋
刘新
郑海荣
Original Assignee
中国科学院深圳先进技术研究院
Priority date (The priority date is an assumption and is not a legal conclusion. Google has not performed a legal analysis and makes no representation as to the accuracy of the date listed.)
Filing date
Publication date
Application filed by 中国科学院深圳先进技术研究院
Publication of WO2024092996A1

Classifications

    • G - PHYSICS
    • G06 - COMPUTING; CALCULATING OR COUNTING
    • G06T - IMAGE DATA PROCESSING OR GENERATION, IN GENERAL
    • G06T7/00 - Image analysis
    • G06T7/0002 - Inspection of images, e.g. flaw detection
    • G06T7/0012 - Biomedical image inspection
    • G - PHYSICS
    • G06 - COMPUTING; CALCULATING OR COUNTING
    • G06V - IMAGE OR VIDEO RECOGNITION OR UNDERSTANDING
    • G06V10/00 - Arrangements for image or video recognition or understanding
    • G06V10/70 - Arrangements for image or video recognition or understanding using pattern recognition or machine learning
    • G06V10/77 - Processing image or video features in feature spaces; using data integration or data reduction, e.g. principal component analysis [PCA] or independent component analysis [ICA] or self-organising maps [SOM]; Blind source separation
    • G06V10/774 - Generating sets of training patterns; Bootstrap methods, e.g. bagging or boosting
    • G - PHYSICS
    • G06 - COMPUTING; CALCULATING OR COUNTING
    • G06V - IMAGE OR VIDEO RECOGNITION OR UNDERSTANDING
    • G06V10/00 - Arrangements for image or video recognition or understanding
    • G06V10/70 - Arrangements for image or video recognition or understanding using pattern recognition or machine learning
    • G06V10/82 - Arrangements for image or video recognition or understanding using pattern recognition or machine learning using neural networks
    • G - PHYSICS
    • G06 - COMPUTING; CALCULATING OR COUNTING
    • G06T - IMAGE DATA PROCESSING OR GENERATION, IN GENERAL
    • G06T2207/00 - Indexing scheme for image analysis or image enhancement
    • G06T2207/10 - Image acquisition modality
    • G06T2207/10016 - Video; Image sequence
    • G - PHYSICS
    • G06 - COMPUTING; CALCULATING OR COUNTING
    • G06T - IMAGE DATA PROCESSING OR GENERATION, IN GENERAL
    • G06T2207/00 - Indexing scheme for image analysis or image enhancement
    • G06T2207/10 - Image acquisition modality
    • G06T2207/10072 - Tomographic images
    • G06T2207/10081 - Computed x-ray tomography [CT]
    • G - PHYSICS
    • G06 - COMPUTING; CALCULATING OR COUNTING
    • G06T - IMAGE DATA PROCESSING OR GENERATION, IN GENERAL
    • G06T2207/00 - Indexing scheme for image analysis or image enhancement
    • G06T2207/10 - Image acquisition modality
    • G06T2207/10072 - Tomographic images
    • G06T2207/10088 - Magnetic resonance imaging [MRI]
    • G - PHYSICS
    • G06 - COMPUTING; CALCULATING OR COUNTING
    • G06T - IMAGE DATA PROCESSING OR GENERATION, IN GENERAL
    • G06T2207/00 - Indexing scheme for image analysis or image enhancement
    • G06T2207/10 - Image acquisition modality
    • G06T2207/10072 - Tomographic images
    • G06T2207/10104 - Positron emission tomography [PET]
    • G - PHYSICS
    • G06 - COMPUTING; CALCULATING OR COUNTING
    • G06T - IMAGE DATA PROCESSING OR GENERATION, IN GENERAL
    • G06T2207/00 - Indexing scheme for image analysis or image enhancement
    • G06T2207/20 - Special algorithmic details
    • G06T2207/20081 - Training; Learning
    • G - PHYSICS
    • G06 - COMPUTING; CALCULATING OR COUNTING
    • G06T - IMAGE DATA PROCESSING OR GENERATION, IN GENERAL
    • G06T2207/00 - Indexing scheme for image analysis or image enhancement
    • G06T2207/20 - Special algorithmic details
    • G06T2207/20084 - Artificial neural networks [ANN]

Definitions

  • The embodiments of the present application relate to the field of medical image processing, and in particular to a medical image processing method, apparatus, computer device and storage medium.
  • At present, when an image processing model is used for image processing, the model is usually trained first: training samples are input into the image processing model, and the error between the label or reference image in each training sample and the estimated image output by the model is minimized by adjusting the network parameters of the model; the trained image processing model is then used to process an input image to obtain a target image.
  • Clearly, the stability of such a trained image processing model depends on how comprehensive the training samples are. If the training samples are not comprehensive, the trained model is not robust and cannot guarantee that every target image has high image quality; in other words, existing trained image processing models generalize poorly.
  • The embodiments of the present application provide a medical image processing method, apparatus, computer device and storage medium, which address the poor generalizability of existing trained image processing models.
  • In a first aspect, an embodiment of the present application provides a medical image processing method, comprising: acquiring an initial image sequence corresponding to measurement data; inputting the initial image sequence into a trained image processing model to obtain a reconstructed image sequence, the temporal resolution of the reconstructed image sequence being higher than that of the initial image sequence; determining whether a first error between predicted data corresponding to the reconstructed image sequence and the measurement data is greater than a first error threshold; if not, taking the reconstructed image sequence as a target image sequence; if so, adjusting the network parameters of the trained image processing model according to the first error, inputting the initial image sequence into the parameter-adjusted trained image processing model to update the reconstructed image sequence, and returning to the step of determining whether the first error between the predicted data corresponding to the reconstructed image sequence and the measurement data is greater than the first error threshold.
  • In a second aspect, an embodiment of the present application further provides a medical image processing apparatus, comprising:
  • an acquisition module, configured to acquire an initial image sequence corresponding to measurement data;
  • an image processing module, configured to input the initial image sequence into a trained image processing model to obtain a reconstructed image sequence, the temporal resolution of the reconstructed image sequence being higher than that of the initial image sequence;
  • an error determination module, configured to determine whether a first error between predicted data corresponding to the reconstructed image sequence and the measurement data is greater than a first error threshold;
  • an output module, configured to, if not, take the reconstructed image sequence as a target image sequence; and
  • a back propagation module, configured to, if so, adjust the network parameters of the trained image processing model according to the first error, input the initial image sequence into the parameter-adjusted trained image processing model to update the reconstructed image sequence, and return to the step of determining whether the first error between the predicted data corresponding to the reconstructed image sequence and the measurement data is greater than the first error threshold.
  • In a third aspect, an embodiment of the present application further provides a computer device, comprising:
  • one or more processors; and
  • a storage device for storing one or more programs;
  • wherein, when the one or more programs are executed by the one or more processors, the one or more processors implement the medical image processing method described in any of the embodiments.
  • In a fourth aspect, an embodiment of the present application further provides a storage medium containing computer-executable instructions which, when executed by a computer processor, are used to perform the medical image processing method described in any of the embodiments.
  • Compared with the prior art, the technical solution of the medical image processing method provided in this embodiment reconstructs the initial image sequence obtained from the measurement and, by introducing the error between the predicted data corresponding to the reconstructed image sequence and the measurement data, determines whether the image processing model needs further adjustment, thereby effectively improving the accuracy of the image processing model.
  • Furthermore, because the measurement data is the scan data collected by the medical imaging system and is the basis for image reconstruction, adjusting the network parameters of the trained image processing model based on the first error is in effect adjusting them with the measurement data as a reference. This makes it possible to adjust the network parameters of the trained model case by case, improves the flexibility, accuracy and generalizability of its parameter settings, and ensures that it outputs a target image of high image quality for different types of input images.
  • FIG. 1 is a flow chart of a medical image processing method provided in an embodiment of the present application;
  • FIG. 2 is a schematic flow chart of another medical image processing method provided in an embodiment of the present application;
  • FIG. 3 is a schematic structural diagram of an image processing model provided in an embodiment of the present application;
  • FIG. 4 is a flow chart of a first error determination method provided in an embodiment of the present application;
  • FIG. 5 is a flow chart of a method for training an image processing model provided in an embodiment of the present application;
  • FIG. 6 is a structural block diagram of a medical image processing apparatus provided in an embodiment of the present application;
  • FIG. 7 is a structural block diagram of another medical image processing apparatus provided in an embodiment of the present application;
  • FIG. 8 is a schematic structural diagram of a C-arm CT imaging system provided in an embodiment of the present application;
  • FIG. 9 is a schematic structural diagram of a diagnostic CT imaging system provided in an embodiment of the present application;
  • FIG. 10 is a schematic structural diagram of another diagnostic CT imaging system provided in an embodiment of the present application;
  • FIG. 11 is a schematic structural diagram of a computer device in a CT imaging system provided in an embodiment of the present application.
  • FIG. 1 is a flow chart of a medical image processing method provided in an embodiment of the present application. The technical solution of this embodiment is applicable to improving the temporal resolution of medical images. The method can be performed by the medical image processing apparatus provided in the embodiments of the present application, which can be implemented in software and/or hardware and configured in the processor of a computer device. As shown in FIG. 1 and FIG. 2, the method includes the following steps.
  • S110: Acquire an initial image sequence corresponding to the measurement data.
  • The measurement data may be scan data of any part of the human body collected by a medical imaging system, where the medical imaging system may be a CT imaging system, a PET imaging system or an MRI imaging system.
  • The CT imaging system may be a diagnostic CT imaging system, a C-arm CT imaging system, or the like.
  • Take the scan data collected by a C-arm CT imaging system as an example. Because the C-arm CT imaging system rotates slowly, C-arm CT images have a low temporal resolution and few temporal sampling points. In one embodiment, taking dynamic imaging of brain tissue as an example, within a blood-circulation time range of 30 seconds to 1 minute the C-arm CT collects projection data frames at 8 sampling time points to obtain the measurement data; that is, the temporal resolution of the prior art is 6 seconds.
  • The initial image sequence may include at least one initial image.
  • The measurement data may include at least one complete data set, where a complete data set is a data set that can be acquired by the medical imaging system and from which an initial image can be accurately reconstructed using an image reconstruction method such as filtered back projection or the Fourier transform.
  • By performing image reconstruction on the measurement data, an initial image sequence may be obtained, where the initial image sequence may include at least one frame of initial image.
  • When the medical imaging system is a CT imaging system, any initial image is reconstructed from at least one measurement data frame collected by the CT imaging system at at least one projection angle.
  • Each projection angle is actually an angular range, and the moment at which the system moves to the middle position of that angular range is taken as the target moment, i.e., the aforementioned sampling time point.
  • the initial image sequence can be used to describe the change of the spatial distribution of the X-ray attenuation coefficient in the scanned object (such as the human head) over time.
  • S120: Input the initial image sequence into the trained image processing model to obtain a reconstructed image sequence, the temporal resolution of the reconstructed image sequence being higher than that of the initial image sequence.
  • In one embodiment, the trained image processing model may be obtained by training a deep convolutional model and is used to upsample the initial image sequence so that the temporal resolution of the reconstructed image sequence is higher than that of the initial image sequence. As shown in FIG. 2, the reconstructed image sequence contains twice as many reconstructed images as the initial image sequence contains initial images, so the frame interval of the reconstructed image sequence is half that of the initial image sequence, i.e., its temporal resolution is doubled.
  • In one embodiment, the image processing model is the neural network shown in FIG. 3.
  • The neural network may have 24 convolutional layers of three types, and the parameters of all of these convolutional layers are learnable.
  • The first type of convolutional layer uses a 3×3 convolution kernel with a stride of 1 and is marked "Conv, 3×3, S1" in FIG. 3, followed by a batch normalization operation (Bnorm) and a rectified linear unit (ReLU) activation function.
  • The second type of convolutional layer uses a 3×3 convolution kernel with a stride of 2 and is marked "Conv, 3×3, S2" in FIG. 3, followed by Bnorm and ReLU.
  • The third type of convolutional layer uses a 1×1 convolution kernel with a stride of 1 and is marked "Conv, 1×1, S1" in FIG. 3, followed by a linear activation function (Linear). All convolutional layers have corresponding learnable bias terms, and each convolutional layer keeps its input and output at the same spatial dimensions.
  • The upsampling layers use a 2×2 kernel, are marked "Up-sample 2×2" in FIG. 3, and all use bilinear interpolation. Shortcut links (Skip + Concatenate, the solid black arrows in FIG. 3) are used to facilitate network training.
  • The convolution kernels in the network parameters are initialized with Glorot uniformly distributed random numbers, the bias terms are initialized to 0, and the remaining parameter settings and initialization values use the defaults.
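  • For illustration, the following is a minimal PyTorch-style sketch of the three convolutional layer types, the 2×2 bilinear upsampling and the Glorot/zero initialization described above; the layer count, channel widths, frame handling and skip wiring shown here are simplifying assumptions, not the exact 24-layer topology of FIG. 3.

```python
import torch
import torch.nn as nn

def conv_bn_relu(in_ch, out_ch, stride):
    # "Conv, 3x3, S1" / "Conv, 3x3, S2": 3x3 kernel, batch normalization, ReLU
    return nn.Sequential(
        nn.Conv2d(in_ch, out_ch, kernel_size=3, stride=stride, padding=1, bias=True),
        nn.BatchNorm2d(out_ch),
        nn.ReLU(inplace=True),
    )

class TemporalUpsampleNet(nn.Module):
    """Simplified encoder-decoder with a shortcut link (Skip + Concatenate)."""
    def __init__(self, in_frames=8, out_frames=16, width=64):
        super().__init__()
        self.enc1 = conv_bn_relu(in_frames, width, stride=1)
        self.down = conv_bn_relu(width, width * 2, stride=2)        # "Conv, 3x3, S2"
        self.bottleneck = conv_bn_relu(width * 2, width * 2, stride=1)
        self.up = nn.Upsample(scale_factor=2, mode="bilinear", align_corners=False)  # "Up-sample 2x2"
        self.dec1 = conv_bn_relu(width * 2 + width, width, stride=1)  # concatenated skip input
        # "Conv, 1x1, S1" with a linear activation as the output head
        self.head = nn.Conv2d(width, out_frames, kernel_size=1, stride=1, bias=True)

    def forward(self, x):                       # x: (batch, in_frames, H, W)
        e1 = self.enc1(x)
        b = self.bottleneck(self.down(e1))
        d = self.dec1(torch.cat([self.up(b), e1], dim=1))
        return self.head(d)                     # (batch, out_frames, H, W)

def init_weights(m):
    # Glorot (Xavier) uniform for convolution kernels, zeros for bias terms
    if isinstance(m, nn.Conv2d):
        nn.init.xavier_uniform_(m.weight)
        nn.init.zeros_(m.bias)

model = TemporalUpsampleNet().apply(init_weights)
```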
  • S130: Determine whether the first error between the predicted data corresponding to the reconstructed image sequence and the measurement data is greater than a first error threshold.
  • The predicted data is projection data determined on the basis of the reconstructed image sequence; the specific determination method is described in the following embodiments.
  • When it is detected that the trained image processing model has output a reconstructed image sequence, the error between the predicted data corresponding to that reconstructed image sequence and the measurement data is determined, taken as the first error, and compared against the first error threshold.
  • S140: If not, take the reconstructed image sequence as the target image sequence.
  • If the first error is less than or equal to the first error threshold, the first error between the predicted data corresponding to the reconstructed image sequence and the measurement data is within the acceptable range; that is, the image quality of the reconstructed image sequence has reached the standard expected by the user, and the reconstructed image sequence is therefore taken as the target image sequence.
  • S150: If so, adjust the network parameters of the trained image processing model according to the first error; input the initial image sequence into the parameter-adjusted trained image processing model to update the reconstructed image sequence; and return to the step of determining whether the first error between the predicted data corresponding to the reconstructed image sequence and the measurement data is greater than the first error threshold.
  • If the first error is greater than the first error threshold, the first error between the predicted data corresponding to the reconstructed image sequence and the measurement data exceeds the acceptable range; that is, the quality of the reconstructed images does not reach the image quality expected by the user. The network parameters of the trained image processing model are therefore adjusted according to the first error, the initial image sequence is input into the parameter-adjusted trained image processing model to update the reconstructed image sequence, and the method returns to the step of determining whether the first error between the predicted data corresponding to the updated reconstructed image sequence and the measurement data is greater than the first error threshold, executing the corresponding step according to the result.
  • In one embodiment, the first error threshold is configured as a modifiable item; that is, the user can adjust it within a set adjustable range according to actual needs.
  • Because the measurement data is the scan data collected by the medical imaging system and is the basis for image reconstruction, the first error determined from the measurement data reflects the accuracy of the predicted data and, in turn, the accuracy of the reconstructed images.
  • Adjusting the network parameters of the trained image processing model based on the first error therefore amounts to adjusting them with the predicted data as a reference, which improves the accuracy of the network parameters of the trained model and thus the accuracy of the updated reconstructed image sequence.
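  • As an illustration of steps S130 to S150, the following is a minimal sketch of this measurement-consistency loop, assuming PyTorch; forward_model stands in for the set signal model of the imaging system (for example a forward projection operator), is assumed to be differentiable, and the function names, optimizer and iteration cap are illustrative rather than part of the described method.

```python
import torch

def adapt_to_measurement(model, initial_seq, measured_data, forward_model,
                         error_threshold, lr=1e-4, max_iters=50):
    """Refine the trained model on one case until the data-consistency error is acceptable."""
    optimizer = torch.optim.Adam(model.parameters(), lr=lr)
    for _ in range(max_iters):
        reconstructed_seq = model(initial_seq)               # S120: temporal upsampling
        predicted_data = forward_model(reconstructed_seq)    # predicted projections / k-space lines
        first_error = torch.sum((predicted_data - measured_data) ** 2)   # S130
        if first_error.item() <= error_threshold:            # S140: accept the reconstruction
            break
        optimizer.zero_grad()                                # S150: adjust network parameters
        first_error.backward()
        optimizer.step()
    return model(initial_seq).detach()
```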
  • the target image sequence includes a region of interest, which may include but is not limited to soft tissue, blood vessels, bones, etc. Since the target image sequence has a higher temporal resolution, the target image sequence can provide richer soft tissue change information.
  • Because the temporal resolution of the reconstructed image sequence is higher than that of the initial image sequence, and the reconstructed image sequence is used for clinical diagnosis, the disclosed embodiment allows the frequency range of the measurement data to be increased by lowering the temporal resolution of the initial image sequence when MR measurement data is acquired.
  • Specifically, the temporal resolution of MR images used for clinical diagnosis is determined and taken as the target temporal resolution, i.e., the temporal resolution of the reconstructed images; based on the target temporal resolution and the temporal-resolution improvement ratio of the trained image processing model, the temporal resolution of the initial image sequence is determined and taken as the initial temporal resolution, and the MR measurement data of the subject is acquired at the initial temporal resolution.
  • In one embodiment, the target temporal resolution is set equal to the temporal resolution of existing clinical-diagnosis MR images and the temporal-resolution improvement ratio is 5, so the initial temporal resolution is 1/6 of the target temporal resolution.
  • Compared with the prior art, when the MR measurement data of the subject is acquired at this initial temporal resolution, the acquisition time that can be allocated to the partial measurement data used to reconstruct any one initial image is 6 times the existing data acquisition time, so the user can increase the frequency range of that partial measurement data by increasing its acquisition time.
  • For MR images, measurement data with a larger frequency range corresponds to an initial image with a higher spatial resolution, and the spatial resolution of a reconstructed image is the same as that of the initial image; that is, the disclosed embodiment can indirectly improve the spatial resolution of MR images without lowering the temporal resolution of the MR images used for clinical diagnosis.
  • By analogy, the disclosed embodiment allows the cumulative number of photons in the measurement data to be increased by lowering the temporal resolution of the initial image sequence when PET measurement data is collected.
  • Specifically, the temporal resolution of PET images used for clinical diagnosis is determined and taken as the target temporal resolution, i.e., the temporal resolution of the reconstructed images; based on the target temporal resolution and the temporal-resolution improvement ratio of the trained image processing model, the temporal resolution of the initial image sequence is determined and taken as the initial temporal resolution, and the PET measurement data of the subject is collected at the initial temporal resolution.
  • In one embodiment, the target temporal resolution is set equal to the temporal resolution of existing clinical-diagnosis PET images and the temporal-resolution improvement ratio is 5, so the initial temporal resolution is 1/6 of the target temporal resolution.
  • Compared with the prior art, when the PET measurement data of the subject is collected at this initial temporal resolution, the acquisition time that can be allocated to the partial measurement data used to reconstruct any one initial image is 6 times the existing data acquisition time, so the user can increase the cumulative number of photons of that partial measurement data by increasing its acquisition time.
  • For PET images, measurement data with more accumulated photons corresponds to an initial image with a higher contrast resolution, and the contrast resolution of a reconstructed image is the same as that of the initial image; that is, the disclosed embodiment can indirectly improve the contrast resolution of PET images without lowering the temporal resolution of the PET images used for clinical diagnosis.
  • Compared with the prior art, in the technical solution of the medical image processing method provided in this embodiment, because the measurement data is the scan data collected by the medical imaging system and is the basis for image reconstruction, adjusting the network parameters of the trained image processing model based on the first error is in effect adjusting them with the measurement data as a reference.
  • This makes it possible to adjust the network parameters of the trained image processing model case by case, improves the flexibility, accuracy and generalizability of its parameter settings, and ensures that it outputs a target image of high image quality for different types of input images.
  • FIG4 is a flow chart of a first error determination method provided in an embodiment of the present application. This embodiment is used to determine a first error between the predicted data and the measured data corresponding to the reconstructed image sequence in the aforementioned embodiment.
  • the method comprises:
  • Step a1: When the reconstructed image sequence is an MR image sequence, determine at least one radial line of data corresponding to the reconstructed image, and use that radial line data as the partial prediction data corresponding to the reconstructed image.
  • In one embodiment, when the reconstructed image sequence is an MR image sequence, the starting azimuth and the azimuth interval corresponding to the measurement data are determined, the azimuth set corresponding to the reconstructed image is determined, and at least one radial line of data corresponding to the reconstructed image is determined from the azimuth set.
  • That is, the azimuth interval and the starting azimuth of the k-space data corresponding to the measurement data are determined first, which fixes the radial-line distribution of the measurement data; the at least one radial line of data corresponding to the reconstructed image under that distribution is then determined and used as the corresponding partial prediction data.
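  • As a small illustration of the radial-line bookkeeping described above, the following sketch derives the azimuth set of the k-th reconstructed frame from a starting azimuth and a constant azimuth interval; the frame-to-line assignment and the golden-angle-like example value are assumptions made only for this example.

```python
import numpy as np

def azimuths_for_frame(k, lines_per_frame, start_azimuth_deg, azimuth_step_deg):
    """Azimuth angles (degrees) of the radial k-space lines assigned to reconstructed frame k.

    Assumes the radial lines are acquired at a constant azimuth increment and that frame k
    covers lines k*lines_per_frame ... (k+1)*lines_per_frame - 1 of the acquisition order.
    """
    first = k * lines_per_frame
    line_indices = np.arange(first, first + lines_per_frame)
    return (start_azimuth_deg + line_indices * azimuth_step_deg) % 360.0

# e.g. frame 3, 13 lines per frame, starting at 0 degrees with golden-angle-like spacing
print(azimuths_for_frame(3, 13, 0.0, 111.246))
```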
  • Step a2: When the reconstructed image sequence is a CT image sequence or a PET image sequence, determine at least one projection angle corresponding to the reconstructed image and calculate the partial prediction data corresponding to the reconstructed image along the at least one projection angle according to a set signal model; when the reconstructed image sequence is a PET image sequence, determine the time window of the target temporal resolution corresponding to the reconstructed image and calculate, according to the set signal model, the partial prediction data of the current reconstructed image corresponding to that time window.
  • the projection angle may be the middle value of the rotation angle range of the gantry when the ray source of the CT imaging system outputs rays. For example, if the ray source emits a beam at 125-126 degrees, then 125-126 degrees is a projection angle.
  • one or more projection angles corresponding to the reconstructed image may be determined by the following steps, including:
  • Step b1: Determine the target time set corresponding to the reconstructed image according to the initial acquisition time of the measurement data and the temporal resolution corresponding to the trained image processing model.
  • Here the temporal resolution of the initial image sequence is the initial temporal resolution, and the temporal resolution of the reconstructed image sequence is the expected temporal resolution.
  • the moment when the CT imaging system moves to the middle position of the projection angle is taken as the target moment. For example, if the projection angle is 125 degrees to 126 degrees, the moment when it moves to 125.5 degrees is taken as the target moment corresponding to the projection angle.
  • the target moment set is a combination of the target moments corresponding to the one or more projection angles.
  • Step b2: Determine a starting projection angle, and determine at least one projection angle corresponding to the reconstructed image according to the starting projection angle, the gantry rotation speed of the imaging system and the target time set.
  • the starting projection angle is the first projection angle of the CT imaging system corresponding to the reconstructed image.
  • In one embodiment, the starting projection angle may be taken as the first projection angle of the corresponding initial image: if the current reconstructed image has an even index, the first projection angle of any even-indexed initial image is used; if the current reconstructed image has an odd index, the first projection angle of any odd-indexed initial image is used.
  • For example, the initial image labeled 1 is reconstructed from the projection data collected by the CT imaging system during forward rotation, where the middle value of its starting projection angle is 0 degrees and the middle value of its last projection angle is 180 degrees;
  • the initial image labeled 2 is reconstructed from the scan data collected by the CT imaging system during reverse rotation, where the middle value of its starting projection angle is 180 degrees and the middle value of its last projection angle is 0 degrees.
  • For any reconstructed image, once the starting projection angle, the gantry rotation speed of the CT imaging system and the target time set have been determined, the gantry rotation angle corresponding to each target time, i.e., the middle value of the projection angle, can be obtained.
  • From this middle value and a set projection threshold, the projection angle corresponding to each target time can be obtained, giving the at least one projection angle corresponding to the target time set, which is the at least one projection angle corresponding to the reconstructed image.
  • In one embodiment, this step need not consider the rotation direction of the CT imaging system; for example, reconstructed images with odd indices correspond to forward rotation of the gantry and reconstructed images with even indices correspond to reverse rotation of the gantry.
  • When equal-interval sampling is used, the projection-angle positions of the reconstructed images coincide regardless of whether the gantry rotates forward or in reverse, so it is sufficient to determine the at least one projection angle for any one reconstructed image in the sequence; the other reconstructed images can reuse those projection angles to calculate their corresponding partial prediction data.
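  • The following sketch illustrates steps b1 and b2 under the stated assumptions (a constant gantry rotation speed and target moments placed at the centres of equal sub-intervals of the frame's time span); the function and parameter names are illustrative, not part of the described method.

```python
import numpy as np

def projection_angles_for_frame(views_per_frame, frame_period, start_angle_deg,
                                gantry_speed_deg_per_s, forward=True):
    """Centre angles of the projections assigned to one reconstructed frame.

    frame_period is the time span covered by the frame (T/M in the text); the target
    moments are assumed to lie at the centres of equal sub-intervals of that span, and
    the angle at each target moment follows from the constant gantry rotation speed.
    """
    local_times = (np.arange(views_per_frame) + 0.5) * frame_period / views_per_frame
    direction = 1.0 if forward else -1.0   # forward or reverse gantry rotation
    return start_angle_deg + direction * gantry_speed_deg_per_s * local_times

# e.g. 30 views over a 3 s frame, gantry at 60 deg/s, forward rotation starting at 0 degrees
print(projection_angles_for_frame(30, 3.0, 0.0, 60.0, forward=True))
```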
  • When determining the partial measurement data corresponding to the partial prediction data of a reconstructed image, the partial measurement data corresponding to each reconstructed image in the reconstructed image sequence is determined from the ratio of the temporal resolution of the reconstructed image sequence to that of the initial image sequence, where temporal resolution is expressed in frames per second; illustratively, the ratio is M:1, where M is a natural number greater than or equal to 2.
  • If the time required for the system to acquire a complete measurement data set is T, then each reconstructed image corresponds to the measurement data acquired within a T/M time period; in the reconstructed image sequence, the k-th reconstructed image corresponds to the partial measurement data acquired between (k-1)T/M and kT/M.
  • When the reconstructed image sequence is a CT image sequence, the partial prediction data corresponding to each reconstructed image is determined; based on the consistency of the projection angles, the correspondence between the at least one prediction data frame in the partial prediction data and the at least one measurement data frame in the corresponding partial measurement data is established, and the difference between each pair of corresponding projection data frames is calculated, giving the error between the partial prediction data of each reconstructed image and its corresponding partial measurement data.
  • According to the set signal model, the projection data of the reconstructed image along each projection angle is calculated to obtain the partial predicted projection data corresponding to the reconstructed image.
  • The set signal model is an existing projection model, such as a forward projection model.
  • The errors corresponding to the partial predicted projection data of all reconstructed images in the reconstructed image sequence are accumulated to obtain a total error, and the total error is taken as the first error.
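  • Putting the pieces together, the following sketch shows how the first error might be accumulated over the reconstructed frames; here forward_project stands in for the set signal model and measured_frames[k] for the partial measurement data acquired between (k-1)T/M and kT/M, both of which are assumptions of this example.

```python
import numpy as np

def first_error(reconstructed_frames, measured_frames, projection_angles, forward_project):
    """Sum of the per-frame errors between predicted and measured projection data.

    reconstructed_frames : sequence of M reconstructed images
    measured_frames      : sequence of M arrays, the partial measurement data per time window
    projection_angles    : sequence of M arrays of projection angles (reusable under uniform sampling)
    forward_project      : callable(image, angles) -> predicted projection data frames
    """
    total = 0.0
    for image, measured, angles in zip(reconstructed_frames, measured_frames, projection_angles):
        predicted = forward_project(image, angles)            # partial prediction data
        total += float(np.sum((predicted - measured) ** 2))   # error value for this frame
    return total
```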
  • This embodiment is used to determine the predicted data corresponding to the reconstructed image sequence according to the signal model of a medical imaging system, and calculate the error between the predicted data and the measured data to obtain a first error. Since the measured data is a reference for image reconstruction, the first error can reflect the accuracy of the predicted data, thereby reflecting the accuracy of the reconstructed image.
  • Fig. 5 is a flow chart of the training method of the image processing model provided in the embodiment of the present application.
  • the embodiment of the present application is used to illustrate the training method of the image processing model in the above-mentioned embodiment by using a CT image sequence as a training sample.
  • the method of this embodiment includes:
  • S510: Acquire training samples, where each training sample includes a first image sequence and a second image sequence corresponding to the first image sequence, and the temporal resolution of the first image sequence is lower than that of the second image sequence.
  • the first image sequence includes a region of interest, which may include but is not limited to at least one of a soft tissue area, a blood vessel, and a bone.
  • In one embodiment, the second image sequence is acquired by the following steps.
  • Step b1: Obtain CT cerebrovascular images of a set number of patients and the CT cerebral perfusion parameter images corresponding to those CT cerebrovascular images.
  • Step b2: For any patient's CT cerebrovascular image and its corresponding CT cerebral perfusion parameter image, determine the arterial input function and the venous output function from the CT cerebrovascular image, and establish the patient's second image sequence from the perfusion convolution model, the CT cerebral perfusion parameter image, the arterial input function, the venous output function and the expected temporal resolution.
  • Step b3: Determine one or more projection angles corresponding to each image in the second image sequence, and calculate the one or more projection data frames corresponding to each image along its one or more projection angles.
  • Step b4: According to the set initial temporal resolution, determine the projection data frames used to reconstruct each image of the first image sequence, and perform image reconstruction on the determined projection data frames to obtain the first image sequence, where the initial temporal resolution is lower than the expected temporal resolution.
  • This embodiment aims to first determine the second image sequence, and then determine the first image sequence corresponding to the second image sequence based on the projection data corresponding to the second image sequence and the initial time resolution.
  • the first image sequence and its corresponding second image sequence are used as a training sample.
  • In one embodiment, after the second image sequence of each patient has been determined as above, images are extracted from the second image sequence according to the initial temporal resolution to obtain the first image sequence, and the first image sequence and its corresponding second image sequence are used as a training sample.
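  • The following is a minimal sketch of this simpler sample-construction variant, in which the low-temporal-resolution first sequence is obtained by subsampling the high-temporal-resolution second sequence; the subsampling factor M is an assumption of the example.

```python
def make_training_pair(second_sequence, M):
    """Build a (first, second) training pair by temporal subsampling.

    second_sequence : list or array of frames at the expected temporal resolution
    M               : temporal subsampling factor (expected resolution / initial resolution)
    """
    first_sequence = second_sequence[::M]   # keep every M-th frame as the low-resolution input
    return first_sequence, second_sequence
```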
  • S520: Input the first image sequence of the training sample into the image processing model, and adjust the network parameters of the image processing model to minimize the error between the first estimated image sequence and the second image sequence, where the first estimated image sequence is the estimated image sequence obtained by the image processing model upsampling the first image sequence.
  • The first image sequence of the training sample is input into the image processing model so that the model, taking the second image sequence as the reference, computes the error between the second image sequence and the first estimated image sequence and adjusts its network parameters based on that error, until the error between the second image sequence and the first estimated image sequence falls within a set error range, yielding the trained image processing model.
  • In one embodiment, the error between the second image sequence and the first estimated image sequence is a two-norm error.
  • It can be understood that, by configuring the combination of temporal resolutions of the first and second image sequences, the image processing model can be trained into trained models with different upsampling capabilities; that is, the model can be trained to raise the temporal resolution of an image sequence by a set multiple. The user can therefore select, according to actual needs, a trained image processing model with the corresponding temporal-resolution multiple in order to raise the temporal resolution of the image sequence of interest.
  • Because the first image sequence has a lower temporal resolution and the second image sequence is the corresponding high-temporal-resolution sequence, training the image processing model with the first image sequence as its input and the second image sequence as the reference allows the model to learn upsampling along the temporal-resolution dimension, so that the trained model can raise the temporal resolution of an input image sequence and produce an estimated image sequence, i.e., a reconstructed image sequence, with a temporal resolution higher than that of the input.
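  • As an illustration of step S520, the following sketch shows a training loop that minimizes the two-norm error between the first estimated image sequence and the second image sequence, assuming PyTorch; the optimizer, learning rate and data-loading details are illustrative.

```python
import torch
import torch.nn as nn

def train(model, loader, epochs=100, lr=1e-3):
    """Adjust the network parameters to minimize the two-norm error against the reference."""
    optimizer = torch.optim.Adam(model.parameters(), lr=lr)
    loss_fn = nn.MSELoss()                       # mean squared (two-norm) error per element
    for _ in range(epochs):
        for first_seq, second_seq in loader:     # low- / high-temporal-resolution pair
            estimated = model(first_seq)         # first estimated image sequence (upsampled)
            loss = loss_fn(estimated, second_seq)
            optimizer.zero_grad()
            loss.backward()
            optimizer.step()
    return model
```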
  • FIG6 is a block diagram of a medical image processing device provided by another embodiment of the present application.
  • the device is used to execute the medical image processing method provided by any of the above embodiments, and the device can be implemented in software or hardware.
  • the device includes:
  • an acquisition module 51, configured to acquire an initial image sequence corresponding to measurement data;
  • an image processing module 52, configured to input the initial image sequence into a trained image processing model to obtain a reconstructed image sequence, the temporal resolution of the reconstructed image sequence being higher than that of the initial image sequence;
  • an error determination module 53, configured to determine whether a first error between predicted data corresponding to the reconstructed image sequence and the measurement data is greater than a first error threshold;
  • an output module 54, configured to, if not, take the reconstructed image sequence as a target image sequence; and
  • a back propagation module 55, configured to, if so, adjust the network parameters of the trained image processing model according to the first error, input the initial image sequence into the parameter-adjusted trained image processing model to update the reconstructed image sequence, and return to the step of determining whether the first error between the predicted data corresponding to the reconstructed image sequence and the measurement data is greater than the first error threshold.
  • the error determination module 53 includes:
  • an error value determination unit configured to perform the following operations on each reconstructed image in the reconstructed image sequence to obtain an error value corresponding to each reconstructed image: determining partial prediction data corresponding to the current reconstructed image; obtaining partial measurement data corresponding to the current reconstructed image; and calculating an error value between the partial prediction data and the partial measurement data as the error value corresponding to the current reconstructed image;
  • the error determination unit is used to accumulate and calculate the error values corresponding to each reconstructed image in the reconstructed image sequence to obtain the first error.
  • In implementation, the partial prediction data can be determined in different ways depending on the type of reconstructed image sequence:
  • 1) When the reconstructed image sequence is an MR image sequence, determine at least one radial line of data corresponding to the current reconstructed image and use that radial line data as its partial prediction data. Specifically, determine the starting azimuth and the azimuth interval corresponding to the measurement data, determine the azimuth set corresponding to the reconstructed image, and determine the at least one radial line of data from the azimuth set.
  • 2) When the reconstructed image sequence is a CT image sequence, determine at least one projection angle corresponding to the current reconstructed image and calculate its partial prediction data along the at least one projection angle according to a set signal model. Specifically, determine the target time set corresponding to the reconstructed image according to the initial acquisition time of the measurement data and the expected temporal resolution corresponding to the trained image processing model; then determine a starting angle and, from the starting angle, the gantry rotation speed of the imaging system and the target time set, determine the at least one projection angle corresponding to the current reconstructed image.
  • 3) When the reconstructed image sequence is a PET image sequence, determine the time window of the target temporal resolution corresponding to the current reconstructed image and calculate, according to a set signal model, the partial prediction data of the current reconstructed image corresponding to that time window. Specifically, determine the target time-window set corresponding to the reconstructed image according to the initial acquisition time of the measurement data and the expected temporal resolution corresponding to the trained image processing model, and determine the partial prediction data of the current reconstructed image from the target time-window set.
  • When adjusting the network parameters of the trained image processing model according to the first error, the back propagation module 55 specifically determines a second error in the image domain corresponding to the first error and adjusts the network parameters of the image processing model according to the second error.
  • the target image sequence includes a region of interest, which is a soft tissue area.
  • the first error threshold is set as a configurable item.
  • the device may further include a training module 50, wherein the training module is used to:
  • the training sample includes a first image sequence and a second image sequence corresponding to the first image sequence, where a time resolution of the first image sequence is smaller than a time resolution of the second image sequence;
  • a first image sequence in a training sample is input into an image processing model, and a network parameter of the image processing model is adjusted to minimize an error between a first estimated image sequence and the second image sequence, wherein the first estimated image sequence is an estimated image sequence obtained by upsampling the first image sequence by the image processing model.
  • Compared with the prior art, in the technical solution of the medical image processing apparatus provided in this embodiment, because the measurement data is the scan data collected by the CT imaging system and is the basis for image reconstruction, adjusting the network parameters of the trained image processing model based on the first error is in effect adjusting them with the measurement data as a reference.
  • This makes it possible to adjust the network parameters of the trained image processing model case by case, improves the flexibility, accuracy and generalizability of its parameter settings, and ensures that it outputs a target image of high image quality for different types of input images.
  • the medical image processing device provided in the embodiments of the present application can execute the medical image processing method provided in any embodiment of the present application, and has the corresponding functional modules and beneficial effects of the execution method.
  • FIG8 is a schematic diagram of the structure of a C-arm CT imaging system provided by another embodiment of the present application.
  • The system includes a gantry 1211, a detector 1212, a bed 1214, an X-ray tube 1215, a C-arm drive shaft 1216, a rotating shaft 1217 and a base 1219.
  • The X-ray tube 1215 and the detector 1212 are mounted at the two ends of the C-shaped gantry 1211, and the line connecting their centers is perpendicular to the central rotation axis 1218.
  • The C-shaped gantry 1211 rotates around the central rotation axis 1218 so as to capture image data of the patient 1213 on the bed at different projection angles.
  • the X-ray tube 1215 is controlled by the X-ray generator 123 in terms of current, voltage, exposure time, etc.
  • the projection data collected by the detector 1212 is transmitted to the computer device by the communication system 126.
  • the frame 1211 is connected to the C-arm drive shaft 1216, and its power is provided by the rotating shaft 1217.
  • the base 1219 is responsible for bearing weight.
  • the C-arm control unit 121 controls the rotation speed, angle, position, etc. of the gantry 1211.
  • the spindle control unit 122 is connected to the base 1219 and provides power support for the entire C-arm system.
  • the X-ray generator 123 controls the current, voltage and exposure time of the X-ray tube 1215.
  • the data acquisition system 124 coordinates the gantry 1211, the detector 1212 and the X-ray generator 1215, and collects the collected data.
  • the bed control system 125 controls the position and movement speed of the bed 1214 to achieve different scanning trajectories for the patient 1213.
  • the communication system 126 connects the C-arm control unit 121, the spindle control unit 122, the X-ray generator 123, the data acquisition system 124 and the bed control system 125, and transmits the collected projection data to the memory of the computer device 2.
  • FIGS. 9 and 10 show a schematic diagram of the structure of another CT imaging system.
  • This CT imaging system is a diagnostic CT.
  • Its gantry 1211 is annular; the detector 1212 and the X-ray tube 1215 are both arranged on the gantry, facing each other. Under the control of the bed control system 125, the bed 1214 moves into and out of the gantry bore, and the gantry carries the detector 1212 and the X-ray tube 1215 around the bed 1214.
  • Figure 11 is a structural diagram of a computer device provided in another embodiment of the present application.
  • the computer device 2 includes a processor 201, a memory 202, an input device 203 and an output device 204; the number of processors 201 in the device can be one or more, and Figure 11 takes one processor 201 as an example; the processor 201, memory 202, input device 203 and output device 204 in the device can be connected via a bus or other methods, and Figure 11 takes the connection via a bus as an example.
  • the memory 202 can be used to store software programs, computer executable programs and modules, such as program instructions/modules corresponding to the medical image processing method in the embodiment of the present application (for example, the acquisition module 51, the image processing module 52, the error determination module 53, the output module 54 and the back propagation module 55).
  • the processor 201 executes various functional applications and data processing of the device by running the software programs, instructions and modules stored in the memory 202, that is, realizes the above-mentioned medical image processing method.
  • the memory 202 may mainly include a program storage area and a data storage area, wherein the program storage area may store an operating system and at least one application required for a function; the data storage area may store data created according to the use of the terminal, etc.
  • the memory 202 may include a high-speed random access memory, and may also include a non-volatile memory, such as at least one disk storage device, a flash memory device, or other non-volatile solid-state storage device.
  • the memory 202 may further include a memory remotely arranged relative to the processor 201, and these remote memories may be connected to the device via a network. Examples of the above-mentioned network include, but are not limited to, the Internet, an intranet, a local area network, a mobile communication network, and combinations thereof.
  • the input device 203 can be used to receive input digital or character information and generate key signal input related to user settings and function control of the device.
  • the input device can be configured in an operating workstation through which an operator controls the operation of the CT imaging system.
  • the output device 204 may include a display device such as a display screen, for example, a display screen of an operating workstation.
  • Another embodiment of the present application further provides a storage medium containing computer-executable instructions which, when executed by a computer processor, are used to perform a medical image processing method, the method comprising: acquiring an initial image sequence corresponding to measurement data; inputting the initial image sequence into a trained image processing model to obtain a reconstructed image sequence; determining whether a first error between predicted data corresponding to the reconstructed image sequence and the measurement data is greater than a first error threshold; if not, taking the reconstructed image sequence as a target image sequence; and if so, adjusting the network parameters of the trained image processing model according to the first error and inputting the initial image sequence into the parameter-adjusted trained image processing model to update the reconstructed image sequence, until the first error between the predicted data corresponding to the updated reconstructed image sequence and the measurement data is no greater than the first error threshold.
  • the storage medium containing computer executable instructions provided in the embodiments of the present application is not limited to the method operations described above, and can also execute related operations in the medical image processing method provided in any embodiment of the present application.
  • Those skilled in the relevant field will clearly understand that the present application can be implemented by means of software plus the necessary general-purpose hardware, or of course by hardware alone, although in many cases the former is the better implementation.
  • The technical solution of the present application, or the part of it that contributes over the prior art, can be embodied in the form of a software product stored in a computer-readable storage medium, such as a computer floppy disk, read-only memory (ROM), random access memory (RAM), flash memory (FLASH), hard disk or optical disk, including a number of instructions that enable a computer device (which may be a personal computer, a server, a network device, etc.) to execute the medical image processing method described in the embodiments of the present application.

Landscapes

  • Engineering & Computer Science (AREA)
  • Theoretical Computer Science (AREA)
  • General Physics & Mathematics (AREA)
  • General Health & Medical Sciences (AREA)
  • Medical Informatics (AREA)
  • Evolutionary Computation (AREA)
  • Health & Medical Sciences (AREA)
  • Computer Vision & Pattern Recognition (AREA)
  • Physics & Mathematics (AREA)
  • Computing Systems (AREA)
  • Artificial Intelligence (AREA)
  • Databases & Information Systems (AREA)
  • Software Systems (AREA)
  • Multimedia (AREA)
  • Quality & Reliability (AREA)
  • Radiology & Medical Imaging (AREA)
  • Nuclear Medicine, Radiotherapy & Molecular Imaging (AREA)
  • Apparatus For Radiation Diagnosis (AREA)

Abstract

The embodiments of the present application disclose a medical image processing method, apparatus, computer device and storage medium. The method includes: acquiring an initial image sequence corresponding to measurement data; inputting the initial image sequence into a trained image processing model to obtain a reconstructed image sequence; determining whether a first error between predicted data corresponding to the reconstructed image sequence and the measurement data is greater than a first error threshold; if not, taking the reconstructed image sequence as a target image sequence; if so, adjusting the network parameters of the trained image processing model according to the first error, inputting the initial image sequence into the parameter-adjusted trained image processing model to update the reconstructed image sequence, and returning to the step of determining whether the first error between the predicted data corresponding to the reconstructed image sequence and the measurement data is greater than the first error threshold. This solves the problem that existing trained image processing models have low accuracy and generalizability, and achieves the purpose of improving the resolution of medical images.

Description

Medical image processing method, apparatus, computer device and storage medium
This application claims priority to the Chinese patent application No. 2022113792075, filed on November 4, 2022 and entitled "CT image processing method, apparatus, computer device and storage medium", and to the Chinese patent application No. 2022115376657, filed on December 1, 2022 and entitled "Medical image processing method, apparatus, computer device and storage medium", the entire contents of which are incorporated herein by reference.
技术领域
本申请实施例涉及医学图像处理领域,涉及医学图像处理方法、装置、计算机设备及存储介质。
背景技术
目前,在采用图像处理模型进行图像处理时,通常先对该图像处理模型进行模型训练,具体为:将训练样本输入图像处理模型,通过调整图像处理模型的网络参数最小化训练样本中的标签或参考图像与该图像处理模型输出的估计图像之间的误差;然后采用该已训练的图像处理模型对输入图像进行处理,以得到目标图像。
显然,上述已训练图像处理模型的稳定性依赖于训练样本的全面性,如果训练样本的全面性较低,那么该已训练的图像处理模型的鲁棒性较差,无法保证每个目标图像均具有较高的图像质量,即现有已训练的图像处理模型的普适性较低。
发明内容
本申请实施例提供了一种医学图像处理方法、装置、计算机设备及存储介质,解决了现有已训练的图像处理模型存在普适性较低的问题。
第一方面,本申请实施例提供了一种医学图像处理方法,包括:
获取测量数据对应的初始图像序列;
将所述初始图像序列输入已训练的图像处理模型以得到重构图像序列,所述重构图像序列的时间分辨率高于所述初始图像序列的时间分辨率;
确定所述重构图像序列对应的预测数据与所述测量数据之间的第一误差是否大于第一误差阈值;
如果否,则将所述重构图像序列作为目标图像序列;
如果是,则根据所述第一误差调整所述已训练的图像处理模型的网络参数;将所述初始图像序列输入参数调整后的已训练的图像处理模型以更新所述重构图像序列;返回确定所述重构图像序列对应的预测数据与所述测量数据之间的第一误差是否大于第一误差阈值的步骤。
第二方面,本申请实施例还提供了一种医学图像处理装置,包括:
获取模块,用于获取测量数据对应的初始图像序列;
图像处理模块,用于将所述初始图像序列输入已训练的图像处理模型以得到重构图像序列,所述重构图像序列的时间分辨率高于所述初始图像序列的时间分辨率;
误差确定模块,用于确定所述重构图像序列对应的预测数据与所述测量数据之间的第一误差是否大于第一误差阈值;
输出模块,用于如果否,则将所述重构图像序列作为目标图像序列;
反向传播模块,用于如果是,则根据所述第一误差调整所述已训练的图像处理模型的网络参数;将所述初始图像序列输入参数调整后的已训练的图像处理模型以更新所述重构图像序列;返回确定所述重构图像序列对应的预测数据与所述测量数据之间的第一误差是否大于第一误差阈值的步骤。
第三方面,本申请实施例还提供了一种计算机设备,所述计算机设备包括:
一个或多个处理器;
存储装置,用于存储一个或多个程序;
当所述一个或多个程序被所述一个或多个处理器执行,使得所述一个或多个处理器实现如任意实施例所述的医学图像处理方法。
第四方面,本申请实施例还提供了一种包含计算机可执行指令的存储介质,所述计算机可执行指令在由计算机处理器执行时用于执行如任意实施例所述的医学图像处理方法。
相较于现有技术,本实施例提供的医学图像处理方法的技术方案,对测量得到的初始图像序列进行重构,通过引入重构图像序列对应的预测数据与测量数据之间的误差来确定图像处理模型是否需要继续进行调整,从而有效提高了图像处理模型的准确度,进一步的,由于测量数据是医学图像系统采集的扫描数据,是图像重建的依据,因此基于第一误差调整已训练的图像处理模型的网络参数,实际上是以测量数据为参考来调整已训练的图像处理模型的网络参数,实现了根据具体情况调整已训练的图像处理模型的网络参数的目的,提高了已训练的图像处理模型的网络参数设置的灵活性、准确性和可泛化性,可以保证其在接收到不同类型的输入图像时,均会输出较高图像质量的目标图像。
附图说明
为了更清楚地说明本申请实施例中的技术方案,下面将对实施例描述中所需要使用的附图做一简单地介绍,显而易见地,下面描述中的附图是本申请的一些实施例,对于本领域普通技术人员来讲,在不付出创造性劳动的前提下,还可以根据这些附图获得其他的附图。
图1是本申请实施例提供的医学图像处理方法的流程图;
图2是本申请实施例提供的又一医学图像处理方法的流程示意图;
图3是本申请实施例提供的图像处理模型的结构示意图;
图4是本申请实施例提供的第一误差确定方法的流程图;
图5是本申请实施例提供的图像处理模型的训练方法的流程图;
图6是本申请实施例提供的医学图像处理装置的结构框图;
图7是本申请实施例提供的又一医学图像处理装置的结构框图;
图8是本申请实施例提供的C臂CT成像系统的结构示意图;
图9是本申请实施例提供的诊断CT成像系统的结构示意图;
图10是本申请实施例提供的又一诊断CT成像系统的结构示意图;
图11是本申请实施例提供的CT成像系统中的计算机设备的结构示意图。
具体实施方式
为使本申请的目的、技术方案和优点更加清楚,以下将参照本申请实施例中的附图,通过实施方式清楚、完整地描述本申请的技术方案,显然,所描述的实施例是本申请一部分实施例,而不是全部的实施例。基于本申请中的实施例,本领域普通技术人员在没有做出创造性劳动前提下所获得的所有其他实施例,都属于本申请保护的范围。
实施例
图1是本申请实施例提供的医学图像处理方法的流程图。本实施例的技术方案适用于提高医学图像时间分辨率的情况。该方法可以由本申请实施例提供的医学图像处理装置来执行,该装置可以采用软件和/或硬件的方式实现,并配置在电子计算机设备的处理器中应用。如图1和图2所示,该方法具体包括如下步骤:
S110、获取测量数据对应的初始图像序列。
其中,测量数据可以是医学成像系统采集的任意人体部位的扫描数据,其中,医学成像系统可以是CT成像系统、PET成像系统或MRI成像系统。其中,CT成像系统可以是诊断CT成像系统、C臂CT成像系统等。以C臂CT成像系统采集的扫描数据为例。由于C臂CT成像系统的旋转速度很慢,导致C臂CT图像的时间分辨率很低、时间采样点数目很少。在一个实施例中,以脑组织动态成像为例,在30秒到1分钟的血液循环时间范围内,C臂CT采集8个采样时间点的投影数据帧以得到测量数据,即现有技术的时间分辨率为6秒。
其中,初始图像序列可以包括至少一幅初始图像。测量数据可以包括至少一个完备数据集,其中,完备数据集为可由医学成像系统采集,使用例如滤波反投影或傅里叶变换等图像重建方法能够精准重建出一副初始图像的数据集。
通过对测量数据进行图像重建,可以得到初始图像序列,其中,初始图像序列可以包括至少一帧初始图像。
其中,在医学成像系统为CT成像系统的情况下,任一初始图像由CT成像系统在至少一个投影角度采集的至少一个测量数据帧重建而成。其中,该投影角度实际上是一个角度范围,将系统运动至该至少一个投影角度的中间位置的时刻作为目标时刻或前述采样时间点。可以理解的是,该初始图像序列(参见图2)可用于描述被扫物体(比如人体头部)内的X射线衰减系数空间分布随时间的变化情况。
S120、将初始图像序列输入已训练的图像处理模型以得到重构图像序列,重构图像序列的时间分辨率高于初始图像序列的时间分辨率。
在一个实施例中,该已训练的图像处理模型可以是由深度卷积模型训练而成的,用于对初始图像序列进行升采样,以使重构图像序列的时间分辨率高于初始图像序列的时间分辨率,如图2所示,重构图像序列包括的重构图像的数量是初始图像序列包括的初始图像的数量的2倍,因此重构图像序列的时间分辨率是初始图像序列的一半。
在一个实施例中,图像处理模型为图3示出的神经网络。该神经网络可以有24层卷积神经网络层,该24层卷积神经网络层包括三类卷积网络层,这些卷积网络层中的参数均为可学习的。第一类卷积网络层使用3×3卷积核,间隔为1,在图3中标记为“Conv,3×3,S1”,之后为批归一化操作(Bnorm)和整流线性单元(ReLu)激活函数。第二类卷积网络层使用3×3卷积核,间隔为2,并在图3中标记为“Conv,3×3,S2”,之后为Bnorm和ReLu。第三类卷积网络层使用1×1卷积核,间隔为1,并在图3中标记为“Conv,1×1,S1”,之后为线性激活函数(Linear)。所有卷积层都有相应的可学偏差项。每个卷积层均保持该层的输入输出为同样的空间维度。采样层使用2×2卷积核,并在图3中标记为Up-sample 2×2。所有采样层使用双线性插值算法。捷径链接(Skip+Concatenate,图3中黑色实箭头)用以促进网络训练过程。网络参数中的卷积核使用Glorot均匀分布随机数作为初始化,偏差项使用0作为初始化值。其余参数设置与初始化值均选择默认方式。
S130、确定重构图像序列对应的预测数据与测量数据之间的第一误差是否大于第一误差阈值。
其中,预测数据为基于重构图像序列确定的投影数据,具体确定方式参见后面实施例。
在检测到已训练的图像处理模型输出重构图像序列的情况下,确定该重构图像序列对应的预测数据与测量数据之间的误差,将该误差作为第一误差,并确定其是否大于第一误差阈值。
S140、如果否,则将该重构图像序列作为目标图像序列。
如果第一误差小于或等于第一误差阈值,则表示重构图像序列对应的预测数据与测量数据之间的第一误差在可接受范围内,即重构图像序列的图像质量已达到用户期望的图像质量标准,因此将重构图像序列作为目标图像序列。
S150、如果是,则根据第一误差调整已训练的图像处理模型的网络参数;将初始图像序列输入参数调整后的已训练的图像处理模型以更新重构图像序列;返回确定重构图像序列对应的预测数据与所述测量数据之间的第一误差是否大于第一误差阈值的步骤。
如果第一误差大于第一误差阈值,则表示重构图像序列对应的预测数据与测量数据之间的第一误差超过可接受误差范围,即重构图像质量无法达到用户期望的图像质量,因此根据该第一误差调整已训练的图像处理模型的网络参数,然后将初始图像序列输入参数调整后的已训练的图像处理模型以更新重构图像序列;并返回确定重构图像序列对应的预测数据与测量数据之间的第一误差是否大于第一误差阈值的步骤,即判断更新后的重构图像序列对应的预测数据与测量数据之间的第一误差是否大于第一误差阈值,并根据判定结果执行相应步骤。
在一个实施例中,第一误差被配置为可修改项。也就是说,用户可以根据实际需求在设定的误差可调节范围内调节第一误差的大小。
由于测量数据是医学成像系统采集的扫描数据,是图像重建的依据,因此基于该测量数据确定的第一误差可以反映预测数据的准确性,继而反映重构图像的准确性,因此基于第一误差调整已训练的图像处理模型的网络参数,实现了以预测数据为参考调整已训练的图像处理模型的网络参数的目的,提高了已训练的图像处理模型的网络参数的准确性,从而提高了更新后的重构图像序列的准确性。
在一个实施例中,目标图像序列包括感兴趣区,该感兴趣区可以包括但不限于软组织、血管、骨骼等。由于目标图像序列具有较高的时间分辨率,该目标图像序列可以提供更为丰富的软组织变化信息。
由于重构图像序列的时间分辨率高于初始图像序列的时间分辨率,且重构图像序列用于临床诊断,因此本公开实施例允许在采集MR测量数据时,通过降低初始图像序列的时间分辨率来增加测量数据的频率范围。具体地,确定临床诊断用MR图像的时间分辨率,并将该时间分辨率作为目标时间分辨率,即重构图像的时间分辨率;基于该目标时间分辨率和已训练的图像处理模型对应的时间分辨率提升比值,确定初始图像序列的时间分辨率,并将该时间分辨率作为初始时间分辨率,以该初始时间分辨率采集被测者的MR测量数据。
在一个实施例中,设定目标时间分辨率等于现有临床诊断用MR图像的时间分辨率,时间分辨率提升比值为5,那么初始时间分辨率为目标时间分辨率的1/6,相较于现有技术来说,根据该初始时间分辨率采集被测者的MR测量数据时,用于重建任一初始图像的部分测量数据可被分配的采集时间为现有数据采集时间的6倍,因此允许用户通过增加用于重建单幅初始图像对应的部分测量数据的采集时间的方式,增加该部分测量数据的频率范围。可以理解的是,对于MR图像来说,频率范围更大的测量数据对应空间分辨率更高的初始图像,重构图像的空间分辨率与初始图像的空间分辨率相同,即本公开 实施例可以在不降低临床诊断用MR图像的时间分辨率的情况下,间接提高MR图像空间分辨率。
以此类推,本公开实施例允许在采集PET测量数据时,通过降低初始图像序列的时间分辨率来增加测量数据的累计光子数。具体地,确定临床诊断用PET图像的时间分辨率,并将该时间分辨率作为目标时间分辨率,即重构图像的时间分辨率;基于该目标时间分辨率和已训练的图像处理模型对应的时间分辨率提升比值,确定初始图像序列的时间分辨率,并将该时间分辨率作为初始时间分辨率,以该初始时间分辨率采集被测者的PET测量数据。
在一个实施例中,设定目标时间分辨率等于现有临床诊断用PET图像的时间分辨率,时间分辨率提升比值为5,那么初始时间分辨率为目标时间分辨率的1/6,相较于现有技术来说,根据该初始时间分辨率采集被测者的PET测量数据时,用于重建任一初始图像的部分测量数据可被分配的采集时间为现有数据采集时间的6倍,因此允许用户通过增加用于重建单幅初始图像对应的部分测量数据的采集时间的方式,增加该部分测量数据的累计光子数。可以理解的是,对于PET图像来说,累计光子数更多的测量数据对应对比度分辨率更高的初始图像,重构图像的对比度分辨率与初始图像的对比度分辨率相同,即本公开实施例可以在不降低临床诊断用PET图像的时间分辨率的情况下,间接提高PET图像对比度分辨率。
相较于现有技术,本实施例提供的医学图像处理方法的技术方案,由于测量数据是医学成像系统采集的扫描数据,是图像重建的依据,因此基于第一误差调整已训练的图像处理模型的网络参数,实际上是以测量数据为参考来调整已训练的图像处理模型的网络参数,实现了根据具体情况调整已训练的图像处理模型的网络参数的目的,提高了已训练的图像处理模型的网络参数设置的灵活性、准确性和可泛化性,可以保证其在接收到不同类型的输入图像时,均会输出较高图像质量的目标图像。
图4是本申请实施例提供的第一误差确定方法的流程图。该实施例用于确定前述实施例中的重构图像序列对应的预测数据与测量数据之间的第一误差。该方法包括:
S410、对所述重构图像序列中的各重构图像执行如下操作,以得到各重构图像对应的误差值:确定当前重构图像对应的部分预测数据;获取当前重构图像对应的部分测量数据;计算所述部分预测数据和所述部分测量数据之间的误差值,作为当前重构图像对应的误差值;
步骤a1、在重构图像序列为MR图像序列的情况下,确定重构图像对应的至少一个径向线数据;将至少一个径向线数据作为重构图像对应的部分预测数据。
在一个实施例中,在重构图像序列为MR图像序列的情况下,确定起始方位角、测量数据对应的方位角间隔,确定重构图像对应的方位角集合;根据方位角集合确定重构 图像对应的至少一个径向线数据。该实施例先确定测量数据对应的K空间数据中的方位角间隔以及起始方位角,从而确定测量数据对应的径向线分布,确定重构图像在该径向线分布情况下对应的至少一个径向线数据,将该至少一个径向线数据作为对应的部分预测数据。
步骤a2、在重构图像序列为CT图像序列或PET图像序列的情况下,确定重构图像对应的至少一个投影角度;根据设定信号模型,沿至少一个投影角度计算该重构图像对应的部分预测数据;在所述重构图像序列为PET图像序列的情况下,确定所述重构图像对应的目标时间分辨率的时间窗;根据设定信号模型,计算所述当前重构图像对应所述时间窗的部分预测数据。
其中,上述投影角度可以是CT成像系统的射线源输出射线时机架的旋转角度范围的中间值,比如,射线源在125度-126度出束,则125-126度即为一个投影角度。
在一个实施例中,在重构图像序列为CT图像序列的情况下,可以通过以下步骤确定重构图像对应的一个或多个投影角度,包括:
步骤b1、根据测量数据对应的初始采集时刻、已训练的图像处理模型对应的时间分辨率,确定重构图像对应的目标时刻集合。
其中,初始图像序列的时间分辨率为初始时间分辨率,重构图像序列的时间分辨率为期望时间分辨率。将CT成像系统运动至投影角度中间位置的时刻作为目标时刻,比如,投影角度为125度-126度,则将运动至125.5度的时刻作为该投影角度对应的目标时刻。目标时刻集合为该一个或多个投影角度对应的目标时刻的组合。
步骤b2、确定起始投影角度,根据起始投影角度、成像系统的机架旋转速度与目标时刻集合,确定重构图像对应的至少一个投影角度。
其中,起始投影角度为重构图像对应的CT成像系统的第一个投影角度。在一个实施例中,该初始投影角度可采用对应初始图像的第一个投影角度。具体可选为,如果当前重构图像为偶数标识的图像,则采用任一偶数标识的初始图像的第一个投影角度;如果当前重构图像为奇数标识的图像,则采用任一奇数标识的初始图像的第一个投影角度。示例性的,标识为1的初始图像是由CT成像系统在正向旋转过程中采集的投影数据重建而成,其中,起始投影角度的中间值为0度,最后一个投影角度的中间值为180度;标识为2的初始图像是由CT成像系统在反向旋转过程中采集的扫描数据重建而成,起始投影角度的中间值为180度,最后一个投影角度的中间值为0度。
针对任一重构图像,起始投影角度、CT成像系统机架的旋转速度和目标时刻集合确定后,即可得到每个目标时刻对应的机架的旋转角度,即投影角度的中间值,根据该中间值以及设定投影阈值,即可得到每个目标时刻对应的投影角度,从而得到目标时刻集合对应的至少一个投影角度,该至少一个投影角度即为重构图像对应的至少一个投影角度。
在一个实施例中,该步骤可以无需考虑CT成像系统的旋转方向,比如标识为奇数的重构图像对应CT成像系统机架的正向旋转,标识为偶数的重构图像对应CT成像系统机架的反向旋转。当采用等间隔采样时,无论CT成像系统机架正向旋转,还是反向旋转,其各重构图像的各投影角度的位置是重合的,只要确定重构图像序列中的任一重构图像对应的至少一个投影角度即可,其他重构图像复用该至少一个投影角度计算其对应的部分预测数据即可。
在确定重构图像对应的部分预测数据对应的部分测量数据时,基于重构图像序列的时间分辨率与初始图像序列的时间分辨率的比值,确定重构图像序列中各重构图像序列对应的部分测量数据,其中,时间分辨率单位为帧每秒,示例性的,该比值为M:1,其中,M为大于或等于2的自然数。设系统采集完备测量数据集所需时间为T,则任一重构图像对应T/M时间段内采集的测量数据。重构图像序列中,第k个重构图像对应(k-1)T/M到kT/M时间段内的部分测量数据。
When the reconstructed image sequence is a CT image sequence, the partial predicted data of each reconstructed image are determined; based on the correspondence of projection angles, the correspondence between the at least one predicted data frame contained in the partial predicted data and the at least one measurement data frame contained in the corresponding partial measurement data is established, and the difference between each pair of corresponding projection data frames is computed, giving the error between the partial predicted data of each reconstructed image and its corresponding partial measurement data. The projection data of a reconstructed image along each projection angle are computed according to the set signal model to obtain the partial predicted projection data of the reconstructed image, where the set signal model is an existing projection model, such as a forward projection model.
S420: accumulating the error values of the reconstructed images in the reconstructed image sequence to obtain the first error.
The errors of the partial predicted data of all reconstructed images in the sequence are accumulated to obtain a total error, and this total error is taken as the first error.
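The sketch below strings steps S410 and S420 together for the CT case, assuming a caller-supplied forward-projection operator plays the role of the set signal model; the operator interface, the squared-difference error and the function name are illustrative assumptions.

```python
import numpy as np

def first_error(recon_images, measured_frames, angle_sets, forward_project):
    """Accumulate per-frame data-consistency errors into the first error (S410 + S420).

    recon_images    : list of 2-D arrays, the reconstructed image sequence
    measured_frames : list of arrays, partial measurement data per reconstructed image
    angle_sets      : list of projection-angle lists, one per reconstructed image
    forward_project : callable(image, angles) -> predicted projection data
                      (stands in for the set signal model, e.g. a forward projector)
    """
    total = 0.0
    for image, measured, angles in zip(recon_images, measured_frames, angle_sets):
        predicted = forward_project(image, angles)            # partial predicted data (S410)
        total += float(np.sum((predicted - measured) ** 2))   # per-frame error value
    return total                                              # accumulated first error (S420)
```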
This embodiment determines the predicted data of the reconstructed image sequence according to the signal model of a given medical imaging system and computes the error between the predicted data and the measurement data to obtain the first error. Because the measurement data are the benchmark for image reconstruction, the first error reflects the accuracy of the predicted data and, in turn, the accuracy of the reconstructed images.
FIG. 5 is a flowchart of a training method for the image processing model provided by an embodiment of the present application. This embodiment illustrates the training method of the image processing model in the foregoing embodiments using CT image sequences as training samples.
Accordingly, the method of this embodiment includes:
S510: obtaining a training sample, where the training sample includes a first image sequence and a second image sequence corresponding to the first image sequence, and the temporal resolution of the first image sequence is lower than that of the second image sequence.
Here, the first image sequence contains a region of interest, which may include, but is not limited to, at least one of a soft-tissue region, a blood vessel and bone.
In one embodiment, the second image sequence is obtained through the following steps:
Step b1: obtaining CT cerebrovascular images of a set number of patients and the CT cerebral perfusion parameter images corresponding to the CT cerebrovascular images.
Step b2: for the CT cerebrovascular images of any patient and the corresponding CT cerebral perfusion parameter images, determining the arterial input function and the venous output function from the CT cerebrovascular images, and constructing the patient's second image sequence based on a perfusion convolution model, the CT cerebral perfusion parameter images, the arterial input function, the venous output function and the desired temporal resolution.
Step b3: determining one or more projection angles corresponding to each image in the second image sequence, and computing, for each image in the second image sequence, one or more projection data frames along its one or more projection angles.
Step b4: determining, according to a set initial temporal resolution, the projection data frames used to reconstruct each image in the first image sequence, and reconstructing images from the determined projection data frames to obtain the first image sequence, where the initial temporal resolution is lower than the desired temporal resolution.
This embodiment first determines the second image sequence and then determines the corresponding first image sequence from the projection data of the second image sequence and the initial temporal resolution. The first image sequence and its corresponding second image sequence form one training sample.
In one embodiment, after the second image sequence of each patient has been determined as described above, images are extracted from the second image sequence according to the initial temporal resolution to obtain the first image sequence, and the first image sequence together with its corresponding second image sequence is taken as one training sample.
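The sketch below shows one way such a low-temporal-resolution first sequence could be extracted from a high-temporal-resolution second sequence by keeping every R-th frame; the stride-based extraction and the function name are illustrative assumptions, and other schemes (e.g., temporal averaging over each window) would also fit the description.

```python
import numpy as np

def make_training_pair(second_sequence, ratio_R):
    """Build a (first, second) training pair by temporal subsampling.

    second_sequence : array of shape (T, H, W), the high-temporal-resolution sequence
    ratio_R         : desired/initial temporal-resolution ratio (integer >= 2)
    """
    second = np.asarray(second_sequence)
    first = second[::ratio_R]          # keep every R-th frame -> low temporal resolution
    return first, second

# Example: a 20-frame second sequence and a ratio of 4 give a 5-frame first sequence
first, second = make_training_pair(np.random.rand(20, 64, 64), 4)
print(first.shape, second.shape)       # (5, 64, 64) (20, 64, 64)
```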
S520: inputting the first image sequence of the training sample into the image processing model, and adjusting the network parameters of the image processing model to minimize the error between a first estimated image sequence and the second image sequence, where the first estimated image sequence is the estimated image sequence obtained by the image processing model upsampling the first image sequence.
The first image sequence of the training sample is input into the image processing model so that the model computes, with the second image sequence as the reference, the error between the second image sequence and the first estimated image sequence; the network parameters of the image processing model are adjusted based on this error until the error between the second image sequence and the first estimated image sequence falls within a set error range, yielding the trained image processing model.
In one embodiment, the error between the second image sequence and the first estimated image sequence is a 2-norm error.
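A minimal PyTorch sketch of step S520 under these choices is shown below. The ToyUpsampler architecture, the learning rate, the iteration count and the stopping tolerance are illustrative assumptions and are not the network disclosed here; the point is only that the parameters are updated to minimize the squared 2-norm error between the estimated and reference sequences.

```python
import torch
import torch.nn as nn
import torch.nn.functional as F

class ToyUpsampler(nn.Module):
    """Placeholder model: trilinear temporal upsampling followed by a 3-D convolution."""
    def __init__(self, ratio_R):
        super().__init__()
        self.ratio_R = ratio_R
        self.conv = nn.Conv3d(1, 1, kernel_size=3, padding=1)

    def forward(self, x):                      # x: (B, 1, T_low, H, W)
        x = F.interpolate(x, scale_factor=(self.ratio_R, 1, 1),
                          mode="trilinear", align_corners=False)
        return self.conv(x)                    # (B, 1, T_low * R, H, W)

ratio_R = 4
model = ToyUpsampler(ratio_R)
optimizer = torch.optim.Adam(model.parameters(), lr=1e-3)
loss_fn = nn.MSELoss()                         # squared 2-norm error

first = torch.rand(1, 1, 5, 32, 32)            # low-temporal-resolution first sequence
second = torch.rand(1, 1, 20, 32, 32)          # high-temporal-resolution second sequence

for step in range(100):                        # stop once the error is within the set range
    estimated = model(first)                   # first estimated image sequence
    loss = loss_fn(estimated, second)
    optimizer.zero_grad()
    loss.backward()
    optimizer.step()
    if loss.item() < 1e-3:
        break
```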
It will be appreciated that, by configuring different combinations of temporal resolutions for the first and second image sequences, the image processing model can be trained into trained models with different upsampling capabilities; that is, the image processing model can be trained to raise the temporal resolution of an image sequence by a set factor. The user can therefore select, according to actual needs, a trained image processing model that raises the temporal resolution by the desired factor, thereby improving the temporal resolution of the corresponding image sequence.
Because the first image sequence has a lower temporal resolution and the second image sequence is its high-temporal-resolution counterpart, training the image processing model with the first image sequence as the input and the second image sequence as the reference lets the model learn, during training, how to upsample along the temporal-resolution dimension. The trained image processing model can therefore raise the temporal resolution of an input image sequence and produce an estimated image sequence, i.e., the reconstructed image sequence, whose temporal resolution is higher than that of the input.
FIG. 6 is a structural block diagram of a medical image processing apparatus provided by a further embodiment of the present application. The apparatus is used to perform the medical image processing method provided by any of the foregoing embodiments and may be implemented in software or hardware. The apparatus includes:
an acquisition module 51, configured to obtain an initial image sequence corresponding to measurement data;
an image processing module 52, configured to input the initial image sequence into a trained image processing model to obtain a reconstructed image sequence, the temporal resolution of the reconstructed image sequence being higher than that of the initial image sequence;
an error determination module 53, configured to determine whether a first error between predicted data corresponding to the reconstructed image sequence and the measurement data is greater than a first error threshold;
an output module 54, configured to, if not, take the reconstructed image sequence as a target image sequence; and
a back-propagation module 55, configured to, if so, adjust the network parameters of the trained image processing model according to the first error, input the initial image sequence into the parameter-adjusted trained image processing model to update the reconstructed image sequence, and return to the step of determining whether the first error between the predicted data corresponding to the reconstructed image sequence and the measurement data is greater than the first error threshold.
Optionally, the error determination module 53 includes:
an error-value determination unit, configured to perform the following operations on each reconstructed image in the reconstructed image sequence to obtain an error value for each reconstructed image: determining the partial predicted data corresponding to the current reconstructed image; obtaining the partial measurement data corresponding to the current reconstructed image; and computing the error value between the partial predicted data and the partial measurement data as the error value of the current reconstructed image; and
an error determination unit, configured to accumulate the error values of the reconstructed images in the reconstructed image sequence to obtain the first error.
In implementation, different ways of determining the partial predicted data can be used depending on the type of the reconstructed image sequence:
1) when the reconstructed image sequence is an MR image sequence, determining at least one radial line of data corresponding to the reconstructed image, and taking the at least one radial line of data as the partial predicted data of the current reconstructed image;
specifically, when the reconstructed image sequence is an MR image sequence, the starting azimuth angle and the azimuth interval of the measurement data are determined, the azimuth set of the reconstructed image is determined, and at least one radial line of data of the reconstructed image is determined from the azimuth set;
2) when the reconstructed image sequence is a CT image sequence, determining at least one projection angle corresponding to the current reconstructed image, and computing the partial predicted data of the reconstructed image along the at least one projection angle according to a set signal model;
specifically, when the reconstructed image sequence is a CT image sequence, the target time set of the reconstructed image is determined from the initial acquisition time of the measurement data and the desired temporal resolution of the trained image processing model; the starting angle is determined, and the at least one projection angle of the current reconstructed image is determined from the starting angle, the gantry rotation speed of the imaging system and the target time set;
3) when the reconstructed image sequence is a PET image sequence, determining the time window of the target temporal resolution corresponding to the current reconstructed image, and computing, according to a set signal model, the partial predicted data of the current reconstructed image within the time window;
specifically, when the reconstructed image sequence is a PET image sequence, the target time-window set of the reconstructed image is determined from the initial acquisition time of the measurement data and the desired temporal resolution of the trained image processing model, and the partial predicted data of the current reconstructed image are determined from the target time-window set.
Optionally, when adjusting the network parameters of the trained image processing model according to the first error, the back-propagation module 55 specifically determines a second error corresponding to the first error in the image domain and adjusts the network parameters of the image processing model according to the second error.
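One way to realize this adjustment is sketched below, assuming the model and the per-frame forward operators are differentiable PyTorch callables; the optimizer, learning rate and step count are illustrative assumptions. During loss.backward(), autograd applies the adjoint of each forward operator to the data-domain residual, which plays the role of the image-domain (second) error that drives the network-parameter update.

```python
import torch

def adjust_parameters(model, initial_seq, measured_frames, forward_ops, lr=1e-4, steps=10):
    """Test-time parameter adjustment driven by the data-consistency (first) error.

    forward_ops[i] : differentiable callable mapping the i-th reconstructed frame
                     to its predicted partial data.
    """
    optimizer = torch.optim.SGD(model.parameters(), lr=lr)
    for _ in range(steps):
        recon = model(initial_seq)                         # reconstructed sequence (B, 1, T, H, W)
        first_error = 0.0
        for i, frame in enumerate(torch.unbind(recon, dim=2)):
            residual = forward_ops[i](frame) - measured_frames[i]   # data-domain residual
            first_error = first_error + (residual ** 2).sum()
        optimizer.zero_grad()
        first_error.backward()        # backprojects the residual into the image domain
        optimizer.step()              # updates the network parameters accordingly
    return model
```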
Optionally, the target image sequence contains a region of interest, and the region of interest is a soft-tissue region.
Optionally, the first error threshold is set as a configurable item.
Optionally, as shown in FIG. 7, the apparatus may further include a training module 50, the training module being configured to:
obtain a training sample, the training sample including a first image sequence and a second image sequence corresponding to the first image sequence, the temporal resolution of the first image sequence being lower than that of the second image sequence; and
input the first image sequence of the training sample into the image processing model and adjust the network parameters of the image processing model to minimize the error between a first estimated image sequence and the second image sequence, where the first estimated image sequence is the estimated image sequence obtained by the image processing model upsampling the first image sequence.
Compared with the prior art, in the technical solution of the medical image processing apparatus provided by this embodiment, the measurement data are the scan data acquired by the CT imaging system and the basis for image reconstruction. Adjusting the network parameters of the trained image processing model according to the first error therefore effectively adjusts those parameters with the measurement data as the reference, so that the parameters are adapted to the case at hand. This improves the flexibility, accuracy and generalizability of the parameter setting of the trained image processing model and ensures that it outputs target images of high image quality for different types of input images.
The medical image processing apparatus provided by the embodiments of the present application can execute the medical image processing method provided by any embodiment of the present application and possesses the functional modules and beneficial effects corresponding to the executed method.
FIG. 8 is a schematic structural diagram of a C-arm CT imaging system provided by a further embodiment of the present application. The system includes a gantry 1211, a detector 1212, a table board 1214, an X-ray tube 1215, a C-arm drive shaft 1216, a rotating shaft 1217 and a base 1219. The X-ray tube 1215 and the detector 1212 are mounted at the two ends of the C-shaped gantry 1211, with the line connecting their centers perpendicular to the rotation axis 1218. The C-shaped gantry 1211 rotates about the rotation axis 1218 so as to capture image data of the patient 1213 on the table board at different projection angles. The current, voltage and exposure time of the X-ray tube 1215 are controlled by an X-ray generator 123; the projection data acquired by the detector 1212 are transmitted to the computer device by a communication system 126; the gantry 1211 is connected to the C-arm drive shaft 1216, whose power is supplied by the rotating shaft 1217; and the base 1219 bears the weight. A C-arm control unit 121 controls the rotation speed, angle and position of the gantry 1211. A spindle control unit 122 is connected to the base 1219 and supplies power to the entire C-arm system. The X-ray generator 123 controls the current, voltage and exposure time of the X-ray tube 1215. A data acquisition system 124 coordinates the gantry 1211, the detector 1212 and the X-ray generator 123 and collects the acquired data. A table control system 125 controls the position and movement speed of the table board 1214 to realize different scan trajectories for the patient 1213. The communication system 126 connects the C-arm control unit 121, the spindle control unit 122, the X-ray generator 123, the data acquisition system 124 and the table control system 125, and transmits the acquired projection data to the memory of the computer device 2.
FIG. 9 and FIG. 10 show schematic structural diagrams of another CT imaging system. This CT imaging system is a diagnostic CT. Compared with the C-arm CT, its gantry 1211 is ring-shaped, the detector 1212 and the X-ray tube 1215 are both mounted on the gantry and arranged opposite each other, the table board 1214 moves into and out of the gantry bore under the control of the table controller 125, and the gantry drives the detector 1212 and the X-ray tube 1215 to move around the table board 1214.
FIG. 11 is a schematic structural diagram of a computer device provided by a further embodiment of the present application. As shown in FIG. 11, the computer device 2 includes a processor 201, a memory 202, an input device 203 and an output device 204. There may be one or more processors 201 in the device, and one processor 201 is taken as an example in FIG. 11. The processor 201, memory 202, input device 203 and output device 204 in the device may be connected by a bus or in other ways; connection by a bus is taken as an example in FIG. 11.
As a computer-readable storage medium, the memory 202 can be used to store software programs, computer-executable programs and modules, such as the program instructions/modules corresponding to the medical image processing method in the embodiments of the present application (for example, the acquisition module 51, the image processing module 52, the error determination module 53, the output module 54 and the back-propagation module 55). By running the software programs, instructions and modules stored in the memory 202, the processor 201 executes the various functional applications and data processing of the device, i.e., implements the medical image processing method described above.
The memory 202 may mainly include a program storage area and a data storage area, where the program storage area may store an operating system and application programs required by at least one function, and the data storage area may store data created according to the use of the terminal. In addition, the memory 202 may include high-speed random access memory and may also include non-volatile memory, such as at least one magnetic disk storage device, flash memory device or other non-volatile solid-state storage device. In some examples, the memory 202 may further include memory located remotely from the processor 201, and such remote memory may be connected to the device via a network. Examples of such networks include, but are not limited to, the Internet, intranets, local area networks, mobile communication networks and combinations thereof.
The input device 203 can be used to receive input numeric or character information and to generate key-signal input related to user settings and functional control of the device. The input device may be arranged at an operator workstation through which the operator controls the operation of the CT imaging system.
The output device 204 may include a display device such as a display screen, for example, the display screen of the operator workstation.
A further embodiment of the present application also provides a storage medium containing computer-executable instructions which, when executed by a computer processor, are used to perform a medical image processing method, the method including:
obtaining an initial image sequence corresponding to measurement data;
inputting the initial image sequence into a trained image processing model to obtain a reconstructed image sequence, the temporal resolution of the reconstructed image sequence being higher than that of the initial image sequence;
determining whether a first error between predicted data corresponding to the reconstructed image sequence and the measurement data is greater than a first error threshold;
if not, taking the reconstructed image sequence as a target image sequence;
if so, adjusting the network parameters of the trained image processing model according to the first error, and inputting the initial image sequence into the parameter-adjusted trained image processing model to update the reconstructed image sequence, until the first error between the predicted data corresponding to the updated reconstructed image sequence and the measurement data is not greater than the first error threshold.
Of course, in the storage medium containing computer-executable instructions provided by the embodiments of the present application, the computer-executable instructions are not limited to the method operations described above and can also perform related operations in the medical image processing method provided by any embodiment of the present application.
From the above description of the embodiments, those skilled in the art will clearly understand that the present application can be implemented by means of software plus the necessary general-purpose hardware, and of course also by hardware, but in many cases the former is the better implementation. Based on this understanding, the technical solution of the present application, in essence or in the part contributing to the prior art, can be embodied in the form of a software product. The computer software product can be stored in a computer-readable storage medium, such as a computer floppy disk, read-only memory (ROM), random access memory (RAM), flash memory (FLASH), hard disk or optical disk, and includes a number of instructions for causing a computer device (which may be a personal computer, a server, a network device, etc.) to perform the medical image processing method described in the various embodiments of the present application.
It is worth noting that, in the above embodiments of the medical image processing apparatus, the units and modules included are divided only according to functional logic, but the division is not limited thereto as long as the corresponding functions can be realized; in addition, the specific names of the functional units are only for the convenience of distinguishing them from one another and are not intended to limit the scope of protection of the present application.
Note that the above are only preferred embodiments of the present application and the technical principles applied. Those skilled in the art will understand that the present application is not limited to the specific embodiments described herein, and that various obvious changes, readjustments and substitutions can be made without departing from the scope of protection of the present application. Therefore, although the present application has been described in some detail through the above embodiments, the present application is not limited to the above embodiments and may include further equivalent embodiments without departing from the concept of the present application, the scope of which is determined by the appended claims.

Claims (12)

  1. A medical image processing method, comprising:
    obtaining an initial image sequence corresponding to measurement data;
    inputting the initial image sequence into a trained image processing model to obtain a reconstructed image sequence, the temporal resolution of the reconstructed image sequence being higher than that of the initial image sequence;
    determining whether a first error between predicted data corresponding to the reconstructed image sequence and the measurement data is greater than a first error threshold;
    if not, taking the reconstructed image sequence as a target image sequence;
    if so, adjusting network parameters of the trained image processing model according to the first error; inputting the initial image sequence into the parameter-adjusted trained image processing model to update the reconstructed image sequence; and returning to the step of determining whether the first error between the predicted data corresponding to the reconstructed image sequence and the measurement data is greater than the first error threshold.
  2. The method according to claim 1, wherein the first error between the predicted data corresponding to the reconstructed image sequence and the measurement data is determined by the following steps:
    for any reconstructed image in the reconstructed image sequence, determining partial predicted data corresponding to the reconstructed image;
    determining an error between the partial predicted data and corresponding partial measurement data; and
    taking the sum of the errors corresponding to all partial predicted data of the reconstructed image sequence as the first error.
  3. The method according to claim 2, wherein determining the partial predicted data corresponding to the reconstructed image comprises:
    when the reconstructed image sequence is an MR image sequence, determining at least one radial line of data corresponding to the reconstructed image, and taking the at least one radial line of data as the partial predicted data of the reconstructed image;
    when the reconstructed image sequence is a CT image sequence, determining at least one projection angle corresponding to the reconstructed image, and computing the partial predicted data of the reconstructed image along the at least one projection angle according to a set signal model; and
    when the reconstructed image sequence is a PET image sequence, determining a time window of a target temporal resolution corresponding to the reconstructed image, and computing, according to a set signal model, the partial predicted data of the reconstructed image within the time window.
  4. The method according to claim 3, wherein, when the reconstructed image sequence is an MR image sequence, determining the at least one radial line of data corresponding to the reconstructed image comprises:
    when the reconstructed image sequence is an MR image sequence, determining a starting azimuth angle and an azimuth interval corresponding to the measurement data, and determining an azimuth set corresponding to the reconstructed image; and
    determining the at least one radial line of data corresponding to the reconstructed image according to the azimuth set.
  5. The method according to claim 3, wherein, when the reconstructed image sequence is a CT image sequence, determining the at least one projection angle corresponding to the reconstructed image comprises:
    when the reconstructed image sequence is a CT image sequence, determining a target time set corresponding to the reconstructed image according to an initial acquisition time corresponding to the measurement data and a temporal resolution corresponding to the trained image processing model; and
    determining a starting angle, and determining the at least one projection angle corresponding to the reconstructed image according to the starting angle, a gantry rotation speed of the imaging system and the target time set.
  6. The method according to claim 3, wherein, when the reconstructed image sequence is a PET image sequence, determining the time window corresponding to the reconstructed image comprises:
    determining a target time set corresponding to the reconstructed image according to an initial acquisition time corresponding to the measurement data and a temporal resolution corresponding to the trained image processing model; and
    determining the time window corresponding to the reconstructed image according to the initial acquisition time, a data readout speed of the imaging system and the target time set.
  7. The method according to claim 1, wherein adjusting the network parameters of the trained image processing model according to the first error comprises:
    determining a second error corresponding to the first error in the image domain; and
    adjusting the network parameters of the trained image processing model according to the second error.
  8. The method according to claim 1, wherein the first error threshold is set as a modifiable item.
  9. The method according to any one of claims 1-8, wherein pre-training of the image processing model is completed by the following steps:
    obtaining a training sample, the training sample comprising a first image sequence and a second image sequence corresponding to the first image sequence, the temporal resolution of the first image sequence being lower than that of the second image sequence; and
    inputting the first image sequence of the training sample into the image processing model, and adjusting network parameters of the image processing model to minimize an error between a first estimated image sequence and the second image sequence, wherein the first estimated image sequence is an estimated image sequence obtained by the image processing model upsampling the first image sequence.
  10. A medical image processing apparatus, comprising:
    an acquisition module, configured to obtain an initial image sequence corresponding to measurement data;
    an image processing module, configured to input the initial image sequence into a trained image processing model to obtain a reconstructed image sequence, the temporal resolution of the reconstructed image sequence being higher than that of the initial image sequence;
    an error determination module, configured to determine whether a first error between predicted data corresponding to the reconstructed image sequence and the measurement data is greater than a first error threshold;
    an output module, configured to, if not, take the reconstructed image sequence as a target image sequence; and
    a back-propagation module, configured to, if so, adjust network parameters of the trained image processing model according to the first error, input the initial image sequence into the parameter-adjusted trained image processing model to update the reconstructed image sequence, and return to the step of determining whether the first error between the predicted data corresponding to the reconstructed image sequence and the measurement data is greater than the first error threshold.
  11. A computer device, comprising:
    one or more processors; and
    a storage means for storing one or more programs,
    wherein, when the one or more programs are executed by the one or more processors, the one or more processors implement the medical image processing method according to any one of claims 1-9.
  12. A storage medium containing computer-executable instructions which, when executed by a computer processor, are used to perform the medical image processing method according to any one of claims 1-9.
PCT/CN2022/142427 2022-11-04 2022-12-27 医学图像处理方法、装置、计算机设备及存储介质 WO2024092996A1 (zh)

Applications Claiming Priority (4)

Application Number Priority Date Filing Date Title
CN202211379207 2022-11-04
CN202211379207.5 2022-11-04
CN202211537665.7A CN116051463A (zh) 2022-11-04 2022-12-01 医学图像处理方法、装置、计算机设备及存储介质
CN202211537665.7 2022-12-01

Publications (1)

Publication Number Publication Date
WO2024092996A1 true WO2024092996A1 (zh) 2024-05-10

Family

ID=86121146

Family Applications (1)

Application Number Title Priority Date Filing Date
PCT/CN2022/142427 WO2024092996A1 (zh) 2022-11-04 2022-12-27 医学图像处理方法、装置、计算机设备及存储介质

Country Status (2)

Country Link
CN (1) CN116051463A (zh)
WO (1) WO2024092996A1 (zh)

Families Citing this family (1)

* Cited by examiner, † Cited by third party
Publication number Priority date Publication date Assignee Title
CN117218004B (zh) * 2023-09-26 2024-05-14 烟台大学 一种T1 mapping快速成像方法及系统

Citations (5)

* Cited by examiner, † Cited by third party
Publication number Priority date Publication date Assignee Title
WO2020060196A1 (ko) * 2018-09-18 2020-03-26 서울대학교산학협력단 3차원 영상 재구성 장치 및 그 방법
CN111292322A (zh) * 2020-03-19 2020-06-16 中国科学院深圳先进技术研究院 医学图像处理方法、装置、设备及存储介质
CN111383741A (zh) * 2018-12-27 2020-07-07 深圳先进技术研究院 医学成像模型的建立方法、装置、设备及存储介质
CN112102428A (zh) * 2020-11-23 2020-12-18 南京安科医疗科技有限公司 Ct锥形束扫描图像重建方法、扫描系统计及存储介质
CN112419303A (zh) * 2020-12-09 2021-02-26 上海联影医疗科技股份有限公司 神经网络训练方法、系统、可读存储介质和设备

Also Published As

Publication number Publication date
CN116051463A (zh) 2023-05-02

Legal Events

Date Code Title Description
121 Ep: the epo has been informed by wipo that ep was designated in this application

Ref document number: 22964290

Country of ref document: EP

Kind code of ref document: A1