WO2021159234A1 - Image processing method, device, and computer-readable storage medium - Google Patents


Publication number
WO2021159234A1
Authority
WO
WIPO (PCT)
Prior art keywords
image, data, sample data, image processing, loss function
Application number
PCT/CN2020/074622
Other languages
English (en), French (fr)
Inventor
胡战利
杨永峰
薛恒志
郑海荣
梁栋
刘新
Original Assignee
深圳先进技术研究院
Application filed by 深圳先进技术研究院 (Shenzhen Institute of Advanced Technology)
Priority to PCT/CN2020/074622
Publication of WO2021159234A1

Classifications

    • G06N 3/04: Computing arrangements based on biological models; Neural networks; Architecture, e.g. interconnection topology
    • G06N 3/08: Computing arrangements based on biological models; Neural networks; Learning methods
    • G06T 11/00: Image data processing or generation, in general; 2D [Two Dimensional] image generation

Definitions

  • This application relates to the field of big data technology, and in particular to an image processing method, device, and computer-readable storage medium.
  • Positron Emission Computed Tomography (PET) refers to injecting a radioactive tracer into the human body to observe molecular-level activity in human tissue for the purpose of diagnosing and screening diseases.
  • PET is a commonly used medical imaging method, but it also has many problems, such as radiation exposure to the patient's body and the high cost of tracers.
  • To address these problems, current solutions mainly include reducing the tracer injection dose or shortening the scanning time.
  • However, both of these approaches degrade the imaging quality of PET images, which is detrimental to disease diagnosis and screening.
  • Accordingly, this application discloses an image processing method, device, and computer-readable storage medium, which can reduce the tracer injection dose and radiation exposure while improving the imaging quality of positron emission tomography, thereby facilitating the diagnosis and screening of diseases.
  • In a first aspect, this application provides an image processing method, including:
  • receiving an image processing request from a client, where the image processing request includes to-be-processed projection data and is used to request reconstruction of an image according to the to-be-processed projection data;
  • performing domain transformation processing on the to-be-processed projection data to obtain image data;
  • calling an image processing model to perform image reconstruction processing on the image data to obtain an image result, the image processing model being obtained by training according to image sample data and target sample data;
  • sending the image result to the client.
  • In this way, the client sends an image processing request containing the to-be-processed projection data to the server; the server performs domain transformation on the projection data to obtain image data, performs image reconstruction on the image data with the trained image processing model to obtain an image result, and returns the image result to the client.
  • PET images with poor imaging quality can thus be reconstructed into high-definition images, so that users can diagnose and screen diseases based on the high-definition images.
  • an image processing device including:
  • a transceiver unit configured to receive an image processing request from a client, where the image processing request includes projection data to be processed, and the image processing request is used to request reconstruction of an image according to the projection data to be processed;
  • a processing unit configured to perform domain transformation processing on the to-be-processed projection data to obtain image data, and to call an image processing model to perform image reconstruction processing on the image data to obtain an image result, the image processing model being obtained by training according to image sample data and target sample data;
  • the transceiver unit is also used to send the image result to the client.
  • the present application provides an image processing device, including a processor, a memory, and a communication interface.
  • the processor, the memory, and the communication interface are connected to each other, wherein the memory is used to store a computer program, and
  • the computer program includes program instructions, and the processor is configured to invoke the program instructions to perform the method described in the first aspect.
  • the present application provides a computer-readable storage medium, wherein the computer-readable storage medium stores one or more first instructions, and the one or more first instructions are suitable for being loaded and executed by a processor to perform the method described in the first aspect.
  • the client sends an image processing request to the server.
  • the image processing request includes projection data to be processed.
  • the server performs domain transformation on the projection data to be processed according to the image processing request to obtain image data.
  • Performing domain transformation processing on the to-be-processed projection data before it enters the model can save network resources; the image data is then reconstructed through the trained image processing model to obtain the image result, and the image result is returned to the client.
  • The training method of the image processing model is to iteratively train at least one set of input image sample data and target sample data through a cycle-consistency generative adversarial network, whose execution equipment includes a first generator, a second generator, a first discriminator, and a second discriminator, used to optimize the loss function of the model.
  • In this way, on the basis of reducing the tracer injection dose and reducing radiation, the image processing model can reconstruct PET images of low imaging quality more accurately and clearly, so that users can diagnose and screen diseases based on the clearer reconstructed images.
  • FIG. 1 is an architecture diagram of an image processing system provided by an embodiment of the present application
  • FIG. 2 is an architecture diagram of another image processing system provided by an embodiment of the present application.
  • FIG. 3 is a flowchart of an image processing method provided by an embodiment of the present application.
  • FIG. 4 is a schematic structural diagram of a generator network provided by an embodiment of the present application.
  • FIG. 5 is a flowchart of another image processing method provided by an embodiment of the present application.
  • FIG. 6 is a schematic structural diagram of a discriminator network provided by an embodiment of the present application.
  • FIG. 7 is a schematic structural diagram of an image processing device provided by an embodiment of the present application.
  • FIG. 8 is a schematic structural diagram of another image processing device provided by an embodiment of the present application.
  • the embodiments of the present application provide an image processing method.
  • The image processing method uses a trained image processing model to process a PET image of poor quality and generate a higher-definition image, which helps users diagnose and screen diseases based on the higher-definition image.
  • The poor-quality PET images here can also be described as low-count projection images, that is, the projection data to be processed in the embodiments of the present application.
  • This embodiment can also be applied to the reconstruction of Single-Photon Emission Computed Tomography (SPECT) and Computed Tomography (CT) images, which is not limited here.
  • In the embodiments of the present application, a number of projection sample data can be processed through the cycle-consistency generative adversarial network to obtain the mapping relationship between the projection sample data and the real image; based on this mapping relationship, image reconstruction is performed on the to-be-processed projection data to generate a high-definition image.
  • the real image can also be described as target sample data, which is the target image that is expected to be generated, that is, the expected image result when the projection sample data is reconstructed through the image processing model.
  • the aforementioned image processing method can be applied to the image processing system shown in FIG. 1, and the image processing system can include a client 101 and a server 102.
  • The shape and quantity of the client 101 are shown as an example and do not constitute a limitation to the embodiment of the present application; for example, two clients 101 may be included.
  • The client 101 may be a client that sends an image processing request to the server 102, or a client that provides sample data for the server 102 during image processing model training; the client may be any of the following: a terminal, an independent application, an application programming interface (API), or a software development kit (SDK).
  • The terminal may include, but is not limited to, smart phones (such as Android phones, iOS phones, etc.), tablet computers, portable personal computers, mobile Internet devices (MID), and other devices, which are not limited in the embodiment of the present application.
  • the server 102 may include, but is not limited to, a cluster server.
  • The client 101 sends an image processing request to the server 102, and the server 102 reconstructs the image according to the to-be-processed projection data contained in the request: specifically, it performs domain transformation processing on the to-be-processed projection data to obtain image data, processes the image data through a pre-trained image processing model to obtain a reconstructed image result, and sends the image result to the client 101 so that the operating user 103 of the client 101 can perform disease diagnosis and screening based on the image result.
  • the frame diagram of the image processing system can be seen in FIG. 2, and the frame diagram of the image processing system can include a system framework of an image processing model training process and a system framework of an image reconstruction process using a trained image processing model.
  • The training of the image processing model is mainly based on the cycle-consistency generative adversarial network.
  • The training process mainly includes: transforming the projection sample data to obtain image sample data, inputting the image sample data into the first generator to obtain first generated data, and then inputting the first generated data into the second generator to obtain third generated data; on the other hand, determining the target sample data, which is the target image expected to be achieved through the training of the image processing model.
  • The target sample data is input into the second generator to obtain second generated data,
  • and the second generated data is input into the first generator to obtain fourth generated data.
  • the first discriminator performs the discrimination processing on the image sample data and the second generated data to obtain the first discrimination loss
  • the second discriminator performs discrimination processing on the target sample data and the first generated data to obtain the second discrimination loss.
  • From these, the final model loss function can be obtained. At least one round of iterative training is performed to optimize the model loss function and obtain the mapping relationship between the image sample data and the target sample data, thereby achieving the purpose of training the first generator.
  • The image reconstruction method includes: performing domain transformation processing on the projection data to be processed to obtain image data, inputting the image data into the first generator, and performing image reconstruction processing on the image data through the first generator to obtain the reconstructed image result.
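As a toy illustration of the training data flow just described, the four generated tensors can be traced with stand-in callables for the two generators (the names `cycle_forward`, `G1`, and `G2` are hypothetical; real generators would be trained networks):

```python
import numpy as np

def cycle_forward(x, y, G1, G2):
    """Compute the four generated tensors of the training flow:
    first  = G1(x), third  = G2(first)   (image-sample branch)
    second = G2(y), fourth = G1(second)  (target-sample branch)."""
    first = G1(x)
    third = G2(first)
    second = G2(y)
    fourth = G1(second)
    return first, second, third, fourth
```

If the two generators are perfect inverses of each other, the third generated data returns to the image sample and the fourth returns to the target sample, which is exactly what the cycle-consistency loss below encourages.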
  • FIG. 3 is a schematic flowchart of an image processing method provided by an embodiment of the present application.
  • the image processing method may include parts 301 to 304, in which:
  • the client 101 sends an image processing request to the server 102.
  • the server 102 receives an image processing request from the client 101.
  • the image processing request includes the projection data to be processed.
  • the projection data is used to reconstruct the image, where the abscissa of the to-be-processed projection data is the detector distance, and the ordinate is the imaging angle.
  • The projection data to be processed is data of an under-sampled image produced by reducing the injection dose of the radioactive tracer or shortening the patient's scanning time during the PET imaging process.
  • The under-sampled image has lower imaging quality and poorer definition.
  • Alternatively, the server 102 may automatically obtain the generated projection data to be processed, which is not limited in this application.
  • The client 101 may also be an imaging client connected to the PET imaging device.
  • the server 102 performs domain transformation processing on the projection data to be processed to obtain image data.
  • the server 102 performs domain conversion processing on the projection data to be processed through a domain conversion algorithm to obtain image data.
  • The domain transformation algorithm may include, but is not limited to, the back projection (BP) algorithm.
  • the generated image data can be a sinusoidal image.
  • The generated image data can be used as input data to the trained image processing model to obtain the reconstructed image result.
  • performing domain transformation processing on the projection data to be processed can save network resources.
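As an illustration of this domain transformation step, here is a minimal unfiltered back-projection sketch in NumPy, assuming a parallel-beam sinogram laid out as angle × detector position as described above (the function name and layout are illustrative, not the patent's implementation):

```python
import numpy as np

def back_project(sinogram):
    """Naive unfiltered back-projection of an (n_angles, n_det) sinogram
    onto an n_det x n_det pixel grid."""
    n_angles, n_det = sinogram.shape
    angles = np.linspace(0.0, np.pi, n_angles, endpoint=False)
    coords = np.arange(n_det) - (n_det - 1) / 2.0   # detector/pixel axis
    x, y = np.meshgrid(coords, coords)
    image = np.zeros((n_det, n_det))
    for a, theta in enumerate(angles):
        # detector coordinate that each pixel projects onto at this angle
        t = x * np.cos(theta) + y * np.sin(theta)
        image += np.interp(t, coords, sinogram[a])
    return image / n_angles
```

In practice a filtered back-projection or iterative method would be used; this unfiltered version only demonstrates the sinogram-to-image domain change.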
  • the server 102 calls the image processing model to perform image reconstruction processing on the image data, and obtains an image result.
  • the server 102 calls the image processing model to perform image reconstruction processing on the image data, and obtains the reconstructed image result.
  • the image processing model is obtained by training the image processing model according to image sample data and target sample data.
  • the image sample data is the input data during the model training process
  • the target sample data is the output data expected through training.
  • The training process constructs the mapping relationship between the input data and the output data; when the image data is used as the input data,
  • the output result is the reconstructed image result.
  • the image quality and definition of the image result are higher than the image corresponding to the projection data to be processed.
  • Specifically, the first generator may be used to perform the above image reconstruction steps, and the image processing model may be configured in the first generator. Further, m-layer feature extraction may be performed on the image data obtained through domain transformation to obtain encoded data, where m is a positive integer; the encoded data is processed by a residual network to obtain converted data; and n-layer up-sampling processing is performed on the converted data to obtain the image result, where n is a positive integer.
  • the input image data can be reconstructed by the first generator that has been trained, and the reconstructed image result can be obtained, so as to improve the PET imaging quality, which is beneficial to the diagnosis and screening of diseases.
  • Optionally, the up-sampling processing result of the x-th layer of the converted data can also be obtained, where x is a positive integer less than n and less than or equal to m; the feature extraction result of the y-th layer of the image data is obtained, where y is a positive integer less than n and less than or equal to m; and skip-connection processing is performed on the x-th layer up-sampling result and the y-th layer feature extraction result to obtain the skip-connection result, which is input to the (x+1)-th layer.
  • The up-sampling result of the x-th layer is in a mirror relationship with the feature extraction result of the y-th layer; that is, two mirrored results with the same number of feature maps, generated during up-sampling and feature extraction respectively, are skip-connected.
  • In this way, the loss of image detail caused by overly deep network layers during feature extraction and residual network processing can be alleviated, and the difficulty of training the image processing model can be reduced at the same time.
  • the first generator network may include: an encoding part, a transforming part, a decoding part, and a skip linking part.
  • The encoding part is used to perform the above step of m-layer feature extraction on the image data to obtain encoded data; in FIG. 4, m is equal to 3 as an example.
  • The transforming part is used to perform the above step of processing the encoded data through the residual network to obtain converted data; the residual network includes 5 residual blocks as an example.
  • The decoding part is used to perform the step of up-sampling the converted data to obtain the image result; n is equal to 4 as an example.
  • The skip-connection part is used to perform the above steps of skip-connection processing on the x-th layer up-sampling result and the y-th layer feature extraction result.
  • In the encoding part, each layer is composed of two Convolutional Layer (CIL) blocks, so there are 6 CIL blocks in total.
  • A CIL block includes three parts: a convolution layer, a normalization layer, and an activation function layer.
  • the size of the convolution kernel can be 3*3.
  • The convolution step size of the first CIL block of the second layer and of the first CIL block of the third layer is 2; in this way, the size of the image is reduced by half at each of these blocks.
  • the convolution step size of the remaining CIL blocks is 1.
  • The number of convolution kernels of the first CIL block of the second layer and of the first CIL block of the third layer can also be set to twice that of the previous CIL block; for example, if the number of convolution kernels of the second CIL block of the first layer is 32, the number of convolution kernels of the first CIL block of the second layer is 64.
  • the feature maps here are the output feature extraction results of each CIL block.
  • the number of feature maps is the same as the number of convolution kernels, and the number of feature maps output by each layer of this application can be 32, 64, or 128.
  • the obtained encoded data may be processed by the transformation part of the first generator network to obtain the transformed data.
  • The transformation part is composed of a residual network, and the residual network includes at least one residual module; each residual module may include, but is not limited to, two CIL blocks.
  • the step size of the CIL block may be 1, and the number of convolution kernels may be 128.
  • the input data of each residual module can be described as an input tensor, and the output data can be described as an output tensor.
  • the decoding part includes four CIL blocks and two Deconvolution Layer (DIL) blocks.
  • The first layer and the fourth layer each have only one CIL block, while the second and third layers each include one CIL block and one DIL block.
  • The first layer is connected to the residual module of the transforming part, and the fourth layer is the final output layer of the first generator network.
  • For the CIL block, please refer to the above description, which will not be repeated here.
  • The DIL block is composed of a deconvolution layer with a step size of 2, a normalization layer, and an activation function layer; setting the step size to 2 expands the image to twice its original size. Further, the number of convolution kernels of the two DIL blocks can be set to half that of the previous CIL block to reduce the number of feature maps by half; for example, if the number of convolution kernels of the first-layer CIL block is 128, the number of convolution kernels of the first DIL block of the second layer is 64. For the relationship between the number of convolution kernels and the number of feature maps, please refer to the above description. The last layer sets the number of convolution kernels to 1, and the data output after processing by the last layer of convolution kernels can be used as the final image result.
  • Skip-connection processing can also be added.
  • Specifically, skip-connection processing may be performed on the x-th layer up-sampling result and the y-th layer feature extraction result.
  • The two are in a mirror relationship; that is, two mirrored results with the same number of feature maps, generated during up-sampling and feature extraction respectively, are skip-connected.
  • For example, the number of feature maps obtained by the third-layer up-sampling is 32,
  • and the number of feature maps obtained by the first-layer feature extraction is also 32.
  • The first-layer feature extraction result therefore has a mirror-symmetry relationship with the third-layer up-sampling result, and the two are skip-connected to obtain the skip-connection result.
  • The skip-connection result can be used as the input data of the fourth layer in the up-sampling process.
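The layer sizes described above (3 encoding layers, 5 residual blocks, 4 decoding layers, mirrored skip connections) can be traced with a small sketch; the numbers follow the example kernel counts in the text, and the helper name is hypothetical:

```python
def generator_shapes(size=128):
    """Trace (feature maps, spatial size) through the sketched generator.

    Encoder: 3 layers of two CIL blocks; layers 2 and 3 start with a
    stride-2 block that halves the size and doubles the feature maps.
    Decoder: 4 layers; layers 2 and 3 contain a stride-2 DIL block that
    doubles the size and halves the feature maps; layer 4 outputs 1 map.
    """
    enc = [(32, size), (64, size // 2), (128, size // 4)]
    # 5 residual blocks keep (128, size // 4) unchanged
    dec = [(128, size // 4), (64, size // 2), (32, size), (1, size)]
    # mirrored skip connections: same feature-map count and size on both sides
    skips = [(enc[0], dec[2]), (enc[1], dec[1])]
    return enc, dec, skips
```

The trace makes the mirror symmetry explicit: the first encoding layer (32 maps) pairs with the third up-sampling layer (32 maps), and the second (64) with the second (64).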
  • the server 102 sends the image result to the client 101.
  • the server 102 sends the image result to the client 101, and accordingly, the client 101 receives the image result.
  • The resolution of the image result is higher than that of the image corresponding to the projection data to be processed sent by the client 101.
  • In the embodiment of the present application, the server 102 performs domain transformation processing on the to-be-processed projection data in the image processing request to obtain image data.
  • Performing the domain transformation first can save network resources; the image data is then reconstructed through the trained image processing model to generate the reconstructed image result, and the image result is sent to the client 101.
  • The PET image with poor imaging quality is thus reconstructed into a high-definition image, so that the user can perform disease diagnosis and screening based on the reconstructed high-definition image.
  • FIG. 5 is a schematic flowchart of an image processing method provided by an embodiment of the present application. As shown in FIG. 5, the image processing method may include parts 501 to 505, wherein:
  • the server 102 acquires projection sample data.
  • the server 102 may obtain projection sample data from the client 101 or other data storage platforms.
  • the projection sample data and the projection data to be processed in step 301 above belong to the same type of data.
  • For a description of the projection sample data, refer to step 301.
  • The projection sample data is mainly used as sample data input to the image processing model to train the image processing model.
  • The number of projection sample data can be one or more.
  • The projection sample data can also be sent by the client 101 to the server 102, which is not limited here.
  • the server 102 performs domain transformation processing on the projection sample data to obtain image sample data.
  • a domain transformation algorithm can be used to process the projection sample data here to obtain image sample data.
  • the image sample data is obtained by domain transformation of the projection sample data.
  • the server 102 obtains target sample data.
  • The server 102 obtains target sample data, which is target data matched with the image sample data; that is, the target sample data here is the target image data expected to be achieved after image reconstruction of the image sample data through the image processing model.
  • the server 102 trains the image processing model according to the image sample data and the target sample data to obtain a model loss function.
  • The process of model training is mainly based on the cycle-consistency generative adversarial network.
  • The cycle-consistency generative adversarial network mainly includes two generators and two discriminators.
  • The process of this embodiment can be as follows: when the server 102 has acquired the image sample data and the target sample data, the image sample data and the target sample data are input to the first generator and the second generator, respectively; the first discriminator and the second discriminator discriminate the generated results; and the model loss function is obtained from the generation result of each generator and the discrimination result of each discriminator.
  • Specifically, the image sample data may be input into the first generator to obtain the first generated data,
  • the first generated data may be input into the second generator to obtain the third generated data,
  • the target sample data may be input into the second generator to obtain the second generated data, and the second generated data may be input into the first generator to obtain the fourth generated data.
  • By using the first discriminator to discriminate the image sample data and the second generated data, the first discriminant loss function can be obtained;
  • by using the second discriminator to discriminate the target sample data and the first generated data, the second discriminant loss function can be obtained.
  • From these, the final model loss function can be obtained.
  • The discriminator network may include eight convolutional layers and two fully connected layers, where each convolutional layer carries a normalization layer and an activation function layer. The size of the convolution kernel of each convolutional layer is 3*3, and the numbers of convolution kernels from the first layer to the eighth layer in FIG. 6 are 32, 32, 64, 64, 128, 128, 256, and 256 as an example.
  • The data input to the discriminator network is processed by the eight modules, each consisting of a convolutional layer, a normalization layer, and an activation function layer, and is then input to the two fully connected layers.
  • The first fully connected layer may have 1024 unit nodes and carries an activation function layer; the second fully connected layer may have 1 unit node.
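A quick sanity check on the convolutional part of this discriminator: counting weights for the example kernel numbers above, assuming 3*3 kernels and a single-channel input (biases and the fully connected layers ignored; the helper is illustrative only):

```python
def discriminator_param_count(in_ch=1, k=3):
    """Rough conv-weight count for the sketched discriminator: eight k x k
    conv layers with kernel counts 32, 32, 64, 64, 128, 128, 256, 256,
    following the example numbers given for FIG. 6 (biases ignored)."""
    widths = [32, 32, 64, 64, 128, 128, 256, 256]
    total, prev = 0, in_ch
    for w in widths:
        total += prev * w * k * k   # weights of one conv layer
        prev = w
    return total
```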
  • The cycle-consistency loss function of the image processing model may be determined first according to the image sample data and the target sample data.
  • Specifically, the server 102 may obtain the first generated data and the third generated data of the image sample data, and obtain the second generated data and the fourth generated data of the target sample data; for the method of obtaining the first, second, third, and fourth generated data, refer to the above description, which will not be repeated here.
  • The cycle-consistency loss function of the image processing model is obtained according to the image sample data, the target sample data, the third generated data, and the fourth generated data.
  • The expression of the cycle-consistency loss function is:
  • L_CYC(G, F) = ‖F(G(x)) − x‖₂ + ‖G(F(y)) − y‖₂
  • where x represents the image sample data, y represents the target sample data, F(G(x)) represents the third generated data, G(F(y)) represents the fourth generated data, F(G(x)) ≈ x represents forward cycle consistency, G(F(y)) ≈ y represents backward cycle consistency, and ‖·‖₂ represents the L2 norm.
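A minimal sketch of this cycle-consistency loss in NumPy, assuming `G` and `F` are callables standing in for the two generators (the function name is hypothetical; the L2 form follows the text):

```python
import numpy as np

def cycle_consistency_loss(x, y, G, F):
    """L_CYC(G, F) = ||F(G(x)) - x||_2 + ||G(F(y)) - y||_2."""
    forward = np.linalg.norm((F(G(x)) - x).ravel())   # forward cycle: x -> G -> F
    backward = np.linalg.norm((G(F(y)) - y).ravel())  # backward cycle: y -> F -> G
    return forward + backward
```

The loss is zero exactly when the two generators invert each other on the given samples, which is what prevents the adversarial mapping from degenerating.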
  • In this way, the mapping relationship between the image sample data and the target sample data can be enhanced, and the degeneration of adversarial learning can be prevented.
  • In addition, supervised training is performed on the image sample data and the target sample data to determine the supervision loss function of the image processing model.
  • Specifically, the server 102 may perform supervised training on the acquired target sample data and the first generated data to obtain the first supervision result, and perform supervised training on the acquired image sample data and the second generated data to obtain the second supervision result; the supervision loss function of the image processing model is obtained from the first supervision result and the second supervision result.
  • The supervision loss function is:
  • L_SUP(G, F) = ‖G(x) − y‖₂ + ‖F(y) − x‖₂
  • where G(x) is the image generated by the generator G from the source image x, that is, the first generated data; F(y) is the image generated by the generator F from the source image y, that is, the second generated data; ‖G(x) − y‖₂ denotes the first supervision result and ‖F(y) − x‖₂ denotes the second supervision result.
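Under the pairing just described (the first generated data G(x) supervised against y, the second generated data F(y) against x), the supervision loss can be sketched as follows; the exact norm is an assumption, chosen to match the L2 form of the cycle-consistency loss:

```python
import numpy as np

def supervision_loss(x, y, G, F):
    """L_SUP(G, F) = ||G(x) - y||_2 + ||F(y) - x||_2 (reconstructed form)."""
    first = np.linalg.norm((G(x) - y).ravel())   # first supervision result
    second = np.linalg.norm((F(y) - x).ravel())  # second supervision result
    return first + second
```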
  • the image processing model can be further optimized and the deterioration of adversarial learning can be avoided.
  • In addition, the server 102 obtains the first discriminant loss function and the second discriminant loss function of the image processing model, where the first discriminant loss function is the loss function generated when the first discriminator performs discrimination processing on the image sample data and the second generated data,
  • and the second discriminant loss function is the loss function generated when the second discriminator discriminates the target sample data and the first generated data.
  • the model loss function of the image processing model can be determined according to the cyclic consistency loss function, the supervision loss function, the first discriminant loss function, and the second discriminant loss function.
  • The expression of the model loss function is:
  • L_total = L_LSGAN(D_LC, G) + L_LSGAN(D_FC, F) + λ₁·L_CYC(G, F) + λ₂·L_SUP(G, F)
  • where L_LSGAN(D_LC, G) represents the first discriminant loss function, L_LSGAN(D_FC, F) represents the second discriminant loss function, L_CYC(G, F) is the cycle-consistency loss function, L_SUP(G, F) is the supervision loss function, and λ₁ and λ₂ are parameters used to balance the different terms; optionally, λ₁ and λ₂ can both be set to 1.
  • During training, the cycle-consistency loss and the supervision loss can be optimized; specifically, one round of optimization can be performed every time a cycle of training is completed.
  • The optimization algorithms used in the optimization process include, but are not limited to, the Adam algorithm.
  • The Adam algorithm designs independent adaptive learning rates for different parameters, so using it to optimize the cycle-consistency loss and the supervision loss can further optimize the image processing model.
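The per-parameter adaptive learning rates come from Adam's running first and second moment estimates; a minimal single-step sketch of the standard algorithm (not the patent's training code):

```python
import numpy as np

def adam_step(param, grad, m, v, t, lr=1e-3, beta1=0.9, beta2=0.999, eps=1e-8):
    """One Adam update; m and v are the running moment estimates, t >= 1."""
    m = beta1 * m + (1 - beta1) * grad        # first moment (mean of gradients)
    v = beta2 * v + (1 - beta2) * grad ** 2   # second moment (uncentred variance)
    m_hat = m / (1 - beta1 ** t)              # bias correction for the warm-up
    v_hat = v / (1 - beta2 ** t)
    return param - lr * m_hat / (np.sqrt(v_hat) + eps), m, v
```

Because the update is normalized by the second moment, the very first step moves each parameter by roughly the learning rate regardless of the gradient's magnitude, which is the "independent adaptive learning rate" behaviour mentioned above.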
  • the server 102 constructs an image processing model according to the model loss function.
  • the server 102 may perform at least one round of iterative training according to the model training process described in step 504 to optimize the model loss function and construct the image processing model, thereby training the first generator and obtaining its mapping relationship F between the image sample data and the target sample data; using this mapping relationship F, the first generator can reconstruct the image data of step 302 into the higher-definition image result of step 303.
  • after obtaining the projection sample data and the target sample data, the server 102 performs domain transformation processing on the projection sample data to obtain image sample data, inputs the image sample data into the first generator and the target sample data into the second generator, and iteratively trains at least one set of input image sample data and target sample data through the cycle-consistent generative adversarial network.
  • the execution components of the cycle-consistent generative adversarial network include a first generator, a second generator, a first discriminator, and a second discriminator, which are used to optimize the model loss function and construct the image processing model.
  • the image processing model reconstructs the PET image into a higher-definition image while reducing the tracer injection dose and the radiation, so that the user can diagnose and screen diseases based on the reconstructed image.
  • an embodiment of the present application also proposes an image processing device.
  • the image processing device may be a computer program (including program code) running in a processing device; referring to Figure 7, the image processing device may run the following units:
  • the transceiver unit 701 is configured to receive an image processing request from a client, where the image processing request includes to-be-processed projection data, and the image processing request is used to request to reconstruct an image according to the to-be-processed projection data;
  • the processing unit 702 is configured to perform domain transformation processing on the to-be-processed projection data to obtain image data; call an image processing model to perform image reconstruction processing on the image data to obtain an image result, and the image processing model is based on image sample data And target sample data obtained by training the image processing model;
  • the transceiver unit 701 is also configured to send the image result to the client.
  • the processing unit 702 may also be used to obtain projection sample data, and perform domain transformation processing on the projection sample data to obtain the image sample data;
  • Target sample data is target data matching the image sample data
  • the image processing model is constructed.
  • the image processing model is invoked to perform image reconstruction processing on the image data to obtain an image result
  • the processing unit 702 may also be used to perform m-layer feature extraction on the image data to obtain encoded data, where m is a positive integer; obtain converted data of the encoded data, the converted data being obtained by passing the encoded data through a residual network; and perform n-layer upsampling processing on the converted data to obtain the image result, where n is a positive integer.
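Assuming the stage layout described for the generator (three encoder stages with 32/64/128 filters where stages 2 and 3 use stride-2 convolutions, five shape-preserving residual blocks, and a decoder whose two stride-2 deconvolutions restore the original size before a final single-kernel convolution), the shape bookkeeping can be traced with a small helper; the function is illustrative, not part of the patent:

```python
def generator_shape_trace(h, w):
    """Trace (name, height, width, channels) through the generator:
    3 encoder stages, 5 residual blocks, then a 4-stage decoder whose
    two deconvolution stages double the spatial size back."""
    trace = [("input", h, w, 1)]
    # encoder: stage 1 keeps the size; stages 2 and 3 halve it (stride 2)
    for stage, ch in enumerate([32, 64, 128], start=1):
        if stage > 1:
            h, w = h // 2, w // 2
        trace.append((f"enc{stage}", h, w, ch))
    for i in range(1, 6):                      # residual blocks: 128 channels
        trace.append((f"res{i}", h, w, 128))
    # decoder: two stride-2 deconvolutions double the size, halving channels
    for stage, ch in [(2, 64), (3, 32)]:
        h, w = h * 2, w * 2
        trace.append((f"dec{stage}", h, w, ch))
    trace.append(("output", h, w, 1))          # final conv with one kernel
    return trace
```

For a 256x256 input this recovers a 256x256 single-channel output, matching the symmetry that the mirrored skip connections rely on.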
  • the processing unit 702 may also be used to obtain an upsampling processing result of the x-th layer of the converted data, where x is a positive integer less than n and less than or equal to m;
  • the image processing model is trained according to the image sample data and the target sample data to obtain a model loss function
  • the processing unit 702 can also be used to obtain a model loss function according to the image sample data and the target sample data.
  • the target sample data determine the cyclic consistency loss function of the image processing model
  • the first discriminant loss function is the loss function of the first discriminator
  • the second discriminant loss function is the loss function of the second discriminator
  • to determine the cyclic consistency loss function of the image processing model according to the image sample data and the target sample data, the processing unit 702 may also be used to obtain the first generated data of the image sample data and the second generated data of the target sample data, where the first generated data is obtained by processing the image sample data with a first generator and the second generated data is obtained by processing the target sample data with a second generator;
  • the third generated data is obtained by processing the first generated data with the second generator, and the fourth generated data is obtained by processing the second generated data with the first generator;
  • the cyclic consistency loss function of the image processing model is obtained.
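With the L2-norm convention the patent uses, the cycle consistency loss can be sketched as follows, treating the generators G and F as plain callables:

```python
import numpy as np

def cycle_consistency_loss(x, y, G, F):
    """L_CYC = ||F(G(x)) - x||_2 + ||G(F(y)) - y||_2 for one sample pair."""
    x, y = np.asarray(x, float), np.asarray(y, float)
    forward = np.linalg.norm(F(G(x)) - x)   # forward cycle: x -> G(x) -> F(G(x))
    backward = np.linalg.norm(G(F(y)) - y)  # backward cycle: y -> F(y) -> G(F(y))
    return forward + backward
```

When F inverts G exactly (and vice versa), both cycle terms vanish, which is the consistency the training enforces.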
  • the supervised training is performed on the image sample data and the target sample data to determine the supervised loss function of the image processing model; the processing unit 702 can also be used to obtain the first generated data of the image sample data and perform supervised training on the target sample data and the first generated data to obtain a first supervised result;
  • the supervision loss function of the image processing model is obtained.
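The supervision loss pairs each generated image with its ground truth; a minimal sketch under the same L2-norm convention, again with G and F as plain callables:

```python
import numpy as np

def supervision_loss(x, y, G, F):
    """L_SUP = ||G(x) - y||_2 + ||F(y) - x||_2: paired supervision pulling
    G(x) toward the target y and F(y) back toward the source x."""
    x, y = np.asarray(x, float), np.asarray(y, float)
    first = np.linalg.norm(G(x) - y)   # first supervision result
    second = np.linalg.norm(F(y) - x)  # second supervision result
    return first + second
```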
  • part of the steps involved in the image processing method shown in FIGS. 3 and 5 may be executed by the processing unit in the image processing device.
  • steps 301 and 304 shown in FIG. 3 may be executed by the transceiver unit 701; for another example, step 302 shown in FIG. 3 may be executed by the processing unit 702.
  • each unit in the image processing device can be separately or wholly combined into one or several additional units, or one or more of the units can be further split into multiple functionally smaller units; either arrangement achieves the same operations without affecting the technical effects of the embodiments of the present application.
  • FIG. 8 is a schematic structural diagram of an image processing device provided by an embodiment of the present application.
  • the image processing device includes a processor 801, a memory 802, and a communication interface 803.
  • the processor 801, the memory 802, and the communication interface 803 are connected through at least one communication bus, and the processor 801 is configured to support the processing device in performing the corresponding functions of the processing device in the methods of FIG. 3 and FIG. 5.
  • the memory 802 is configured to store at least one instruction suitable for being loaded and executed by the processor, and these instructions may be one or more computer programs (including program codes).
  • the communication interface 803 is used for receiving data and for sending data.
  • the communication interface 803 is used to send image processing requests and the like.
  • the processor 801 may call the program code stored in the memory 802 to perform the following operations:
  • the image processing request includes projection data to be processed
  • the image processing request is used to request to reconstruct an image according to the projection data to be processed
  • an image processing model to perform image reconstruction processing on the image data to obtain an image result
  • the image processing model being obtained by training the image processing model according to the image sample data and the target sample data
  • the image result is sent to the client through the communication interface 803.
  • the processor 801 may call the program code stored in the memory 802 to perform the following operations:
  • Target sample data is target data matching the image sample data
  • the image processing model is constructed.
  • the image processing model is invoked to perform image reconstruction processing on the image data to obtain an image result
  • the processor 801 may invoke the program code stored in the memory 802 to perform the following operations:
  • perform n-layer upsampling processing on the converted data to obtain the image result, where n is a positive integer.
  • the processor 801 may call the program code stored in the memory 802 to perform the following operations:
  • the image processing model is trained according to the image sample data and the target sample data to obtain a model loss function
  • the processor 801 can call the program code stored in the memory 802 to perform the following operations:
  • the first discriminant loss function is the loss function of the first discriminator
  • the second discriminant loss function is the loss function of the second discriminator
  • the model loss function of the image processing model is determined according to the cyclic consistency loss function, the supervision loss function, the first discriminant loss function, and the second discriminant loss function.
  • the cyclic consistency loss function of the image processing model is determined according to the image sample data and the target sample data, and the processor 801 can call the program code stored in the memory 802 to perform the following operations:
  • the first generated data is obtained by processing the image sample data with a first generator, and the second generated data is obtained by processing the target sample data with the second generator;
  • the third generated data is obtained by processing the first generated data with the second generator, and the fourth generated data is obtained by processing the second generated data with the first generator;
  • the cyclic consistency loss function of the image processing model is obtained.
  • the supervised training is performed on the image sample data and the target sample data to determine the supervised loss function of the image processing model
  • the processor 801 can call the program code stored in the memory 802 to perform the following operations:
  • the supervision loss function of the image processing model is obtained.
  • the embodiment of the present application also provides a computer-readable storage medium (Memory), which can be used to store the computer software instructions used by the processing device in the embodiments shown in FIG. 3 and FIG. 5.
  • these instructions may be one or more computer programs (including program codes).
  • the above-mentioned computer-readable storage medium includes, but is not limited to, flash memory, hard disk, and solid-state hard disk.
  • in the above embodiments, implementation may be in whole or in part by software, hardware, firmware, or any combination thereof.
  • when implemented in software, it can be realized in whole or in part in the form of a computer program product.
  • the computer program product includes one or more computer instructions.
  • the computer can be a general-purpose computer, a special-purpose computer, a computer network, or other programmable devices.
  • the computer instructions can be stored in a computer-readable storage medium or transmitted through a computer-readable storage medium.
  • Computer instructions can be transmitted from one website, computer, server, or data center to another website, computer, server, or data center via wired means (such as coaxial cable, optical fiber, or digital subscriber line (DSL)) or wireless means (such as infrared, radio, or microwave).
  • the computer-readable storage medium may be any available medium that can be accessed by a computer or a data storage device such as a server or a data center integrated with one or more available media.
  • the usable medium may be a magnetic medium (for example, a floppy disk, hard disk, or magnetic tape), an optical medium (for example, a DVD), or a semiconductor medium (for example, a solid state disk (SSD)).


Abstract

This application discloses an image processing method and device and a computer-readable storage medium. The method may include: receiving an image processing request from a client, the request including projection data to be processed and requesting that an image be reconstructed from that data; performing domain transformation processing on the projection data to obtain image data; calling an image processing model to perform image reconstruction processing on the image data to obtain an image result, the image processing model having been trained on image sample data and target sample data; and sending the image result to the client. The embodiments of this application reduce the tracer injection dose and the radiation while improving the imaging quality of positron emission tomography, which benefits the diagnosis and screening of diseases.

Description

Image processing method and device, and computer-readable storage medium
Technical Field
This application relates to the field of big data technology, and in particular to an image processing method, a device, and a computer-readable storage medium.
Background
Positron emission tomography (Positron Emission Computed Tomography, PET) observes molecular-level activity in human tissue by injecting a radioactive tracer into the body, so as to diagnose and screen diseases. PET is a commonly used medical imaging modality, but it also has drawbacks, such as exposing the patient to radiation and the high cost of the tracer. Current solutions to these problems mainly consist of reducing the injected tracer dose or shortening the scan time; however, both approaches degrade the quality of the PET image, which is unfavorable for the diagnosis and screening of diseases.
Summary of the Application
This application discloses an image processing method, a device, and a computer-readable storage medium that improve the imaging quality of positron emission tomography while reducing the tracer injection dose and the radiation, benefiting the diagnosis and screening of diseases.
第一方面,本申请提供一种图像处理方法,包括:
接收客户端的图像处理请求,所述图像处理请求包括待处理投影数据,所述图像处理请求用于请求根据所述待处理投影数据重建图像;
对所述待处理投影数据进行域变换处理,得到图像数据;
调用图像处理模型对所述图像数据进行图像重建处理,得到图像结果,所述图像处理模型是根据图像样本数据及目标样本数据对所述图像处理模型进行训练得到的;
发送所述图像结果至所述客户端。
在该技术方案中,客户端发送包括待处理投影数据的图像处理请求至服务器,以使服务器对该待处理投影数据进行域变换,得到图像数据,并将图像数据通过已训练的图像处理模型对该图像数据进行图像重建,得到图像结果,将该图像结果返回至客户端。通过这种方法,可以将成像质量较差的PET图像进行图像重建生成清晰度较高图像,以使用户根据该清晰度较高的图像进行疾病 诊断和筛查。
第二方面,本申请提供一种图像处理装置,包括:
收发单元,用于接收客户端的图像处理请求,所述图像处理请求包括待处理投影数据,所述图像处理请求用于请求根据所述待处理投影数据重建图像;
处理单元,用于对所述待处理投影数据进行域变换处理,得到图像数据;调用图像处理模型对所述图像数据进行图像重建处理,得到图像结果,所述图像处理模型是根据图像样本数据及目标样本数据对所述图像处理模型进行训练得到的;
所述收发单元,还用于发送所述图像结果至所述客户端。
第三方面,本申请提供一种图像处理装置,包括处理器、存储器和通信接口,所述处理器、所述存储器和所述通信接口相互连接,其中,所述存储器用于存储计算机程序,所述计算机程序包括程序指令,所述处理器被配置用于调用所述程序指令,执行如第一方面所描述的方法。该处理设备解决问题的实施方式以及有益效果可以参见上述第一方面所描述的方法以及有益效果,重复之处不再赘述。
第四方面,本申请提供一种计算机可读存储介质,其特征在于,所述计算机可读存储介质存储有一条或多条第一指令,所述一条或多条第一指令适于由处理器加载并执行如第一方面所描述的方法。
本申请实施例中,客户端发送图像处理请求至服务器,该图像处理请求包括待处理投影数据,服务器根据该图像处理请求对待处理投影数据进行域变换,得到图像数据,此处在输入至图像处理模型之前先对待处理投影数据进行域变换处理可以节省网络资源;将图像数据通过已训练的图像处理模型对该图像数据进行图像重建,得到图像结果,将该图像结果返回至客户端,其中,图像处理模型的训练方法为:通过循环一致性生成对抗网络对输入的至少一组图像样本数据及目标样本数据进行迭代训练,该循环一致性生成对抗网络的执行设备包括:第一生成器、第二生成器、第一判别器及第二判别器,以优化模型损失函数,使该图像处理模型实现了在减少示踪剂注射剂量,降低辐射的基础上,更准确、清晰的重建一些成像质量低的PET图像,以使用户根据重建后更为清晰的图像进行疾病诊断和筛查。
附图说明
为了更清楚地说明本申请实施例中的技术方案,下面将对实施例中所需要使用的附图作简单地介绍,显而易见地,下面描述中的附图仅仅是本申请的一些实施例,对于本领域普通技术人员来讲,在不付出创造性劳动的前提下,还可以根据这些附图获得其他的附图。
图1是本申请实施例提供的一种图像处理系统的架构图;
图2是本申请实施例提供的另一种图像处理系统的架构图;
图3是本申请实施例提供的一种图像处理方法的流程图;
图4是本申请实施例提供的一种生成器网络的结构示意图;
图5是本申请实施例提供的另一种图像处理方法的流程图;
图6是本申请实施例提供的一种判别器网络的结构示意图;
图7是本申请实施例提供的一种图像处理装置的结构示意图;
图8是本申请实施例提供的另一种图像处理装置的结构示意图。
具体实施方式
下面将结合本申请实施例中的附图,对本申请实施例中的技术方案进行清楚、完整地描述,显然,所描述的实施例是本申请一部分实施例,而不是全部的实施例。基于本申请中的实施例,本领域普通技术人员在没有做出创造性劳动前提下所获得的所有其他实施例,都属于本申请保护的范围。
本申请的说明书和权利要求书及上述附图中的术语“第一”、“第二”、“第三”等是用于区别不同对象,而非用于描述特定顺序。此外,术语“包括”以及它们任何变形,意图在于覆盖不排他的包含。例如包含了一系列步骤或模块的过程、方法、系统、产品或装置没有限定于已列出的步骤或模块,而是可选地还包括没有列出的步骤或模块,或可选地还包括对于这些过程、方法、产品或装置固有的其它步骤或模块。
目前,在使用PET进行疾病筛查及诊断的过程中,通过减少放射性示踪剂的注射剂量或减少对患者的扫描时间,可以带来很多优势,例如:可以减小辐射对患者产生的危害,减低示踪剂成本,减少患者的生理运动在成像时产生的运动伪影及提高PET扫描仪的效率。但减少放射性示踪剂的注射剂量或减少对患者的扫描时间均会造成PET成像质量的下降,例如:图像噪声增加、清晰度不足,等等。
为解决上述问题,本申请实施例提供一种图像处理方法,该图像处理方法 通过已训练的图像处理模型对质量较差的PET图像进行处理,生成清晰度较高的图像,有利于用户根据该清晰度较高的图像进行疾病诊断和筛查,此处的质量较差的PET图像也可以描述为低计数投影图像,在本申请实施例中还可以描述为待处理投影数据。可选的,本实施方式也可以应用于单光子发射计算机断层成像术(Single-Photon Emission Computed Tomography,SPECT)及电子计算机断层扫描(Computed Tomography,CT)图像的重建上,此处不做限制。
具体的,可以通过循环一致性生成对抗网络对若干投影样本数据进行处理,得到投影样本数据与真实图像的映射关系,基于该映射关系,对待处理投影数据进行图像重建,生成清晰度较高的图像。在本申请实施例中,真实图像也可以描述为目标样本数据,该目标样本数据为期望生成的目标图像,即通过图像处理模型对投影样本数据进行图像重建时,期望得到的图像结果。
上述所提及的图像处理方法可应用于如图1所示的图像处理系统中,该图像处理系统可包括客户端101及服务器102。该客户端101的形态和数量用于举例,并不构成对本申请实施例的限定。例如,可以包括两个客户端101。
其中,客户端101可以为向服务器102发送图像处理请求的客户端,也可以为在图像处理模型训练时,用于为服务器102提供样本数据的客户端,该客户端可以为以下任一种:终端、独立的应用程序、应用程序编程接口(Application Programming Interface,API)或者软件开发工具包(Software Development Kit,SDK)。其中,终端可以包括但不限于:智能手机(如Android手机、IOS手机等)、平板电脑、便携式个人计算机、移动互联网设备(Mobile Internet Devices,MID)等设备,本申请实施例不做限定。服务器102可以包括但不限于集群服务器。
在本申请的实施例中,客户端101向服务器102发送图像处理请求,服务器102根据该图像处理请求所包含的待处理投影数据重建图像,具体的,对该待处理投影数据做域变换处理,得到图像数据,通过预先训练好的图像处理模型对该图像数据进行处理,得到重建的图像结果,将该图像结果发送至客户端101,以使客户端101的操作用户103可以根据该图像结果进行疾病诊断和筛查。
该图像处理系统的框架图可参见图2所示,该图像处理系统的框架图可包括图像处理模型训练过程的系统框架及利用训练好的图像处理模型进行图像重建过程的系统框架。其中,图像处理模型的训练主要基于循环一致性生成对抗网络,其训练的过程主要包括:将投影样本数据进行域变换处理,得到图像样本 数据,将图像样本数据输入第一生成器得到第一生成数据,进一步的,将第一生成数据输入第二生成器得到第三生成数据;另一方面,确定目标样本数据,目标样本数据为通过对图像处理模型的训练所期望达到的目标图像,将该目标样本数据输入第二生成器得到第二生成数据,进一步的,将第二生成数据输入第二生成器得到第四生成数据。通过第一判别器对图像样本数据及第二生成数据进行判别处理,可以得到第一判别损失;通过第二判别器对目标样本数据及第一生成数据进行判别处理,可以得到第二判别损失。在得到上述的第一生成数据、第二生成数据、第三生成数据及第四生成数据、第一判别损失及第二判别损失的情况下,可以得到最终的模型损失函数,通过对上述训练过程执行至少一次的迭代训练,可以优化模型损失函数,得到图像样本数据与目标样本数据的映射关系,达到训练第一生成器的目的,则在图像重建过程中,可以利用上述训练完成的第一生成器进行图像重建,该图像重建的方法包括:对待处理投影数据进行域变换处理,得到图像数据,将图像数据输入至第一生成器,经过第一生成器对该图像数据进行图像重建处理,可以得到重建后的图像结果。
请参见图3,图3是本申请实施例提供的一种图像处理方法的流程示意图,如图3所示,该图像处理方法可以包括301~304部分,其中:
301、客户端101发送图像处理请求至服务器102。
具体的,客户端101发送图像处理请求至服务器102,相应的,服务器102接收来自客户端101的图像处理请求,该图像处理请求包括待处理投影数据,该图像处理请求用于请求根据该待处理投影数据重建图像,其中,该待处理投影数据的横坐标为探测器距离,纵坐标为成像角度。待处理投影数据为在PET成像过程中,由于减少了放射性示踪剂的注射剂量或减少对患者的扫描时间而产生的欠采样图像的数据,该欠采样图像的成像质量较低,清晰度较差。可选的,也可以在客户端101在获取到PET图像后,由服务器102自动获取生成的待处理投影数据,本申请不做限制,其中,客户端101也可以为与PET成像设备相连接的成像客户端。
302、服务器102对待处理投影数据进行域变换处理,得到图像数据。
具体的,服务器102通过域变换算法对待处理投影数据进行域变换处理,得到图像数据。其中,该域变换算法可以包括但不限于反向传播(Back Propagation,BP)算法。待处理投影数据的相关描述可以参见步骤301中的描述,生成的图 像数据可以为一种正弦图像,该生成的图像数据可以作为输入数据,输入至训练完成的图像处理模型,以得到重建后的图像结果。此处在输入至图像处理模型之前先对待处理投影数据进行域变换处理可以节省网络资源。
303、服务器102调用图像处理模型对图像数据进行图像重建处理,得到图像结果。
具体的,服务器102调用图像处理模型对图像数据进行图像重建处理,得到重建后的图像结果。该图像处理模型是根据图像样本数据及目标样本数据对所述图像处理模型进行训练得到的。其中,图像样本数据为在模型训练过程中的输入数据,目标样本数据为通过训练期望得到的输出数据,则训练过程可以构建输入数据与输出数据之间的映射关系,将图像数据作为输入数据输入至图像处理模型的情况下,输出的结果即为重建的图像结果。该图像结果的图像质量与清晰度均高于待处理投影数据所对应的图像。
作为一种可选的实施方式,可以利用第一生成器执行上述图像重建步骤,则该图像处理模型可以被配置于第一生成器中。进一步的,可以对上述经过域变换得到的图像数据进行m层特征提取,得到编码数据,其中,m为正整数。并将编码数据经过残差网络处理,得到转换数据。对该转换数据进行n层上采样处理,得到图像结果,其中,n为正整数。
通过执行本实施方式,可以利用已完成训练的第一生成器对输入的图像数据进行重建,得到重建后的图像结果,以提高PET成像质量,有利于疾病的诊断与筛查。
作为一种可选的实施方式,在执行对转换数据进行n层上采样处理,得到图像结果的过程中,还可以获取转换数据的第x层的上采样处理结果,x为小于n且小于或等于m的正整数;并获取图像数据的第y层的特征提取结果,y为小于n且小于或等于m的正整数;对第x层上采样处理结果与第y层特征提取结果进行跳跃连接处理,得到跳跃链接结果,并将该结果输入至第(x+1)层。可选的,第x层上采样处理结果与第y层的特征提取结果成镜像关系,即对在上采样及特征提取过程中生成的特征图的数量相同的两个镜像结果进行跳跃链接。
通过执行本实施方式,可以解决在对图像数据进行特征提取及残差网络处理时,由于网络层数太深而导致的图像数据细节丢失的问题,同时也可以降低图像处理模型的训练难度。
进一步的,上述可选的实施方式中涉及的第一生成器网络的结构示意图可 参见图4,该第一生成器网络可以包括:编码部分、变换部分、解码部分及跳跃链接部分。其中,编码部分用于执行上述对图像数据进行m层特征提取,得到编码数据的步骤,在图4中以m等于3为例;变换部分用于执行上述将编码数据经过残差网络处理,得到转换数据的步骤,在图4中以残差网络包括5个残差块为例;解码部分用于执行对转换数据进行n层上采样处理,得到图像结果的步骤,在图4中以n等于4为例。跳跃链接部分用于执行上述对第x层上采样处理结果与第y层特征提取结果进行跳跃连接处理,得到图像数据的步骤。
进一步的,在执行本步骤时,将图像数据输入至第一生成器后,可以通过第一生成器网络的编码部分对图像数据进行3层特征提取,得到编码数据。具体的,每层均由两个卷积层(Convolutional Layer,CIL)块组成,则共有6个CIL块。每个CIL块包括卷积层、归一化层及激活函数层三部分。可选的,卷积内核的大小可以为3*3。第二层第一个CIL块及第三层第一个的CIL的卷积步长为2,通过该方式,可以将图像的尺寸减小一半。而其余CIL块的卷积步长为1。进一步的,也可以将第二层第一个CIL块及第三层第一个的CIL的卷积核的数量设置为上一个CIL块的2倍,例如:第一层第二个CIL块的卷积核数为32,第二层第一个CIL块的卷积核数为64,通过该方式,可以扩展特征图的数量,此处的特征图为每个CIL块的输出的特征提取结果,特征图数量与卷积核的数量相同,则本申请每层输出的特征图的数量可以为32个、64个、128个。
进一步的,在完成上述编码部分后,可以通过第一生成器网络的变换部分对得到的编码数据进行处理,得到转换数据。具体的,该变换部分由残差网络构成,该残差网络包括至少一个残差模块,在本申请实施例中以五个残差模块为例,其中,每个残差模块可以包括但不限于两个CIL块,关于CIL块的相关描述可参见上述描述,此处不赘述。可选的,该CIL块的步长可以为1,卷积核数量可以为128。在执行过程中,每个残差模块的输入数据可以描述为输入张量,输出数据可以描述为输出张量。通过引入残差网络执行本实施例所描述的方法,可以使第一生成器网络更深,表达能力更强,输出的重建结果更为准确。
进一步的,在完成上述变换部分后,可以通过第一生成器网络的解码部分对得到的转换数据进行四层上采样处理,得到图像结果。具体的,该解码部分包括四个CIL块及两个反卷积(Deconvolution Layer,DIL)块,其中,第一层及第四层均只有一个CIL块,而第二层及第三层均包括一个CIL块及一个DIL块。且第一层与残差模块相连接,为变换部分的第一个输入层,第四层为第一生成 器网络的最终输出层,此处CIL块的相关描述可参见上述描述,此处不赘述。DIL块则由步长为2反卷积层、归一化层及激活函数层组成,将步长设置为2可以将图像的尺寸扩展为原来的2倍。进一步的,也可以将两个DIL块的卷积核数量设置为上一个CIL块的一半,以将特征图的数量减少一半,例如:第一层CIL块的卷积核数为128,第二层第一个DIL块的卷积核数为64。此处的卷积核数量及特征图数量的相关描述可参见上述描述,此处不赘述。其中,最后一层将卷积核数设置为1,则通过该最后一层的卷积核处理后输出的数据可以作为最终的图像结果。
进一步的,在执行上述解码部分的过程中,也可以加入跳跃链接处理。具体的,可以对第x层上采样处理结果与第y层特征提取结果进行跳跃连接处理。其中第x层上采样处理结果与第y层的特征提取结果成镜像关系,即对在上采样及特征提取过程中生成的特征图的数量相同的两个镜像结果进行跳跃链接。例如:当x为3,y为1时,第三层上采样处理得到的特征图的数量为32,第一层特征提取得到的特征图的数量也为32,此时第一层特征提取结果与第三层上采样结果成镜像对称关系,则对第三层上采样处理结果与第一层特征提取结果进行跳跃链接,得到跳跃链接结果,该跳跃链接结果可以作为上采样过程中第四层的输入数据。
304、服务器102发送图像结果至客户端101。
具体的,在通过第一生成器网络得到重建后的图像结果的情况下,服务器102发送该图像结果至客户端101,相应的,客户端101接收该图像结果。该图像结果的清晰度要高于客户端101发送的待处理投影数据所对应的图像。
可见,通过实施图3所描述的方法,客户端101在发送图像处理请求后,服务器102对图像处理请求中的待处理数据进行域变换处理,得到图像数据,此处在输入至图像处理模型之前先对待处理投影数据进行域变换处理可以节省网络资源;通过已训练好的图像处理模型对图像数据进行图像重建,生成重建后的图像结果,将该图像结果发送至客户端101。通过本实施例的方法,在实现减少示踪剂注射剂量,降低辐射的同时,将成像质量较差的PET图像进行图像重建生成清晰度较高图像,以使用户根据该清晰度较高的图像进行疾病诊断和筛查。
请参见图5,图5是本申请实施例提供的一种图像处理方法的流程示意图,如图5所示,该图像处理方法可以包括501~505部分,其中:
501、服务器102获取投影样本数据。
具体的,服务器102可以从客户端101或其他数据存储平台获取投影样本数据,该投影样本数据与上述步骤301中的待处理投影数据属于一类数据,则此处的投影样本数据可参见步骤301中待处理投影数据的相关描述,该投影样本数据主要用于作为样本数据输入至图像处理模型中,以对该图像处理模型进行训练,其数量可以为一个或多个。可选的,该样本投影数据也可以由客户端101发送至服务器102,此处不做限制。
502、服务器102对投影样本数据进行域变换处理,得到图像样本数据。
具体的,此处可以采用域变换算法对投影样本数据进行处理,得到图像样本数据,该图像样本数据为投影样本数据经过域变换得到的,此处的域变换生成图像样本数据的过程可以参见上述步骤302中的相关描述,此处不赘述。
503、服务器102获取目标样本数据。
具体的,服务器102获取目标样本数据,该目标样本数据为与图像样本数据相匹配的目标数据。即此处的目标样本数据为通过图像处理模型对图像样本数据进行图像重建后,期望达到的目标图像的数据。
504、服务器102根据图像样本数据及目标样本数据对图像处理模型进行训练,得到模型损失函数。
具体的,该模型训练的过程主要基于循环一致性生成对抗网络,该循环一致性生成对抗网络主要包括两个生成器及两个判别器,则执行本实施方式的过程可以为,服务器102在获取到图像样本数据及目标样本数据的情况下,将该图像样本数据及目标样本数据分别输入第一生成器及第二生成器,并通过第一判别器及第二判别器对生成结果进行判别,通过各个生成器的生成结果及各个判别器的判别结果,得到模型损失函数。进一步的,可以将图像样本数据输入第一生成器得到第一生成数据,并将第一生成数据输入第二生成器得到第三生成数据;另一方面,将目标样本数据输入第二生成器得到第二生成数据,并将第二生成数据输入第二生成器得到第四生成数据。通过第一判别器对图像样本数据及第二生成数据进行判别处理,可以得到第一判别损失函数;通过第二判别器对目标样本数据及第一生成数据进行判别处理,可以得到第二判别损失函数。在得到上述的第一生成数据、第二生成数据、第三生成数据及第四生成数据、第一判别损失函数及第二判别损失函数的情况下,可以得到最终的模型损失函数。上述步骤中,第一判别器及第二判别器所包含的判别器网络的结构示意图 可参见图6,该判别器网络可以包括八个卷积层及两个全连接层。其中,每个卷积层后均携带有一个归一化层及一个激活函数层。该卷积层的卷积核的大小均为3*3,且在图6中第一层至第八层的卷积核数分别以32、32、64、64、128、128、256、256为例。在实际操作过程中,输入判别器网络的数据在经过八个由卷积层、归一化层及激活函数层组成的模块处理后,再输入至两个全连接层,其中,第一个全连接层可以有1024个单位节点,该第一全连接层携带有一个激活函数层;第二个全连接层可以有1个单位节点。第一生成器及第二生成器所包含的生成器网络的相关描述可参见上述步骤303中的相关描述,此处不赘述。
作为一种可选的实施方式,在执行获取模型损失函数的步骤时,可以先根据图像样本数据及目标样本数据,确定图像处理模型的循环一致性损失函数。具体的,服务器102可以获取图像样本数据的第一生成数据及第三生成数据,并获取目标样本数据的第二生成数据及第四生成数据,该第一生成数据、第二生成数据、第三生成数据及第四生成数据的获取方法可参见上述描述,此处不赘述。则根据图像样本数据、目标样本数据、第三生成数据及第四生成数据,得到图像处理模型的循环一致性损失函数。该循环一致性网络的损失函数的表达式为:
L_CYC(G, F) = E_x[‖F(G(x)) − x‖₂] + E_y[‖G(F(y)) − y‖₂]
其中，x表示图像样本数据，y表示目标样本数据，F(G(x))表示第三生成数据，G(F(y))表示第四生成数据，F(G(x))≈x表示前向周期一致性，G(F(y))≈y表示后向周期一致性，‖·‖₂表示L₂范数。这种周期一致性可以防止对抗性学习的恶化，其中E_x[‖F(G(x)) − x‖₂]表示前向周期一致性的损失结果，E_y[‖G(F(y)) − y‖₂]表示后向周期一致性的损失结果。
通过执行该实施方式获取循环一致性损失函数,可以增强图像样本数据与目标样本数据之前的映射关系,防止对抗性学习的恶化。
进一步的,对图像样本数据及目标样本数据分别进行监督训练,确定图像处理模型的监督损失函数。具体的,服务器102可以对获取到的目标样本数据及第一生成数据进行监督训练,得到第一监督结果;并对获取到的图像样本数据及第二生成数据进行监督训练,得到第二监督结果;根据第一监督结果及第二监督结果,得到图像处理模型的监督损失函数。其中,第一生成数据、第二生 成数据、目标样本数据及图像样本数据的获取方式可以参见上述步骤502、503及504的相关描述,此处不赘述。该监督损失函数的表达式为:
L_SUP(G, F) = E[‖G(x) − y‖₂] + E[‖F(y) − x‖₂]
其中，G(x)是通过生成器G由源图像x生成的、接近y的图像，即第一生成数据；F(y)是通过生成器F由源图像y生成的、接近x的图像，即第二生成数据；E[‖G(x) − y‖₂]表示第一监督结果，E[‖F(y) − x‖₂]表示第二监督结果。
通过执行该实施方式获取监督损失函数,可以进一步优化该图像处理模型,避免对抗性学习的恶化。
进一步的,服务器102获取图像处理模型的第一判别损失函数及第二判别损失函数,其中,第一判别损失函数为第一判别器对图像样本数据及第二生成数据进行判别处理时,生成的损失函数。第二判别损失函数为第二判别器对目标样本数据及第一生成数据进行判别时,生成的损失函数。则可以根据循环一致性损失函数、监督损失函数、第一判别损失函数及第二判别损失函数,确定图像处理模型的模型损失函数。该模型损失函数的表达式为:
L_total = L_LSGAN(D_LC, G) + L_LSGAN(D_FC, F) + λ1·L_CYC(G, F) + λ2·L_SUP(G, F)
其中，L_LSGAN(D_LC, G)表示第一判别损失函数，L_LSGAN(D_FC, F)表示第二判别损失函数，L_CYC(G, F)为循环一致性损失函数，L_SUP(G, F)为监督损失函数，λ1和λ2是用于平衡不同比例的参数，可选的，λ1和λ2均可以取值为1。
可选的,可以对循环一致性损失及监督损失进行优化,具体的,可以在每完成一个周期的训练进行一轮优化。该优化过程所采用的优化算法包括但不限于Adam算法,采用Adam算法对循环一致性损失及监督损失进行优化,为不同的参数设计独立的自适应性学习率,可以实现对图像处理模型的进一步优化。
505、服务器102根据模型损失函数,构建图像处理模型。
具体的,服务器102可以根据步骤504中所描述的模型训练过程,执行至少一次的迭代训练,以优化模型损失函数,构建图像处理模型,达到训练第一生成器的目的,得到第一生成器中图像样本数据与目标样本数据的映射关系F,则第一生成器可以根据该映射关系F将步骤302中的图像数据重建为步骤303中清晰度更高的图像结果。
可见,通过实施图5所描述的方法,服务器102在获取到投影样本数据和目标样本数据后,对投影样本数据做域变换处理得到图像样本数据,将图像样本数据输入至第一生成器,并将目标样本数据输入至第二生成器,并通过循环一 致性生成对抗网络对输入的至少一组图像样本数据及目标样本数据进行迭代训练,该循环一致性生成对抗网络的执行设备包括:第一生成器、第二生成器、第一判别器及第二判别器,以优化模型损失函数,构建图像处理模型,该图像处理模型实现了在减少示踪剂注射剂量,降低辐射的基础上,将PET图像重建成清晰度更高的图像,以使用户可以根据该重建后图像进行疾病诊断和筛查。
基于上述方法实施例的描述,本申请实施例还提出一种图像处理装置。该智能合约调用装置可以是运行于处理设备中的计算机程序(包括程序代码);请参见图7所示,该智能合约调用装置可以运行如下单元:
收发单元701,用于接收客户端的图像处理请求,所述图像处理请求包括待处理投影数据,所述图像处理请求用于请求根据所述待处理投影数据重建图像;
处理单元702,用于对所述待处理投影数据进行域变换处理,得到图像数据;调用图像处理模型对所述图像数据进行图像重建处理,得到图像结果,所述图像处理模型是根据图像样本数据及目标样本数据对所述图像处理模型进行训练得到的;
所述收发单元701,还用于发送所述图像结果至所述客户端。
在一种实施方式中,处理单元702,还可用于获取投影样本数据,并对所述投影样本数据进行域变换处理,得到所述图像样本数据;
获取目标样本数据,所述目标样本数据为与所述图像样本数据相匹配的目标数据;
根据所述图像样本数据及所述目标样本数据对所述图像处理模型进行训练,得到模型损失函数;
根据所述模型损失函数,构建所述图像处理模型。
再一种实施方式中,所述调用图像处理模型对所述图像数据进行图像重建处理,得到图像结果,处理单元702,还可用于对所述图像数据进行m层特征提取,得到编码数据,m为正整数;
获取所述编码数据的转换数据,所述转换数据为所述编码数据经过残差网络处理得到的;
对所述转换数据进行n层上采样处理,得到所述图像结果,n为正整数。
再一种实施方式中,处理单元702,还可用于获取所述转换数据的第x层的上采样处理结果,x为小于n且小于或等于m的正整数;
获取所述图像数据的第y层的特征提取结果,y为小于n且小于或等于m的正整数;
对所述第x层上采样处理结果与所述第y层特征提取结果进行跳跃连接处理,得到跳跃链接结果,并将所述跳跃链接结果输入至第(x+1)层。
再一种实施方式中,所述根据所述图像样本数据及所述目标样本数据对所述图像处理模型进行训练,得到模型损失函数,处理单元702,还可用于根据所述图像样本数据及所述目标样本数据,确定所述图像处理模型的循环一致性损失函数;
对所述图像样本数据及所述目标样本数据分别进行监督训练,确定所述图像处理模型的监督损失函数;
获取所述图像处理模型的第一判别损失函数及第二判别损失函数,所述第一判别损失函数为第一判别器的损失函数,所述第二判别损失函数为所述第二判别器的损失函数;
根据所述循环一致性损失函数、所述监督损失函数、所述第一判别损失函数及所述第二判别损失函数,确定所述图像处理模型的模型损失函数。
再一种实施方式中,所述根据所述图像样本数据及所述目标样本数据,确定所述图像处理模型的循环一致性损失函数,处理单元702,还可用于获取所述图像样本数据的第一生成数据及所述目标样本数据的第二生成数据,所述第一生成数据是通过第一生成器对所述图像样本数据进行处理得到的,所述第二生成数据是通过第二生成器对所述目标样本数据进行处理得到的;
获取所图像样本数据的第三生成数据及所述目标样本数据的第四生成数据,所述第三生成数据是通过所述第二生成器对所述第一生成数据进行处理得到的,所述第四生成数据是通过所述第一生成器对所述第二生成数据进行处理得到的;
根据所述图像样本数据、所述目标样本数据、所述第三生成数据及所述第四生成数据,得到所述图像处理模型的所述循环一致性损失函数。
再一种实施方式中,所述对所述图像样本数据及所述目标样本数据分别进行监督训练,确定所述图像处理模型的监督损失函数,处理单元702,还可用于获取所述图像样本数据的所述第一生成数据,对所述目标样本数据及所述第一生成数据进行监督训练,得到第一监督结果;
获取所述目标样本数据的所述第二生成数据,对所述图像样本数据及所述第二生成数据进行监督训练,得到第二监督结果;
根据所述第一监督结果及所述第二监督结果,得到所述图像处理模型的所述监督损失函数。
根据本申请的一个实施例,图3及图5所示的图像处理方法所涉及的部分步骤可由图像处理装置中的处理单元来执行。例如,图3中所示的步骤301和304可由收发单元701执行;又如,图3所示的步骤302可由处理单元702执行。根据本申请的另一个实施例,图像处理装置中的各个单元可以分别或全部合并为一个或若干个另外的单元来构成,或者其中的某个(些)单元还可以再拆分为功能上更小的多个单元来构成,这可以实现同样的操作,而不影响本申请的实施例的技术效果的实现。
请参见图8,是本申请实施例提供的一种图像处理装置的结构示意图,该图像处理装置包括处理器801、存储器802及通信接口803,处理器801、存储器802及通信接口803通过至少一条通信总线连接,处理器801被配置为支持处理设备执行图3及图5方法中处理设备相应的功能。
存储器802用于存放有适于被处理器加载并执行的至少一条指令,这些指令可以是一个或一个以上的计算机程序(包括程序代码)。
通信接口803用于接收数据和用于发送数据。例如,通信接口803用于发送图像处理请求等。
在本申请实施例中,该处理器801可以调用存储器802中存储的程序代码以执行以下操作:
通过通信接口803接收客户端的图像处理请求,所述图像处理请求包括待处理投影数据,所述图像处理请求用于请求根据所述待处理投影数据重建图像;
对所述待处理投影数据进行域变换处理,得到图像数据;
调用图像处理模型对所述图像数据进行图像重建处理,得到图像结果,所述图像处理模型是根据图像样本数据及目标样本数据对所述图像处理模型进行训练得到的;
通过通信接口803发送所述图像结果至所述客户端。
作为一种可选的实施方式,该处理器801可以调用存储器802中存储的程序代码以执行以下操作:
获取投影样本数据,并对所述投影样本数据进行域变换处理,得到所述图像样本数据;
获取目标样本数据,所述目标样本数据为与所述图像样本数据相匹配的目标数据;
根据所述图像样本数据及所述目标样本数据对所述图像处理模型进行训练,得到模型损失函数;
根据所述模型损失函数,构建所述图像处理模型。
作为一种可选的实施方式,所述调用图像处理模型对所述图像数据进行图像重建处理,得到图像结果,该处理器801可以调用存储器802中存储的程序代码以执行以下操作:
对所述图像数据进行m层特征提取,得到编码数据,m为正整数;
获取所述编码数据的转换数据,所述转换数据为所述编码数据经过残差网络处理得到的;
对所述转换数据进行n层上采样处理,得到所述图像结果,n为正整数。
作为一种可选的实施方式,该处理器801可以调用存储器802中存储的程序代码以执行以下操作:
获取所述转换数据的第x层的上采样处理结果,x为小于n且小于或等于m的正整数;
获取所述图像数据的第y层的特征提取结果,y为小于n且小于或等于m的正整数;
对所述第x层上采样处理结果与所述第y层特征提取结果进行跳跃连接处理,得到跳跃链接结果,并将所述跳跃链接结果输入至第(x+1)层。
作为一种可选的实施方式,所述根据所述图像样本数据及所述目标样本数据对所述图像处理模型进行训练,得到模型损失函数,该处理器801可以调用存储器802中存储的程序代码以执行以下操作:
根据所述图像样本数据及所述目标样本数据,确定所述图像处理模型的循环一致性损失函数;
对所述图像样本数据及所述目标样本数据分别进行监督训练,确定所述图像处理模型的监督损失函数;
获取所述图像处理模型的第一判别损失函数及第二判别损失函数,所述第一判别损失函数为第一判别器的损失函数,所述第二判别损失函数为所述第二判别器的损失函数;
根据所述循环一致性损失函数、所述监督损失函数、所述第一判别损失函 数及所述第二判别损失函数,确定所述图像处理模型的模型损失函数。
作为一种可选的实施方式,所述根据所述图像样本数据及所述目标样本数据,确定所述图像处理模型的循环一致性损失函数,该处理器801可以调用存储器802中存储的程序代码以执行以下操作:
获取所述图像样本数据的第一生成数据及所述目标样本数据的第二生成数据,所述第一生成数据是通过第一生成器对所述图像样本数据进行处理得到的,所述第二生成数据是通过第二生成器对所述目标样本数据进行处理得到的;
获取所图像样本数据的第三生成数据及所述目标样本数据的第四生成数据,所述第三生成数据是通过所述第二生成器对所述第一生成数据进行处理得到的,所述第四生成数据是通过所述第一生成器对所述第二生成数据进行处理得到的;
根据所述图像样本数据、所述目标样本数据、所述第三生成数据及所述第四生成数据,得到所述图像处理模型的所述循环一致性损失函数。
作为一种可选的实施方式,所述对所述图像样本数据及所述目标样本数据分别进行监督训练,确定所述图像处理模型的监督损失函数,该处理器801可以调用存储器802中存储的程序代码以执行以下操作:
获取所述图像样本数据的所述第一生成数据,对所述目标样本数据及所述第一生成数据进行监督训练,得到第一监督结果;
获取所述目标样本数据的所述第二生成数据,对所述图像样本数据及所述第二生成数据进行监督训练,得到第二监督结果;
根据所述第一监督结果及所述第二监督结果,得到所述图像处理模型的所述监督损失函数。
本申请实施例还提供了一种计算机可读存储介质(Memory),可以用于存储图3及图5中所示实施例中处理设备所用的计算机软件指令,在该存储空间中还存放了适于被处理器加载并执行的至少一条指令,这些指令可以是一个或一个以上的计算机程序(包括程序代码)。
上述计算机可读存储介质包括但不限于快闪存储器、硬盘、固态硬盘。
本领域普通技术人员可以意识到,结合本申请中所公开的实施例描述的各示例的单元及算法步骤,能够以电子硬件、或者计算机软件和电子硬件的结合来实现。这些功能究竟以硬件还是软件方式来执行,取决于技术方案的特定应用和设计约束条件。专业技术人员可以对每个特定的应用来使用不同方法来实现所描述的功能,但是这种实现不应认为超出本申请的范围。
在上述实施例中,可以全部或部分地通过软件、硬件、固件或者其任意组合来实现。当使用软件实现时,可以全部或部分地以计算机程序产品的形式实现。计算机程序产品包括一个或多个计算机指令。在计算机上加载和执行计算机程序指令时,全部或部分地产生按照本申请实施例的流程或功能。计算机可以是通用计算机、专用计算机、计算机网络、或者其他可编程装置。计算机指令可以存储在计算机可读存储介质中,或者通过计算机可读存储介质进行传输。计算机指令可以从一个网站站点、计算机、服务器或数据中心通过有线(例如同轴电缆、光纤、数字用户线(DSL))或无线(例如红外、无线、微波等)方式向另一个网站站点、计算机、服务器或数据中心进行传输。计算机可读存储介质可以是计算机能够存取的任何可用介质或者是包含一个或多个可用介质集成的服务器、数据中心等数据存储设备。可用介质可以是磁性介质,(例如,软盘、硬盘、磁带)、光介质(例如,DVD)、或者半导体介质(例如固态硬盘(Solid State Disk,SSD))等。
以上所述的具体实施方式,对本申请的目的、技术方案和有益效果进行了进一步详细说明,所应理解的是,以上所述仅为本申请的具体实施方式而已,并不用于限定本申请的保护范围,凡在本申请的技术方案的基础之上,所做的任何修改、等同替换、改进等,均应包括在本申请的保护范围之内。

Claims (10)

  1. 一种图像处理方法,其特征在于,所述方法包括:
    接收客户端的图像处理请求,所述图像处理请求包括待处理投影数据,所述图像处理请求用于请求根据所述待处理投影数据重建图像;
    对所述待处理投影数据进行域变换处理,得到图像数据;
    调用图像处理模型对所述图像数据进行图像重建处理,得到图像结果,所述图像处理模型是根据图像样本数据及目标样本数据对所述图像处理模型进行训练得到的;
    发送所述图像结果至所述客户端。
  2. 根据权利要求1所述的方法,其特征在于,所述方法还包括:
    获取投影样本数据,并对所述投影样本数据进行域变换处理,得到所述图像样本数据;
    获取目标样本数据,所述目标样本数据为与所述图像样本数据相匹配的目标数据;
    根据所述图像样本数据及所述目标样本数据对所述图像处理模型进行训练,得到模型损失函数;
    根据所述模型损失函数,构建所述图像处理模型。
  3. 根据权利要求1所述的方法,其特征在于,所述调用图像处理模型对所述图像数据进行图像重建处理,得到图像结果,包括:
    对所述图像数据进行m层特征提取,得到编码数据,m为正整数;
    获取所述编码数据的转换数据,所述转换数据为所述编码数据经过残差网络处理得到的;
    对所述转换数据进行n层上采样处理,得到所述图像结果,n为正整数。
  4. 根据权利要求3所述的方法,其特征在于,所述方法还包括:
    获取所述转换数据的第x层的上采样处理结果,x为小于n且小于或等于m的正整数;
    获取所述图像数据的第y层的特征提取结果,y为小于n且小于或等于m的正整数;
    对所述第x层上采样处理结果与所述第y层特征提取结果进行跳跃连接处理,得到跳跃链接结果,并将所述跳跃链接结果输入至第(x+1)层。
  5. 根据权利要求2所述的方法,其特征在于,所述根据所述图像样本数据及所述目标样本数据对所述图像处理模型进行训练,得到模型损失函数,包括:
    根据所述图像样本数据及所述目标样本数据,确定所述图像处理模型的循环一致性损失函数;
    对所述图像样本数据及所述目标样本数据分别进行监督训练,确定所述图像处理模型的监督损失函数;
    获取所述图像处理模型的第一判别损失函数及第二判别损失函数,所述第一判别损失函数为第一判别器的损失函数,所述第二判别损失函数为所述第二 判别器的损失函数;
    根据所述循环一致性损失函数、所述监督损失函数、所述第一判别损失函数及所述第二判别损失函数,确定所述图像处理模型的模型损失函数。
  6. 根据权利要求5所述的方法,其特征在于,所述根据所述图像样本数据及所述目标样本数据,确定所述图像处理模型的循环一致性损失函数,包括:
    获取所述图像样本数据的第一生成数据及所述目标样本数据的第二生成数据,所述第一生成数据是通过第一生成器对所述图像样本数据进行处理得到的,所述第二生成数据是通过第二生成器对所述目标样本数据进行处理得到的;
    获取所图像样本数据的第三生成数据及所述目标样本数据的第四生成数据,所述第三生成数据是通过所述第二生成器对所述第一生成数据进行处理得到的,所述第四生成数据是通过所述第一生成器对所述第二生成数据进行处理得到的;
    根据所述图像样本数据、所述目标样本数据、所述第三生成数据及所述第四生成数据,得到所述图像处理模型的所述循环一致性损失函数。
  7. 根据权利要求5所述的方法,其特征在于,所述对所述图像样本数据及所述目标样本数据分别进行监督训练,确定所述图像处理模型的监督损失函数,包括:
    获取所述图像样本数据的所述第一生成数据,对所述目标样本数据及所述第一生成数据进行监督训练,得到第一监督结果;
    获取所述目标样本数据的所述第二生成数据,对所述图像样本数据及所述第二生成数据进行监督训练,得到第二监督结果;
    根据所述第一监督结果及所述第二监督结果,得到所述图像处理模型的所述监督损失函数。
  8. 一种图像处理装置,其特征在于,包括:
    收发单元,用于接收客户端的图像处理请求,所述图像处理请求包括待处理投影数据,所述图像处理请求用于请求根据所述待处理投影数据重建图像;
    处理单元,用于对所述待处理投影数据进行域变换处理,得到图像数据;调用图像处理模型对所述图像数据进行图像重建处理,得到图像结果,所述图像处理模型是根据图像样本数据及目标样本数据对所述图像处理模型进行训练得到的;
    所述收发单元,还用于发送所述图像结果至所述客户端。
  9. 一种图像处理装置,其特征在于,包括处理器、存储器和通信接口,所述处理器、所述存储器和所述通信接口相互连接,其中,所述存储器用于存储计算机程序,所述计算机程序包括程序指令,所述处理器被配置用于调用所述程序指令,执行如权利要求1-7中任一项所述的方法。
  10. 一种计算机可读存储介质,其特征在于,所述计算机可读存储介质存储有一条或多条指令,所述一条或多条指令适于由处理器加载并执行如权利要求1-7任一项所述的方法。
PCT/CN2020/074622 2020-02-10 2020-02-10 图像处理方法、装置及计算机可读存储介质 WO2021159234A1 (zh)

Priority Applications (1)

Application Number Priority Date Filing Date Title
PCT/CN2020/074622 WO2021159234A1 (zh) 2020-02-10 2020-02-10 图像处理方法、装置及计算机可读存储介质

Applications Claiming Priority (1)

Application Number Priority Date Filing Date Title
PCT/CN2020/074622 WO2021159234A1 (zh) 2020-02-10 2020-02-10 图像处理方法、装置及计算机可读存储介质

Publications (1)

Publication Number Publication Date
WO2021159234A1 true WO2021159234A1 (zh) 2021-08-19

Family

ID=77291336

Family Applications (1)

Application Number Title Priority Date Filing Date
PCT/CN2020/074622 WO2021159234A1 (zh) 2020-02-10 2020-02-10 图像处理方法、装置及计算机可读存储介质

Country Status (1)

Country Link
WO (1) WO2021159234A1 (zh)

Cited By (1)

* Cited by examiner, † Cited by third party
Publication number Priority date Publication date Assignee Title
CN113744158A (zh) * 2021-09-09 2021-12-03 讯飞智元信息科技有限公司 图像生成方法、装置、电子设备和存储介质

Citations (4)

* Cited by examiner, † Cited by third party
Publication number Priority date Publication date Assignee Title
CN109509235A (zh) * 2018-11-12 2019-03-22 Shenzhen Institute of Advanced Technology CT image reconstruction method, apparatus, device, and storage medium
CN110263801A (zh) * 2019-03-08 2019-09-20 Tencent Technology (Shenzhen) Co., Ltd. Image processing model generation method and apparatus, and electronic device
CN110462689A (zh) * 2017-04-05 2019-11-15 General Electric Company Deep learning based tomographic reconstruction
US20190371018A1 (en) * 2018-05-29 2019-12-05 Korea Advanced Institute Of Science And Technology Method for processing sparse-view computed tomography image using neural network and apparatus therefor

Legal Events

Date Code Title Description
121 Ep: the EPO has been informed by WIPO that EP was designated in this application
    Ref document number: 20919281; Country of ref document: EP; Kind code of ref document: A1
NENP Non-entry into the national phase
    Ref country code: DE
122 Ep: PCT application non-entry in European phase
    Ref document number: 20919281; Country of ref document: EP; Kind code of ref document: A1