CN111340904B - Image processing method, device and computer readable storage medium - Google Patents


Info

Publication number
CN111340904B
CN111340904B (application CN202010085677.5A)
Authority
CN
China
Prior art keywords
image
data
sample data
image processing
loss function
Prior art date
Legal status
Active
Application number
CN202010085677.5A
Other languages
Chinese (zh)
Other versions
CN111340904A (en)
Inventor
胡战利
杨永峰
薛恒志
郑海荣
梁栋
刘新
Current Assignee
Shenzhen Institute of Advanced Technology of CAS
Original Assignee
Shenzhen Institute of Advanced Technology of CAS
Priority date
Filing date
Publication date
Application filed by Shenzhen Institute of Advanced Technology of CAS
Priority to CN202010085677.5A
Publication of CN111340904A
Application granted
Publication of CN111340904B


Classifications

    • G06T 11/003 Reconstruction from projections, e.g. tomography (2D image generation)
    • G06N 3/045 Combinations of networks (neural network architectures)
    • G06N 3/08 Learning methods (neural networks)
    • G06T 5/00 Image enhancement or restoration
    • G06T 7/0012 Biomedical image inspection (image analysis)
    • G06T 2207/10081 Computed x-ray tomography [CT] (image acquisition modality)
    • G06T 2207/10104 Positron emission tomography [PET] (image acquisition modality)
    • G06T 2207/10108 Single photon emission computed tomography [SPECT] (image acquisition modality)
    • G06T 2207/20081 Training; Learning (special algorithmic details)
    • G06T 2207/20084 Artificial neural networks [ANN] (special algorithmic details)
    • G06T 2207/30004 Biomedical image processing (subject of image)
    • G06T 2210/41 Medical (indexing scheme for image generation)
    • Y02T 10/40 Engine management systems (climate change mitigation in transportation)


Abstract

The embodiment of the invention provides an image processing method, an image processing device, and a computer readable storage medium. The method may comprise: receiving an image processing request from a client, wherein the image processing request comprises projection data to be processed and is used to request reconstruction of an image from that data; performing domain transformation on the projection data to be processed to obtain image data; invoking an image processing model to perform image reconstruction on the image data to obtain an image result, wherein the image processing model is obtained by training according to image sample data and target sample data; and sending the image result to the client. By adopting the embodiment of the invention, the injected tracer dose is reduced, radiation is reduced, the imaging quality of positron emission tomography is improved, and the diagnosis and screening of diseases are facilitated.

Description

Image processing method, device and computer readable storage medium
Technical Field
The present invention relates to the field of big data technologies, and in particular, to an image processing method, an image processing device, and a computer readable storage medium.
Background
Positron emission tomography (Positron Emission Computed Tomography, PET) refers to the diagnosis and screening of diseases by injecting a radioactive tracer into the human body to observe molecular-level activity in human tissues. PET is a widely used medical imaging modality, but it also has many problems, such as exposing the patient's body to radiation and the relatively high cost of tracers. To address such problems, the main solutions currently adopted are to reduce the injected tracer dose or to shorten the scan time; however, both methods degrade the imaging quality of PET images, which is detrimental to the diagnosis and screening of diseases.
Disclosure of Invention
The embodiment of the invention provides an image processing method, an image processing device and a computer readable storage medium, which can reduce the injection dosage of a tracer, reduce radiation, improve the imaging quality of positron emission tomography and facilitate the diagnosis and screening of diseases.
In a first aspect, an embodiment of the present invention provides an image processing method, including:
receiving an image processing request of a client, wherein the image processing request comprises projection data to be processed, and the image processing request is used for requesting to reconstruct an image according to the projection data to be processed;
performing domain transformation processing on the projection data to be processed to obtain image data;
invoking an image processing model to perform image reconstruction processing on the image data to obtain an image result, wherein the image processing model is obtained by training according to image sample data and target sample data;
and sending the image result to the client.
In this technical scheme, the client sends an image processing request comprising the projection data to be processed to the server; the server performs domain transformation on the projection data to obtain image data, performs image reconstruction on the image data through a trained image processing model to obtain an image result, and returns the image result to the client. In this way, a PET image with poor imaging quality can be reconstructed into an image with higher definition, so that a user can diagnose and screen diseases according to the clearer image.
In a second aspect, an embodiment of the present invention provides an image processing apparatus including:
the receiving and transmitting unit is used for receiving an image processing request of a client, wherein the image processing request comprises projection data to be processed, and the image processing request is used for requesting to reconstruct an image according to the projection data to be processed;
the processing unit is used for performing domain transformation processing on the projection data to be processed to obtain image data, and for invoking an image processing model to perform image reconstruction processing on the image data to obtain an image result, wherein the image processing model is obtained by training according to image sample data and target sample data;
the receiving and transmitting unit is further configured to send the image result to the client.
In a third aspect, an embodiment of the present invention provides an image processing apparatus comprising a processor, a memory and a communication interface, the processor, the memory and the communication interface being connected to each other, wherein the memory is configured to store a computer program comprising program instructions, and the processor is configured to invoke the program instructions to perform the method described in the first aspect. For the embodiments and advantages of the processing device in solving the problems, reference may be made to the method and advantages described in the first aspect; repeated descriptions are omitted.
In a fourth aspect, an embodiment of the present application provides a computer readable storage medium, wherein the computer readable storage medium stores one or more first instructions adapted to be loaded by a processor and to perform a method as described in the first aspect.
In the embodiment of the application, a client sends an image processing request to a server, wherein the image processing request comprises projection data to be processed. The server performs domain transformation on the projection data according to the request to obtain image data; because the domain transformation is performed before the data is input into the image processing model, network resources can be saved. Image reconstruction is then carried out on the image data through a trained image processing model to obtain an image result, which is returned to the client. The training method of the image processing model comprises iterative training of at least one set of input image sample data and target sample data by a cycle-consistency generative adversarial network, whose execution apparatus comprises a first generator, a second generator, a first discriminator and a second discriminator, and which optimizes a model loss function. In this way, the image processing model can accurately and clearly reconstruct PET images of low imaging quality while the injected tracer dose and the radiation are reduced, so that a user can diagnose and screen diseases according to the clearer reconstructed images.
Drawings
In order to more clearly illustrate the embodiments of the invention or the technical solutions in the prior art, the drawings that are required in the embodiments or the description of the prior art will be briefly described, it being obvious that the drawings in the following description are only some embodiments of the invention, and that other drawings may be obtained according to these drawings without inventive effort for a person skilled in the art.
FIG. 1 is a block diagram of an image processing system according to an embodiment of the present invention;
FIG. 2 is a block diagram of another image processing system according to an embodiment of the present invention;
FIG. 3 is a flowchart of an image processing method according to an embodiment of the present invention;
FIG. 4 is a schematic diagram of a generator network according to an embodiment of the present invention;
FIG. 5 is a flowchart of another image processing method according to an embodiment of the present invention;
FIG. 6 is a schematic diagram of the structure of a discriminator network according to an embodiment of the present invention;
fig. 7 is a schematic structural view of an image processing apparatus according to an embodiment of the present invention;
fig. 8 is a schematic structural diagram of another image processing apparatus according to an embodiment of the present invention.
Detailed Description
The following description of the embodiments of the present invention will be made clearly and fully with reference to the accompanying drawings, in which it is evident that the embodiments described are some, but not all embodiments of the invention. All other embodiments, which can be made by those skilled in the art based on the embodiments of the invention without making any inventive effort, are intended to be within the scope of the invention.
The terms "first", "second", "third" and the like in the description, in the claims and in the above drawings are used for distinguishing between different objects and not necessarily for describing a particular sequential or chronological order. Furthermore, the term "include" and any variations thereof are intended to cover a non-exclusive inclusion. For example, a process, method, system, article, or apparatus that comprises a list of steps or modules is not limited to the listed steps or modules, but may include other steps or modules not expressly listed or inherent to such a process, method, article, or apparatus.
Currently, in disease screening and diagnosis using PET, reducing the injected dose of the radioactive tracer or shortening the scan time can bring many advantages for the patient, such as: reduced radiation damage to the patient, lower tracer cost, fewer motion artifacts caused by the patient's physiological motion during imaging, and improved efficiency of the PET scanner. However, reducing the injected radiotracer dose or shortening the scan time also reduces PET imaging quality, for example: image noise increases, sharpness becomes insufficient, and so on.
In order to solve the above problems, the embodiment of the present invention provides an image processing method that processes a PET image of poor quality through a trained image processing model to generate an image with higher definition, so that a user can diagnose and screen diseases according to the clearer image. The PET image of poor quality may also be described as a low-count projection image, and in the embodiment of the present invention as projection data to be processed. Optionally, the present embodiment may also be applied to the reconstruction of single photon emission computed tomography (Single Photon Emission Computed Tomography, SPECT) and computed tomography (Computed Tomography, CT) images, without limitation.
Specifically, a plurality of projection sample data can be processed through a cycle-consistency generative adversarial network to obtain a mapping relation between the projection sample data and a real image; based on this mapping relation, image reconstruction is performed on the projection data to be processed to generate an image with higher definition. In the embodiment of the invention, the real image may also be described as target sample data; the target sample data is the target image expected to be generated, namely the expected image result when the projection sample data is reconstructed by the image processing model.
The above-mentioned image processing method can be applied to the image processing system shown in fig. 1, and the image processing system can include a client 101 and a server 102. The form and number of the clients 101 are used as examples, and are not limiting on the embodiments of the present invention. For example, two clients 101 may be included.
The client 101 may be a client that sends an image processing request to the server 102, or a client used to provide sample data to the server 102 during the training of the image processing model. The client may be any one of the following: a terminal, a stand-alone application, an application programming interface (Application Programming Interface, API), or a software development kit (Software Development Kit, SDK). The terminal may include, but is not limited to: smart phones (e.g., Android phones, iOS phones, etc.), tablet computers, portable personal computers, mobile internet devices (Mobile Internet Devices, MID), etc.; the embodiments of the present invention are not limited thereto. The server 102 may include, but is not limited to, a clustered server.
In the embodiment of the present invention, the client 101 sends an image processing request to the server 102, and the server 102 reconstructs an image according to the projection data to be processed contained in the request: specifically, it performs domain transformation processing on the projection data to obtain image data, processes the image data through a pre-trained image processing model to obtain a reconstructed image result, and sends the image result to the client 101, so that the operating user 103 of the client 101 can perform disease diagnosis and screening according to the image result.
The framework diagram of the image processing system may be referred to in fig. 2; it may include a system framework for training the image processing model and a system framework for performing image reconstruction with the trained model. The training of the image processing model is mainly based on a cycle-consistency generative adversarial network, and the training process mainly comprises: performing domain transformation processing on the projection sample data to obtain image sample data, inputting the image sample data into a first generator to obtain first generated data, and further inputting the first generated data into a second generator to obtain third generated data; on the other hand, determining target sample data, which is the target image expected to be achieved by the training of the image processing model, inputting the target sample data into the second generator to obtain second generated data, and further inputting the second generated data into the first generator to obtain fourth generated data. A first discrimination loss can be obtained by discriminating the image sample data and the second generated data with the first discriminator; a second discrimination loss is obtained by discriminating the target sample data and the first generated data with the second discriminator. Once the first to fourth generated data and the first and second discrimination losses are obtained, a final model loss function can be obtained. By performing at least one iteration of this training process, the model loss function can be optimized and the mapping relation between the image sample data and the target sample data obtained, which achieves the purpose of training the first generator. Domain transformation is then performed on the projection data to be processed to obtain image data, the image data is input into the first generator, and the first generator performs image reconstruction processing on it to obtain a reconstructed image result.
Referring to fig. 3, fig. 3 is a flowchart of an image processing method according to an embodiment of the present invention, as shown in fig. 3, the image processing method may include 301 to 304 portions, where:
301. the client 101 transmits an image processing request to the server 102.
Specifically, the client 101 sends an image processing request to the server 102, and correspondingly the server 102 receives the image processing request from the client 101. The image processing request includes the projection data to be processed and is used for requesting reconstruction of an image according to that data; the abscissa of the projection data to be processed is the detector distance and the ordinate is the imaging angle. The projection data to be processed is the data of an undersampled image generated during PET imaging when the injected radiotracer dose or the scan time for the patient is reduced; such an undersampled image has lower imaging quality and poorer definition. Optionally, the server 102 may automatically acquire the generated projection data to be processed after the client 101 acquires the PET image, which is not limited in the present invention; the client 101 may also be an imaging client connected to the PET imaging device.
302. The server 102 performs domain transformation processing on projection data to be processed to obtain image data.
Specifically, the server 102 performs domain transformation processing on the projection data to be processed through a domain transformation algorithm to obtain image data. The domain transformation algorithm may include, but is not limited to, the back projection (Back Projection, BP) algorithm. For the description of the projection data to be processed, which takes the form of a sinogram, see step 301; the generated image data can be input to the trained image processing model to obtain a reconstructed image result. Because the domain transformation processing is performed on the projection data before it is input into the image processing model, network resources can be saved.
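The domain transformation step can be illustrated with a short sketch. The following Python snippet is a minimal illustration, not the implementation of this embodiment: it assumes the projection data is a sinogram stored as a 2-D array with rows as imaging angles and columns as detector distances, and uses scikit-image's iradon as a stand-in for the back projection algorithm; the function name domain_transform is chosen for illustration only.

```python
import numpy as np
from skimage.transform import iradon

def domain_transform(projection: np.ndarray) -> np.ndarray:
    """Map projection-domain data (rows: imaging angle, columns: detector
    distance, per step 301) into the image domain."""
    sinogram = projection.T  # iradon expects detector rows, angle columns
    # One projection per column, angles spread evenly over 180 degrees.
    theta = np.linspace(0.0, 180.0, sinogram.shape[1], endpoint=False)
    # filter_name=None gives plain (unfiltered) back projection;
    # "ramp" would give filtered back projection instead.
    return iradon(sinogram, theta=theta, filter_name=None)
```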
303. The server 102 invokes an image processing model to perform image reconstruction processing on the image data, and an image result is obtained.
Specifically, the server 102 invokes the image processing model to perform image reconstruction processing on the image data, so as to obtain a reconstructed image result. The image processing model is obtained by training according to image sample data and target sample data: the image sample data is the input data of the model training process, the target sample data is the output data expected to be obtained through training, and a mapping relation between the input data and the output data can be constructed during training. When the image data is input into the image processing model as input data, the output result is the reconstructed image result, whose image quality and definition are higher than those of the image corresponding to the projection data to be processed.
As an alternative embodiment, the image reconstruction step may be performed by a first generator, and the image processing model may be configured in the first generator. Further, m-layer feature extraction may be performed on the image data obtained through the domain transformation to obtain encoded data, where m is a positive integer; the encoded data is processed through a residual network to obtain converted data; and n-layer up-sampling processing is performed on the converted data to obtain an image result, where n is a positive integer.
By executing the embodiment, the input image data can be reconstructed by using the first generator which has completed training, and the reconstructed image result is obtained, so that the PET imaging quality is improved, and the diagnosis and screening of diseases are facilitated.
As an optional implementation manner, in the process of performing the n-layer up-sampling processing on the converted data to obtain the image result, the up-sampling result of the x-th layer of the converted data may also be acquired, where x is a positive integer less than n and less than or equal to m; the feature extraction result of the y-th layer of the image data is acquired, where y is a positive integer less than n and less than or equal to m; and skip connection processing is performed on the x-th layer up-sampling result and the y-th layer feature extraction result to obtain a skip-link result, which is input to the (x+1)-th layer. Optionally, the x-th layer up-sampling result and the y-th layer feature extraction result form a mirror relation, that is, two mirrored results with the same number of feature maps generated during up-sampling and feature extraction are skip-linked.
By executing this embodiment, the loss of image detail caused by overly deep network layers during feature extraction and residual-network processing can be mitigated, and at the same time the training difficulty of the image processing model can be reduced.
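A minimal sketch of the skip-link operation described above, in PyTorch; the function name and the use of channel concatenation are assumptions, since the embodiment does not fix how the two mirrored results are joined (element-wise addition would be an equally valid reading):

```python
import torch

def skip_link(upsampled_x: torch.Tensor, encoded_y: torch.Tensor) -> torch.Tensor:
    # The two results form a mirror relation: same number of feature maps
    # (and, here, the same spatial size).
    assert upsampled_x.shape == encoded_y.shape
    # Join along the channel dimension; the result feeds the (x+1)-th layer.
    return torch.cat([upsampled_x, encoded_y], dim=1)
```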
Further, for a schematic structural diagram of the first generator network referred to in the above alternative embodiment, reference may be made to fig. 4. The first generator network may include: an encoding part, a transformation part, a decoding part and a skip-linking part. The encoding part is used for executing the above step of extracting m layers of features from the image data to obtain encoded data, taking m as 3 in fig. 4 as an example; the transformation part is configured to perform the step of processing the encoded data through a residual network to obtain converted data, where fig. 4 takes a residual network of 5 residual blocks as an example; the decoding part is configured to perform the step of n-layer up-sampling on the converted data to obtain an image result, where fig. 4 takes n equal to 4 as an example; and the skip-linking part is used for executing the step of skip-connecting the x-th layer up-sampling result with the y-th layer feature extraction result to obtain a skip-link result.
Further, when this step is executed, after the image data is input to the first generator, 3-layer feature extraction can be performed on the image data through the encoding part of the first generator network to obtain encoded data. Specifically, each layer is composed of two convolution (Convolutional Layer, CIL) blocks, giving 6 CIL blocks in total. Each CIL block comprises a convolution layer, a normalization layer and an activation function layer. Optionally, the convolution kernel size may be 3×3. The convolution stride of the first CIL block of the second layer and of the first CIL block of the third layer is 2, which halves the size of the image; the convolution stride of the remaining CIL blocks is 1. Further, the number of convolution kernels of the first CIL block of the second layer and of the first CIL block of the third layer may be set to twice that of the previous CIL block, for example: the number of convolution kernels of the second CIL block of the first layer is 32, and the number of convolution kernels of the first CIL block of the second layer is 64. In this way, the number of feature maps can be expanded; the feature maps are the feature extraction results output by each CIL block, their number equals the number of convolution kernels, and the numbers of feature maps output by the three layers may be 32, 64 and 128, respectively.
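Under the parameters stated above, the encoding part could be sketched in PyTorch as follows. This is a hedged illustration: the choice of instance normalization, ReLU activation and a single-channel input image are assumptions, as the embodiment only names convolution, normalization and activation layers.

```python
import torch.nn as nn

def cil_block(in_ch: int, out_ch: int, stride: int = 1) -> nn.Sequential:
    """Convolution + normalization + activation, as one CIL block."""
    return nn.Sequential(
        nn.Conv2d(in_ch, out_ch, kernel_size=3, stride=stride, padding=1),
        nn.InstanceNorm2d(out_ch),
        nn.ReLU(inplace=True),
    )

encoder = nn.Sequential(
    cil_block(1, 32),             # layer 1, block 1 (single-channel input assumed)
    cil_block(32, 32),            # layer 1, block 2: layer outputs 32 feature maps
    cil_block(32, 64, stride=2),  # layer 2, block 1: stride 2 halves the image size
    cil_block(64, 64),            # layer 2, block 2: layer outputs 64 feature maps
    cil_block(64, 128, stride=2), # layer 3, block 1: stride 2 halves the image size
    cil_block(128, 128),          # layer 3, block 2: layer outputs 128 feature maps
)
```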
Further, after the encoding part is completed, the obtained encoded data may be processed by the transformation part of the first generator network to obtain converted data. Specifically, the transformation part is formed by a residual network comprising at least one residual module; the embodiment of the present invention takes five residual modules as an example, where each residual module may include, but is not limited to, two CIL blocks (for the description of CIL blocks, see above). Optionally, the CIL blocks may have a stride of 1 and 128 convolution kernels. During execution, the input data of each residual module may be described as an input tensor and its output data as an output tensor. By introducing a residual network as described in this embodiment, the first generator network can be made deeper with stronger expressive capability, so that the output reconstruction result is more accurate.
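A matching sketch of one residual module of the transformation part, again with instance normalization and ReLU as assumed stand-ins; the residual identity adds each module's input tensor to the output of its two CIL blocks:

```python
import torch
import torch.nn as nn

class ResidualModule(nn.Module):
    """One residual module: two stride-1 CIL blocks with 128 kernels each."""
    def __init__(self, channels: int = 128):
        super().__init__()
        self.body = nn.Sequential(
            nn.Conv2d(channels, channels, 3, stride=1, padding=1),
            nn.InstanceNorm2d(channels),
            nn.ReLU(inplace=True),
            nn.Conv2d(channels, channels, 3, stride=1, padding=1),
            nn.InstanceNorm2d(channels),
            nn.ReLU(inplace=True),
        )

    def forward(self, x: torch.Tensor) -> torch.Tensor:
        return x + self.body(x)  # input tensor + transformed tensor

# Five residual modules, as in the example of this embodiment.
transform = nn.Sequential(*[ResidualModule(128) for _ in range(5)])
```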
Further, after the transformation part is completed, the obtained converted data can be subjected to four-layer up-sampling processing by the decoding part of the first generator network to obtain an image result. Specifically, the decoding part includes four CIL blocks and two deconvolution (Deconvolution Layer, DIL) blocks: the first layer and the fourth layer each have only one CIL block, while the second layer and the third layer each include one CIL block and one DIL block. The first layer is connected with the residual module and is the first input layer of the decoding part, and the fourth layer is the final output layer of the first generator network (for the description of CIL blocks, see above). A DIL block is composed of a deconvolution layer with a stride of 2, a normalization layer and an activation function layer; setting the stride to 2 expands the size of the image to twice the original. Further, the number of convolution kernels of the two DIL blocks may be set to half that of the previous CIL block so as to halve the number of feature maps, for example: the first CIL block of the first layer has 128 convolution kernels and the first DIL block of the second layer has 64. For the relation between the number of convolution kernels and the number of feature maps, see above. The final layer sets the number of convolution kernels to 1, and the data output after processing by this final convolution kernel can be used as the final image result.
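The DIL block can be sketched in the same style; the transposed-convolution parameters below are chosen so that a stride of 2 exactly doubles the height and width, as the paragraph above requires:

```python
import torch.nn as nn

def dil_block(in_ch: int, out_ch: int) -> nn.Sequential:
    """Deconvolution + normalization + activation, as one DIL block."""
    return nn.Sequential(
        # output_padding=1 makes the stride-2 transposed convolution double
        # the spatial size exactly (H, W -> 2H, 2W).
        nn.ConvTranspose2d(in_ch, out_ch, kernel_size=3, stride=2,
                           padding=1, output_padding=1),
        nn.InstanceNorm2d(out_ch),
        nn.ReLU(inplace=True),
    )

# With the cil_block of the encoder sketch, the four decoding layers could be
# arranged as follows (skip links, which widen some inputs, are omitted here):
#   layer 1: cil_block(128, 128)                          one CIL block
#   layer 2: cil_block(128, 128) then dil_block(128, 64)
#   layer 3: cil_block(64, 64)   then dil_block(64, 32)
#   layer 4: nn.Conv2d(32, 1, kernel_size=3, padding=1)   final 1-kernel layer
```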
Further, skip-link processing may also be added while the above decoding part is executed. Specifically, skip connection processing can be performed on the x-th layer up-sampling result and the y-th layer feature extraction result, where the two results form a mirror relation, i.e., two mirrored results with the same number of feature maps generated during up-sampling and feature extraction are skip-linked. For example, when x is 3 and y is 1, the number of feature maps obtained by the third up-sampling layer is 32 and the number obtained by the first feature-extraction layer is also 32; the two results are thus mirror-symmetric, and skip-linking them yields a skip-link result that can be used as the input data of the fourth layer in the up-sampling process.
304. The server 102 sends the image result to the client 101.
Specifically, in the case that the reconstructed image result is obtained through the first generator network, the server 102 sends the image result to the client 101, and accordingly, the client 101 receives the image result. The sharpness of the image result is higher than that of the image corresponding to the projection data to be processed sent by the client 101.
As can be seen, by implementing the method described in fig. 3, after the client 101 sends the image processing request, the server 102 performs domain transformation processing on the projection data to be processed in the request to obtain image data; since the domain transformation is performed before the projection data is input into the image processing model, network resources can be saved. Image reconstruction is then performed on the image data through the trained image processing model to generate a reconstructed image result, which is sent to the client 101. In this way, while the injected tracer dose and the radiation are reduced, a PET image with poor imaging quality is reconstructed into an image with higher definition, so that a user can diagnose and screen diseases according to the clearer image.
Referring to fig. 5, fig. 5 is a flowchart of an image processing method according to an embodiment of the present invention, as shown in fig. 5, the image processing method may include 501 to 505 parts, where:
501. the server 102 obtains projection sample data.
Specifically, the server 102 may obtain projection sample data from the client 101 or another data storage platform. The projection sample data is the same type of data as the projection data to be processed in step 301 (see the related description there); it is mainly used as sample data to be input into the image processing model in order to train the model, and its quantity may be one or more. Optionally, the projection sample data may be sent by the client 101 to the server 102, which is not limited here.
502. The server 102 performs domain transform processing on the projection sample data to obtain image sample data.
Specifically, the domain transformation algorithm may be used to process the projection sample data to obtain image sample data, where the image sample data is obtained by performing domain transformation on the projection sample data, and the process of generating the image sample data by domain transformation may be referred to the related description in step 302 and is not described herein.
503. The server 102 obtains target sample data.
Specifically, the server 102 acquires target sample data, which is target data that matches the image sample data. That is, the target sample data here is data of a target image that is expected to be achieved after image reconstruction of the image sample data by the image processing model.
504. The server 102 trains the image processing model according to the image sample data and the target sample data to obtain a model loss function.
Specifically, the model training process is mainly based on a cycle-consistency generative adversarial network, which mainly includes two generators and two discriminators. This embodiment may be executed as follows: when the server 102 obtains the image sample data and the target sample data, it inputs them into the first generator and the second generator respectively, discriminates the generated results through the first discriminator and the second discriminator, and obtains the model loss function from the generated result of each generator and the discrimination result of each discriminator. Further, the image sample data may be input to the first generator to obtain first generated data, and the first generated data input to the second generator to obtain third generated data; on the other hand, the target sample data is input to the second generator to obtain second generated data, and the second generated data is input to the first generator to obtain fourth generated data. A first discrimination loss function can be obtained by discriminating the image sample data and the second generated data with the first discriminator; a second discrimination loss function is obtained by discriminating the target sample data and the first generated data with the second discriminator. When the first, second, third and fourth generated data and the first and second discrimination loss functions are obtained, the final model loss function can be obtained. In the above steps, the structure of the discriminator network contained in the first discriminator and the second discriminator may be as shown in fig. 6: the discriminator network may include eight convolution layers and two fully connected layers, where each convolution layer carries a normalization layer and an activation function layer. The convolution kernels of the convolution layers are 3×3, and the numbers of convolution kernels of the first through eighth layers in fig. 6 are exemplified by 32, 64, 128 and 256. In actual operation, the data input into the discriminator network is processed by the eight modules consisting of a convolution layer, a normalization layer and an activation function layer, and then input into the two fully connected layers; the first fully connected layer may have 1024 unit nodes and carries an activation function layer, and the second fully connected layer may have 1 unit node. For the description of the generator networks contained in the first generator and the second generator, see step 303.
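A hedged PyTorch sketch of this discriminator network follows. The kernel counts 32, 64, 128 and 256 of fig. 6 are read here as channel pairs over the eight layers, and the strides, leaky-ReLU activation and instance normalization are assumptions not fixed by the embodiment:

```python
import torch
import torch.nn as nn

class Discriminator(nn.Module):
    """Eight 3x3 convolution layers (each with normalization and activation),
    followed by a 1024-unit fully connected layer and a 1-unit output."""
    def __init__(self, in_ch: int = 1):
        super().__init__()
        layers, ch = [], in_ch
        for out_ch in (32, 32, 64, 64, 128, 128, 256, 256):
            layers += [
                # Assumed: downsample (stride 2) whenever the width grows.
                nn.Conv2d(ch, out_ch, 3, stride=2 if out_ch != ch else 1, padding=1),
                nn.InstanceNorm2d(out_ch),
                nn.LeakyReLU(0.2, inplace=True),
            ]
            ch = out_ch
        self.features = nn.Sequential(*layers)
        self.fc = nn.Sequential(
            nn.Flatten(),
            nn.LazyLinear(1024),               # flattened width depends on image size
            nn.LeakyReLU(0.2, inplace=True),   # first FC layer carries an activation
            nn.Linear(1024, 1),                # second FC layer: 1 unit node
        )

    def forward(self, x: torch.Tensor) -> torch.Tensor:
        return self.fc(self.features(x))
```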
As an alternative embodiment, when the step of obtaining the model loss function is performed, the cycle-consistency loss function of the image processing model may first be determined according to the image sample data and the target sample data. Specifically, the server 102 may acquire the first generated data and the third generated data of the image sample data, and acquire the second generated data and the fourth generated data of the target sample data (for the acquisition of these data, see the above description). The cycle-consistency loss function of the image processing model is then obtained from the image sample data, the target sample data, the third generated data and the fourth generated data, and can be expressed as:

$$L_{CYC}(G,F)=\mathbb{E}\left[\lVert F(G(x))-x\rVert_2\right]+\mathbb{E}\left[\lVert G(F(y))-y\rVert_2\right]$$

where x denotes the image sample data, y denotes the target sample data, F(G(x)) denotes the third generated data, G(F(y)) denotes the fourth generated data, F(G(x)) ≈ x expresses forward cycle consistency, G(F(y)) ≈ y expresses backward cycle consistency, and $\lVert\cdot\rVert_2$ denotes the L2 norm. This cycle consistency can prevent the degradation of adversarial learning; the first term is the forward cycle-consistency loss and the second term is the backward cycle-consistency loss.
By acquiring the cycle-consistency loss function of this embodiment, the mapping relation between the image sample data and the target sample data can be strengthened, and the degradation of adversarial learning can be prevented.
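Rewriting the cycle-consistency loss as a sketch (PyTorch), with G and F denoting the first and second generators and the L2 norm taken over each whole tensor; batch-wise averaging is omitted for brevity:

```python
import torch

def cycle_consistency_loss(G, F, x: torch.Tensor, y: torch.Tensor) -> torch.Tensor:
    forward_term = torch.norm(F(G(x)) - x, p=2)   # F(G(x)) ≈ x
    backward_term = torch.norm(G(F(y)) - y, p=2)  # G(F(y)) ≈ y
    return forward_term + backward_term
```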
Further, the image sample data and the target sample data are respectively subjected to supervision training, and a supervision loss function of the image processing model is determined. Specifically, the server 102 may perform supervised training on the obtained target sample data and the first generated data to obtain a first supervision result; performing supervision training on the acquired image sample data and second generated data to obtain a second supervision result; and obtaining a supervision loss function of the image processing model according to the first supervision result and the second supervision result. The method for obtaining the first generated data, the second generated data, the target sample data, and the image sample data may be referred to in the above description of steps 502, 503, and 504, which are not repeated here. The expression of the supervised loss function is:
$$L_{SUP}(G,F)=\mathbb{E}\left[\lVert G(x)-y\rVert_2\right]+\mathbb{E}\left[\lVert F(y)-x\rVert_2\right]$$

where G(x) is the image close to y generated by the generator G from the source image x, i.e., the first generated data, and F(y) is the image close to x generated by the generator F from the source image y, i.e., the second generated data; the first term corresponds to the first supervision result and the second term to the second supervision result.
By acquiring the supervision loss function of this embodiment, the image processing model can be further optimized and the degradation of adversarial learning avoided.
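The supervision loss can be sketched in the same style, comparing G(x) against the target sample y and F(y) against the image sample x:

```python
import torch

def supervised_loss(G, F, x: torch.Tensor, y: torch.Tensor) -> torch.Tensor:
    return torch.norm(G(x) - y, p=2) + torch.norm(F(y) - x, p=2)
```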
Further, the server 102 acquires a first discrimination loss function and a second discrimination loss function of the image processing model, where the first discrimination loss function is the loss function generated when the first discriminator discriminates the image sample data and the second generated data, and the second discrimination loss function is the loss function generated when the second discriminator discriminates the target sample data and the first generated data. The model loss function of the image processing model may then be determined based on the cycle-consistency loss function, the supervision loss function, the first discrimination loss function and the second discrimination loss function. The model loss function is expressed as:
$$L_{total}=L_{LSGAN}(D_{LC},G)+L_{LSGAN}(D_{FC},F)+\lambda_1 L_{CYC}(G,F)+\lambda_2 L_{SUP}(G,F)$$

where $L_{LSGAN}(D_{LC},G)$ denotes the first discrimination loss function, $L_{LSGAN}(D_{FC},F)$ denotes the second discrimination loss function, $L_{CYC}(G,F)$ is the cycle-consistency loss function, $L_{SUP}(G,F)$ is the supervision loss function, and $\lambda_1$ and $\lambda_2$ are parameters used to balance the different proportions; optionally, both $\lambda_1$ and $\lambda_2$ may take the value 1.
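Combining the four terms gives a sketch of L_total. The least-squares (LSGAN) form of the two discrimination losses is assumed from the L_LSGAN notation, with D_LC judging the image-domain pair (x, F(y)) and D_FC the target-domain pair (y, G(x)); cycle_consistency_loss and supervised_loss are the sketches above:

```python
import torch

def lsgan_loss(D, real: torch.Tensor, fake: torch.Tensor) -> torch.Tensor:
    # Least-squares GAN objective: real outputs pushed toward 1, fake toward 0.
    return torch.mean((D(real) - 1) ** 2) + torch.mean(D(fake) ** 2)

def total_loss(G, F, D_LC, D_FC, x, y, lam1: float = 1.0, lam2: float = 1.0):
    # In practice the generator and discriminator updates each use their own
    # part of this objective; the full sum is shown for illustration.
    return (lsgan_loss(D_LC, x, F(y))          # first discrimination loss
            + lsgan_loss(D_FC, y, G(x))        # second discrimination loss
            + lam1 * cycle_consistency_loss(G, F, x, y)
            + lam2 * supervised_loss(G, F, x, y))
```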
Alternatively, the cycle-consistency loss and the supervision loss may be optimized; specifically, one round of optimization may be performed every time one cycle of training is completed. The optimization algorithms usable in the optimization process include, but are not limited to, the Adam algorithm. Optimizing the cycle-consistency loss and the supervision loss with Adam, which designs an independent adaptive learning rate for each parameter, enables further optimization of the image processing model.
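Continuing the sketches above, one optimization round with Adam (which maintains an individual adaptive learning rate per parameter) could look as follows; the learning rate and betas are assumed placeholders, and only the generator update is shown (the discriminators would be stepped analogously with their own optimizer):

```python
import itertools
import torch

# G, F, D_LC, D_FC, x, y and total_loss are taken from the sketches above.
optimizer = torch.optim.Adam(
    itertools.chain(G.parameters(), F.parameters()),
    lr=2e-4, betas=(0.9, 0.999),
)

loss = total_loss(G, F, D_LC, D_FC, x, y)
optimizer.zero_grad()
loss.backward()   # backpropagate through both generators
optimizer.step()  # one optimization round per training cycle
```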
505. The server 102 builds an image processing model from the model loss function.
Specifically, the server 102 may perform at least one iteration of the model training process described in step 504 to optimize the model loss function and construct the image processing model, thereby achieving the purpose of training the first generator, namely obtaining in the first generator the mapping relation F between the image sample data and the target sample data. The first generator can then reconstruct the image data of step 302 into the higher-definition image result of step 303 according to this mapping relation.
As can be seen, by implementing the method described in fig. 5, after obtaining the projection sample data and the target sample data, the server 102 performs domain transformation processing on the projection sample data to obtain image sample data, inputs the image sample data to the first generator and the target sample data to the second generator, and performs iterative training on at least one set of input image sample data and target sample data through the cycle-consistency generative adversarial network, whose execution apparatus includes the first generator, the second generator, the first discriminator and the second discriminator. The model loss function is thereby optimized and the image processing model constructed. On the basis of reducing the injected tracer dose and the radiation, the image processing model reconstructs PET images into images with higher definition, so that a user can diagnose and screen diseases according to the reconstructed images.
Based on the description of the method embodiments, an embodiment of the invention further provides an image processing apparatus. The image processing apparatus may be a computer program (including program code) running in a processing device; referring to fig. 7, the image processing apparatus may operate the following units:
a transceiver unit 701, configured to receive an image processing request of a client, where the image processing request includes projection data to be processed, and the image processing request is used to request reconstruction of an image according to the projection data to be processed;
a processing unit 702, configured to perform domain transformation processing on the projection data to be processed to obtain image data, and to invoke an image processing model to perform image reconstruction processing on the image data to obtain an image result, where the image processing model is obtained by training according to image sample data and target sample data;
the transceiver unit 701 is further configured to send the image result to the client.
In one embodiment, the processing unit 702 is further configured to obtain projection sample data, and perform domain transform processing on the projection sample data to obtain the image sample data;
Acquiring target sample data, wherein the target sample data is matched with the image sample data;
training the image processing model according to the image sample data and the target sample data to obtain a model loss function;
and constructing the image processing model according to the model loss function.
In another embodiment, the image processing model is called to perform image reconstruction processing on the image data to obtain an image result, and the processing unit 702 is further configured to perform m-layer feature extraction on the image data to obtain encoded data, where m is a positive integer;
obtaining conversion data of the encoded data, wherein the conversion data is obtained by processing the encoded data through a residual network;
and carrying out n layers of up-sampling processing on the converted data to obtain the image result, wherein n is a positive integer.
In yet another embodiment, the processing unit 702 is further configured to obtain an upsampling result of an x-th layer of the converted data, where x is a positive integer less than n and less than or equal to m;
acquiring a feature extraction result of a y-th layer of the image data, wherein y is a positive integer smaller than n and smaller than or equal to m;
and performing skip connection processing on the x-th layer up-sampling processing result and the y-th layer feature extraction result to obtain a skip-link result, and inputting the skip-link result to the (x+1)-th layer.
In yet another embodiment, the training the image processing model according to the image sample data and the target sample data to obtain a model loss function, and the processing unit 702 may be further configured to determine a cyclic consistency loss function of the image processing model according to the image sample data and the target sample data;
respectively performing supervision training on the image sample data and the target sample data to determine a supervision loss function of the image processing model;
acquiring a first discrimination loss function and a second discrimination loss function of the image processing model, wherein the first discrimination loss function is a loss function of a first discriminator, and the second discrimination loss function is a loss function of a second discriminator;
and determining a model loss function of the image processing model according to the cyclic consistency loss function, the supervision loss function, the first discrimination loss function and the second discrimination loss function.
In yet another embodiment, the determining the cyclic consistency loss function of the image processing model according to the image sample data and the target sample data, the processing unit 702 may be further configured to obtain first generated data of the image sample data and second generated data of the target sample data, where the first generated data is obtained by processing the image sample data by a first generator, and the second generated data is obtained by processing the target sample data by a second generator;
Acquiring third generation data of the image sample data and fourth generation data of the target sample data, wherein the third generation data is obtained by processing the first generation data through the second generator, and the fourth generation data is obtained by processing the second generation data through the first generator;
and obtaining the cyclic consistency loss function of the image processing model according to the image sample data, the target sample data, the third generation data and the fourth generation data.
In yet another embodiment, the performing supervisory training on the image sample data and the target sample data respectively, determining a supervisory loss function of the image processing model, and the processing unit 702 may be further configured to obtain the first generated data of the image sample data, and performing supervisory training on the target sample data and the first generated data to obtain a first supervisory result;
acquiring the second generated data of the target sample data, and performing supervision training on the image sample data and the second generated data to obtain a second supervision result;
and obtaining the supervision loss function of the image processing model according to the first supervision result and the second supervision result.
According to an embodiment of the present invention, some of the steps involved in the image processing methods shown in fig. 3 and 5 may be performed by units in the image processing apparatus. For example, steps 301 and 304 shown in fig. 3 may be performed by the transceiver unit 701, and step 302 shown in fig. 3 may be performed by the processing unit 702. According to another embodiment of the present invention, the units in the image processing apparatus may be separately or wholly combined into one or several additional units, or one (or more) of the units may be further split into multiple functionally smaller units, which can achieve the same operation without affecting the technical effects of the embodiments of the present invention.
Referring to fig. 8, a schematic structural diagram of an image processing apparatus according to an embodiment of the present invention is provided, where the image processing apparatus includes a processor 801, a memory 802, and a communication interface 803, where the processor 801, the memory 802, and the communication interface 803 are connected through at least one communication bus, and the processor 801 is configured to support a processing device to execute corresponding functions of the processing device in the methods of fig. 3 and 5.
The memory 802 is used to store at least one instruction, which may be one or more computer programs (including program code), adapted to be loaded and executed by a processor.
The communication interface 803 is for receiving data and for transmitting data. For example, the communication interface 803 is used to transmit an image processing request or the like.
In an embodiment of the present invention, the processor 801 may call program code stored in the memory 802 to perform the following operations:
receiving an image processing request of a client through a communication interface 803, wherein the image processing request comprises projection data to be processed, and the image processing request is used for requesting to reconstruct an image according to the projection data to be processed;
performing domain transformation processing on the projection data to be processed to obtain image data;
invoking an image processing model to perform image reconstruction processing on the image data to obtain an image result, wherein the image processing model is obtained by training according to image sample data and target sample data;
the image results are sent to the client via the communication interface 803.
As an alternative embodiment, the processor 801 may call program code stored in the memory 802 to perform the following operations:
obtaining projection sample data, and performing domain transformation processing on the projection sample data to obtain image sample data;
Acquiring target sample data, wherein the target sample data is matched with the image sample data;
training the image processing model according to the image sample data and the target sample data to obtain a model loss function;
and constructing the image processing model according to the model loss function.
As an alternative embodiment, the invoking image processing model performs image reconstruction processing on the image data to obtain an image result, and the processor 801 may invoke the program code stored in the memory 802 to perform the following operations:
extracting m layers of features of the image data to obtain coded data, wherein m is a positive integer;
obtaining conversion data of the coded data, wherein the conversion data is obtained by processing the coded data through a residual network;
and carrying out n layers of up-sampling processing on the converted data to obtain the image result, wherein n is a positive integer.
As an alternative embodiment, the processor 801 may call program code stored in the memory 802 to perform the following operations:
acquiring an up-sampling processing result of an x-th layer of the conversion data, wherein x is a positive integer smaller than n and smaller than or equal to m;
acquiring a feature extraction result of a y-th layer of the image data, wherein y is a positive integer smaller than n and smaller than or equal to m;
and performing jump connection processing on the x-th layer up-sampling processing result and the y-th layer feature extraction result to obtain a jump connection result, and inputting the jump connection result to the (x+1)-th layer.
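One hedged reading of the jump connection, fusing by channel-wise concatenation (fusion by addition would also fit the text), is:

```python
import torch

def jump_connect(upsample_out_x, encoder_feat_y, layer_x_plus_1):
    # Fuse the x-th layer up-sampling result with the y-th layer feature
    # extraction result, then feed the fused tensor to the (x+1)-th layer.
    # layer_x_plus_1 is assumed to expect the concatenated channel count.
    fused = torch.cat([upsample_out_x, encoder_feat_y], dim=1)
    return layer_x_plus_1(fused)
```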
As an alternative embodiment, when training the image processing model according to the image sample data and the target sample data to obtain a model loss function, the processor 801 may call the program code stored in the memory 802 to perform the following operations:
determining a cyclic consistency loss function of the image processing model according to the image sample data and the target sample data;
respectively performing supervision training on the image sample data and the target sample data to determine a supervision loss function of the image processing model;
acquiring a first discrimination loss function and a second discrimination loss function of the image processing model, wherein the first discrimination loss function is a loss function of a first discriminator, and the second discrimination loss function is a loss function of a second discriminator;
and determining a model loss function of the image processing model according to the cyclic consistency loss function, the supervision loss function, the first discrimination loss function and the second discrimination loss function.
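A hypothetical combination of the four terms is shown below; the weights are assumptions, since the passage only states that the four losses jointly determine the model loss.

```python
def model_loss(cycle_loss, supervision_loss, d1_loss, d2_loss,
               lambda_cycle=10.0, lambda_sup=1.0):
    # Weighted sum of the cyclic consistency, supervision, and the two
    # discrimination losses; the weights are illustrative, not from the text.
    return (d1_loss + d2_loss
            + lambda_cycle * cycle_loss
            + lambda_sup * supervision_loss)
```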
As an alternative embodiment, when determining the cyclic consistency loss function of the image processing model according to the image sample data and the target sample data, the processor 801 may call the program code stored in the memory 802 to perform the following operations:
acquiring first generation data of the image sample data and second generation data of the target sample data, wherein the first generation data is obtained by processing the image sample data through a first generator, and the second generation data is obtained by processing the target sample data through a second generator;
acquiring third generation data of the image sample data and fourth generation data of the target sample data, wherein the third generation data is obtained by processing the first generation data through the second generator, and the fourth generation data is obtained by processing the second generation data through the first generator;
and obtaining the cyclic consistency loss function of the image processing model according to the image sample data, the target sample data, the third generation data and the fourth generation data.
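Mapped onto the generator pair described above, the cyclic consistency term could be computed as follows (PyTorch; the L1 norm is the common choice in cycle-consistent networks and is assumed here, as the passage does not name a norm).

```python
import torch.nn.functional as F

def cyclic_consistency_loss(G1, G2, image_samples, target_samples):
    first = G1(image_samples)    # first generation data (image -> target domain)
    second = G2(target_samples)  # second generation data (target -> image domain)
    third = G2(first)            # third generation data: should recover the images
    fourth = G1(second)          # fourth generation data: should recover the targets
    # Reconstruction error in both cycle directions.
    return (F.l1_loss(third, image_samples)
            + F.l1_loss(fourth, target_samples))
```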
As an alternative embodiment, when performing the supervision training on the image sample data and the target sample data respectively to determine the supervision loss function of the image processing model, the processor 801 may call the program code stored in the memory 802 to perform the following operations:
acquiring the first generation data of the image sample data, and performing supervision training on the target sample data and the first generation data to obtain a first supervision result;
acquiring the second generated data of the target sample data, and performing supervision training on the image sample data and the second generated data to obtain a second supervision result;
and obtaining the supervision loss function of the image processing model according to the first supervision result and the second supervision result.
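Because the image sample data and the target sample data are paired, the supervision term can be read as a direct per-pair comparison; the L1 norm is again an assumption.

```python
import torch.nn.functional as F

def supervision_loss(G1, G2, image_samples, target_samples):
    # First supervision result: G1's output against the matched target samples.
    first_result = F.l1_loss(G1(image_samples), target_samples)
    # Second supervision result: G2's output against the matched image samples.
    second_result = F.l1_loss(G2(target_samples), image_samples)
    return first_result + second_result
```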
Embodiments of the present invention also provide a computer readable storage medium (memory) that may be used to store the computer software instructions used by the processing device in the embodiments shown in fig. 3 and 5. The storage medium stores at least one instruction, which may be one or more computer programs (including program code), adapted to be loaded and executed by a processor.
The computer readable storage medium includes, but is not limited to, a flash memory, a hard disk, and a solid-state disk.
Those of ordinary skill in the art will appreciate that the various illustrative elements and algorithm steps described in connection with the embodiments disclosed herein may be implemented as electronic hardware, or combinations of computer software and electronic hardware. Whether such functionality is implemented as hardware or software depends upon the particular application and design constraints imposed on the solution. Skilled artisans may implement the described functionality in varying ways for each particular application, but such implementation decisions should not be interpreted as causing a departure from the scope of the present application.
The above embodiments may be implemented in whole or in part by software, hardware, firmware, or any combination thereof. When implemented in software, they may be realized in whole or in part in the form of a computer program product. The computer program product includes one or more computer instructions. When the computer instructions are loaded and executed on a computer, the processes or functions according to the embodiments of the present application are produced in whole or in part. The computer may be a general purpose computer, a special purpose computer, a computer network, or another programmable apparatus. The computer instructions may be stored in a computer readable storage medium, or transmitted from one website, computer, server, or data center to another website, computer, server, or data center by wired means (e.g., coaxial cable, optical fiber, digital subscriber line (DSL)) or wireless means (e.g., infrared, radio, microwave). The computer readable storage medium may be any available medium that can be accessed by a computer, or a data storage device, such as a server or a data center, integrating one or more available media. The available media may be magnetic media (e.g., floppy disks, hard disks, magnetic tapes), optical media (e.g., DVDs), or semiconductor media (e.g., solid-state disks (SSDs)).
The foregoing is merely an illustrative embodiment of the present invention, and the protection scope of the present invention is not limited thereto. Any variation or substitution that a person skilled in the art could readily conceive of within the technical scope disclosed by the present invention shall be covered by the present invention. Therefore, the protection scope of the present invention is subject to the protection scope of the claims.

Claims (9)

1. An image processing method, the method comprising:
receiving an image processing request of a client, wherein the image processing request comprises projection data to be processed, and the image processing request is used for requesting to reconstruct an image according to the projection data to be processed;
performing domain transformation processing on the projection data to be processed to obtain image data;
invoking an image processing model to carry out image reconstruction processing on the image data to obtain an image result, wherein the image processing model is obtained by training the image processing model according to image sample data and target sample data, and training of the image processing model is realized based on a circular consistency generation countermeasure network;
sending the image result to the client;
the method further comprises the steps of: obtaining projection sample data, and performing domain transformation processing on the projection sample data to obtain image sample data; acquiring target sample data, wherein the target sample data is matched with the image sample data; training the image processing model according to the image sample data and the target sample data to obtain a model loss function; and constructing the image processing model according to the model loss function.
2. The method of claim 1, wherein invoking the image processing model to perform image reconstruction processing on the image data to obtain an image result comprises:
extracting m layers of features of the image data to obtain coded data, wherein m is a positive integer;
obtaining conversion data of the coded data, wherein the conversion data is obtained by processing the coded data through a residual network;
and carrying out n layers of up-sampling processing on the conversion data to obtain the image result, wherein n is a positive integer.
3. The method according to claim 2, wherein the method further comprises:
acquiring an up-sampling processing result of an x-th layer of the conversion data, wherein x is a positive integer smaller than n and smaller than or equal to m;
acquiring a feature extraction result of a y-th layer of the image data, wherein y is a positive integer smaller than n and smaller than or equal to m;
and performing jump connection processing on the x-th layer up-sampling processing result and the y-th layer feature extraction result to obtain a jump connection result, and inputting the jump connection result to the (x+1)-th layer.
4. The method of claim 1, wherein training the image processing model based on the image sample data and the target sample data results in a model loss function, comprising:
determining a cyclic consistency loss function of the image processing model according to the image sample data and the target sample data;
respectively performing supervision training on the image sample data and the target sample data to determine a supervision loss function of the image processing model;
acquiring a first discrimination loss function and a second discrimination loss function of the image processing model, wherein the first discrimination loss function is a loss function of a first discriminator, and the second discrimination loss function is a loss function of a second discriminator;
and determining a model loss function of the image processing model according to the cyclic consistency loss function, the supervision loss function, the first discrimination loss function and the second discrimination loss function.
5. The method of claim 4, wherein determining a cyclic consistency loss function of the image processing model from the image sample data and the target sample data comprises:
acquiring first generation data of the image sample data and second generation data of the target sample data, wherein the first generation data is obtained by processing the image sample data through a first generator, and the second generation data is obtained by processing the target sample data through a second generator;
acquiring third generation data of the image sample data and fourth generation data of the target sample data, wherein the third generation data is obtained by processing the first generation data through the second generator, and the fourth generation data is obtained by processing the second generation data through the first generator;
and obtaining the cyclic consistency loss function of the image processing model according to the image sample data, the target sample data, the third generation data and the fourth generation data.
6. The method of claim 4, wherein the performing the supervised training on the image sample data and the target sample data, respectively, determines a supervised loss function of the image processing model, comprising:
acquiring the first generation data of the image sample data, and performing supervision training on the target sample data and the first generation data to obtain a first supervision result;
acquiring the second generated data of the target sample data, and performing supervision training on the image sample data and the second generated data to obtain a second supervision result;
and obtaining the supervision loss function of the image processing model according to the first supervision result and the second supervision result.
7. An image processing apparatus, comprising:
the receiving and transmitting unit is used for receiving an image processing request of a client, wherein the image processing request comprises projection data to be processed, and the image processing request is used for requesting to reconstruct an image according to the projection data to be processed;
the processing unit is used for performing domain transformation processing on the projection data to be processed to obtain image data; invoking an image processing model to carry out image reconstruction processing on the image data to obtain an image result, wherein the image processing model is obtained by training the image processing model according to image sample data and target sample data, and training of the image processing model is realized based on a circular consistency generation countermeasure network;
the receiving and transmitting unit is further configured to send the image result to the client;
the processing unit is also used for acquiring projection sample data and performing domain transformation processing on the projection sample data to obtain the image sample data; acquiring target sample data, wherein the target sample data is matched with the image sample data; training the image processing model according to the image sample data and the target sample data to obtain a model loss function; and constructing the image processing model according to the model loss function.
8. An image processing apparatus comprising a processor, a memory and a communication interface, the processor, the memory and the communication interface being interconnected, wherein the memory is adapted to store a computer program comprising program instructions, the processor being configured to invoke the program instructions to perform the method of any of claims 1-6.
9. A computer readable storage medium storing one or more instructions adapted to be loaded by a processor and to perform the method of any one of claims 1-6.
CN202010085677.5A 2020-02-10 2020-02-10 Image processing method, device and computer readable storage medium Active CN111340904B (en)

Priority Applications (1)

Application Number Priority Date Filing Date Title
CN202010085677.5A CN111340904B (en) 2020-02-10 2020-02-10 Image processing method, device and computer readable storage medium

Publications (2)

Publication Number Publication Date
CN111340904A CN111340904A (en) 2020-06-26
CN111340904B (en) 2023-09-29

Family

ID=71186805

Family Applications (1)

Application Number Title Priority Date Filing Date
CN202010085677.5A Active CN111340904B (en) 2020-02-10 2020-02-10 Image processing method, device and computer readable storage medium

Country Status (1)

Country Link
CN (1) CN111340904B (en)

Families Citing this family (3)

* Cited by examiner, † Cited by third party
Publication number Priority date Publication date Assignee Title
CN112016475B (en) * 2020-08-31 2022-07-08 支付宝(杭州)信息技术有限公司 Human body detection and identification method and device
CN112509091B (en) * 2020-12-10 2023-11-14 上海联影医疗科技股份有限公司 Medical image reconstruction method, device, equipment and medium
CN112819016A (en) * 2021-02-19 2021-05-18 Oppo广东移动通信有限公司 Image processing method, image processing device, electronic equipment and storage medium

Citations (2)

* Cited by examiner, † Cited by third party
Publication number Priority date Publication date Assignee Title
CN110197229A (en) * 2019-05-31 2019-09-03 腾讯科技(深圳)有限公司 Training method, device and the storage medium of image processing model
CN110580689A (en) * 2019-08-19 2019-12-17 深圳先进技术研究院 image reconstruction method and device

Family Cites Families (4)

* Cited by examiner, † Cited by third party
Publication number Priority date Publication date Assignee Title
US20050249667A1 (en) * 2004-03-24 2005-11-10 Tuszynski Jack A Process for treating a biological organism
US20180018757A1 (en) * 2016-07-13 2018-01-18 Kenji Suzuki Transforming projection data in tomography by means of machine learning
US11062489B2 (en) * 2018-02-13 2021-07-13 Wisconsin Alumni Research Foundation System and method for multi-architecture computed tomography pipeline
CN109754402B (en) * 2018-03-15 2021-11-19 京东方科技集团股份有限公司 Image processing method, image processing apparatus, and storage medium

Legal Events

Date Code Title Description
PB01 Publication
SE01 Entry into force of request for substantive examination
GR01 Patent grant