WO2023216720A1 - Training method, device, equipment, medium and program product for image reconstruction model - Google Patents


Info

Publication number
WO2023216720A1
WO2023216720A1 · PCT/CN2023/082436 · CN2023082436W
Authority
WO
WIPO (PCT)
Prior art keywords
image
feature
sample
degraded
sample image
Prior art date
Application number
PCT/CN2023/082436
Other languages
English (en)
French (fr)
Inventor
黄雅雯
郑冶枫
张乐
Original Assignee
腾讯科技(深圳)有限公司
Priority date (The priority date is an assumption and is not a legal conclusion. Google has not performed a legal analysis and makes no representation as to the accuracy of the date listed.)
Filing date
Publication date
Application filed by 腾讯科技(深圳)有限公司
Publication of WO2023216720A1 publication Critical patent/WO2023216720A1/zh

Classifications

    • G: PHYSICS
    • G06: COMPUTING; CALCULATING OR COUNTING
    • G06V: IMAGE OR VIDEO RECOGNITION OR UNDERSTANDING
    • G06V 10/00: Arrangements for image or video recognition or understanding
    • G06V 10/70: Arrangements for image or video recognition or understanding using pattern recognition or machine learning
    • G06V 10/77: Processing image or video features in feature spaces; using data integration or data reduction, e.g. principal component analysis [PCA] or independent component analysis [ICA] or self-organising maps [SOM]; Blind source separation
    • G06V 10/774: Generating sets of training patterns; Bootstrap methods, e.g. bagging or boosting
    • G: PHYSICS
    • G06: COMPUTING; CALCULATING OR COUNTING
    • G06N: COMPUTING ARRANGEMENTS BASED ON SPECIFIC COMPUTATIONAL MODELS
    • G06N 20/00: Machine learning
    • G: PHYSICS
    • G06: COMPUTING; CALCULATING OR COUNTING
    • G06N: COMPUTING ARRANGEMENTS BASED ON SPECIFIC COMPUTATIONAL MODELS
    • G06N 3/00: Computing arrangements based on biological models
    • G06N 3/02: Neural networks
    • G06N 3/08: Learning methods
    • G: PHYSICS
    • G06: COMPUTING; CALCULATING OR COUNTING
    • G06V: IMAGE OR VIDEO RECOGNITION OR UNDERSTANDING
    • G06V 10/00: Arrangements for image or video recognition or understanding
    • G06V 10/70: Arrangements for image or video recognition or understanding using pattern recognition or machine learning
    • G06V 10/77: Processing image or video features in feature spaces; using data integration or data reduction, e.g. principal component analysis [PCA] or independent component analysis [ICA] or self-organising maps [SOM]; Blind source separation
    • G06V 10/80: Fusion, i.e. combining data from various sources at the sensor level, preprocessing level, feature extraction level or classification level
    • G06V 10/806: Fusion, i.e. combining data from various sources at the sensor level, preprocessing level, feature extraction level or classification level, of extracted features
    • G: PHYSICS
    • G06: COMPUTING; CALCULATING OR COUNTING
    • G06V: IMAGE OR VIDEO RECOGNITION OR UNDERSTANDING
    • G06V 10/00: Arrangements for image or video recognition or understanding
    • G06V 10/70: Arrangements for image or video recognition or understanding using pattern recognition or machine learning
    • G06V 10/82: Arrangements for image or video recognition or understanding using pattern recognition or machine learning using neural networks

Definitions

  • Embodiments of the present application relate to the field of image processing technology, and in particular to a training method, device, equipment, medium and program product for an image reconstruction model.
  • High-quality three-dimensional images can display detailed information more clearly. For example, high-quality three-dimensional medical images contribute to medical diagnosis and analysis.
  • Imperfections in the imaging system, recording equipment, transmission medium and processing methods lead to a decline in image quality.
  • In the related art, deep convolutional neural networks are usually used to directly learn the mapping relationship between pairs of low-quality and high-quality images, thereby generating high-quality three-dimensional images from low-quality three-dimensional images.
  • This application provides an image reconstruction model training method, device, equipment, media and program products, which can obtain relatively accurate image reconstruction results.
  • the technical solutions are as follows:
  • a training method for an image reconstruction model includes:
  • Model parameters of the image reconstruction model are updated based on the loss function value.
  • the damage type includes at least one of a blur damage type, a noise damage type, and an offset damage type.
  • the method further includes:
  • The first image refers to an image with multiple damage types.
  • Based on the trained reconstruction network layer, image reconstruction processing is performed on the first image to obtain a first reconstructed image.
  • The first reconstructed image refers to an image obtained by removing the multiple damage types from the first image.
  • The first reconstructed image is output.
  • a training device for an image reconstruction model includes:
  • An acquisition module, configured to acquire a first sample image and at least two second sample images, where a second sample image refers to an image with a single damage type, and the image quality of the first sample image is higher than the image quality of the second sample images;
  • a degradation module, configured to add the damage features corresponding to the at least two second sample images to the first sample image respectively, to generate at least two single degraded images;
  • a fusion module, configured to fuse the at least two single degraded images to obtain a multiple degraded image corresponding to the first sample image, where the multiple degraded image refers to an image with at least two damage types;
  • a reconstruction module, configured to perform image reconstruction processing on the multiple degraded image and generate a predicted reconstructed image corresponding to the multiple degraded image;
  • a calculation module, configured to calculate a loss function value based on the second sample images, the single degraded images, the first sample image and the predicted reconstructed image;
  • an update module, configured to update model parameters of the image reconstruction model based on the loss function value.
  • A computer device includes a processor and a memory; at least one computer program is stored in the memory, and the at least one computer program is loaded and executed by the processor to implement the training method for an image reconstruction model described in the above aspects.
  • A computer storage medium stores at least one computer program, and the at least one computer program is loaded and executed by a processor to implement the training method for an image reconstruction model described above.
  • A computer program product includes a computer program stored in a computer-readable storage medium; a processor of a computer device reads the computer program from the computer-readable storage medium and executes it, so that the computer device performs the training method for an image reconstruction model described in the above aspect.
  • The image reconstruction model training method provided by this application simultaneously applies image damage of multiple damage types to the first sample image to obtain a multiple degraded image corresponding to the first sample image, and then reconstructs that multiple degraded image.
  • A model trained by the above method can reconstruct multiple damage types in a low-quality image at the same time, avoiding the cumulative error caused by reconstructing the damage types sequentially, thus improving the image reconstruction accuracy of the trained image reconstruction model.
  • Figure 1 is a schematic diagram of a training method for an image reconstruction model provided by an exemplary embodiment of the present application
  • Figure 2 is an architectural schematic diagram of a computer system provided by an exemplary embodiment of the present application
  • Figure 3 is a flow chart of a training method for an image reconstruction model provided by an exemplary embodiment of the present application
  • Figure 4 is a flow chart of a training method for an image reconstruction model provided by an exemplary embodiment of the present application
  • Figure 5 is a schematic diagram of a training method for an image reconstruction model provided by an exemplary embodiment of the present application.
  • Figure 6 is a schematic diagram of the reconstruction effect of the image reconstruction model provided by an exemplary embodiment of the present application.
  • Figure 7 is a schematic diagram of the reconstruction effect of the image reconstruction model provided by an exemplary embodiment of the present application.
  • Figure 8 is a schematic diagram of a training method for an image reconstruction model provided by an exemplary embodiment of the present application.
  • Figure 9 is a framework diagram of image reconstruction model generation and image reconstruction provided by an exemplary embodiment of the present application.
  • Figure 10 is a flow chart of an image reconstruction method provided by an exemplary embodiment of the present application.
  • Figure 11 is a schematic diagram of an image reconstruction method provided by an exemplary embodiment of the present application.
  • Figure 12 is a block diagram of a training device for an image reconstruction model provided by an exemplary embodiment of the present application.
  • Figure 13 is a schematic structural diagram of a computer device provided by an exemplary embodiment of the present application.
  • Embodiments of the present application provide a technical solution for a training method for an image reconstruction model.
  • the method can be executed by a computer device, which can be a terminal or a server.
  • the computer device acquires a first sample image 101 and at least two second sample images.
  • The at least two second sample images include at least two of a blurred sample image 102, an offset sample image 103, and a noisy sample image 104.
  • the second sample image refers to an image with a single damage type, and the image quality of the first sample image is higher than the image quality of the second sample image.
  • the damage type includes at least one of a blur damage type, a noise damage type, and an offset damage type, but is not limited to this, and the embodiments of the present application do not specifically limit this.
  • The first sample image 101 refers to a high-resolution image whose quality does not affect, or only slightly affects, the expression of the image content.
  • the blur sample image 102 refers to an image with blur content in the image.
  • The noisy sample image 104 refers to an image containing content that is unnecessary or has a negative impact on the analysis and understanding of the image content, that is, image noise.
  • The offset sample image 103 refers to an image whose brightness differs due to offset. It can be understood that the blur damage type, noise damage type, and offset damage type in the second sample images are all randomly set.
  • the computer device extracts the first features corresponding to the first sample image 101 through the first degradation encoder 105, and extracts the second features corresponding to at least two second sample images respectively.
  • Based on the first feature and the second features, the computer device extracts the damage feature in each second feature through the corresponding damage kernel extractor; the computer device adds the damage feature to the first feature of the first sample image 101 to obtain an intermediate first feature, and inputs the intermediate first feature to the first degradation decoder 109 for decoding to obtain a single degraded image corresponding to the first sample image.
  • The computer device extracts the features of the first sample image 101 through the first degradation encoder 105 to obtain the first feature; the computer device also extracts, through the first degradation encoder 105, the features of the blur sample image 102, the noisy sample image 104 and the offset sample image 103, obtaining blur sample features, noise sample features and offset sample features respectively.
  • The computer device inputs the first feature and the blur sample features to the blur kernel extractor 106 for feature extraction to obtain the blur damage feature among the blur sample features; the computer device inputs the first feature and the offset sample features to the offset kernel extractor 107 for feature extraction to obtain the offset damage feature among the offset sample features; the computer device inputs the first feature and the noise sample features to the noise kernel extractor 108 for feature extraction to obtain the noise damage feature among the noise sample features.
  • The computer device fuses the first feature corresponding to the first sample image 101 and the blur damage feature to generate an intermediate first blur feature; the computer device decodes the intermediate first blur feature through the first degradation decoder 109 to generate the blur degraded image 110 corresponding to the first sample image.
  • The computer device fuses the first feature corresponding to the first sample image 101 and the offset damage feature to generate an intermediate first offset feature; the computer device decodes the intermediate first offset feature through the first degradation decoder 109 to generate the offset degraded image 111 corresponding to the first sample image.
  • The computer device fuses the first feature corresponding to the first sample image 101 and the noise damage feature to generate an intermediate first noise feature; the computer device decodes the intermediate first noise feature through the first degradation decoder 109 to generate the noise degraded image 112 corresponding to the first sample image.
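The encode, extract-damage-kernel, fuse and decode steps above can be sketched with toy linear stand-ins. This is only an illustration of the data flow: `encode`, `extract_damage` and `decode` are hypothetical placeholders, not the patent's actual degradation encoder 105, kernel extractors, or decoder 109.

```python
import numpy as np

def encode(img, W):
    # Toy "first degradation encoder": one linear map over the flattened image.
    return W @ img.reshape(-1)

def extract_damage(clean_feat, damaged_feat):
    # Toy "damage kernel extractor": the residual between the two features.
    return damaged_feat - clean_feat

def decode(feat, W):
    # Toy "first degradation decoder": pseudo-inverse of the encoder map.
    return (np.linalg.pinv(W) @ feat).reshape(4, 4)

rng = np.random.default_rng(1)
W = rng.normal(size=(16, 16))
first = rng.normal(size=(4, 4))        # stand-in for first sample image 101
blur_sample = rng.normal(size=(4, 4))  # stand-in for blur sample image 102

f1 = encode(first, W)                  # first feature
f2 = encode(blur_sample, W)            # second feature (blur sample)
k_blur = extract_damage(f1, f2)        # blur damage feature
intermediate = f1 + k_blur             # fuse by addition (one possible choice)
blur_degraded = decode(intermediate, W)  # single (blur) degraded image
```

Because this toy is purely linear and the kernel extractor is a plain residual, `blur_degraded` coincides with `blur_sample`; a real extractor would keep only the damage component, so the degraded image would share the first image's content but the blur sample's damage.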
  • The computer device obtains the third features corresponding to the at least two single degraded images through the second degradation encoder 113 and fuses the third features to obtain a degraded fusion feature; the computer device decodes the degraded fusion feature through the second degradation decoder 114 to generate the multiple degraded image 115 corresponding to the first sample image 101.
  • For example, the computer device performs feature extraction on the blur degraded image 110, the offset degraded image 111 and the noise degraded image 112 through the second degradation encoder 113, and fuses the features corresponding to the blur degraded image 110, the offset degraded image 111 and the noise degraded image 112 to obtain the degraded fusion feature; the computer device decodes the degraded fusion feature through the second degradation decoder 114 to generate the multiple degraded image 115 corresponding to the first sample image 101.
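A minimal sketch of this fuse-then-decode step, with an identity encode/decode and elementwise mean fusion standing in for the second degradation encoder 113 and decoder 114 (all function names are hypothetical; the patent does not fix the fusion operator):

```python
import numpy as np

def encode2(img):
    # Toy "second degradation encoder": flatten to a feature vector.
    return img.reshape(-1)

def decode2(feat):
    # Toy "second degradation decoder": reshape back to an image.
    return feat.reshape(4, 4)

rng = np.random.default_rng(2)
# Stand-ins for the blur, offset and noise degraded images 110-112.
blur_deg, offset_deg, noise_deg = (rng.normal(size=(4, 4)) for _ in range(3))

feats = [encode2(x) for x in (blur_deg, offset_deg, noise_deg)]  # third features
fused = np.mean(feats, axis=0)       # degraded fusion feature (mean fusion here)
multi_degraded = decode2(fused)      # multiple degraded image 115
```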
  • the computer device performs image reconstruction processing on the multiple degraded images based on the reconstruction encoder 116 and the reconstruction decoder 117 in the reconstruction network layer in the image reconstruction model, and generates a predicted reconstructed image 118 corresponding to the multiple degraded images 115 .
  • the computer device calculates a first loss function value based on the second feature corresponding to the second sample image and the third feature corresponding to the single degraded image.
  • The first loss function value includes a first blur loss function value, a first offset loss function value and a first noise loss function value.
  • The computer device calculates the first blur loss function value based on the second feature corresponding to the blur sample image 102 and the third feature corresponding to the blur degraded image 110; the computer device calculates the first offset loss function value based on the second feature corresponding to the offset sample image 103 and the third feature corresponding to the offset degraded image 111; the computer device calculates the first noise loss function value based on the second feature corresponding to the noisy sample image 104 and the third feature corresponding to the noise degraded image 112.
  • the first loss function value is used to measure the similarity between the second sample image and the single degraded image corresponding to the second sample image.
  • the computer device calculates the second loss function value based on the first feature corresponding to the first sample image 101 and the fourth feature corresponding to the predicted reconstructed image 118 .
  • the second loss function value is used to measure the authenticity of the predicted reconstructed image.
  • the computer device calculates the third loss function value based on the structural features corresponding to the multiple degraded image 115 and the structural features corresponding to the first sample image 101 .
  • the structural features corresponding to the multiple degraded image 115 refer to the structural features of the non-content portion of the multiple degraded image 115 .
  • the third loss function value is used to measure the similarity of the non-content portion between the multiple degraded image 115 and the first sample image 101 .
  • the computer device calculates the fourth loss function value based on the content features and texture features corresponding to the first sample image 101 and the content features and texture features corresponding to the predicted reconstructed image 118 .
  • the fourth loss function value is used to measure the similarity between the first sample image and the predicted reconstructed image.
  • the computer device updates the model parameters of the image reconstruction model based on the sum of the first loss function value, the second loss function value, the third loss function value, and the fourth loss function value.
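The total training objective is the plain sum of the four loss terms. A schematic computation, with mean square error standing in for every term and randomly generated stand-ins for the compared features (the patent does not fix the per-term loss here):

```python
import numpy as np

def mse(a, b):
    # Mean square error between two feature vectors.
    return float(np.mean((a - b) ** 2))

rng = np.random.default_rng(3)
# Stand-ins for the features/images that the four loss terms compare.
second_feat, single_feat = rng.normal(size=(2, 8))    # loss 1 inputs
first_feat, recon_feat = rng.normal(size=(2, 8))      # loss 2 inputs
multi_struct, first_struct = rng.normal(size=(2, 8))  # loss 3 inputs
first_tex, recon_tex = rng.normal(size=(2, 8))        # loss 4 inputs

l1 = mse(second_feat, single_feat)    # single-degradation similarity
l2 = mse(first_feat, recon_feat)      # reconstruction authenticity
l3 = mse(multi_struct, first_struct)  # structural (non-content) similarity
l4 = mse(first_tex, recon_tex)        # content/texture similarity
total = l1 + l2 + l3 + l4             # value used for the parameter update
```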
  • In summary, the method provided by this embodiment obtains a first sample image and three second sample images; in the degraded network layer, the damage features corresponding to each of the three second sample images are added to the first sample image respectively to generate three single degraded images; the three single degraded images are fused to obtain the multiple degraded image corresponding to the first sample image; the reconstruction network layer then performs image reconstruction processing on the multiple degraded image to generate the predicted reconstructed image; the computer device calculates a loss function value based on the three second sample images, the three single degraded images, the first sample image and the predicted reconstructed image, and updates the model parameters of the image reconstruction model based on the loss function value.
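The whole pipeline summarized above can be sketched end to end. The functions `add_damage`, `fuse` and `reconstruct` below are numpy placeholders chosen for illustration, not the patent's degradation or reconstruction networks:

```python
import numpy as np

rng = np.random.default_rng(0)

def add_damage(clean, damage_feature):
    # Stand-in for the degraded network layer: inject one damage feature.
    return clean + damage_feature

def fuse(single_degraded):
    # Stand-in for fusion: average the stacked single degraded images.
    return np.mean(single_degraded, axis=0)

def reconstruct(degraded):
    # Stand-in for the reconstruction network layer (identity here).
    return degraded

first = rng.normal(size=(8, 8))  # first sample image (high quality)
# Damage features from three second sample images (blur, offset, noise, say).
damage_feats = [rng.normal(scale=0.1, size=(8, 8)) for _ in range(3)]

singles = [add_damage(first, d) for d in damage_feats]  # single degraded images
multi = fuse(np.stack(singles))                         # multiple degraded image
pred = reconstruct(multi)                               # predicted reconstruction
loss = float(np.mean((pred - first) ** 2))              # one possible loss term
```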
  • The training method of the image reconstruction model provided by this application simultaneously applies image damage of multiple damage types to the first sample image to obtain the multiple degraded image corresponding to the first sample image, and reconstructs, in the reconstruction network layer, the multiple degraded image carrying multiple damage types. The model trained by the above method can reconstruct multiple damage types in a low-quality image at the same time, avoiding the cumulative error caused by reconstructing low-quality images sequentially, thereby improving the image reconstruction accuracy of the trained image reconstruction model.
  • Figure 2 shows a schematic architectural diagram of a computer system provided by an embodiment of the present application.
  • the computer system may include: a terminal 100 and a server 200.
  • The terminal 100 may be an electronic device such as a mobile phone, a tablet computer, a vehicle-mounted terminal, a wearable device, a personal computer (PC), an intelligent voice interaction device, a smart home appliance, an aircraft, or an unmanned vending terminal.
  • a client that runs a target application can be installed in the terminal 100.
  • the target application can be an image reconstruction application or other application that provides an image reconstruction function, which is not limited in this application.
  • this application does not limit the form of the target application, including but not limited to an application (Application, App) installed in the terminal 100, an applet, etc., and may also be in the form of a web page.
  • The server 200 can be an independent physical server, a server cluster or distributed system composed of multiple physical servers, or a cloud server providing basic cloud computing services such as cloud services, cloud databases, cloud computing, cloud functions, cloud storage, network services, cloud communications, middleware services, domain name services, security services, content delivery networks (Content Delivery Network, CDN), and big data and artificial intelligence platforms.
  • the server 200 may be a background server of the above-mentioned target application, and is used to provide background services for clients of the target application.
  • Cloud technology refers to a hosting technology that unifies a series of resources such as hardware, software, and networks within a wide area network or local area network to realize data calculation, storage, processing, and sharing.
  • Cloud technology is a general term for the network technology, information technology, integration technology, management platform technology and application technology applied in the cloud computing business model. It can form a resource pool that is used on demand, which is flexible and convenient; cloud computing technology will become an important support.
  • The background services of technical network systems require a large amount of computing and storage resources, for example video websites, picture websites and other portal websites. With the rapid development and application of the Internet industry, in the future each item may have its own identification mark, which needs to be transmitted to the backend system for logical processing; data at different levels will be processed separately, and all kinds of industry data need powerful system backing, which can only be achieved through cloud computing.
  • the above-mentioned server can also be implemented as a node in the blockchain system.
  • Blockchain is a new application model of computer technology such as distributed data storage, point-to-point transmission, consensus mechanism, and encryption algorithm.
  • Blockchain is essentially a decentralized database: a chain of data blocks generated using cryptographic methods, where each data block contains information about a batch of network transactions, which is used to verify the validity (anti-counterfeiting) of its information and to generate the next block.
  • Blockchain can include the underlying platform of the blockchain, the platform product service layer and the application service layer.
  • the terminal 100 and the server 200 can communicate through a network, such as a wired or wireless network.
  • the execution subject of each step may be a computer device.
  • the computer device refers to an electronic device with data calculation, processing, and storage capabilities.
  • The training method or image reconstruction method of the image reconstruction model can be executed by the terminal 100 (for example, by the client of the target application installed and running in the terminal 100), can be executed by the server 200, or can be executed by the terminal 100 and the server 200 in interactive cooperation, which is not limited in this application.
  • Figure 3 is a flow chart of a training method for an image reconstruction model provided by an exemplary embodiment of the present application.
  • the method may be performed by a computer device, which may be the terminal 100 or the server 200 in FIG. 2 .
  • the method includes:
  • Step 302 Obtain a first sample image and at least two second sample images.
  • the second sample image refers to an image with a single damage type, and the image quality of the first sample image is higher than the image quality of the second sample image.
  • Single damage type means there is only one damage type. Damage types include image blur, image offset, image noise, etc.
  • In some embodiments, the second sample image is obtained by performing an image degradation operation on the first sample image; in this case, the first sample image and the second sample image differ only in image quality.
  • In other embodiments, the first sample image is obtained by performing an image quality enhancement operation on the second sample image; in this case, too, the first sample image and the second sample image differ only in image quality.
  • the first sample image and the second sample image are obtained by photographing the same object using different shooting parameters.
  • the first sample image and the second sample image only differ in image quality.
  • For example, the first sample image is obtained by shooting with correct shooting parameters, and the second sample image is obtained by shooting with wrong shooting parameters; correct and wrong shooting parameters correspond to high-quality and low-quality images respectively.
  • the first sample image is a high-resolution image
  • the second sample image is a low-resolution image.
  • the images involved in the embodiments of the present application may be biological or non-biological internal tissue images that cannot be directly seen by the human eye and are obtained through non-invasive methods.
  • the images in the embodiments of the present application may be biological images (such as medical images).
  • Bioimaging refers to images of the internal tissues of a living thing or a certain part of a living thing (such as the human body or a certain part of the human body) obtained in a non-invasive manner for the purpose of medical treatment or medical research.
  • The images in the embodiments of the present application can be images of the heart, lungs, liver, stomach, large and small intestines, human brain, bones, blood vessels, etc.; they can also be images of non-organ tissue such as tumors.
  • The images involved in the embodiments of this application may be images generated by imaging technologies such as X-ray technology, computed tomography (CT), positron emission tomography (PET), nuclear magnetic resonance imaging (NMRI) and medical ultrasonography.
  • The image in the embodiments of the present application may also be a directly visible (what-you-see-is-what-you-get) image generated through visual imaging technology, such as an image captured by a camera (for example, the camera of a camera device or of a terminal).
  • Step 304 Add the damage features corresponding to the at least two second sample images to the first sample image respectively to generate at least two single degraded images.
  • Features are the (essential) characteristics that distinguish one type of object from other types of objects, or a collection of such characteristics.
  • the computer device can use a machine learning model to extract features from the image. Damage features refer to features corresponding to the damaged parts in the second sample image. For example, the features corresponding to the blurred parts in the second sample image, and the features corresponding to the noise parts in the second sample image.
  • The computer device extracts the damage features corresponding to the at least two second sample images, and adds the damage feature corresponding to each second sample image to the first sample image respectively to generate a single degraded image, so that each generated single degraded image contains the same or similar damage features as the corresponding second sample image.
  • a single degraded image refers to an image obtained by adding a single damage type to the first sample image.
  • the computer device extracts blur damage features corresponding to the second sample image, adds the blur damage features to the first sample image, and obtains a blur degraded image corresponding to the first sample image.
  • Step 306 Fusion of at least two single degraded images to obtain multiple degraded images corresponding to the first sample image.
  • a multi-degraded image is an image with at least two types of damage.
  • the computer device fuses at least two single degraded images to obtain images with multiple damage types.
  • For example, the single degraded images are a blur degraded image, an offset degraded image and a noise degraded image; the computer device fuses the blur degraded image, the offset degraded image and the noise degraded image to generate the multiple degraded image, so that the generated multiple degraded image simultaneously has the same or similar blur damage features, noise damage features and offset damage features.
  • Step 308 Perform image reconstruction processing on the multiple degraded images to generate predicted reconstructed images corresponding to the multiple degraded images.
  • Image reconstruction processing refers to processing the damaged features in multiple degraded images, for example, reducing or removing blur damaged features, noise damaged features and offset damaged features in multiple degraded images.
  • A predicted reconstructed image refers to an image obtained by reducing or removing the damage features in a multiple degraded image.
  • the computer device performs reconstruction processing on the multiple degraded images, reduces or removes blur damage features, noise damage features, and bias damage features in the multiple degraded images, thereby generating a predicted reconstructed image corresponding to the multiple degraded images.
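In the idealized case, reconstruction amounts to removing the estimated damage features from the degraded image. The sketch below assumes a perfect damage estimate purely to illustrate the goal; a trained reconstruction network only approximates this:

```python
import numpy as np

rng = np.random.default_rng(4)
clean = rng.normal(size=(4, 4))                 # underlying clean image
damage = rng.normal(scale=0.2, size=(4, 4))     # combined damage features
multi_degraded = clean + damage                 # multiple degraded image

def reconstruct(img, estimated_damage):
    # Ideal reconstruction: subtract the estimated damage features.
    return img - estimated_damage

pred = reconstruct(multi_degraded, damage)      # perfect estimate here
residual = float(np.max(np.abs(pred - clean)))  # leftover reconstruction error
```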
  • Step 310 Calculate the loss function value based on the second sample image, the single degraded image, the first sample image and the predicted reconstructed image.
  • the loss function value calculated based on the second sample image, the single degraded image, the first sample image and the predicted reconstructed image can be used to measure the training effect of the image reconstruction model.
  • the loss function value is at least one of cross entropy, mean square error, and absolute difference, but is not limited to this, and the embodiment of the present application does not limit this.
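For instance, on a toy prediction/target pair the three candidate loss functions named above evaluate as follows (the vectors are illustrative values, not data from the patent):

```python
import numpy as np

pred = np.array([0.2, 0.8, 0.5])
target = np.array([0.0, 1.0, 0.5])

mse = float(np.mean((pred - target) ** 2))   # mean square error
mae = float(np.mean(np.abs(pred - target)))  # absolute difference
eps = 1e-12  # guards log(0)
bce = float(-np.mean(target * np.log(pred + eps)
                     + (1 - target) * np.log(1 - pred + eps)))  # cross entropy
```

Here `mse` is 0.08/3 ≈ 0.0267 and `mae` is 0.4/3 ≈ 0.1333; all three are non-negative and reach zero only when prediction and target agree.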
  • Step 312 Update the model parameters of the image reconstruction model based on the loss function value.
  • Model parameter update refers to updating the network parameters in the image reconstruction model, or updating the network parameters of each network module in the model, or updating the network parameters of each network layer in the model, but is not limited to this; the embodiments of the present application do not limit this.
  • the model parameters of the image reconstruction model may also be continuously adjusted according to the training loss.
  • the method provided by this embodiment acquires a first sample image and at least two second sample images; the computer device adds the damage features corresponding to the at least two second sample images to the first sample image respectively to generate at least two single degraded images; the computer device fuses the single degraded images to obtain the multiple degraded images, and performs image reconstruction processing on the multiple degraded images to generate the predicted reconstructed image; the computer device calculates a loss function value based on the second sample image, the single degraded image, the first sample image and the predicted reconstructed image; the computer device updates the model parameters of the image reconstruction model based on the loss function value.
  • the training method of the image reconstruction model provided by this application simultaneously performs image damage of multiple damage types on the first sample image to obtain multiple degraded images corresponding to the first sample image, and the reconstruction network layer reconstructs the multiple degraded images carrying multiple damage types. The model trained by the above method can therefore reconstruct low-quality images with multiple damage types at the same time, avoiding the cumulative error caused by sequential reconstruction of low-quality images, thus improving the image reconstruction accuracy of the trained image reconstruction model.
  • the embodiment of the present application provides an image reconstruction model, which includes: a degradation network layer and a reconstruction network layer.
  • the computer device acquires the first sample image and at least two second sample images, and adds the damage features corresponding to the at least two second sample images to the first sample image through the degradation network layer to generate at least two single degradation image; fuse at least two single degraded images to obtain multiple degraded images corresponding to the first sample image.
  • the computer device performs image reconstruction processing on the multiple degraded images through the reconstruction network layer, and generates predicted reconstructed images corresponding to the multiple degraded images.
  • Figure 4 is a flow chart of a training method for an image reconstruction model provided by an exemplary embodiment of the present application.
  • the method may be performed by a computer device, which may be the terminal 100 or the server 200 in FIG. 2 .
  • the method includes:
  • Step 402 Obtain a first sample image and at least two second sample images.
  • the first sample image refers to an image with high resolution, or with a damage type that does not affect the expression of the image content, or with a damage type that has little impact on the expression of the image content.
  • the second sample image refers to an image with a single damage type, and the image quality of the first sample image is higher than the image quality of the second sample image.
  • the damage type includes at least one of a blur damage type, a noise damage type, and an offset damage type, but is not limited to this, and the embodiments of the present application do not limit this.
  • the second sample image may be any one of a blurred sample image, a noisy sample image, and a biased sample image.
  • Blurry sample images refer to images with blurred content in the image.
  • a noisy sample image refers to an image containing content that is unnecessary for, or has a negative impact on, the analysis and understanding of the image content, that is, image noise; such an image is a noisy sample image.
  • a biased sample image refers to an image with differences in image brightness due to bias. For example, artifacts (i.e., image noise) caused by interference from metal objects often appear in medical images, which may affect doctors' judgment.
  • Step 404 Add the damage features corresponding to at least two second sample images to the first sample image respectively to generate at least two single degraded images.
  • Damage features refer to features corresponding to the damaged parts in the second sample image. For example, the features corresponding to the blurred parts in the second sample image, and the features corresponding to the noise parts in the second sample image.
  • a single degraded image refers to an image obtained by adding a single damage type to the first sample image.
  • the computer device acquires the first feature corresponding to the first sample image, and acquires the second features corresponding to the at least two second sample images respectively; the computer device obtains the damage features corresponding to the second sample images based on the first feature and the second features; the computer device adds the damage features to the first feature of the first sample image to obtain the single degraded images corresponding to the first sample image.
  • the first feature is used to characterize the image features of the first sample image
  • the second feature is used to characterize the image features of the second sample image.
  • the image reconstruction model includes a degenerate network layer, and the degenerate network layer includes a first degenerate encoder, a corrupted kernel extractor, and a first degenerate decoder.
  • the computer device extracts first features corresponding to the first sample image through the first degenerate encoder, and extracts second features corresponding to at least two second sample images respectively.
  • the computer device determines the damage feature by comparing the first feature and the second feature, and decouples the damage feature from the second feature through the damage kernel extractor to obtain the damage feature corresponding to the second sample image; the computer device adds the damage feature to the first feature of the first sample image to obtain an intermediate first feature, and inputs the intermediate first feature to the first degraded decoder for decoding processing to obtain the single degraded image corresponding to the first sample image.
  • the second sample image takes a blurred sample image, a noisy sample image and a biased sample image as examples.
  • the computer device extracts the features of the first sample image through the first degraded encoder to obtain the first feature; the computer device extracts the features of the blurred sample image, the noisy sample image and the offset sample image respectively through the first degraded encoder, obtaining the blur sample feature, the noise sample feature and the offset sample feature respectively.
  • the computer device inputs the first feature and the fuzzy sample feature into the fuzzy kernel extractor for feature extraction to obtain the fuzzy damage feature in the fuzzy sample feature; the computer device inputs the first feature and the offset sample feature into the offset kernel extractor for feature extraction. Extract, and obtain the bias damage feature among the bias sample features; the computer device inputs the first feature and the noise sample feature to the noise kernel extractor for feature extraction, and obtains the noise damage feature among the noise sample features.
  • the computer device fuses the first feature corresponding to the first sample image and the blur damage feature to generate the intermediate first blur feature; the computer device decodes the intermediate first blur feature through the first degraded decoder to generate the blur degraded image corresponding to the first sample image.
  • the computer device fuses the first feature corresponding to the first sample image and the offset damage feature to generate the intermediate first offset feature; the computer device decodes the intermediate first offset feature through the first degradation decoder to generate The offset degraded image corresponding to the first sample image.
  • the computer device fuses the first feature corresponding to the first sample image and the noise damage feature to generate the intermediate first noise feature; the computer device decodes the intermediate first noise feature through the first degraded decoder to generate the noise degraded image corresponding to the first sample image.
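  • The per-damage-type flow above (encode, decouple a damage feature, add it to the first feature, decode) can be sketched with toy numpy stand-ins; the `encoder`, `kernel_extractor` and `decoder` here are hypothetical placeholders for the learned networks, not the patent's actual implementation:

```python
import numpy as np

def encoder(image):
    # stand-in for the first degraded encoder: flatten to a feature vector
    return image.reshape(-1).astype(np.float64)

def kernel_extractor(first_feat, second_feat):
    # stand-in for a damage kernel extractor: decouple the damage component
    # as the part of the second feature not explained by the first feature
    return second_feat - first_feat

def decoder(feat, shape):
    # stand-in for the first degraded decoder
    return feat.reshape(shape)

def single_degrade(first_image, second_image):
    # add the decoupled damage feature to the first feature, then decode
    first_feat = encoder(first_image)
    damage_feat = kernel_extractor(first_feat, encoder(second_image))
    return decoder(first_feat + damage_feat, first_image.shape)
```

With these linear toy stand-ins the output trivially equals the second image; in the patent's scheme the learned extractor transfers only the damage characteristics while the content of the first sample image is preserved.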
  • Step 406 Obtain third features corresponding to at least two single degraded images, and fuse the third features to obtain multiple degraded images corresponding to the first sample image.
  • a multi-degraded image is an image with at least two types of damage.
  • the third feature is used to characterize the image characteristics of the single degraded image.
  • the degraded network layer in the image reconstruction model also includes a second degraded encoder and a second degraded decoder; the computer device obtains the third features corresponding to the at least two single degraded images through the second degraded encoder, and fuses the third features to obtain the degraded fusion feature; the computer device decodes the degraded fusion feature through the second degraded decoder to generate the multiple degraded images corresponding to the first sample image.
  • for example, the single degraded images are a blur degraded image, a bias degraded image and a noise degraded image.
  • the computer device performs feature extraction on the blur degraded image, the bias degraded image and the noise degraded image respectively through the second degraded encoder, and fuses the features corresponding to the blur degraded image, the features corresponding to the offset degraded image and the features corresponding to the noise degraded image to obtain the degraded fusion feature; the computer device decodes the degraded fusion feature through the second degraded decoder to generate the multiple degraded images corresponding to the first sample image.
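  • A minimal sketch of the fusion step, with hypothetical stand-ins for the second degraded encoder and decoder (here fusion is a simple mean of the three single-degraded feature vectors; the patent's learned fusion would be more elaborate):

```python
import numpy as np

def encode2(image):
    # stand-in for the second degraded encoder: extract a third feature
    return image.reshape(-1).astype(np.float64)

def decode2(feat, shape):
    # stand-in for the second degraded decoder
    return feat.reshape(shape)

def fuse_degradations(blur_img, bias_img, noise_img):
    # extract a third feature from each single degraded image ...
    feats = [encode2(img) for img in (blur_img, bias_img, noise_img)]
    # ... fuse them into a degraded fusion feature ...
    fused = np.mean(feats, axis=0)
    # ... and decode into the multiple degraded image
    return decode2(fused, blur_img.shape)
```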
  • Step 408 Perform image reconstruction processing on the multiple degraded images to generate predicted reconstructed images corresponding to the multiple degraded images.
  • Image reconstruction processing refers to processing the damage features in the multiple degraded images, reducing or removing the blur damage features, noise damage features and offset damage features in the multiple degraded images.
  • Predictive reconstructed images refer to images obtained by reducing or removing corrupted features in multiple degraded images.
  • the image reconstruction model includes a reconstruction network layer, and the reconstruction network layer includes a reconstruction encoder and a reconstruction decoder; the computer device inputs the multiple degraded images to the reconstruction encoder for feature extraction to obtain image reconstruction features; the computer device decodes the image reconstruction features through the reconstruction decoder to generate the predicted reconstructed images corresponding to the multiple degraded images.
  • Step 410 Calculate the first loss function value based on the second feature corresponding to the second sample image and the third feature corresponding to the single degraded image; and calculate the second loss function value based on the first feature corresponding to the first sample image and the fourth feature corresponding to the predicted reconstructed image.
  • the fourth feature is used to characterize the image features of the predicted reconstructed image.
  • the first loss function value and the second loss function value calculated based on the second sample image, the single degraded image, the first sample image and the predicted reconstructed image can be used to measure the training effect of the image reconstruction model.
  • the loss function value is at least one of cross entropy, mean square error, and absolute difference, but is not limited to this, and the embodiment of the present application does not limit this.
  • the first loss function value is used to measure the similarity between the second sample image and the single degraded image corresponding to the second sample image.
  • the computer device calculates a first loss function value based on the second feature corresponding to the second sample image and the third feature corresponding to the single degraded image.
  • the first loss function value includes a first blur loss function value, a first bias loss function value and a first noise loss function value.
  • the computer device calculates the first loss function value based on the second feature corresponding to the i-th second sample image among the at least two second sample images and the third feature corresponding to the i-th single degraded image among the at least two single degraded images, where i is a positive integer.
  • the computer device extracts the features of the first sample image 501 through the first degraded encoder 505 to obtain the first feature; the computer device extracts the features of the blurred sample image 502, the noisy sample image 504 and the offset sample image 503 respectively through the first degraded encoder 505, obtaining the blur sample feature, the noise sample feature and the offset sample feature respectively.
  • the computer device inputs the first feature and the fuzzy sample feature to the fuzzy kernel extractor 506 for feature extraction to obtain the fuzzy damage feature in the fuzzy sample feature; the computer device inputs the first feature and the offset sample feature to the offset kernel extractor 507 Feature extraction is performed to obtain the bias damage feature among the bias sample features; the computer device inputs the first feature and the noise sample feature to the noise kernel extractor 508 for feature extraction, and the noise damage feature among the noise sample features is obtained.
  • the computer device fuses the first feature corresponding to the first sample image 501 and the blur damage feature to generate the intermediate first blur feature; the computer device decodes the intermediate first blur feature through the first degraded decoder 509 to generate the blur degraded image 510 corresponding to the first sample image 501.
  • the computer device fuses the first feature corresponding to the first sample image 501 and the offset damage feature to generate the intermediate first offset feature; the computer device decodes the intermediate first offset feature through the first degraded decoder 509 to generate the offset degraded image 511 corresponding to the first sample image 501.
  • the computer device fuses the first feature corresponding to the first sample image 501 and the noise damage feature to generate the intermediate first noise feature; the computer device decodes the intermediate first noise feature through the first degraded decoder 509 to generate the noise degraded image 512 corresponding to the first sample image 501.
  • the computer device calculates the first blur loss function value 513 based on the second feature corresponding to the blur sample image 502 and the third feature corresponding to the blur degraded image 510;
  • the computer device calculates the first offset loss function value 514 based on the second feature corresponding to the offset sample image 503 and the third feature corresponding to the offset degradation image 511;
  • the computer device calculates a first noise loss function value 515 based on the second feature corresponding to the noisy sample image 504 and the third feature corresponding to the noise degraded image 512 .
  • in the calculation formula of the first loss function value: N is the number of data groups; R is the number of damage types; r denotes the damage type; x is the first sample image; y is the second sample image; K is the damage kernel extractor; a decoder term denotes the first degraded decoder; and E is the Charbonnier loss function.
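  • The formula itself was not preserved in this text. Under the symbol definitions above, one plausible reconstruction (with f denoting the first degraded encoder and G_dn the first degraded decoder, both assumed notation, and y_n^r the second sample image of damage type r in the n-th data group) is:

```latex
\mathcal{L}_{1} \;=\; \frac{1}{N} \sum_{n=1}^{N} \sum_{r=1}^{R}
E\!\left( G_{\mathrm{dn}}\!\big( f(x_{n}) + K_{r}(x_{n},\, y_{n}^{r}) \big),\; y_{n}^{r} \right)
```

This matches the described behavior: each single degraded image, produced by adding the extracted damage feature of type r to the first feature and decoding, is compared against the corresponding second sample image with the Charbonnier loss.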
  • the second loss function value is used to measure the authenticity of the predicted reconstructed image.
  • the computer device calculates the second loss function value based on the first feature corresponding to the first sample image and the fourth feature corresponding to the predicted reconstructed image.
  • the embodiment of the present application uses the idea of generative adversarial training: the first sample image and the predicted reconstructed image are input into a discriminator for discrimination to obtain a discrimination result.
  • the discriminator is used to distinguish the first sample image from the predicted reconstructed image.
  • the adversarial loss, that is, the second loss function value, is determined based on the discrimination result.
  • when the discriminator cannot distinguish whether a given image is the first sample image or the predicted reconstructed image, that is, when the predicted reconstructed image is close to the first sample image, training is complete.
  • in the calculation formula of the second loss function value: E is the Charbonnier loss function, D_up is the discriminator, x is the first sample image, and G_up is the reconstruction network layer.
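  • The formula itself was not preserved in this text. One plausible adversarial form consistent with these symbols (the real/fake targets 1 and 0 and the symbol \tilde{x} for the multiple degraded image are assumptions) is:

```latex
\mathcal{L}_{2} \;=\; E\big( D_{\mathrm{up}}(x),\; 1 \big)
\;+\; E\big( D_{\mathrm{up}}\big( G_{\mathrm{up}}(\tilde{x}) \big),\; 0 \big)
```

i.e. the discriminator D_up is pushed to score the first sample image as real and the reconstruction network's output as generated, while the reconstruction network layer G_up is trained to make the two indistinguishable.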
  • the loss function value also includes a third loss function value and a fourth loss function value.
  • the third loss function value is used to measure the similarity of the non-content parts between the multiple degraded images and the first sample image.
  • the third loss function value is calculated based on the structural features corresponding to the multiple degraded images and the structural features corresponding to the first sample image.
  • in the calculation formula of the third loss function value, x_i is the i-th first sample image, which is compared with the i-th multiple degraded image via the feature representation of the l-th layer of the image.
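  • The formula itself was not preserved in this text. One plausible perceptual-style form consistent with these symbols (\phi_l for the l-th layer feature representation and \tilde{x}_i for the i-th multiple degraded image are assumed notation) is:

```latex
\mathcal{L}_{3} \;=\; \sum_{i} \sum_{l}
\big\| \phi_{l}(x_{i}) - \phi_{l}(\tilde{x}_{i}) \big\|_{1}
```

Comparing layer-wise feature representations rather than raw pixels is what allows this loss to measure the similarity of the non-content (structural) parts, as described above.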
  • the fourth loss function value is used to measure the similarity between the first sample image and the predicted reconstructed image.
  • the fourth loss function value is calculated based on the content features and texture features corresponding to the first sample image and the content features and texture features corresponding to the predicted reconstructed image.
  • the content feature refers to the pixel value of each pixel in the image, such as the brightness of each pixel.
  • the calculation formula of the fourth loss function value uses a weight value; for example, the weight value is 0.9.
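  • The formula itself was not preserved in this text. One plausible weighted form (with \hat{x} the predicted reconstructed image, \lambda the weight value, e.g. \lambda = 0.9, and the content/texture terms as assumed notation) is:

```latex
\mathcal{L}_{4} \;=\; \lambda\, \mathcal{L}_{\mathrm{content}}(x, \hat{x})
\;+\; (1-\lambda)\, \mathcal{L}_{\mathrm{texture}}(x, \hat{x})
```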
  • Step 412 Update the model parameters of the image reconstruction model based on the sum of the first loss function value and the second loss function value.
  • Model parameter update refers to updating the network parameters in the image reconstruction model, or updating the network parameters of each network module in the model, or updating the network parameters of each network layer in the model, but is not limited to this.
  • the application examples do not limit this.
  • the computer device updates the model parameters of the image reconstruction model based on the sum of the first loss function value, the second loss function value, the third loss function value, and the fourth loss function value.
  • the total loss is a linear combination constructed based on the sum of the first loss function value, the second loss function value, the third loss function value and the fourth loss function value.
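  • The linear combination described above can be written (with assumed weight coefficients \alpha_k, which reduce to a plain sum when all equal 1) as:

```latex
\mathcal{L} \;=\; \alpha_{1}\,\mathcal{L}_{1} + \alpha_{2}\,\mathcal{L}_{2}
+ \alpha_{3}\,\mathcal{L}_{3} + \alpha_{4}\,\mathcal{L}_{4}
```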
  • the model parameters of the image reconstruction model include network parameters of the first degraded encoder, network parameters of the damaged kernel extractor, network parameters of the first degraded decoder, network parameters of the second degraded encoder, and network parameters of the second degraded decoder. , at least one of network parameters of the reconstructed encoder and network parameters of the reconstructed decoder.
  • the computer device updates, based on the loss function value, the network parameters of the first degraded encoder, the network parameters of the damage kernel extractor, the network parameters of the first degraded decoder, the network parameters of the second degraded encoder, the network parameters of the second degraded decoder, the network parameters of the reconstruction encoder and the network parameters of the reconstruction decoder in the image reconstruction model, obtaining the updated first degraded encoder, damage kernel extractor, first degraded decoder, second degraded encoder, second degraded decoder, reconstruction encoder and reconstruction decoder, thereby obtaining the trained image reconstruction model.
  • updating the model parameters of the image reconstruction model includes updating the network parameters of all network modules in the image reconstruction model, or fixing the network parameters of some network modules in the image reconstruction model and updating only the network parameters of the remaining network modules. For example, when updating the model parameters of the image reconstruction model, the network parameters of the first degraded encoder, the second degraded encoder and the reconstruction encoder are fixed, and only the network parameters of the damage kernel extractor, the first degraded decoder, the second degraded decoder and the reconstruction decoder are updated.
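  • The selective update described above can be sketched as a plain gradient step that skips frozen modules; the module names and the `sgd_update` helper are illustrative, not the patent's implementation (in a deep-learning framework the same effect is usually achieved by disabling gradients on the frozen modules):

```python
def sgd_update(params, grads, lr=0.1, frozen=()):
    # one SGD step: update only modules whose parameters are not frozen
    return {
        name: (p if name in frozen else p - lr * grads[name])
        for name, p in params.items()
    }
```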
  • the method provided by this embodiment acquires a first sample image and at least two second sample images; the computer device adds the damage features corresponding to the at least two second sample images to the first sample image respectively to generate single degraded images, which are fused into multiple degraded images and reconstructed into a predicted reconstructed image.
  • the computer device calculates the first loss function value, the second loss function value, the third loss function value, and the fourth loss function value based on the second sample image, the single degraded image, the first sample image, and the predicted reconstructed image.
  • the computer device updates the model parameters of the image reconstruction model based on the sum of the first loss function value, the second loss function value, the third loss function value, and the fourth loss function value.
  • the training method of the image reconstruction model provided by this application simultaneously performs image damage of multiple damage types on the first sample image to obtain multiple degraded images corresponding to the first sample image, and the reconstruction network layer reconstructs the multiple degraded images carrying multiple damage types.
  • the model trained by the above method can simultaneously reconstruct low-quality images with multiple damage types, avoiding the cumulative error caused by sequentially reconstructing low-quality images, thereby improving the image reconstruction accuracy of the trained image reconstruction model.
  • Figure 6 is a schematic diagram of the reconstruction effect of the image reconstruction model provided by an exemplary embodiment of the present application.
  • in reference scheme one, denoising, deblurring and debiasing are applied to the multiple degraded images in different processing orders, finally yielding six sets of image reconstruction results.
  • comparing the six sets of image reconstruction results of reference scheme one with the first sample image, and comparing the predicted reconstructed image obtained by the solution provided by this application with the first sample image, it can be seen that the solution provided by this application produces more realistic image reconstruction results with better visual quality for medical images.
  • Figure 8 is a schematic diagram of a training method for an image reconstruction model provided by an exemplary embodiment of the present application.
  • the method may be performed by a computer device, which may be the terminal 100 or the server 200 in FIG. 2 .
  • the method includes:
  • the first sample image 801 and at least two second sample images 802 are input into the degradation network layer 803, and the first sample image 801 is subjected to image degradation processing to obtain the image degradation result: the multiple degraded image 804.
  • Image degradation refers to the decline in image quality due to imperfections in the imaging system, recording equipment, transmission media, and processing methods during the formation, recording, processing, and transmission of images. This phenomenon is called image degradation.
  • the image degradation processing performed on the first sample image 801 adds multiple damage types to the first sample image 801 at the same time; for example, applying random blur processing, random noise processing and random offset processing to the first sample image 801, that is, adding multiple types of damage to the first sample image 801.
  • when reconstructing the multiple degraded image 804, the reconstruction network layer 805 simultaneously repairs multiple types of damage in the multiple degraded image 804, that is, removes blur, removes noise and removes offset.
  • the reconstruction network layer 805 reconstructs the multiple degraded image 804 into a high-quality image, that is, the predicted reconstructed image 806.
  • the accuracy of the image reconstruction affects the accuracy of the information display in the medical image, which in turn affects the accuracy of medical diagnosis based on the displayed information. Therefore, in the medical image-assisted diagnosis scenario, the image reconstruction model obtained by using the image reconstruction model training method provided by this application can improve the accuracy of image reconstruction of medical images, thereby displaying detailed information in medical images more clearly, and improving the accuracy of medical auxiliary diagnosis.
  • the training method of the image reconstruction model involved in this application can be implemented based on the image reconstruction model.
  • the solution includes an image reconstruction model generation stage and an image reconstruction stage.
  • Figure 9 is a frame diagram of image reconstruction model generation and image reconstruction illustrating an exemplary embodiment of the present application.
  • after obtaining a preset training sample data set (including the first sample image and at least two second sample images), the image reconstruction model generation device 910 trains and obtains the image reconstruction model, and the image reconstruction result is then generated based on the image reconstruction model.
  • the image reconstruction device 920 processes the input first image based on the image reconstruction model to obtain an image reconstruction result of the first image.
  • the above-mentioned image reconstruction model generation device 910 and image reconstruction device 920 can be computer equipment.
  • the computer equipment can be fixed computer equipment such as a personal computer or a server, or the computer equipment can also be mobile computer equipment such as a tablet computer or an e-book reader.
  • the image reconstruction model generation device 910 and the image reconstruction device 920 may be the same device, or the image reconstruction model generation device 910 and the image reconstruction device 920 may be different devices.
  • the image reconstruction model generation device 910 and the image reconstruction device 920 may be the same type of device; for example, both may be servers; or the image reconstruction model generation device 910 and the image reconstruction device 920 may be different types of devices.
  • the image reconstruction device 920 can be a personal computer or terminal, and the image reconstruction model generation device 910 can be a server, etc.
  • the embodiment of the present application does not limit the specific types of the image reconstruction model generation device 910 and the image reconstruction device 920.
  • Figure 10 is a flow chart of an image reconstruction method provided by an exemplary embodiment of the present application.
  • the method may be performed by a computer device, which may be the terminal 100 or the server 200 in FIG. 2 .
  • the method includes:
  • Step 1002 Obtain the first image.
  • the first image refers to the image with multiple damage types.
  • the method of obtaining the first image includes at least one of the following situations:
  • the computer device receives the first image.
  • the terminal is a terminal that initiates image scanning.
  • the terminal scans the image, and after the scanning is completed, sends the first image to the server.
  • the computer device obtains the first image from a stored database; for example, at least one first image is acquired from the MNIST segmentation data set or a public brain MRI data set. It is worth noting that the above-mentioned methods of obtaining the first image are only illustrative examples, and the embodiments of the present application are not limited thereto.
  • Step 1004 Based on the trained reconstruction network layer, perform image reconstruction processing on the first image to obtain the first reconstructed image.
  • the computer device performs image reconstruction processing on the first image based on the trained reconstruction network layer; that is, the reconstruction network layer simultaneously repairs multiple types of damage in the first image (removes blur, removes noise and removes offset) and reconstructs the first image into a high-quality image, that is, the first reconstructed image.
  • Step 1006 Output the first reconstructed image.
  • the computer device outputs the first reconstructed image.
  • the MNIST data set was selected for the experiment.
  • the MNIST data set includes 60,000 training examples and 10,000 test examples, and all images are 28 × 28. All images were pre-registered; all images were divided into 90% for training and 10% for testing, and 20% of the training images were retained as the validation set for both data sets.
  • the reference scheme includes reference scheme 1, reference scheme 2 and reference scheme 3.
  • the reference solution selects denoising (DN) processing, deblurring (DB) processing and N4 offset correction processing for low-quality images, but the order of processing is different.
  • the evaluation indicators used are: Peak Signal to Noise Ratio (PSNR) and Structural Similarity (SSIM), which are used to evaluate the image reconstruction effect.
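  • As a sketch of how these two metrics are computed (PSNR as defined, and a simplified single-window SSIM rather than the locally windowed version used by standard toolkits):

```python
import numpy as np

def psnr(ref, img, max_val=1.0):
    # Peak Signal to Noise Ratio in dB
    mse = np.mean((ref.astype(np.float64) - img.astype(np.float64)) ** 2)
    if mse == 0:
        return float("inf")
    return 10.0 * np.log10(max_val ** 2 / mse)

def ssim_global(ref, img, max_val=1.0):
    # Global (single-window) SSIM; standard implementations average
    # this statistic over local sliding windows instead.
    c1 = (0.01 * max_val) ** 2
    c2 = (0.03 * max_val) ** 2
    mu_x, mu_y = ref.mean(), img.mean()
    var_x, var_y = ref.var(), img.var()
    cov = ((ref - mu_x) * (img - mu_y)).mean()
    return ((2 * mu_x * mu_y + c1) * (2 * cov + c2)) / (
        (mu_x ** 2 + mu_y ** 2 + c1) * (var_x + var_y + c2))
```

Higher PSNR and SSIM values indicate an image reconstruction result closer to the reference image.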
  • the method provided in this embodiment acquires the first image, performs image reconstruction processing on the first image based on the trained reconstruction network layer, and obtains a high-quality first reconstructed image.
  • This application is based on the trained reconstruction network layer and can obtain more accurate image reconstruction results.
  • Figure 11 is a schematic diagram of an image reconstruction method provided by an exemplary embodiment of the present application.
  • the method may be performed by a computer device, which may be the terminal 100 or the server 200 in FIG. 2 .
  • the method includes:
  • when the first front-end 1101 receives a first image that needs to be reconstructed (the first image refers to an image with multiple damage types), the first front-end 1101 uploads the first image to the computer device 1102 for image reconstruction processing; for the image reconstruction process performed on the first image by the computer device 1102, reference may be made to the description in the previous embodiments, which will not be repeated here.
  • After the computer device 1102 performs image reconstruction processing on the first image, the computer device 1102 outputs the image reconstruction result to the second front-end 1103.
  • the first front-end 1101 and the second front-end 1103 may be the same front end, or they may be different front ends, which is not limited in this embodiment of the present application.
  • Figure 12 shows a schematic structural diagram of a training device for an image reconstruction model provided by an exemplary embodiment of the present application.
  • the device can be implemented as all or part of the computer equipment through software, hardware, or a combination of both.
  • the device includes:
  • the acquisition module 1201 is used to acquire a first sample image and at least two second sample images.
  • the second sample image refers to an image with a single damage type, and the image quality of the first sample image is higher than the image quality of the second sample image.
  • the degradation module 1202 is configured to add at least two damage features corresponding to the second sample image to the first sample image, respectively, to generate at least two single degradation images.
  • the fusion module 1203 is configured to fuse at least two of the single degraded images to obtain a multiple degraded image corresponding to the first sample image.
  • the multiple degraded image refers to an image with at least two types of damage.
  • the reconstruction module 1204 is configured to perform image reconstruction processing on the multiple degraded images and generate predicted reconstructed images corresponding to the multiple degraded images.
  • the calculation module 1205 is configured to calculate a loss function value based on the second sample image, the single degraded image, the first sample image and the predicted reconstructed image.
  • the update module 1206 is configured to update model parameters of the image reconstruction model based on the loss function value.
  • the degradation module 1202 is also configured to obtain the first feature corresponding to the first sample image and the second features corresponding to the at least two second sample images respectively; obtain, based on the first feature and the second feature, the damage feature corresponding to the second sample image; and add the damage feature to the first feature of the first sample image to obtain the single degraded image corresponding to the first sample image.
  • the image reconstruction model includes a degradation network layer, which includes a first degradation encoder, a damage kernel extractor and a first degradation decoder.
  • the degradation module 1202 is further configured to extract, through the first degradation encoder, the first feature corresponding to the first sample image and the second features corresponding to the at least two second sample images respectively; determine the damage feature by comparing the first feature and the second feature, and decouple, through the damage kernel extractor, the damage feature corresponding to the second sample image from the second feature; and add the damage feature to the first feature of the first sample image to obtain an intermediate first feature, and input the intermediate first feature to the first degradation decoder for decoding to obtain the single degraded image corresponding to the first sample image.
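For illustration only, the encode/compare/decode flow of the degradation module can be sketched with placeholder networks. The `encode`, `decode` and `extract_damage` stand-ins below are hypothetical identity/difference operations, not the patented encoder, decoder or damage kernel extractor.

```python
import numpy as np

def encode(img):
    # placeholder for the first degradation encoder
    return img.astype(np.float64)

def decode(feat):
    # placeholder for the first degradation decoder
    return feat

def extract_damage(clean_feat, degraded_feat):
    # stand-in for the damage kernel extractor: isolate the damage
    # component by contrasting the clean and degraded features
    return degraded_feat - clean_feat

clean = np.full((4, 4), 0.5)
noisy_sample = clean + np.random.default_rng(1).normal(0.0, 0.1, (4, 4))

f_clean = encode(clean)
f_noisy = encode(noisy_sample)
damage = extract_damage(f_clean, f_noisy)

# add the damage feature to the clean image's feature, then decode
single_degraded = decode(f_clean + damage)
```

With these identity stand-ins the output exactly reproduces the degraded sample, which is the intent of the step: transplant the damage of the second sample image onto the first.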
  • the fusion module 1203 is also configured to obtain the third features corresponding to the at least two single degraded images, and fuse the third features to obtain the multiple degraded image corresponding to the first sample image.
  • the degraded network layer in the image reconstruction model also includes a second degraded encoder and a second degraded decoder.
  • the fusion module 1203 is also configured to obtain, through the second degradation encoder, the third features corresponding to the at least two single degraded images, and fuse the third features to obtain a degradation fusion feature; and use the second degradation decoder to decode the degradation fusion feature to generate the multiple degraded image corresponding to the first sample image.
  • the image reconstruction model includes a reconstruction network layer, and the reconstruction network layer includes a reconstruction encoder and a reconstruction decoder.
  • the reconstruction module 1204 is also configured to input the multiple degraded image to the reconstruction encoder for feature extraction to obtain the image reconstruction features, and to decode the image reconstruction features through the reconstruction decoder to generate the predicted reconstructed image corresponding to the multiple degraded image.
  • the loss function value includes a first loss function value and a second loss function value.
  • the first loss function value is used to measure the difference between the second sample image and the single degraded image corresponding to the second sample image.
  • the second loss function value is used to measure the authenticity of the predicted reconstructed image.
  • the calculation module 1205 is further configured to calculate the first loss function value based on the second feature corresponding to the second sample image and the third feature corresponding to the single degraded image.
  • the calculation module 1205 is further configured to calculate the second loss function value based on the first feature corresponding to the first sample image and the fourth feature corresponding to the predicted reconstructed image.
  • the update module 1206 is also configured to update the model parameters of the image reconstruction model based on the sum of the first loss function value and the second loss function value.
  • the calculation module 1205 is further configured to calculate the first loss function value based on the second feature corresponding to the i-th second sample image among the at least two second sample images and the third feature corresponding to the i-th single degraded image among the at least two single degraded images, where i is a positive integer.
  • the loss function value also includes a third loss function value and a fourth loss function value.
  • the third loss function value is used to measure the similarity of the non-content parts between the multiple degraded images and the first sample image.
  • the fourth loss function value is used to measure the similarity between the first sample image and the predicted reconstructed image.
  • the calculation module 1205 is further configured to calculate a third loss function value based on the structural features corresponding to the multiple degraded images and the structural features corresponding to the first sample image.
  • the calculation module 1205 is further configured to calculate the fourth loss function value based on the content feature and texture feature corresponding to the first sample image and the content feature and texture feature corresponding to the predicted reconstructed image.
  • the update module 1206 is also configured to update the model parameters of the image reconstruction model based on the sum of the first loss function value, the second loss function value, the third loss function value and the fourth loss function value.
  • the damage type includes at least one of a blur damage type, a noise damage type, and an offset damage type.
  • the acquisition module 1201 is also used to acquire a first image, where the first image refers to an image with multiple damage types.
  • the reconstruction module 1204 is also configured to perform image reconstruction processing on the first image based on the trained reconstruction network layer to obtain a first reconstructed image, where the first reconstructed image refers to An image obtained by removing multiple damage types in the first image; and outputting the first reconstructed image.
  • FIG. 13 shows a structural block diagram of a computer device 1300 according to an exemplary embodiment of the present application.
  • the computer device can be implemented as the server in the above solution of this application.
  • the computer device 1300 includes a central processing unit (Central Processing Unit, CPU) 1301, a system memory 1304 including a random access memory (Random Access Memory, RAM) 1302 and a read-only memory (Read-Only Memory, ROM) 1303, and a system bus 1305 connecting the system memory 1304 and the central processing unit 1301.
  • the computer device 1300 also includes a mass storage device 1306 for storing an operating system 1309, applications 1310 and other program modules 1311.
  • the mass storage device 1306 is connected to the central processing unit 1301 through a mass storage controller (not shown) connected to the system bus 1305 .
  • the mass storage device 1306 and its associated computer-readable media provide non-volatile storage for the computer device 1300. That is, the mass storage device 1306 may include computer-readable media (not shown) such as a hard disk or a Compact Disc Read-Only Memory (CD-ROM) drive.
  • the computer-readable media may include computer storage media and communication media.
  • Computer storage media includes volatile and nonvolatile, removable and non-removable media implemented in any method or technology for storage of information such as computer readable instructions, data structures, program modules or other data.
  • Computer storage media include RAM, ROM, Erasable Programmable Read-Only Memory (EPROM), Electrically Erasable Programmable Read-Only Memory (EEPROM), flash memory or other solid-state storage technology, CD-ROM, Digital Versatile Disc (DVD) or other optical storage, tape cassettes, magnetic tape, disk storage or other magnetic storage devices.
  • the computer device 1300 may also operate through a remote computer connected to a network such as the Internet. That is, the computer device 1300 can be connected to the network 1308 through the network interface unit 1307 connected to the system bus 1305, or the network interface unit 1307 can be used to connect to other types of networks or remote computer systems (not shown).
  • the memory also includes at least one computer program, which is stored in the memory.
  • by executing the at least one computer program, the central processing unit 1301 implements all or part of the steps of the training method of the image reconstruction model shown in the above embodiments.
  • An embodiment of the present application also provides a computer device.
  • the computer device includes a processor and a memory; at least one program is stored in the memory, and the at least one program is loaded and executed by the processor to implement the training method of the image reconstruction model provided by the above method embodiments.
  • Embodiments of the present application also provide a computer-readable storage medium, which stores at least one computer program.
  • the at least one computer program is loaded and executed by the processor to implement the training method of the image reconstruction model provided by the above method embodiments.
  • Embodiments of the present application also provide a computer program product.
  • the computer program product includes a computer program.
  • the computer program is stored in a computer-readable storage medium.
  • a processor of the computer device reads the computer program from the computer-readable storage medium and executes it, so that the computer device performs the training method of the image reconstruction model provided by each of the above method embodiments.


Abstract

This application discloses a training method, apparatus, device, medium and program product for an image reconstruction model, belonging to the field of image processing technology. The method includes: acquiring a first sample image and at least two kinds of second sample image (302); adding the damage features corresponding to the at least two kinds of second sample image to the first sample image respectively, generating at least two kinds of single degraded image (304); fusing the at least two kinds of single degraded image to obtain a multiple degraded image corresponding to the first sample image (306); performing image reconstruction processing on the multiple degraded image to generate a predicted reconstructed image corresponding to the multiple degraded image (308); calculating a loss function value based on the second sample images, the single degraded images, the first sample image and the predicted reconstructed image (310); and updating the model parameters of the model based on the loss function value (312). A model trained by the above method avoids the accumulated error caused by reconstructing an image stage by stage.

Description

Training method, apparatus, device, medium and program product for an image reconstruction model
This application claims priority to Chinese patent application No. 202210508810.2, filed on May 10, 2022 and entitled "Training method, apparatus, device, medium and program product for an image reconstruction model", the entire contents of which are incorporated herein by reference.
Technical Field
The embodiments of this application relate to the field of image processing technology, and in particular to a training method, apparatus, device, medium and program product for an image reconstruction model.
Background
High-quality three-dimensional images can display detail information more clearly. For example, in the medical field, high-quality three-dimensional medical images facilitate medical diagnosis and analysis. However, during image formation, recording, processing and transmission, imperfections in the imaging system, recording equipment, transmission medium and processing methods cause image quality to degrade.
In the related art, a deep convolutional neural network is usually used to directly learn the mapping between paired low-quality and high-quality images, so as to generate a high-quality three-dimensional image based on a low-quality three-dimensional image.
However, when a low-quality three-dimensional image has multiple damaged parts, such as noise and missing image content, the above method usually reconstructs the damaged parts of the low-quality three-dimensional image one after another. As a result, the reconstruction error of the first stage propagates to subsequent stages, causing a large overall image reconstruction error.
Summary
This application provides a training method, apparatus, device, medium and program product for an image reconstruction model, which can obtain relatively accurate image reconstruction results. The technical solution is as follows:
According to one aspect of this application, a training method for an image reconstruction model is provided, the method including:
acquiring a first sample image and at least two kinds of second sample image, the second sample image being an image with a single damage type, and the image quality of the first sample image being higher than the image quality of the second sample image;
adding the damage features corresponding to the at least two kinds of second sample image to the first sample image respectively to generate at least two kinds of single degraded image; and fusing the at least two kinds of single degraded image to obtain a multiple degraded image corresponding to the first sample image, the multiple degraded image being an image with at least two damage types;
performing image reconstruction processing on the multiple degraded image to generate a predicted reconstructed image corresponding to the multiple degraded image;
calculating a loss function value based on the second sample images, the single degraded images, the first sample image and the predicted reconstructed image; and
updating model parameters of the image reconstruction model based on the loss function value.
In a possible implementation, the damage type includes at least one of a blur damage type, a noise damage type and a bias damage type.
In a possible implementation, the method further includes:
acquiring a first image, the first image being an image with multiple damage types;
performing image reconstruction processing on the first image based on the trained reconstruction network layer to obtain a first reconstructed image, the first reconstructed image being an image obtained by removing the multiple damage types from the first image; and
outputting the first reconstructed image.
According to one aspect of this application, a training apparatus for an image reconstruction model is provided, the apparatus including:
an acquisition module, configured to acquire a first sample image and at least two kinds of second sample image, the second sample image being an image with a single damage type, and the image quality of the first sample image being higher than the image quality of the second sample image;
a degradation module, configured to add the damage features corresponding to the at least two kinds of second sample image to the first sample image respectively to generate at least two kinds of single degraded image;
a fusion module, configured to fuse the at least two kinds of single degraded image to obtain a multiple degraded image corresponding to the first sample image, the multiple degraded image being an image with at least two damage types;
a reconstruction module, configured to perform image reconstruction processing on the multiple degraded image to generate a predicted reconstructed image corresponding to the multiple degraded image;
a calculation module, configured to calculate a loss function value based on the second sample images, the single degraded images, the first sample image and the predicted reconstructed image; and
an update module, configured to update model parameters of the image reconstruction model based on the loss function value.
According to another aspect of this application, a computer device is provided, including a processor and a memory; at least one computer program is stored in the memory, and the at least one computer program is loaded and executed by the processor to implement the training method for an image reconstruction model described above.
According to another aspect of this application, a computer-readable storage medium is provided, in which at least one computer program is stored; the at least one computer program is loaded and executed by a processor to implement the training method for an image reconstruction model described above.
According to another aspect of this application, a computer program product is provided, the computer program product including a computer program stored in a computer-readable storage medium; a processor of a computer device reads the computer program from the computer-readable storage medium and executes it, so that the computer device performs the training method for an image reconstruction model described above.
The beneficial effects of the technical solution provided by this application at least include:
A first sample image and at least two kinds of second sample image are acquired; the damage features corresponding to the at least two kinds of second sample image are added to the first sample image respectively to generate at least two kinds of single degraded image; the at least two kinds of single degraded image are fused to obtain a multiple degraded image corresponding to the first sample image; image reconstruction processing is then performed on the multiple degraded image to generate a predicted reconstructed image corresponding to the multiple degraded image; the computer device calculates a loss function value based on the second sample images, the single degraded images, the first sample image and the predicted reconstructed image, and updates the model parameters of the image reconstruction model based on the loss function value. In the training method for an image reconstruction model provided by this application, multiple damage types are applied to the first sample image simultaneously to obtain the corresponding multiple degraded image, and this multiply damaged image is then reconstructed. A model trained in this way can reconstruct multiple damage types of a low-quality image at the same time, avoiding the accumulated error caused by reconstructing the damage types of a low-quality image one after another, thereby improving the image reconstruction accuracy of the trained image reconstruction model.
Brief Description of the Drawings
Figure 1 is a schematic diagram of a training method for an image reconstruction model provided by an exemplary embodiment of this application;
Figure 2 is a schematic architectural diagram of a computer system provided by an exemplary embodiment of this application;
Figure 3 is a flowchart of a training method for an image reconstruction model provided by an exemplary embodiment of this application;
Figure 4 is a flowchart of a training method for an image reconstruction model provided by an exemplary embodiment of this application;
Figure 5 is a schematic diagram of a training method for an image reconstruction model provided by an exemplary embodiment of this application;
Figure 6 is a schematic diagram of the reconstruction effect of an image reconstruction model provided by an exemplary embodiment of this application;
Figure 7 is a schematic diagram of the reconstruction effect of an image reconstruction model provided by an exemplary embodiment of this application;
Figure 8 is a schematic diagram of a training method for an image reconstruction model provided by an exemplary embodiment of this application;
Figure 9 is a framework diagram of image reconstruction model generation and image reconstruction provided by an exemplary embodiment of this application;
Figure 10 is a flowchart of an image reconstruction method provided by an exemplary embodiment of this application;
Figure 11 is a schematic diagram of an image reconstruction method provided by an exemplary embodiment of this application;
Figure 12 is a block diagram of a training apparatus for an image reconstruction model provided by an exemplary embodiment of this application;
Figure 13 is a schematic structural diagram of a computer device provided by an exemplary embodiment of this application.
Detailed Description
The embodiments of this application provide a technical solution for a training method for an image reconstruction model. As shown in Figure 1, the method may be performed by a computer device, which may be a terminal or a server.
Illustratively, the computer device acquires a first sample image 101 and at least two kinds of second sample image. The at least two kinds of second sample image include at least two of a blurred sample image 102, a biased sample image 103 and a noisy sample image 104. A second sample image is an image with a single damage type, and the image quality of the first sample image is higher than the image quality of the second sample images.
Optionally, the damage type includes at least one of a blur damage type, a noise damage type and a bias damage type, but is not limited thereto, and the embodiments of this application do not specifically limit this.
For example, the first sample image 101 is an image that has high resolution, or whose damage does not affect, or only slightly affects, the expression of the image content. The blurred sample image 102 is an image containing blurred content. The noisy sample image 104 is an image containing unwanted content that negatively affects the analysis and understanding of the image content, i.e., image noise. The biased sample image 103 is an image whose brightness varies due to bias. It can be understood that the blur, noise and bias damage types in the second sample images are all set randomly.
Illustratively, the computer device extracts, through the first degradation encoder 105, the first feature corresponding to the first sample image 101, and extracts the second features corresponding to the at least two kinds of second sample image respectively.
Based on the first feature and the second features, the computer device extracts the damage feature in each second feature through the corresponding damage kernel extractor; the computer device adds the damage feature to the first feature of the first sample image 101 to obtain an intermediate first feature, and inputs the intermediate first feature to the first degradation decoder 109 for decoding to obtain the single degraded image corresponding to the first sample image.
For example, the computer device extracts the features of the first sample image 101 through the first degradation encoder 105 to obtain the first feature, and extracts the features of the blurred sample image 102, the noisy sample image 104 and the biased sample image 103 through the first degradation encoder 105 to obtain a blur sample feature, a noise sample feature and a bias sample feature, respectively.
The computer device inputs the first feature and the blur sample feature to the blur kernel extractor 106 for feature extraction to obtain the blur damage feature in the blur sample feature; inputs the first feature and the bias sample feature to the bias kernel extractor 107 for feature extraction to obtain the bias damage feature in the bias sample feature; and inputs the first feature and the noise sample feature to the noise kernel extractor 108 for feature extraction to obtain the noise damage feature in the noise sample feature.
The computer device fuses the first feature corresponding to the first sample image 101 with the blur damage feature to generate an intermediate first blur feature, and decodes the intermediate first blur feature through the first degradation decoder 109 to generate the blur degraded image 110 corresponding to the first sample image 101.
The computer device fuses the first feature corresponding to the first sample image 101 with the bias damage feature to generate an intermediate first bias feature, and decodes the intermediate first bias feature through the first degradation decoder 109 to generate the bias degraded image 111 corresponding to the first sample image.
The computer device fuses the first feature corresponding to the first sample image 101 with the noise damage feature to generate an intermediate first noise feature, and decodes the intermediate first noise feature through the first degradation decoder 109 to generate the noise degraded image 112 corresponding to the first sample image.
Illustratively, the computer device obtains, through the second degradation encoder 113, the third features corresponding to the at least two kinds of single degraded image, and fuses the third features to obtain a degradation fusion feature; the computer device decodes the degradation fusion feature through the second degradation decoder 114 to generate the multiple degraded image 115 corresponding to the first sample image 101.
For example, the computer device performs feature extraction on the blur degraded image 110, the bias degraded image 111 and the noise degraded image 112 through the second degradation encoder 113, and fuses the features corresponding to the blur degraded image 110, the bias degraded image 111 and the noise degraded image 112 to obtain a degradation fusion feature; the computer device decodes the degradation fusion feature through the second degradation decoder 114 to generate the multiple degraded image 115 corresponding to the first sample image 101.
The computer device performs image reconstruction processing on the multiple degraded image based on the reconstruction encoder 116 and the reconstruction decoder 117 in the reconstruction network layer of the image reconstruction model, and generates the predicted reconstructed image 118 corresponding to the multiple degraded image 115.
Illustratively, the computer device calculates the first loss function value based on the second features corresponding to the second sample images and the third features corresponding to the single degraded images; the first loss function value includes a first blur loss function value, a first bias loss function value and a first noise loss function value.
For example, the computer device calculates the first blur loss function value based on the second feature corresponding to the blurred sample image 102 and the third feature corresponding to the blur degraded image 110; calculates the first bias loss function value based on the second feature corresponding to the biased sample image 103 and the third feature corresponding to the bias degraded image 111; and calculates the first noise loss function value based on the second feature corresponding to the noisy sample image 104 and the third feature corresponding to the noise degraded image 112. The first loss function value is used to measure the similarity between a second sample image and the single degraded image corresponding to that second sample image.
Illustratively, the computer device calculates the second loss function value based on the first feature corresponding to the first sample image 101 and the fourth feature corresponding to the predicted reconstructed image 118. The second loss function value is used to measure the authenticity of the predicted reconstructed image.
Illustratively, the computer device calculates the third loss function value based on the structural features corresponding to the multiple degraded image 115 and the structural features corresponding to the first sample image 101. The structural features corresponding to the multiple degraded image 115 refer to the structural features of the non-content parts of the multiple degraded image 115. The third loss function value is used to measure the similarity of the non-content parts between the multiple degraded image 115 and the first sample image 101.
Illustratively, the computer device calculates the fourth loss function value based on the content features and texture features corresponding to the first sample image 101 and the content features and texture features corresponding to the predicted reconstructed image 118. The fourth loss function value is used to measure the similarity between the first sample image and the predicted reconstructed image.
Illustratively, the computer device updates the model parameters of the image reconstruction model based on the sum of the first loss function value, the second loss function value, the third loss function value and the fourth loss function value.
In summary, in the method provided by this embodiment, a first sample image and three kinds of second sample image are acquired; in the degradation network layer, the damage feature corresponding to each of the three kinds of second sample image is added to the first sample image respectively to generate three kinds of single degraded image; the three kinds of single degraded image are fused to obtain a multiple degraded image corresponding to the first sample image; image reconstruction processing is then performed on the multiple degraded image in the reconstruction network layer to generate a predicted reconstructed image corresponding to the multiple degraded image; the computer device calculates a loss function value based on the three kinds of second sample image, the three kinds of single degraded image, the first sample image and the predicted reconstructed image, and updates the model parameters of the image reconstruction model based on the loss function value. In the training method for an image reconstruction model provided by this application, multiple damage types are applied to the first sample image simultaneously to obtain the corresponding multiple degraded image, and the multiple degraded image with multiple damage types is reconstructed through the reconstruction network layer; a model trained in this way can reconstruct multiple damage types of a low-quality image at the same time, avoiding the accumulated error caused by reconstructing a low-quality image stage by stage, thereby improving the image reconstruction accuracy of the trained image reconstruction model.
Figure 2 shows a schematic architectural diagram of a computer system provided by an embodiment of this application. The computer system may include a terminal 100 and a server 200.
The terminal 100 may be an electronic device such as a mobile phone, tablet computer, in-vehicle terminal, wearable device, personal computer (PC), smart voice interaction device, smart home appliance, aircraft or unmanned vending terminal. A client running a target application may be installed in the terminal 100; the target application may be an image reconstruction application or another application providing an image reconstruction function, which is not limited by this application. The form of the target application is also not limited, including but not limited to an application (App) or mini-program installed in the terminal 100, or a web page.
The server 200 may be an independent physical server, a server cluster or distributed system composed of multiple physical servers, or a cloud server providing basic cloud computing services such as cloud services, cloud databases, cloud computing, cloud functions, cloud storage, network services, cloud communication, middleware services, domain name services, security services, content delivery network (CDN) services, and big data and artificial intelligence platforms. The server 200 may be a back-end server of the above target application, providing back-end services for the client of the target application.
Cloud technology refers to a hosting technology that unifies resources such as hardware, software and networks within a wide area network or local area network to realize the computation, storage, processing and sharing of data. It is a general term for the network technology, information technology, integration technology, management platform technology and application technology applied on the basis of the cloud computing business model, and can form a resource pool used on demand, flexibly and conveniently. Cloud computing technology will become an important support: the back-end services of technical network systems require a large amount of computing and storage resources, such as video websites, image websites and portal websites. With the development and application of the Internet industry, every item may in the future have its own identification mark that needs to be transmitted to a back-end system for logical processing; data of different levels will be processed separately, and all kinds of industry data require strong back-end system support, which can only be realized through cloud computing.
In some embodiments, the above server may also be implemented as a node in a blockchain system. Blockchain is a new application mode of computer technologies such as distributed data storage, point-to-point transmission, consensus mechanisms and encryption algorithms. A blockchain is essentially a decentralized database: a chain of data blocks associated using cryptographic methods, each data block containing a batch of network transaction information used to verify the validity (anti-counterfeiting) of its information and to generate the next block. A blockchain may include an underlying blockchain platform, a platform product service layer and an application service layer.
The terminal 100 and the server 200 may communicate through a network, such as a wired or wireless network.
In the training method for an image reconstruction model provided by the embodiments of this application, each step may be performed by a computer device, i.e., an electronic device with data computation, processing and storage capabilities. Taking the implementation environment shown in Figure 2 as an example, the training method or the image reconstruction method may be performed by the terminal 100 (for example, by the client of the target application installed and running in the terminal 100), by the server 200, or by the terminal 100 and the server 200 in interactive cooperation, which is not limited by this application.
Figure 3 is a flowchart of a training method for an image reconstruction model provided by an exemplary embodiment of this application. The method may be performed by a computer device, which may be the terminal 100 or the server 200 in Figure 2. The method includes:
Step 302: Acquire a first sample image and at least two kinds of second sample image.
A second sample image is an image with a single damage type, and the image quality of the first sample image is higher than the image quality of the second sample image. A single damage type means having only one damage type. Damage types include image blur, image bias, image noise, and so on. Optionally, the second sample image is obtained by performing an image degradation operation on the first sample image, in which case the first and second sample images differ only in image quality. Optionally, the first sample image is obtained by performing an image quality enhancement operation on the second sample image, in which case the two likewise differ only in image quality.
Optionally, the first sample image and the second sample image are obtained by photographing the same object with different shooting parameters, in which case they differ only in image quality: the first sample image is captured with correct shooting parameters and the second sample image with incorrect shooting parameters, the correct and incorrect parameters corresponding to the high-quality and low-quality images respectively.
For example, the first sample image is a high-resolution image and the second sample image is a low-resolution image. It can be understood that the images involved in the embodiments of this application may be internal tissue images of biological or non-biological objects that cannot be seen directly by the human eye and are obtained in a non-invasive manner. For example, in the biomedical field, the images in the embodiments of this application may be biological images (such as medical images), i.e., images of the internal tissue of an organism or part of an organism (such as the human body or part of it) obtained non-invasively for medical treatment or medical research. In one example, for the medical field, the images may be images of the heart and lungs, liver, stomach, large and small intestines, brain, bones, blood vessels, etc., or images of non-organ targets such as tumors. In addition, the images involved in the embodiments of this application may be generated by imaging technologies such as X-ray, computerized tomography (CT), positron emission tomography (PET), nuclear magnetic resonance imaging (NMRI) or medical ultrasonography, or may be what-you-see-is-what-you-get images generated by visual imaging technology, such as images captured by a camera (e.g., the camera of a camera device or a terminal).
Step 304: Add the damage features corresponding to the at least two kinds of second sample image to the first sample image respectively, to generate at least two kinds of single degraded image.
A feature is a characteristic (or set of characteristics) by which one class of objects is distinguished from other classes. In a possible implementation, the computer device may use a machine learning model to perform feature extraction on an image. A damage feature is the feature corresponding to the damaged part of a second sample image, for example, the feature corresponding to the blurred area or the noisy area of the second sample image.
Illustratively, the computer device extracts the damage features corresponding to the at least two kinds of second sample image and adds the damage feature corresponding to each kind of second sample image to the first sample image respectively to generate a single degraded image, so that the generated single degraded image contains damage features identical or similar to those of the second sample image. A single degraded image is an image obtained by adding a single damage type to the first sample image. For example, the computer device extracts the blur damage feature corresponding to a second sample image and adds it to the first sample image to obtain the blur degraded image corresponding to the first sample image.
Step 306: Fuse the at least two kinds of single degraded image to obtain a multiple degraded image corresponding to the first sample image.
A multiple degraded image is an image with at least two damage types. Illustratively, the computer device fuses the at least two kinds of single degraded image to obtain an image with multiple damage types. For example, where the single degraded images are a blur degraded image, a bias degraded image and a noise degraded image, the computer device fuses them to generate a multiple degraded image that simultaneously has identical or similar blur, noise and bias damage features.
Step 308: Perform image reconstruction processing on the multiple degraded image to generate a predicted reconstructed image corresponding to the multiple degraded image.
Image reconstruction processing refers to processing the damage features in the multiple degraded image, for example, reducing or removing its blur, noise and bias damage features. A predicted reconstructed image is an image obtained by reducing or removing the damage features in the multiple degraded image.
Illustratively, the computer device performs reconstruction processing on the multiple degraded image, reducing or removing the blur, noise and bias damage features in it, thereby generating the predicted reconstructed image corresponding to the multiple degraded image.
Step 310: Calculate a loss function value based on the second sample images, the single degraded images, the first sample image and the predicted reconstructed image.
The loss function value calculated based on the second sample images, the single degraded images, the first sample image and the predicted reconstructed image can be used to measure the training effect of the image reconstruction model. Optionally, the loss function value is at least one of cross entropy, mean squared error and absolute difference, but is not limited thereto.
Step 312: Update the model parameters of the image reconstruction model based on the loss function value.
Updating the model parameters refers to updating the network parameters of the image reconstruction model, or of the individual network modules in the model, or of the individual network layers in the model, but is not limited thereto.
Optionally, based on the loss function value of the image reconstruction model, the model parameters of the image reconstruction model are adjusted until the image reconstruction model or its training system reaches a training stop condition, yielding the trained image reconstruction model. In some embodiments, before the training stop condition is reached, the model parameters of other learning models in the training system of the image reconstruction model are also continuously adjusted according to the training loss.
In summary, in the method provided by this embodiment, a first sample image and at least two kinds of second sample image are acquired; the computer device adds the damage features corresponding to the at least two kinds of second sample image to the first sample image respectively to generate at least two kinds of single degraded image; the computer device fuses the at least two kinds of single degraded image to obtain a multiple degraded image corresponding to the first sample image, and performs image reconstruction processing on the multiple degraded image to generate a predicted reconstructed image corresponding to the multiple degraded image; the computer device calculates a loss function value based on the second sample images, the single degraded images, the first sample image and the predicted reconstructed image, and updates the model parameters of the image reconstruction model based on the loss function value. By simultaneously applying multiple damage types to the first sample image to obtain the corresponding multiple degraded image and reconstructing the multiple degraded image through the reconstruction network layer, the model trained in this way can reconstruct multiple damage types of a low-quality image at the same time, avoiding the accumulated error caused by reconstructing a low-quality image stage by stage, thereby improving the image reconstruction accuracy of the trained image reconstruction model.
An embodiment of this application provides an image reconstruction model including a degradation network layer and a reconstruction network layer.
The computer device acquires a first sample image and at least two kinds of second sample image, adds the damage features corresponding to the at least two kinds of second sample image to the first sample image respectively through the degradation network layer to generate at least two kinds of single degraded image, and fuses the at least two kinds of single degraded image to obtain a multiple degraded image corresponding to the first sample image. The computer device performs image reconstruction processing on the multiple degraded image through the reconstruction network layer to generate a predicted reconstructed image corresponding to the multiple degraded image.
Based on this image reconstruction model, the following training method for an image reconstruction model is provided.
Figure 4 is a flowchart of a training method for an image reconstruction model provided by an exemplary embodiment of this application. The method may be performed by a computer device, which may be the terminal 100 or the server 200 in Figure 2. The method includes:
Step 402: Acquire a first sample image and at least two kinds of second sample image.
The first sample image is an image that has high resolution, or that has a damage type which does not affect, or only slightly affects, the expression of the image content. A second sample image is an image with a single damage type, and the image quality of the first sample image is higher than the image quality of the second sample image.
Optionally, the damage type includes at least one of a blur damage type, a noise damage type and a bias damage type, but is not limited thereto.
The second sample image may be any of a blurred sample image, a noisy sample image and a biased sample image. A blurred sample image is an image containing blurred content. A noisy sample image is an image containing unwanted content that negatively affects the analysis and understanding of the image content, i.e., image noise. A biased sample image is an image whose brightness varies due to bias. For example, artifacts (i.e., image noise) caused by metal interference often appear in medical images and may affect a doctor's judgment.
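For illustration, the three single-damage types named here (blur, noise, bias) can be synthesized on a clean image as follows. The mean filter, Gaussian noise and linear intensity ramp are simple stand-ins for the randomized degradations the text describes, not the actual degradation operators used for training.

```python
import numpy as np

def add_blur(img, k=3):
    # simple k-by-k mean filter as a stand-in for random blur
    pad = k // 2
    p = np.pad(img, pad, mode="edge")
    out = np.zeros_like(img, dtype=np.float64)
    for dy in range(k):
        for dx in range(k):
            out += p[dy:dy + img.shape[0], dx:dx + img.shape[1]]
    return out / (k * k)

def add_noise(img, sigma=0.05, seed=0):
    # additive Gaussian noise, clipped to the valid range
    rng = np.random.default_rng(seed)
    return np.clip(img + rng.normal(0.0, sigma, img.shape), 0.0, 1.0)

def add_bias(img, strength=0.3):
    # smooth multiplicative intensity ramp, mimicking a bias field
    ramp = np.linspace(1.0 - strength, 1.0 + strength, img.shape[1])
    return np.clip(img * ramp[None, :], 0.0, 1.0)
```

Applying each function to the same clean image yields the three kinds of single-damage sample image the method pairs with the first sample image.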
Step 404: Add the damage features corresponding to the at least two kinds of second sample image to the first sample image respectively, to generate at least two kinds of single degraded image.
A damage feature is the feature corresponding to the damaged part of a second sample image, for example, the feature corresponding to the blurred area or the noisy area of the second sample image.
A single degraded image is an image obtained by adding a single damage type to the first sample image.
Illustratively, the computer device obtains the first feature corresponding to the first sample image and the second features corresponding to the at least two kinds of second sample image respectively; based on the first feature and the second features, the computer device obtains the damage feature corresponding to each second sample image; the computer device adds the damage feature to the first feature of the first sample image to obtain the single degraded image corresponding to the first sample image. The first feature characterizes the image features of the first sample image, and a second feature characterizes the image features of a second sample image.
Illustratively, the image reconstruction model includes a degradation network layer, which includes a first degradation encoder, a damage kernel extractor and a first degradation decoder.
The computer device extracts, through the first degradation encoder, the first feature corresponding to the first sample image and the second features corresponding to the at least two kinds of second sample image respectively.
By comparing the first feature and the second features, the computer device determines the damage features, and decouples, through the damage kernel extractor, the damage feature corresponding to each second sample image from its second feature; the computer device adds the damage feature to the first feature of the first sample image to obtain an intermediate first feature, and inputs the intermediate first feature to the first degradation decoder for decoding to obtain the single degraded image corresponding to the first sample image.
For example, taking a blurred sample image, a noisy sample image and a biased sample image as the second sample images, the computer device extracts the features of the first sample image through the first degradation encoder to obtain the first feature, and extracts the features of the blurred sample image, the noisy sample image and the biased sample image through the first degradation encoder to obtain a blur sample feature, a noise sample feature and a bias sample feature, respectively.
The computer device inputs the first feature and the blur sample feature to the blur kernel extractor for feature extraction to obtain the blur damage feature in the blur sample feature; inputs the first feature and the bias sample feature to the bias kernel extractor for feature extraction to obtain the bias damage feature in the bias sample feature; and inputs the first feature and the noise sample feature to the noise kernel extractor for feature extraction to obtain the noise damage feature in the noise sample feature.
The computer device fuses the first feature corresponding to the first sample image with the blur damage feature to generate an intermediate first blur feature, and decodes it through the first degradation decoder to generate the blur degraded image corresponding to the first sample image.
The computer device fuses the first feature corresponding to the first sample image with the bias damage feature to generate an intermediate first bias feature, and decodes it through the first degradation decoder to generate the bias degraded image corresponding to the first sample image.
The computer device fuses the first feature corresponding to the first sample image with the noise damage feature to generate an intermediate first noise feature, and decodes it through the first degradation decoder to generate the noise degraded image corresponding to the first sample image.
Step 406: Obtain the third features corresponding to the at least two kinds of single degraded image, and fuse the third features to obtain the multiple degraded image corresponding to the first sample image.
A multiple degraded image is an image with at least two damage types.
A third feature characterizes the image features of a single degraded image.
Illustratively, the degradation network layer in the image reconstruction model further includes a second degradation encoder and a second degradation decoder; the computer device obtains, through the second degradation encoder, the third features corresponding to the at least two kinds of single degraded image and fuses the third features to obtain a degradation fusion feature; the computer device decodes the degradation fusion feature through the second degradation decoder to generate the multiple degraded image corresponding to the first sample image.
For example, taking a blur degraded image, a bias degraded image and a noise degraded image as the single degraded images, the computer device performs feature extraction on each of them through the second degradation encoder and fuses the features corresponding to the blur degraded image, the bias degraded image and the noise degraded image to obtain a degradation fusion feature; the computer device decodes the degradation fusion feature through the second degradation decoder to generate the multiple degraded image corresponding to the first sample image.
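A toy sketch of this encode–fuse–decode step follows, with placeholder encoder/decoder functions and an element-wise mean as a hypothetical fusion operator (the text does not specify the fusion operator, so the mean is an assumption made purely for illustration).

```python
import numpy as np

def encode2(img):
    # placeholder for the second degradation encoder
    return img.astype(np.float64)

def decode2(feat):
    # placeholder for the second degradation decoder
    return feat

def fuse(features):
    # hypothetical fusion operator: element-wise mean of the third features
    return np.mean(features, axis=0)

blur_img = np.full((4, 4), 0.4)
bias_img = np.full((4, 4), 0.6)
noise_img = np.full((4, 4), 0.8)

feats = [encode2(x) for x in (blur_img, bias_img, noise_img)]
multi_degraded = decode2(fuse(feats))
```

The result stands in for the multiple degraded image that carries all three damage types at once.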
Step 408: Perform image reconstruction processing on the multiple degraded image to generate a predicted reconstructed image corresponding to the multiple degraded image.
Image reconstruction processing refers to processing the damage features in the multiple degraded image, reducing or removing its blur, noise and bias damage features.
A predicted reconstructed image is an image obtained by reducing or removing the damage features in the multiple degraded image.
Illustratively, the image reconstruction model includes a reconstruction network layer, which includes a reconstruction encoder and a reconstruction decoder; the computer device inputs the multiple degraded image to the reconstruction encoder for feature extraction to obtain image reconstruction features, and decodes the image reconstruction features through the reconstruction decoder to generate the predicted reconstructed image corresponding to the multiple degraded image.
Step 410: Calculate a first loss function value based on the second features corresponding to the second sample images and the third features corresponding to the single degraded images; calculate a second loss function value based on the first feature corresponding to the first sample image and the fourth feature corresponding to the predicted reconstructed image.
The fourth feature characterizes the image features of the predicted reconstructed image.
The first and second loss function values calculated based on the second sample images, the single degraded images, the first sample image and the predicted reconstructed image can be used to measure the training effect of the image reconstruction model.
Optionally, a loss function value is at least one of cross entropy, mean squared error and absolute difference, but is not limited thereto.
The first loss function value is used to measure the similarity between a second sample image and the single degraded image corresponding to that second sample image.
Illustratively, the computer device calculates the first loss function value based on the second features corresponding to the second sample images and the third features corresponding to the single degraded images; the first loss function value includes a first blur loss function value, a first bias loss function value and a first noise loss function value.
Optionally, the computer device calculates the first loss function value based on the second feature corresponding to the i-th kind of second sample image among the at least two kinds of second sample image and the third feature corresponding to the i-th kind of single degraded image among the at least two kinds of single degraded image, where i is a positive integer.
For example, as shown in Figure 5, the computer device extracts the features of the first sample image 501 through the first degradation encoder 505 to obtain the first feature, and extracts the features of the blurred sample image 502, the noisy sample image 504 and the biased sample image 503 through the first degradation encoder 505 to obtain a blur sample feature, a noise sample feature and a bias sample feature, respectively. The computer device inputs the first feature and the blur sample feature to the blur kernel extractor 506 for feature extraction to obtain the blur damage feature in the blur sample feature; inputs the first feature and the bias sample feature to the bias kernel extractor 507 for feature extraction to obtain the bias damage feature in the bias sample feature; and inputs the first feature and the noise sample feature to the noise kernel extractor 508 for feature extraction to obtain the noise damage feature in the noise sample feature.
The computer device fuses the first feature corresponding to the first sample image 501 with the blur damage feature to generate an intermediate first blur feature, and decodes the intermediate first blur feature through the first degradation decoder 509 to generate the blur degraded image 510 corresponding to the first sample image 501. The computer device fuses the first feature corresponding to the first sample image 501 with the bias damage feature to generate an intermediate first bias feature, and decodes it through the first degradation decoder 509 to generate the bias degraded image 511 corresponding to the first sample image 501. The computer device fuses the first feature corresponding to the first sample image 501 with the noise damage feature to generate an intermediate first noise feature, and decodes it through the first degradation decoder 509 to generate the noise degraded image 512 corresponding to the first sample image 501.
The computer device calculates the first blur loss function value 513 based on the second feature corresponding to the blurred sample image 502 and the third feature corresponding to the blur degraded image 510;
the computer device calculates the first bias loss function value 514 based on the second feature corresponding to the biased sample image 503 and the third feature corresponding to the bias degraded image 511;
the computer device calculates the first noise loss function value 515 based on the second feature corresponding to the noisy sample image 504 and the third feature corresponding to the noise degraded image 512.
Illustratively, the first loss function value may be expressed as:

$$\mathcal{L}_{1}=\frac{1}{N}\sum_{n=1}^{N}\sum_{r=1}^{R}E\Big(\Psi\big(x_{n},K_{r}(y_{n,r})\big),\,y_{n,r}\Big)$$

where $\mathcal{L}_{1}$ is the first loss function value, N is the number of data groups, R is the number of damage types, r indexes the damage type, x is the first sample image, y is the second sample image, K is the damage kernel extractor, Ψ is the first degradation decoder, and E is the Charbonnier loss function.
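Since E is named as the Charbonnier loss, a minimal numpy form of that function is sketched below; the epsilon value is a common default and an assumption, not a value taken from this document.

```python
import numpy as np

def charbonnier(pred, target, eps=1e-3):
    # Charbonnier loss: a smooth, differentiable approximation of L1
    diff = pred.astype(np.float64) - target.astype(np.float64)
    return np.mean(np.sqrt(diff * diff + eps * eps))
```

For identical inputs the loss bottoms out at `eps` rather than 0, which keeps the gradient well behaved near zero error.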
The second loss function value is used to measure the authenticity of the predicted reconstructed image.
Illustratively, the computer device calculates the second loss function value based on the first feature corresponding to the first sample image and the fourth feature corresponding to the predicted reconstructed image.
To make the generated image closer to a real image, the embodiments of this application use the idea of generative adversarial training: the first sample image and the predicted reconstructed image are input to a discriminator, which is used to distinguish the first sample image from the predicted reconstructed image, and the adversarial loss, i.e., the second loss function value, is determined based on the discrimination result. Training is complete when the discriminator cannot distinguish whether a given image is the first sample image or the predicted reconstructed image, i.e., when the predicted reconstructed image is close to the first sample image.
Illustratively, the second loss function value may be expressed as:

$$\mathcal{L}_{2}=E\big(D_{up}(x),\,1\big)+E\Big(D_{up}\big(G_{up}(\tilde{x})\big),\,0\Big)$$

where $\mathcal{L}_{2}$ is the second loss function value, E is the Charbonnier loss function, $D_{up}$ is the discriminator, x is the first sample image, $\tilde{x}$ is the multiple degraded image, and $G_{up}$ is the reconstruction network layer.
In a possible implementation, the loss function value further includes a third loss function value and a fourth loss function value.
The third loss function value is used to measure the similarity of the non-content parts between the multiple degraded image and the first sample image.
Illustratively, the third loss function value is calculated based on the structural features corresponding to the multiple degraded image and the structural features corresponding to the first sample image.
Illustratively, the third loss function value may be expressed as:

$$\mathcal{L}_{3}=\frac{1}{N}\sum_{i=1}^{N}\sum_{l}\big\lVert\phi_{l}(\tilde{x}_{i})-\phi_{l}(x_{i})\big\rVert_{1}$$

where $\mathcal{L}_{3}$ is the third loss function value, $x_{i}$ is the i-th first sample image, $\tilde{x}_{i}$ is the i-th multiple degraded image, and $\phi_{l}$ is the feature representation of the l-th layer of the image.
The fourth loss function value is used to measure the similarity between the first sample image and the predicted reconstructed image.
Illustratively, the fourth loss function value is calculated based on the content features and texture features corresponding to the first sample image and the content features and texture features corresponding to the predicted reconstructed image. Optionally, the content features refer to the pixel values of the pixels in the image, for example, the brightness and lightness of each pixel.
Illustratively, the fourth loss function value may be expressed as:

$$\mathcal{L}_{4}=\frac{1}{N}\sum_{i=1}^{N}\Big(\mathcal{L}_{ct}\big(x_{i},\hat{x}_{i}\big)+\lambda\,\mathcal{L}_{tx}\big(x_{i},\hat{x}_{i}\big)\Big),\quad \hat{x}_{i}=G_{up}(\tilde{x}_{i})$$

where $\mathcal{L}_{4}$ is the fourth loss function value, $x_{i}$ is the i-th first sample image, $\hat{x}_{i}$ is the i-th predicted reconstructed image, $G_{up}$ is the reconstruction network layer, $\mathcal{L}_{ct}$ is the content loss value, $\mathcal{L}_{tx}$ is the texture loss value, and λ is a weight value, e.g., λ = 0.9.
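A hedged sketch of such a content-plus-texture objective follows: the content term compares values directly and the texture term compares Gram matrices, combined with λ = 0.9 as stated. Using raw pixels as the "features" and mean-squared distances are simplifying assumptions; the actual model would compute these terms on network feature maps.

```python
import numpy as np

def content_loss(a, b):
    # pixel-wise content term (simplified)
    return np.mean((a - b) ** 2)

def gram(f):
    # Gram matrix of a (channels, H, W) feature tensor
    f = f.reshape(f.shape[0], -1)
    return f @ f.T / f.shape[1]

def texture_loss(a, b):
    return np.mean((gram(a) - gram(b)) ** 2)

def fourth_loss(x, x_hat, lam=0.9):
    return content_loss(x, x_hat) + lam * texture_loss(x, x_hat)
```

Identical images yield zero loss; any content or texture mismatch increases it.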
Step 412: Update the model parameters of the image reconstruction model based on the sum of the first loss function value and the second loss function value.
Updating the model parameters refers to updating the network parameters of the image reconstruction model, or of the individual network modules in the model, or of the individual network layers in the model, but is not limited thereto.
In a possible implementation, the computer device updates the model parameters of the image reconstruction model based on the sum of the first loss function value, the second loss function value, the third loss function value and the fourth loss function value.
Illustratively, the linear combination constructed from the first, second, third and fourth loss function values may be expressed as:

$$\mathcal{L}=\alpha\,\mathcal{L}_{1}+\beta\,\mathcal{L}_{3}+\mathcal{L}_{2}+\mathcal{L}_{4}$$

where $\mathcal{L}$ is the overall loss function value, $\mathcal{L}_{1}$ is the first loss function value, $\mathcal{L}_{3}$ is the third loss function value, $\mathcal{L}_{2}$ is the second loss function value, $\mathcal{L}_{4}$ is the fourth loss function value, and α and β are weight factors, e.g., α = 400 and β = 100.
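The weighted sum can be written directly, with α = 400 and β = 100 as stated; placing α on the first loss term and β on the third follows the order in which the text lists the terms and weights, which is an assumption, and the individual loss values passed in are placeholders.

```python
def total_loss(l1, l2, l3, l4, alpha=400.0, beta=100.0):
    # overall training objective: weighted sum of the four loss terms
    return alpha * l1 + beta * l3 + l2 + l4
```

For example, `total_loss(0.01, 2.0, 0.02, 3.0)` combines a small first-term residual scaled up by α with the unweighted adversarial and similarity terms.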
The model parameters of the image reconstruction model include at least one of the network parameters of the first degradation encoder, the damage kernel extractor, the first degradation decoder, the second degradation encoder, the second degradation decoder, the reconstruction encoder and the reconstruction decoder.
Having obtained the loss function value, the computer device updates, based on the loss function value, the network parameters of the first degradation encoder, the damage kernel extractor, the first degradation decoder, the second degradation encoder, the second degradation decoder, the reconstruction encoder and the reconstruction decoder in the image reconstruction model, obtaining the updated first degradation encoder, damage kernel extractor, first degradation decoder, second degradation encoder, second degradation decoder, reconstruction encoder and reconstruction decoder, and thus the trained image reconstruction model.
In some embodiments, updating the model parameters of the image reconstruction model includes updating the network parameters of all network modules in the model, or fixing the network parameters of some network modules and updating only those of the remaining modules. For example, when updating the model parameters, the network parameters of the first degradation encoder, the second degradation encoder and the reconstruction encoder are fixed, and only the network parameters of the damage kernel extractor, the first degradation decoder, the second degradation decoder and the reconstruction decoder are updated.
In summary, in the method provided by this embodiment, a first sample image and at least two kinds of second sample image are acquired; the computer device adds the damage features corresponding to the at least two kinds of second sample image to the first sample image respectively to generate at least two kinds of single degraded image; the computer device fuses the at least two kinds of single degraded image to obtain a multiple degraded image corresponding to the first sample image, and performs image reconstruction processing on the multiple degraded image to generate a predicted reconstructed image; the computer device calculates the first, second, third and fourth loss function values based on the second sample images, the single degraded images, the first sample image and the predicted reconstructed image, and updates the model parameters of the image reconstruction model based on the sum of the first, second, third and fourth loss function values. By simultaneously applying multiple damage types to the first sample image to obtain the corresponding multiple degraded image and reconstructing the multiple degraded image through the reconstruction network layer, the model trained in this way can reconstruct multiple damage types of a low-quality image at the same time, avoiding the accumulated error caused by reconstructing a low-quality image stage by stage, thereby improving the image reconstruction accuracy of the trained image reconstruction model.
Figure 6 is a schematic diagram of the reconstruction effect of an image reconstruction model provided by an exemplary embodiment of this application.
In the medical field, medical images have become an important auxiliary tool for medical diagnosis. Reconstructing medical images and, based on the reconstruction results, displaying the detail information in them more clearly can better assist medical staff in medical diagnosis. As shown in Figure 6, a controlled experiment was designed to compare the solution provided by this application with the reference schemes. The experiment uses the MNIST data set, and the reference schemes include reference scheme 1, reference scheme 2 and reference scheme 3; the reference schemes apply denoising, deblurring and debiasing to the multiple degraded images, but in different processing orders.
For example, in reference scheme 1, the multiple degraded images are denoised, deblurred and debiased in different processing orders, finally yielding six groups of image reconstruction results. Comparing these six groups of results with the first sample image, and comparing the predicted reconstructed image obtained by the solution of this application with the first sample image, shows that the solution provided by this application produces more realistic reconstruction results with higher visual quality for medical images.
In the field of image flaw repair, high-quality images present more realistic views and higher visual quality. As shown in Figure 7, a controlled experiment was designed to compare the solution provided by this application with the reference schemes. The experiment uses the MNIST data set, and the reference schemes include reference scheme 1, reference scheme 2 and reference scheme 3, which apply denoising, deblurring and debiasing to the multiple degraded images in different processing orders.
For example, for each reference scheme, the multiple degraded images are denoised, deblurred and debiased in a different processing order, each reference scheme finally yielding six groups of image reconstruction results. Comparing the six groups of results of the different reference schemes with the first sample image, and comparing the predicted reconstructed image obtained by the solution of this application with the first sample image, shows that the solution provided by this application achieves a better flaw-repair effect on flawed images.
Figure 8 is a schematic diagram of a training method for an image reconstruction model provided by an exemplary embodiment of this application. The method may be performed by a computer device, which may be the terminal 100 or the server 200 in Figure 2. The method includes:
In the medical field, medical images have become an important auxiliary tool for medical diagnosis; reconstructing medical images and displaying their detail information more clearly based on the reconstruction results can better assist medical staff in medical diagnosis. As shown in Figure 8, the first sample image 801 and at least two kinds of second sample image 802 are input to the degradation network layer 803, which performs image degradation processing on the first sample image 801 to obtain the degradation result: the multiple degraded image 804.
Image degradation refers to the decline in image quality during image formation, recording, processing and transmission caused by imperfections in the imaging system, recording equipment, transmission medium and processing methods.
In the embodiments of this application, the image degradation processing performed on the first sample image 801 adds multiple damage types to the first sample image 801 simultaneously, for example random blur processing, random noise processing and random bias processing, i.e., multiple kinds of flaws are added to the first sample image 801.
When reconstructing the multiple degraded image 804, the reconstruction network layer 805 simultaneously reconstructs the multiple kinds of flaws in the multiple degraded image 804, i.e., removes blur, noise and bias, and the reconstruction network layer 805 reconstructs the multiple degraded image 804 into a high-quality image, i.e., the predicted reconstructed image 806.
In this image reconstruction process, the accuracy of image reconstruction affects the accuracy of the information displayed in medical images, and thus the accuracy of medical diagnosis by medical staff based on the displayed information. Therefore, in medical image-assisted diagnosis scenarios, the image reconstruction model obtained using the training method provided by this application can improve the accuracy of medical image reconstruction, display the detail information in medical images more clearly, and improve the accuracy of medical auxiliary diagnosis.
The training method for an image reconstruction model involved in this application may be implemented based on the image reconstruction model; the solution includes an image reconstruction model generation stage and an image reconstruction stage. Figure 9 is a framework diagram of image reconstruction model generation and image reconstruction shown in an exemplary embodiment of this application. As shown in Figure 9, in the model generation stage, the image reconstruction model generation device 910 obtains the image reconstruction model using a preset training sample data set (including the first sample image and at least two kinds of second sample image), and then generates image reconstruction results based on the model. In the image reconstruction stage, the image reconstruction device 920 processes an input first image based on the image reconstruction model to obtain the image reconstruction result of the first image.
The image reconstruction model generation device 910 and the image reconstruction device 920 may be computer devices; for example, fixed computer devices such as personal computers and servers, or mobile computer devices such as tablet computers and e-book readers.
Optionally, the image reconstruction model generation device 910 and the image reconstruction device 920 may be the same device, or they may be different devices. When they are different devices, they may be of the same type (for example, both may be servers) or of different types (for example, the image reconstruction device 920 may be a personal computer or terminal while the image reconstruction model generation device 910 may be a server). The embodiments of this application do not limit the specific types of the image reconstruction model generation device 910 and the image reconstruction device 920.
The above embodiments have described the training method for the image reconstruction model; the image reconstruction method is described next.
Figure 10 is a flowchart of an image reconstruction method provided by an exemplary embodiment of this application. The method may be performed by a computer device, which may be the terminal 100 or the server 200 in Figure 2. The method includes:
Step 1002: Acquire a first image.
The first image is an image with multiple damage types.
The way of acquiring the first image includes at least one of the following:
1. The computer device receives the first image; for example, the terminal initiating image scanning scans a picture and, after scanning, sends the first image to the server.
2. The computer device obtains the first image from a stored database, for example, acquiring at least one first image from the MNIST segmentation data set or a public brain MRI data set. It should be noted that the above ways of acquiring the first image are merely illustrative examples, and the embodiments of this application are not limited thereto.
Step 1004: Perform image reconstruction processing on the first image based on the trained reconstruction network layer to obtain a first reconstructed image.
The computer device performs image reconstruction processing on the first image based on the trained reconstruction network layer; that is, the reconstruction network layer simultaneously reconstructs the multiple kinds of flaws in the first image, removing blur, noise and bias, and reconstructs the first image into a high-quality image, i.e., the first reconstructed image.
Step 1006: Output the first reconstructed image.
The computer device outputs the first reconstructed image.
To verify the effect of the image reconstruction model trained by the solution provided by the embodiments of this application, a controlled experiment was designed to compare the solution provided by this application with the reference schemes. The experiment uses the MNIST data set, which includes 60,000 training examples and 10,000 test examples, all images being 28×28. All images were pre-registered; all images were split into 90% for training and 10% for testing, and 20% of the training images were retained as the validation set for both data sets.
The reference schemes include reference scheme 1, reference scheme 2 and reference scheme 3, which apply denoising (DN), deblurring (DB) and N4 bias correction to the low-quality images, but in different processing orders. The evaluation indicators used are Peak Signal to Noise Ratio (PSNR) and Structural Similarity (SSIM), which are used to evaluate the image reconstruction effect.
Table 1: Comparison of the average performance of repair methods under different schemes and processing orders
The experimental results are shown in Table 1. As can be seen from Table 1, the reconstruction results of the solution provided by this application achieve a PSNR of 35.52 dB and an SSIM of 0.9482 on the MNIST data set, outperforming the reference schemes on this data set and showing high applicability and stability.
In summary, in the method provided by this embodiment, a first image is acquired, and image reconstruction processing is performed on the first image based on the trained reconstruction network layer to obtain a high-quality first reconstructed image. Based on the trained reconstruction network layer, this application can obtain relatively accurate image reconstruction results.
Figure 11 is a schematic diagram of an image reconstruction method provided by an exemplary embodiment of this application. The method may be performed by a computer device, which may be the terminal 100 or the server 200 in Figure 2. The method includes:
When the first front-end 1101 receives a first image that needs to be reconstructed (the first image being an image with multiple damage types), the first front-end 1101 uploads the first image to the computer device 1102 for image reconstruction processing; for the image reconstruction processing of the first image by the computer device 1102, reference may be made to the description in the preceding embodiments, which will not be repeated here.
After the computer device 1102 performs image reconstruction processing on the first image, the computer device 1102 outputs the image reconstruction result to the second front-end 1103.
Optionally, the first front-end 1101 and the second front-end 1103 may be the same front end or different front ends, which is not limited by the embodiments of this application.
FIG. 12 shows a schematic structural diagram of a training apparatus for an image reconstruction model provided by an exemplary embodiment of this application. The apparatus may be implemented as all or part of a computer device in software, hardware, or a combination of both. The apparatus includes:
an acquisition module 1201, configured to acquire a first sample image and at least two kinds of second sample images, where a second sample image is an image with a single corruption type, and the image quality of the first sample image is higher than that of the second sample images;
a degradation module 1202, configured to add the corruption features corresponding to the at least two kinds of second sample images to the first sample image separately, generating at least two kinds of single-degradation images;
a fusion module 1203, configured to fuse the at least two kinds of single-degradation images to obtain a multi-degradation image corresponding to the first sample image, where the multi-degradation image is an image with at least two corruption types;
a reconstruction module 1204, configured to perform image reconstruction processing on the multi-degradation image to generate a predicted reconstructed image corresponding to the multi-degradation image;
a computation module 1205, configured to compute loss function values based on the second sample images, the single-degradation images, the first sample image, and the predicted reconstructed image;
an update module 1206, configured to update the model parameters of the image reconstruction model based on the loss function values.
In a possible implementation, the degradation module 1202 is further configured to: acquire a first feature corresponding to the first sample image, and separately acquire second features corresponding to the at least two kinds of second sample images; obtain, based on the first feature and a second feature, the corruption feature corresponding to the second sample image; and add the corruption feature to the first feature of the first sample image to obtain the single-degradation image corresponding to the first sample image.
The image reconstruction model includes a degradation network layer, and the degradation network layer includes a first degradation encoder, a corruption kernel extractor, and a first degradation decoder.
In a possible implementation, the degradation module 1202 is further configured to: extract the first feature corresponding to the first sample image through the first degradation encoder, and separately extract the second features corresponding to the at least two kinds of second sample images through the first degradation encoder; determine the corruption feature by comparing the first feature and the second feature, and decouple the corruption feature corresponding to the second sample image from the second feature through the corruption kernel extractor; and add the corruption feature to the first feature of the first sample image to obtain an intermediate first feature, and input the intermediate first feature into the first degradation decoder for decoding to obtain the single-degradation image corresponding to the first sample image.
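The degradation path (first degradation encoder, corruption kernel extractor, first degradation decoder) can be sketched with toy linear operators. Everything here is an illustrative assumption: the feature dimension, the residual-based "comparison" of the two features, and the trivial encoder/decoder bodies stand in for the actual learned networks.

```python
import numpy as np

rng = np.random.default_rng(1)
D = 64  # feature dimension (illustrative)

def first_degradation_encoder(img):       # stand-in for E1: image -> feature
    return img.reshape(-1)[:D] * 1.0

def corruption_kernel_extractor(f2, f1):  # stand-in for K: decouple corruption
    return f2 - f1                        # toy "comparison": residual of the features

def first_degradation_decoder(feat):      # stand-in for D1: feature -> image
    out = np.zeros(28 * 28)
    out[:D] = feat
    return out.reshape(28, 28)

x1 = rng.random((28, 28))                 # first (high-quality) sample image
x2 = rng.random((28, 28))                 # second sample image, single corruption type

f1 = first_degradation_encoder(x1)        # first feature
f2 = first_degradation_encoder(x2)        # second feature
corruption = corruption_kernel_extractor(f2, f1)
intermediate = f1 + corruption            # add corruption feature to first feature
single_degraded = first_degradation_decoder(intermediate)  # single-degradation image
```

With these toy operators the residual cancels and the degraded output inherits the second image's encoded content, which is only a property of the sketch; the learned extractor is meant to isolate the corruption alone.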
In a possible implementation, the fusion module 1203 is further configured to acquire third features corresponding to the at least two kinds of single-degradation images, and fuse the third features to obtain the multi-degradation image corresponding to the first sample image.
The degradation network layer in the image reconstruction model further includes a second degradation encoder and a second degradation decoder.
In a possible implementation, the fusion module 1203 is further configured to: acquire, through the second degradation encoder, the third features corresponding to the at least two kinds of single-degradation images, and fuse the third features to obtain a degradation fusion feature; and decode the degradation fusion feature through the second degradation decoder to generate the multi-degradation image corresponding to the first sample image.
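A minimal sketch of the fusion path, assuming mean-fusion of the third features; the description above does not fix a particular fusion operator, and the encoder/decoder here are trivial reshapes for illustration only.

```python
import numpy as np

def second_degradation_encoder(img):
    return img.reshape(-1)                 # E2 stand-in: third feature of one image

def fuse(features):
    return np.mean(features, axis=0)       # one simple fusion choice (assumption)

def second_degradation_decoder(feat):
    return feat.reshape(28, 28)            # D2 stand-in: feature -> image

# Stand-ins for the single-degradation images (blur, noise, bias variants).
blurred = np.ones((28, 28)) * 0.2
noisy   = np.ones((28, 28)) * 0.6
biased  = np.ones((28, 28)) * 0.7

third_features = [second_degradation_encoder(x) for x in (blurred, noisy, biased)]
fusion_feature = fuse(third_features)      # degradation fusion feature
multi_degraded = second_degradation_decoder(fusion_feature)  # multi-degradation image
```

The resulting multi-degradation image then serves as the training input to the reconstruction network layer.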
The image reconstruction model includes a reconstruction network layer, and the reconstruction network layer includes a reconstruction encoder and a reconstruction decoder.
In a possible implementation, the reconstruction module 1204 is further configured to: input the multi-degradation image into the reconstruction encoder for feature extraction to obtain an image reconstruction feature; and decode the image reconstruction feature through the reconstruction decoder to generate the predicted reconstructed image corresponding to the multi-degradation image.
The loss function values include a first loss function value and a second loss function value. The first loss function value measures the similarity between a second sample image and the single-degradation image corresponding to that second sample image; the second loss function value measures the realism of the predicted reconstructed image.
In a possible implementation, the computation module 1205 is further configured to compute the first loss function value based on the second feature corresponding to the second sample image and the third feature corresponding to the single-degradation image.
In a possible implementation, the computation module 1205 is further configured to compute the second loss function value based on the first feature corresponding to the first sample image and a fourth feature corresponding to the predicted reconstructed image.
In a possible implementation, the update module 1206 is further configured to update the model parameters of the image reconstruction model based on the sum of the first loss function value and the second loss function value.
In a possible implementation, the computation module 1205 is further configured to compute the first loss function value based on the second feature corresponding to the i-th kind of second sample image among the at least two kinds of second sample images and the third feature corresponding to the i-th kind of single-degradation image among the at least two kinds of single-degradation images, where i is a positive integer.
The loss function values further include a third loss function value and a fourth loss function value. The third loss function value measures the similarity of the non-content parts between the multi-degradation image and the first sample image; the fourth loss function value measures the similarity between the first sample image and the predicted reconstructed image.
In a possible implementation, the computation module 1205 is further configured to compute the third loss function value based on the structural feature corresponding to the multi-degradation image and the structural feature corresponding to the first sample image.
In a possible implementation, the computation module 1205 is further configured to compute the fourth loss function value based on the content feature and texture feature corresponding to the first sample image and the content feature and texture feature corresponding to the predicted reconstructed image.
In a possible implementation, the update module 1206 is further configured to update the model parameters of the image reconstruction model based on the sum of the first loss function value, the second loss function value, the third loss function value, and the fourth loss function value.
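Under the assumption that each loss term is a simple mean-squared distance between the corresponding feature vectors (the description does not fix the exact distance, and the feature vectors below are random stand-ins for the encoder outputs), the four-term objective and its sum can be sketched as:

```python
import numpy as np

def l2(a, b):
    """Mean-squared distance between two feature vectors (assumed loss form)."""
    return float(np.mean((a - b) ** 2))

rng = np.random.default_rng(2)
f2, f3 = rng.random(64), rng.random(64)            # second / third features
f1, f4 = rng.random(64), rng.random(64)            # first / fourth features
s_multi, s_first = rng.random(64), rng.random(64)  # structural features
ct1, tx1 = rng.random(64), rng.random(64)          # content / texture, first sample image
ct4, tx4 = rng.random(64), rng.random(64)          # content / texture, predicted image

loss1 = l2(f2, f3)                    # second sample vs. its single-degradation image
loss2 = l2(f1, f4)                    # realism of the predicted reconstructed image
loss3 = l2(s_multi, s_first)          # non-content similarity, multi-degraded vs. first
loss4 = l2(ct1, ct4) + l2(tx1, tx4)   # content + texture similarity
total_loss = loss1 + loss2 + loss3 + loss4  # sum used for the parameter update
```

The update module would then backpropagate `total_loss` through the degradation and reconstruction network layers.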
The corruption types include at least one of a blur corruption type, a noise corruption type, and a bias corruption type.
In a possible implementation, the acquisition module 1201 is further configured to acquire a first image, where the first image is an image with multiple corruption types.
In a possible implementation, the reconstruction module 1204 is further configured to perform image reconstruction processing on the first image based on the trained reconstruction network layer to obtain a first reconstructed image, where the first reconstructed image is an image obtained by removing the multiple corruption types from the first image; and output the first reconstructed image.
FIG. 13 shows a structural block diagram of a computer device 1300 according to an exemplary embodiment of this application. The computer device may be implemented as the server in the above schemes of this application. The computer device 1300 includes a central processing unit (CPU) 1301, a system memory 1304 including a random access memory (RAM) 1302 and a read-only memory (ROM) 1303, and a system bus 1305 connecting the system memory 1304 and the central processing unit 1301. The computer device 1300 further includes a mass storage device 1306 for storing an operating system 1309, application programs 1310, and other program modules 1311.
The mass storage device 1306 is connected to the central processing unit 1301 through a mass storage controller (not shown) connected to the system bus 1305. The mass storage device 1306 and its associated computer-readable media provide non-volatile storage for the computer device 1300. That is, the mass storage device 1306 may include a computer-readable medium (not shown) such as a hard disk or a Compact Disc Read-Only Memory (CD-ROM) drive.
Without loss of generality, the computer-readable media may include computer storage media and communication media. Computer storage media include volatile and non-volatile, removable and non-removable media implemented in any method or technology for storing information such as computer-readable instructions, data structures, program modules, or other data. Computer storage media include RAM, ROM, Erasable Programmable Read-Only Memory (EPROM), Electrically-Erasable Programmable Read-Only Memory (EEPROM), flash memory or other solid-state storage technologies, CD-ROM, Digital Versatile Disc (DVD) or other optical storage, magnetic cassettes, magnetic tape, magnetic disk storage, or other magnetic storage devices. Of course, those skilled in the art will appreciate that computer storage media are not limited to the above. The system memory 1304 and the mass storage device 1306 described above may be collectively referred to as memory.
According to various embodiments of the present disclosure, the computer device 1300 may also operate through a remote computer connected to a network such as the Internet. That is, the computer device 1300 may be connected to a network 1308 through a network interface unit 1307 connected to the system bus 1305, or the network interface unit 1307 may be used to connect to other types of networks or remote computer systems (not shown).
The memory further includes at least one computer program stored in the memory; the central processing unit 1301 implements all or part of the steps of the image reconstruction model training method shown in the above embodiments by executing the at least one program.
An embodiment of this application further provides a computer device, including a processor and a memory; the memory stores at least one program, and the at least one program is loaded and executed by the processor to implement the training method for an image reconstruction model provided by the above method embodiments.
An embodiment of this application further provides a computer-readable storage medium storing at least one computer program, which is loaded and executed by a processor to implement the training method for an image reconstruction model provided by the above method embodiments.
An embodiment of this application further provides a computer program product, the computer program product including a computer program stored in a computer-readable storage medium; a processor of a computer device reads and executes the computer program from the computer-readable storage medium, causing the computer device to implement the training method for an image reconstruction model provided by the above method embodiments.
It can be understood that, in the specific implementations of this application, where the data involved, such as historical data and profiles, relates to user identity or characteristics, user permission or consent must be obtained when the above embodiments of this application are applied to specific products or technologies, and the collection, use, and processing of the relevant data must comply with the relevant laws, regulations, and standards of the relevant countries and regions.

Claims (20)

  1. A training method for an image reconstruction model, wherein the method is performed by a terminal device, and the method comprises:
    acquiring a first sample image and at least two kinds of second sample images, wherein a second sample image is an image with a single corruption type, and the image quality of the first sample image is higher than that of the second sample images;
    adding corruption features corresponding to the at least two kinds of second sample images to the first sample image separately, generating at least two kinds of single-degradation images; fusing the at least two kinds of single-degradation images to obtain a multi-degradation image corresponding to the first sample image, wherein the multi-degradation image is an image with at least two corruption types;
    performing image reconstruction processing on the multi-degradation image to generate a predicted reconstructed image corresponding to the multi-degradation image;
    computing loss function values based on the second sample images, the single-degradation images, the first sample image, and the predicted reconstructed image;
    updating model parameters of the image reconstruction model based on the loss function values.
  2. The method according to claim 1, wherein the adding corruption features corresponding to the at least two kinds of second sample images to the first sample image separately, generating at least two kinds of single-degradation images, comprises:
    acquiring a first feature corresponding to the first sample image, and separately acquiring second features corresponding to the at least two kinds of second sample images, wherein the first feature characterizes the image features of the first sample image, and a second feature characterizes the image features of a second sample image;
    for any one of the at least two kinds of second features, obtaining, based on the first feature and the second feature, the corruption feature corresponding to the second sample image; and adding the corruption feature to the first feature of the first sample image to obtain the single-degradation image corresponding to the first sample image.
  3. The method according to claim 2, wherein the image reconstruction model comprises a degradation network layer, and the degradation network layer comprises a first degradation encoder;
    the acquiring a first feature corresponding to the first sample image, and separately acquiring second features corresponding to the at least two kinds of second sample images, comprises:
    extracting the first feature corresponding to the first sample image through the first degradation encoder, and separately extracting the second features corresponding to the at least two kinds of second sample images through the first degradation encoder.
  4. The method according to claim 2, wherein the image reconstruction model comprises a degradation network layer, and the degradation network layer comprises a corruption kernel extractor;
    the obtaining, based on the first feature and the second feature, the corruption feature corresponding to the second sample image comprises:
    determining the corruption feature by comparing the first feature and the second feature, and decoupling the corruption feature corresponding to the second sample image from the second feature through the corruption kernel extractor.
  5. The method according to claim 2, wherein the image reconstruction model comprises a degradation network layer, and the degradation network layer comprises a first degradation decoder;
    the adding the corruption feature to the first feature of the first sample image to obtain the single-degradation image corresponding to the first sample image comprises:
    adding the corruption feature to the first feature of the first sample image to obtain an intermediate first feature, and inputting the intermediate first feature into the first degradation decoder for decoding to obtain the single-degradation image corresponding to the first sample image.
  6. The method according to claim 1, wherein the fusing the at least two kinds of single-degradation images to obtain a multi-degradation image corresponding to the first sample image comprises:
    acquiring third features corresponding to the at least two kinds of single-degradation images, and fusing the at least two kinds of third features to obtain the multi-degradation image corresponding to the first sample image, wherein a third feature characterizes the image features of a single-degradation image.
  7. The method according to claim 6, wherein the degradation network layer in the image reconstruction model further comprises a second degradation encoder and a second degradation decoder;
    the acquiring third features corresponding to the at least two kinds of single-degradation images, and fusing the at least two kinds of third features to obtain the multi-degradation image corresponding to the first sample image, comprises:
    acquiring, through the second degradation encoder, the third features corresponding to the at least two kinds of single-degradation images, and fusing the at least two kinds of third features to obtain a degradation fusion feature;
    decoding the degradation fusion feature through the second degradation decoder to generate the multi-degradation image corresponding to the first sample image.
  8. The method according to claim 1, wherein the image reconstruction model comprises a reconstruction network layer, and the reconstruction network layer comprises a reconstruction encoder and a reconstruction decoder;
    the performing image reconstruction processing on the multi-degradation image to generate a predicted reconstructed image corresponding to the multi-degradation image comprises:
    inputting the multi-degradation image into the reconstruction encoder for feature extraction to obtain an image reconstruction feature;
    decoding the image reconstruction feature through the reconstruction decoder to generate the predicted reconstructed image corresponding to the multi-degradation image.
  9. The method according to claim 1, wherein the loss function values comprise a first loss function value and a second loss function value, the first loss function value measures the similarity between the second sample image and the single-degradation image corresponding to the second sample image, and the second loss function value measures the realism of the predicted reconstructed image;
    the computing loss function values based on the second sample images, the single-degradation images, the first sample image, and the predicted reconstructed image comprises:
    computing the first loss function value based on the second feature corresponding to the second sample image and the third feature corresponding to the single-degradation image;
    computing the second loss function value based on the first feature corresponding to the first sample image and a fourth feature corresponding to the predicted reconstructed image, wherein the fourth feature characterizes the image features of the predicted reconstructed image;
    the updating the model parameters of the image reconstruction model based on the loss function values comprises:
    updating the model parameters of the image reconstruction model based on the sum of the first loss function value and the second loss function value.
  10. The method according to claim 9, wherein the computing the first loss function value based on the second feature corresponding to the second sample image and the third feature corresponding to the single-degradation image comprises:
    computing the first loss function value based on the second feature corresponding to the i-th kind of second sample image among the at least two kinds of second sample images and the third feature corresponding to the i-th kind of single-degradation image among the at least two kinds of single-degradation images, wherein i is a positive integer.
  11. The method according to claim 9, wherein the loss function values further comprise a third loss function value and a fourth loss function value, the third loss function value measures the similarity of the non-content parts between the multi-degradation image and the first sample image, and the fourth loss function value measures the similarity between the first sample image and the predicted reconstructed image; the method further comprises:
    computing the third loss function value based on the structural feature corresponding to the multi-degradation image and the structural feature corresponding to the first sample image;
    computing the fourth loss function value based on the content feature and texture feature corresponding to the first sample image and the content feature and texture feature corresponding to the predicted reconstructed image;
    the updating the model parameters of the image reconstruction model based on the sum of the first loss function value and the second loss function value comprises:
    updating the model parameters of the image reconstruction model based on the sum of the first loss function value, the second loss function value, the third loss function value, and the fourth loss function value.
  12. The method according to claim 8, wherein the method further comprises:
    acquiring a first image, wherein the first image is an image with multiple corruption types;
    performing image reconstruction processing on the first image based on the reconstruction network layer to obtain a first reconstructed image;
    outputting the first reconstructed image.
  13. A training apparatus for an image reconstruction model, wherein the apparatus comprises:
    an acquisition module, configured to acquire a first sample image and at least two kinds of second sample images, wherein a second sample image is an image with a single corruption type, and the image quality of the first sample image is higher than that of the second sample images;
    a degradation module, configured to add corruption features corresponding to the at least two kinds of second sample images to the first sample image separately, generating at least two kinds of single-degradation images;
    a fusion module, configured to fuse the at least two kinds of single-degradation images to obtain a multi-degradation image corresponding to the first sample image, wherein the multi-degradation image is an image with at least two corruption types;
    a reconstruction module, configured to perform image reconstruction processing on the multi-degradation image to generate a predicted reconstructed image corresponding to the multi-degradation image;
    a computation module, configured to compute loss function values based on the second sample images, the single-degradation images, the first sample image, and the predicted reconstructed image;
    an update module, configured to update model parameters of the image reconstruction model based on the loss function values.
  14. The apparatus according to claim 13, wherein
    the degradation module is further configured to acquire a first feature corresponding to the first sample image, and separately acquire second features corresponding to the at least two kinds of second sample images, wherein the first feature characterizes the image features of the first sample image, and a second feature characterizes the image features of a second sample image;
    the degradation module is further configured to, for any one of the at least two kinds of second features, obtain, based on the first feature and the second feature, the corruption feature corresponding to the second sample image, and add the corruption feature to the first feature of the first sample image to obtain the single-degradation image corresponding to the first sample image.
  15. The apparatus according to claim 14, wherein the image reconstruction model comprises a degradation network layer, and the degradation network layer comprises a first degradation encoder;
    the degradation module is further configured to extract the first feature corresponding to the first sample image through the first degradation encoder, and separately extract the second features corresponding to the at least two kinds of second sample images through the first degradation encoder.
  16. The apparatus according to claim 14, wherein the image reconstruction model comprises a degradation network layer, and the degradation network layer comprises a corruption kernel extractor;
    the degradation module is further configured to determine the corruption feature by comparing the first feature and the second feature, and decouple the corruption feature corresponding to the second sample image from the second feature through the corruption kernel extractor.
  17. The apparatus according to claim 14, wherein the image reconstruction model comprises a degradation network layer, and the degradation network layer comprises a first degradation decoder;
    the degradation module is further configured to add the corruption feature to the first feature of the first sample image to obtain an intermediate first feature, and input the intermediate first feature into the first degradation decoder for decoding to obtain the single-degradation image corresponding to the first sample image.
  18. A computer device, wherein the computer device comprises a processor and a memory, the memory stores at least one computer program, and the at least one computer program is loaded and executed by the processor to implement the training method for an image reconstruction model according to any one of claims 1 to 12.
  19. A computer-readable storage medium, wherein the computer-readable storage medium stores at least one computer program, and the at least one computer program is loaded and executed by a processor to implement the training method for an image reconstruction model according to any one of claims 1 to 12.
  20. A computer program product, wherein the computer program product comprises a computer program stored in a computer-readable storage medium; a processor of a computer device reads and executes the computer program from the computer-readable storage medium, causing the computer device to perform the training method for an image reconstruction model according to any one of claims 1 to 12.
PCT/CN2023/082436 2022-05-10 2023-03-20 Training method and apparatus for image reconstruction model, device, medium and program product WO2023216720A1 (zh)

Applications Claiming Priority (2)

Application Number Priority Date Filing Date Title
CN202210508810.2 2022-05-10
CN202210508810.2A CN115115900A (zh) 2022-05-10 2022-05-10 Training method and apparatus for image reconstruction model, device, medium and program product

Publications (1)

Publication Number Publication Date
WO2023216720A1 true WO2023216720A1 (zh) 2023-11-16

Family

ID=83327018

Family Applications (1)

Application Number Title Priority Date Filing Date
PCT/CN2023/082436 WO2023216720A1 (zh) 2022-05-10 2023-03-20 图像重建模型的训练方法、装置、设备、介质及程序产品

Country Status (2)

Country Link
CN (1) CN115115900A (zh)
WO (1) WO2023216720A1 (zh)

Families Citing this family (1)

* Cited by examiner, † Cited by third party
Publication number Priority date Publication date Assignee Title
CN115115900A (zh) * 2022-05-10 2022-09-27 腾讯科技(深圳)有限公司 图像重建模型的训练方法、装置、设备、介质及程序产品

Citations (6)

* Cited by examiner, † Cited by third party
Publication number Priority date Publication date Assignee Title
US20140375636A1 (en) 2013-06-25 2014-12-25 Simpleware Limited Image processing method
CN109146813A (zh) 2018-08-16 2019-01-04 Guangzhou Shiyuan Electronics Co., Ltd. Multi-task image reconstruction method, apparatus, device and medium
CN109191411A (zh) 2018-08-16 2019-01-11 Guangzhou Shiyuan Electronics Co., Ltd. Multi-task image reconstruction method, apparatus, device and medium
CN111311704A (zh) 2020-01-21 2020-06-19 Shanghai United Imaging Intelligence Co., Ltd. Image reconstruction method, apparatus, computer device and storage medium
CN112529776A (zh) 2019-09-19 2021-03-19 China Mobile (Suzhou) Software Technology Co., Ltd. Training method for image processing model, image processing method and apparatus
CN115115900A (zh) 2022-05-10 2022-09-27 Tencent Technology (Shenzhen) Co., Ltd. Training method and apparatus for image reconstruction model, device, medium and program product

Also Published As

Publication number Publication date
CN115115900A (zh) 2022-09-27


Legal Events

Date Code Title Description
121 Ep: the epo has been informed by wipo that ep was designated in this application

Ref document number: 23802496

Country of ref document: EP

Kind code of ref document: A1