CN118052715A - Image reconstruction method, image reconstruction device, computer equipment and storage medium - Google Patents


Info

Publication number
CN118052715A
Authority
CN
China
Prior art keywords: image, model, resolution, super, sub
Prior art date
Legal status
Pending
Application number
CN202410351059.9A
Other languages
Chinese (zh)
Inventor
陶乐乐
陈智唯
何志华
杨柳恩
Current Assignee
Shenzhen Microport Trace Medical Equipment Co ltd
Original Assignee
Shenzhen Microport Trace Medical Equipment Co ltd
Priority date
Filing date
Publication date
Application filed by Shenzhen Microport Trace Medical Equipment Co ltd
Priority to CN202410351059.9A
Publication of CN118052715A

Landscapes

  • Image Processing (AREA)

Abstract

The application relates to an image reconstruction method, an image reconstruction device, computer equipment and a storage medium. The method comprises the following steps: inputting an image to be reconstructed into an image segmentation sub-model and a super-resolution sub-model in an image reconstruction model, wherein the output layer of the image segmentation sub-model is connected to an intermediate layer of the super-resolution sub-model through a connecting layer of the super-resolution sub-model, and the output resolution of that intermediate layer is the same as that of the output layer of the image segmentation sub-model; processing the image to be reconstructed through the image segmentation sub-model to obtain a segmented image output by the image segmentation sub-model; and processing the image to be reconstructed and the segmented image through the super-resolution sub-model to obtain a super-resolution image output by the super-resolution sub-model, which serves as the reconstructed image corresponding to the image to be reconstructed. The method not only makes the boundary of the segmented region clearer but also makes the details of the segmented region richer, and can remarkably improve the image quality of intravascular ultrasound imaging.

Description

Image reconstruction method, image reconstruction device, computer equipment and storage medium
Technical Field
The present application relates to the field of intravascular ultrasound, and in particular, to an image reconstruction method, apparatus, computer device, storage medium, and computer program product.
Background
Intravascular ultrasound (IVUS) imaging is a technique in which a miniature ultrasound probe mounted at the front end of a catheter is advanced into a blood vessel by a trained operator to probe the tissue structure of the vessel; it is among the more effective, direct, high-quality ultrasound diagnostic techniques available at present. Because IVUS has a high recognition rate for the components of each part of the blood vessel and can clearly display the tissue structure of the vessel wall, it is now recommended by various medical guidelines and is increasingly widely applied in interventional catheter laboratories. In cardiovascular disease diagnosis, IVUS can not only reveal the size, shape and wall structure of a lumen, but also accurately measure the cross-sectional area of the lumen to identify vascular calcification, fibrosis, lipid cores and other lesions.
Current IVUS image quality suffers from drawbacks such as lack of clarity, noise, and poorly defined boundaries between blood and intima. Of these, the unclear boundary between blood and intima is the one clinicians most need resolved: because this boundary is indistinct, it creates difficulties for physicians in assessing vascular lesions and guiding treatment.
To improve IVUS image quality and increase the accuracy of vascular lesion assessment, one approach is to raise the resolution of the IVUS image. Conventional resolution-improvement methods, such as image interpolation, image filtering, multi-frame averaging and image enhancement, can improve image definition, but they tend to blur and distort details. Image interpolation and filtering, in particular, estimate unknown pixel values from known pixels; although simple, this easily causes blurring and distortion when dealing with the complex boundary between blood and intima.
The quality of intravascular ultrasound images cannot be effectively improved at present.
Disclosure of Invention
In view of the foregoing, it is desirable to provide an image reconstruction method, apparatus, computer device, computer readable storage medium, and computer program product that can effectively improve intravascular ultrasound image quality.
In a first aspect, the present application provides an image reconstruction method, including:
inputting an image to be reconstructed into an image segmentation sub-model and a super-resolution sub-model in an image reconstruction model; the output layer of the image segmentation sub-model is connected with the middle layer of the super-resolution sub-model through the connecting layer of the super-resolution sub-model, and the output resolution of the middle layer of the super-resolution sub-model is the same as that of the output layer of the image segmentation sub-model;
processing the image to be reconstructed through the image segmentation sub-model to obtain a segmentation image output by the image segmentation sub-model;
and processing the image to be reconstructed and the segmentation image through the super-resolution submodel to obtain a super-resolution image output by the super-resolution submodel, wherein the super-resolution image is used as a reconstruction image corresponding to the image to be reconstructed.
In one embodiment, the method for obtaining the image reconstruction model includes:
Acquiring a training set; the training set comprises a plurality of training samples, each training sample comprises a sample image, and a first label image and a second label image which correspond to the sample image; the first label image comprises a target area which needs to be segmented in the sample image, and the second label image is a corresponding high-resolution image of the sample image;
Inputting a training sample into an image reconstruction model to be trained, processing the sample image through an image segmentation sub-model to obtain a first image output by the image segmentation sub-model, and processing the sample image and the first image through a super-resolution sub-model to obtain a second image output by the super-resolution sub-model;
acquiring a first loss function according to the first image and the first label image, and acquiring a second loss function according to the second image and the second label image;
According to the first loss function and the second loss function, model parameters of the image segmentation sub-model and the super-resolution sub-model are adjusted to finish one iteration training;
and carrying out repeated iterative training on the image reconstruction model to be trained according to the training set to obtain the image reconstruction model.
In one embodiment, obtaining the training set includes:
Acquiring an intravascular ultrasound image, determining a non-blood area in the intravascular ultrasound image, and acquiring an intravascular ultrasound image of which the non-blood area is segmented as a segmented sample image;
Downsampling the intravascular ultrasound image to obtain a sample image, and downsampling the segmented sample image to obtain a first label image;
Multiplying the intravascular ultrasound image with the corresponding segmented sample image to obtain a second label image;
And obtaining a training set according to the sample image, the first label image and the second label image.
In one embodiment, obtaining the training set according to the sample image, the first label image and the second label image includes:
Obtaining an initial training set according to the sample image, the first label image and the second label image;
and carrying out random flipping, random cropping, random scaling and image enhancement on the images in the initial training set to obtain the training set.
In one embodiment, adjusting model parameters of the image segmentation sub-model and the super-resolution sub-model according to the first loss function and the second loss function comprises:
summing the first loss function and the second loss function to obtain a third loss function;
And according to the third loss function, adjusting model parameters of the image segmentation sub-model and the super-resolution sub-model.
In one embodiment, the method further comprises:
Carrying out fixed-point processing on the image reconstruction model according to the design requirement of the FPGA chip to obtain an integer fixed-point value;
Determining a hardware circuit design scheme of the FPGA chip according to a model structure and integer fixed point values of the image reconstruction model;
determining a memory model and an interface model according to model parameters of the image reconstruction model;
Acquiring an FPGA chip design scheme according to the hardware circuit design scheme, the memory model and the interface model; the FPGA chip is used for reconstructing an input image.
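The fixed-point conversion step for the FPGA design can be illustrated with a simple quantization sketch. This is not the patent's implementation: the word length (8 bits) and fraction bits (6) below are illustrative assumptions, as the patent does not specify a number format.

```python
import numpy as np

def to_fixed_point(weights: np.ndarray, frac_bits: int = 6, word_bits: int = 8) -> np.ndarray:
    """Quantize floating-point model weights to signed integer fixed-point
    (Q-format) values suitable for an FPGA datapath. frac_bits and word_bits
    are illustrative choices, not values taken from the patent."""
    scale = 1 << frac_bits
    lo, hi = -(1 << (word_bits - 1)), (1 << (word_bits - 1)) - 1
    return np.clip(np.round(weights * scale), lo, hi).astype(np.int32)

def from_fixed_point(q: np.ndarray, frac_bits: int = 6) -> np.ndarray:
    """Dequantize back to floats to estimate the rounding/clipping error."""
    return q.astype(np.float64) / (1 << frac_bits)

w = np.array([0.5, -0.25, 1.2, -2.5])
q = to_fixed_point(w)  # 0.5 -> 32, -0.25 -> -16, 1.2 -> 77, -2.5 clips to -128
```

Comparing `from_fixed_point(q)` against `w` gives a quick estimate of the accuracy cost of the chosen word length before committing to a hardware circuit design.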
In a second aspect, the present application also provides an image reconstruction apparatus, including:
the image input module is used for inputting the image to be reconstructed into an image segmentation sub-model and a super-resolution sub-model in the image reconstruction model; the output layer of the image segmentation sub-model is connected with the middle layer of the super-resolution sub-model through the connecting layer of the super-resolution sub-model, and the output resolution of the middle layer of the super-resolution sub-model is the same as that of the output layer of the image segmentation sub-model;
The first processing module is used for processing the image to be reconstructed through the image segmentation sub-model to obtain a segmentation image output by the image segmentation sub-model;
the second processing module is used for processing the image to be reconstructed and the segmentation image through the super-resolution submodel to obtain a super-resolution image output by the super-resolution submodel, and the super-resolution image is used as a reconstruction image corresponding to the image to be reconstructed.
In a third aspect, the present application also provides a computer device comprising a memory and a processor, the memory storing a computer program, the processor implementing the following steps when executing the computer program:
inputting an image to be reconstructed into an image segmentation sub-model and a super-resolution sub-model in an image reconstruction model; the output layer of the image segmentation sub-model is connected with the middle layer of the super-resolution sub-model through the connecting layer of the super-resolution sub-model, and the output resolution of the middle layer of the super-resolution sub-model is the same as that of the output layer of the image segmentation sub-model;
processing the image to be reconstructed through the image segmentation sub-model to obtain a segmentation image output by the image segmentation sub-model;
and processing the image to be reconstructed and the segmentation image through the super-resolution submodel to obtain a super-resolution image output by the super-resolution submodel, wherein the super-resolution image is used as a reconstruction image corresponding to the image to be reconstructed.
In a fourth aspect, the present application also provides a computer readable storage medium having stored thereon a computer program which when executed by a processor performs the steps of:
inputting an image to be reconstructed into an image segmentation sub-model and a super-resolution sub-model in an image reconstruction model; the output layer of the image segmentation sub-model is connected with the middle layer of the super-resolution sub-model through the connecting layer of the super-resolution sub-model, and the output resolution of the middle layer of the super-resolution sub-model is the same as that of the output layer of the image segmentation sub-model;
processing the image to be reconstructed through the image segmentation sub-model to obtain a segmentation image output by the image segmentation sub-model;
and processing the image to be reconstructed and the segmentation image through the super-resolution submodel to obtain a super-resolution image output by the super-resolution submodel, wherein the super-resolution image is used as a reconstruction image corresponding to the image to be reconstructed.
In a fifth aspect, the application also provides a computer program product comprising a computer program which, when executed by a processor, performs the steps of:
inputting an image to be reconstructed into an image segmentation sub-model and a super-resolution sub-model in an image reconstruction model; the output layer of the image segmentation sub-model is connected with the middle layer of the super-resolution sub-model through the connecting layer of the super-resolution sub-model, and the output resolution of the middle layer of the super-resolution sub-model is the same as that of the output layer of the image segmentation sub-model;
processing the image to be reconstructed through the image segmentation sub-model to obtain a segmentation image output by the image segmentation sub-model;
and processing the image to be reconstructed and the segmentation image through the super-resolution submodel to obtain a super-resolution image output by the super-resolution submodel, wherein the super-resolution image is used as a reconstruction image corresponding to the image to be reconstructed.
The image reconstruction method, the device, the computer equipment, the storage medium and the computer program product input the image to be reconstructed into an image segmentation sub-model and a super-resolution sub-model in an image reconstruction model; the output layer of the image segmentation sub-model is connected with the middle layer of the super-resolution sub-model through the connecting layer of the super-resolution sub-model, and the output resolution of the middle layer of the super-resolution sub-model is the same as that of the output layer of the image segmentation sub-model. Processing the image to be reconstructed through the image segmentation sub-model to obtain a segmentation image output by the image segmentation sub-model; and processing the image to be reconstructed and the segmentation image through the super-resolution submodel to obtain a super-resolution image output by the super-resolution submodel, wherein the super-resolution image is used as a reconstruction image corresponding to the image to be reconstructed. The output result of the segmentation model is sent to the super-resolution model, so that information sharing is realized, the super-resolution model learns the characteristics of the segmentation region more, the performance and the efficiency of the model are improved, the limit of the segmentation region can be clearer, the details of the segmentation region can be richer, and the image quality of intravascular ultrasound imaging is remarkably improved.
Drawings
In order to more clearly illustrate the embodiments of the present application or the technical solutions in the related art, the drawings that are required to be used in the embodiments or the related technical descriptions will be briefly described, and it is apparent that the drawings in the following description are only some embodiments of the present application, and other drawings may be obtained according to the drawings without inventive effort for those skilled in the art.
FIG. 1 is a diagram of an application environment for an image reconstruction method in one embodiment;
FIG. 2 is a flow chart of an image reconstruction method in one embodiment;
FIG. 3 is a schematic diagram of an image reconstruction model structure in one embodiment;
FIG. 4 is a schematic diagram of image reconstruction model training logic in one embodiment;
FIG. 5 is a schematic diagram of an image reconstruction model effect in one embodiment;
FIG. 6 is a flow diagram of an image reconstruction model training and FPGA design method in one embodiment;
FIG. 7 is a block diagram of an image reconstruction apparatus in one embodiment;
fig. 8 is an internal structural diagram of a computer device in one embodiment.
Detailed Description
The present application will be described in further detail with reference to the drawings and examples, in order to make the objects, technical solutions and advantages of the present application more apparent. It should be understood that the specific embodiments described herein are for purposes of illustration only and are not intended to limit the scope of the application.
The image reconstruction method provided by the embodiment of the application can be applied to an application environment shown in fig. 1. Wherein the computer device 102 communicates with the IVUS device 104 by wire or wirelessly. The data storage system may store data that computer device 102 needs to process. The data storage system may be integrated on the computer device 102 or may be located on a cloud or other network server. The computer device 102 may be a terminal or a server. The terminal may be, but is not limited to, various personal computers, notebook computers, smart phones, tablet computers, internet of things devices, and portable wearable devices. The portable wearable device may be a smart watch, smart bracelet, headset, or the like. The server may be implemented as a stand-alone server or as a server cluster composed of a plurality of servers.
In an exemplary embodiment, as shown in fig. 2, an image reconstruction method is provided, which is illustrated by way of example as being applied to the computer device 102 in fig. 1, and includes the following steps 202 to 206. Wherein:
Step 202, inputting an image to be reconstructed into an image segmentation sub-model and a super-resolution sub-model in an image reconstruction model; the output layer of the image segmentation sub-model is connected with the middle layer of the super-resolution sub-model through the connecting layer of the super-resolution sub-model, and the output resolution of the middle layer of the super-resolution sub-model is the same as that of the output layer of the image segmentation sub-model.
Wherein the image to be reconstructed may be, but is not limited to, an IVUS image and the region to be segmented may be, but is not limited to, a non-blood region in the IVUS image.
Optionally, to better segment the blood and non-blood regions of the IVUS image, the image segmentation sub-model uses the Attention U-Net model. Attention U-Net introduces an attention mechanism to improve segmentation performance, mainly by attention-weighting the channel dimensions of the feature maps so that the model focuses better on the features of the blood region. In Attention U-Net, the attention mechanism is applied to the skip connections between encoder and decoder. Specifically, an attention module computes an attention weight map describing the importance of each location from the encoder's feature map and the decoder's feature map; this weight map is then multiplied with the feature map carried by the skip connection, so that the encoder's important information is transferred, with weighting, to the decoder, enabling the decoder to segment better. By introducing the attention mechanism, Attention U-Net can learn the importance of the blood region, focus on it, and improve segmentation accuracy.
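The attention-gate computation described above can be sketched numerically. The following minimal NumPy example is not from the patent: it collapses the 1x1 convolutions of a real attention gate into small matrix products with illustrative weights, to show how a sigmoid weight derived from encoder features `x` and decoder gating features `g` re-weights the skip connection.

```python
import numpy as np

def attention_gate(x, g, wx, wg, psi):
    """Additive attention gate as used on U-Net skip connections.
    x: encoder (skip) features, g: decoder gating features.
    wx, wg, psi stand in for learned 1x1 convolutions (illustrative only)."""
    q = np.maximum(x @ wx + g @ wg, 0.0)        # additive attention + ReLU
    alpha = 1.0 / (1.0 + np.exp(-(q @ psi)))    # sigmoid -> weight in (0, 1)
    return x * alpha, alpha                     # attention-weighted skip features
```

With zero `psi` the sigmoid yields 0.5 everywhere, so the gate halves the skip features; training moves these weights so that, e.g., blood-region locations receive higher weights.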
To recover more image detail in the IVUS image, the super-resolution sub-model uses the SRGAN (Super-Resolution Generative Adversarial Network) model. SRGAN is a super-resolution model based on a generative adversarial network (GAN); it generates higher-quality super-resolution images by way of adversarial training. SRGAN comprises a generator network and a discriminator network. The generator network is responsible for converting the low-resolution input image into a high-resolution output image, while the discriminator network is responsible for judging whether the generated image is sufficiently realistic. The generator adopts a deep residual network structure, which can effectively learn the mapping between input and output and increase the detail and definition of the image. The discriminator adopts a convolutional neural network structure and is used to distinguish generated images from real high-resolution images. The SRGAN training process is adversarial: the generator improves the quality of the generated images by minimizing the discriminator's ability to identify them as generated, while the discriminator improves its own performance by maximizing its ability to distinguish generated images from real ones. Through the adversarial learning of the two networks, more details of the IVUS image can ultimately be learned.
The output layer of the image segmentation sub-model is connected to the intermediate layer of the super-resolution sub-model through the connecting layer of the super-resolution sub-model, so that the segmentation result obtained by the segmentation sub-model is introduced into the super-resolution sub-model; the super-resolution image can thus weaken blood-region features and enhance non-blood-region features. The connection details of the two sub-models are shown in fig. 3: the output of the image segmentation sub-model is single-channel, and the super-resolution sub-model contains an intermediate layer with the same resolution as that output. This intermediate layer and the segmentation output are processed by a concatenate (connection) layer of the neural network to obtain a 2-channel image, which is then reduced to a single-channel image by a convolution layer and a GlobalPooling (global pooling) layer. The result then passes through further layers, including an upsampling layer, to finally obtain the super-resolution image. In this way, the super-resolution sub-model learns more of the features of the non-blood region and outputs an IVUS image with better quality and more distinguishable features.
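As a rough illustration of this fusion step, the sketch below concatenates the single-channel intermediate feature map with the single-channel segmentation output into two channels and collapses them back to one channel with a 1x1-convolution-like weighted sum. The fixed weights are an assumption standing in for the learned convolution and pooling layers of the patent's model.

```python
import numpy as np

def fuse_segmentation(sr_feat, seg_map, w=(0.5, 0.5)):
    """Concatenate the SR sub-model's intermediate feature map with the
    segmentation output (2 channels), then reduce to a single channel.
    w mimics a learned 1x1 convolution; the values here are illustrative."""
    assert sr_feat.shape == seg_map.shape        # the two layers share one resolution
    stacked = np.stack([sr_feat, seg_map], axis=0)   # concatenate -> 2 x H x W
    fused = w[0] * stacked[0] + w[1] * stacked[1]    # channel-wise weighted sum
    return fused
```

Because the segmentation map enters as its own channel, the subsequent layers can learn weights that suppress blood-region responses and amplify non-blood-region responses, which is the stated purpose of the connection.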
And 204, processing the image to be reconstructed through the image segmentation sub-model to obtain a segmented image output by the image segmentation sub-model.
Optionally, the IVUS image is input to the image segmentation sub-model and the super-resolution sub-model at the same time. The image segmentation sub-model first processes the IVUS image, segments the blood and non-blood regions, outputs a segmented image, and transmits the segmented image to the intermediate layer of the super-resolution sub-model.
And 206, processing the image to be reconstructed and the segmentation image through the super-resolution submodel to obtain a super-resolution image output by the super-resolution submodel as a reconstructed image corresponding to the image to be reconstructed.
Optionally, the super-resolution sub-model processes the IVUS image; when processing reaches the intermediate layer, the segmented image is introduced, and the features of the IVUS image and the segmented image are processed together to output a super-resolution image. The output of the super-resolution sub-model is also the output of the image reconstruction model, so the super-resolution image can serve as the reconstructed image corresponding to the IVUS image. The reconstructed image is divided into a blood region and a non-blood region, making the difference between blood and intima in the IVUS image larger: the blood region is weakened and the non-blood region is strengthened, i.e. the blood region is darkened and the non-blood region is brightened, which is equivalent to image enhancement targeted at the non-blood region.
In the image reconstruction method, an image to be reconstructed is input into an image segmentation sub-model and a super-resolution sub-model in an image reconstruction model; the output layer of the image segmentation sub-model is connected to the intermediate layer of the super-resolution sub-model through the connecting layer of the super-resolution sub-model, and the output resolution of that intermediate layer is the same as that of the output layer of the image segmentation sub-model. The image to be reconstructed is processed by the image segmentation sub-model to obtain a segmented image, and the image to be reconstructed and the segmented image are processed by the super-resolution sub-model to obtain a super-resolution image, which serves as the reconstructed image corresponding to the image to be reconstructed. Feeding the output of the segmentation sub-model into the super-resolution sub-model realizes information sharing, lets the super-resolution sub-model learn more features of the segmented region, and improves model performance and efficiency; it not only makes the boundary of the segmented region clearer but also makes its details richer, remarkably improving the image quality of intravascular ultrasound imaging.
In one embodiment, the method for obtaining the image reconstruction model includes:
Acquiring an intravascular ultrasound image, determining a non-blood area in the intravascular ultrasound image, and obtaining the intravascular ultrasound image with the non-blood area segmented as a segmented sample image; downsampling the intravascular ultrasound image to obtain a sample image, and downsampling the segmented sample image to obtain a first label image; multiplying the intravascular ultrasound image with the corresponding segmented sample image to obtain a second label image; obtaining an initial training set from the sample images, first label images and second label images; and performing random flipping, random cropping, random scaling and image enhancement on the images in the initial training set to obtain the training set. The training set comprises a plurality of training samples; each training sample comprises a sample image and the first and second label images corresponding to it. The first label image contains the target area to be segmented in the sample image, and the second label image is the high-resolution image corresponding to the sample image.
Inputting a training sample into an image reconstruction model to be trained, processing the sample image through an image segmentation sub-model to obtain a first image output by the image segmentation sub-model, and processing the sample image and the first image through a super-resolution sub-model to obtain a second image output by the super-resolution sub-model; acquiring a first loss function according to the first image and the first label image, and acquiring a second loss function according to the second image and the second label image; summing the first loss function and the second loss function to obtain a third loss function; and according to the third loss function, adjusting model parameters of the image segmentation sub-model and the super-resolution sub-model to complete one iteration training.
And carrying out repeated iterative training on the image reconstruction model to be trained according to the training set to obtain the image reconstruction model.
Optionally, the data may be acquired as IVUS images through an IVUS device. To make the trained model generalize better, data from multiple subjects at multiple centers may be collected. After data acquisition, the training dataset is prepared as follows:
(1) And manually labeling the collected IVUS image, labeling a blood region, and using the obtained segmentation data set for training a segmentation model.
(2) The acquired IVUS image is downsampled, and the downsampled image is used as the input of the training model. Training the super-resolution model requires a set of paired low-resolution images and corresponding high-resolution images as training data. Because a higher-resolution version of the acquired IVUS image cannot be obtained, the IVUS image is downsampled to generate the low-resolution input, and the original IVUS image itself serves as the corresponding high-resolution image.
(3) The segmentation dataset obtained in step (1) is downsampled in the same way as in step (2); the downsampled image dataset is taken as the GroundTruth (true value) of the segmentation model and is denoted GroundTruth1. GroundTruth1 corresponds to the first label image.
(4) The collected IVUS image is multiplied with the corresponding segmented image obtained in step (1); the resulting image is the GroundTruth of the super-resolution model. This set of images is denoted GroundTruth2, which corresponds to the second label image.
After the training dataset is obtained, in order to improve the generalization capability and robustness of the model, preprocessing such as image enhancement is performed on the dataset, specifically: random flipping, random cropping, random scaling, etc., to increase the diversity of the data.
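The dataset-preparation steps (1)-(4) and the augmentation above can be sketched as follows. Block-average downsampling and flip-only augmentation are illustrative choices, since the patent does not fix the downsampling method or the exact augmentation set.

```python
import numpy as np

rng = np.random.default_rng(0)

def downsample(img, factor=2):
    """Block averaging as a simple stand-in for the downsampling step."""
    h, w = img.shape
    return img.reshape(h // factor, factor, w // factor, factor).mean(axis=(1, 3))

def make_training_sample(ivus, mask):
    """Build one (input, GroundTruth1, GroundTruth2) triple:
    input = downsampled IVUS image, GroundTruth1 = downsampled segmentation
    mask, GroundTruth2 = IVUS x mask (non-blood regions kept, blood zeroed)."""
    return downsample(ivus), downsample(mask), ivus * mask

def augment(img):
    """Random horizontal/vertical flips; random cropping and scaling from the
    patent's list are omitted here for brevity."""
    if rng.random() < 0.5:
        img = img[:, ::-1]
    if rng.random() < 0.5:
        img = img[::-1, :]
    return img
```

Here `mask` is the manually annotated non-blood mask from step (1); multiplying it with the original IVUS image yields the enhanced high-resolution target in which blood regions are suppressed.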
Further, the image reconstruction model to be trained is trained with the training set. As shown in FIG. 4, GroundTruth1 in the figure is the GroundTruth image referred to in the data annotation for the segmentation model, and GroundTruth2 is the GroundTruth image for the super-resolution model. The loss function loss1 of the segmentation model and the loss function loss2 of the super-resolution model are summed to obtain the training loss function.
The first loss function is the loss function of the image segmentation sub-model. In Attention U-Net, the loss function combines a cross-entropy loss and a Dice loss. The cross-entropy loss is a classification loss that measures the difference between the model output and the real label: for each pixel, the cross entropy between the probability distribution output by the model and the probability distribution of the real label is computed. The Dice loss is a segmentation loss that measures the similarity between the model output and the real label via the overlap of the two relative to their combined size. Using the cross-entropy and Dice losses together lets Attention U-Net account for both classification and segmentation accuracy, achieving better segmentation of the blood and non-blood regions of the IVUS image.
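A minimal sketch of this combined segmentation loss is given below. Equal weighting of the two terms is an assumption; the patent does not state the weights.

```python
import numpy as np

def cross_entropy(pred, target, eps=1e-7):
    """Per-pixel binary cross entropy between predicted probabilities and labels."""
    p = np.clip(pred, eps, 1 - eps)
    return -np.mean(target * np.log(p) + (1 - target) * np.log(1 - p))

def dice_loss(pred, target, eps=1e-7):
    """1 - Dice coefficient: twice the overlap divided by the combined size."""
    inter = np.sum(pred * target)
    return 1.0 - (2.0 * inter + eps) / (np.sum(pred) + np.sum(target) + eps)

def segmentation_loss(pred, target):
    """Combined cross-entropy + Dice loss for the segmentation sub-model
    (equal weighting is an illustrative assumption)."""
    return cross_entropy(pred, target) + dice_loss(pred, target)
```

A perfect prediction drives both terms toward zero, while a prediction with no overlap pushes the Dice term toward 1, so the combined loss penalizes both per-pixel misclassification and poor region overlap.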
The second loss function, i.e., the loss function of the super-resolution model SRGAN, is composed of two parts: a perceptual loss function and an adversarial loss function. The perceptual loss function measures super-resolution quality by computing the difference in features between the generated image and the real high-resolution image. It uses a VGG loss, measuring the difference as the mean square error between the features of the generated image and the real image at certain layers of a VGG network. The adversarial loss function encourages the generator to generate more realistic images by introducing a discriminator. The adversarial loss consists of two parts: a generator loss and a discriminator loss. The generator loss encourages the generated image to be judged as a real image by the discriminator; it uses a logarithmic loss function. The discriminator loss encourages the discriminator to correctly distinguish generated images from real images; it also uses a logarithmic loss function. The final SRGAN loss function adds the perceptual loss function and the adversarial loss function. Through adversarial training, the generator and the discriminator compete with each other and gradually improve the quality of the IVUS image super-resolution.
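The composition of the SRGAN loss can be sketched as follows. The feature arrays stand in for real VGG activations, and the `adv_weight` parameter is illustrative (the text simply adds the two parts, i.e. a weight of 1; practical SRGAN implementations often down-weight the adversarial term):

```python
import numpy as np

def vgg_perceptual_loss(feat_fake: np.ndarray, feat_real: np.ndarray) -> float:
    # MSE between feature maps of the generated and the real HR image;
    # in SRGAN these are activations of certain VGG layers.
    return float(np.mean((feat_fake - feat_real) ** 2))

def generator_adv_loss(d_fake: np.ndarray, eps: float = 1e-7) -> float:
    # -log D(G(x)): small when the discriminator takes the fakes for real.
    return float(-np.mean(np.log(np.clip(d_fake, eps, 1.0))))

def discriminator_loss(d_real: np.ndarray, d_fake: np.ndarray, eps: float = 1e-7) -> float:
    # -log D(real) - log(1 - D(fake)): small when the discriminator
    # correctly separates real from generated images.
    return float(-np.mean(np.log(np.clip(d_real, eps, 1.0)))
                 - np.mean(np.log(np.clip(1.0 - d_fake, eps, 1.0))))

def srgan_generator_loss(feat_fake, feat_real, d_fake, adv_weight: float = 1.0) -> float:
    # loss2 of the patent: perceptual loss plus adversarial loss, added as in the text.
    return vgg_perceptual_loss(feat_fake, feat_real) + adv_weight * generator_adv_loss(d_fake)
```

The generator and discriminator losses pull in opposite directions, which is what drives the adversarial training the paragraph describes.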
The third loss function, i.e., the overall loss function of the image reconstruction model, is the sum of the loss function of the segmentation model and the loss function of the super-resolution model. It enables the IVUS image to distinguish blood from non-blood regions while enhancing non-blood region detail.
The trained image reconstruction model contains two trained sub-models: a super-resolution model and a segmentation model. The super-resolution model can improve the resolution of the image and enrich its details. The segmentation model can segment the blood portion and the non-blood portion. Combining the two models makes the super-resolution model pay more attention to the features of the non-blood region and output an IVUS image with better quality and more distinguishable features. After the IVUS image is processed by the image reconstruction model, it is richer in detail and the boundaries between the blood and non-blood regions are more distinct, which assists the doctor in distinguishing blood from intima, accurately evaluating vascular lesions and guiding treatment. The effect is shown in fig. 5, where the left side is the original image and the right side is the result output by the algorithm. It can be seen that in the result map output by the image reconstruction model, the blood regions are darkened, the non-blood regions are brightened, and the non-blood regions are clearer and more detailed, no longer blurred as in the original image.
In this embodiment, the output result of the segmentation model is sent to the super-resolution model to realize information sharing, so that the super-resolution model learns more of the features of the segmented region. This improves the performance and efficiency of the model, makes the boundary of the segmented region clearer and its details richer, and thereby significantly improves the image quality of intravascular ultrasound imaging.
In one embodiment, the method further comprises: carrying out fixed-point processing on the image reconstruction model according to the design requirement of the FPGA chip to obtain an integer fixed-point value; determining a hardware circuit design scheme of the FPGA chip according to a model structure and integer fixed point values of the image reconstruction model; determining a memory model and an interface model according to model parameters of the image reconstruction model; acquiring an FPGA chip design scheme according to the hardware circuit design scheme, the memory model and the interface model; the FPGA chip is used for reconstructing an input image.
Among them, an FPGA (Field-Programmable Gate Array) is a product of further development on the basis of programmable devices such as PAL, GAL and CPLD. As a semi-custom circuit in the field of Application Specific Integrated Circuits (ASICs), the programmable device both remedies the inflexibility of fully custom circuits and overcomes the drawback of the limited number of gates in earlier programmable devices.
Alternatively, because IVUS has high real-time requirements (rapid diagnosis and real-time application are expected), and the FPGA has excellent parallel computing and pipeline optimization capability with a high processing speed, the trained model is implemented on an FPGA. Implementing the trained model on the FPGA comprises the following steps:
(1) Fixed-pointing the model. The trained model parameters are of the floating point type, and the FPGA needs to convert these parameters to fixed point in order to calculate. In addition, since the IVUS image has high real-time requirements and requires a very fast FPGA processing speed, a bit width different from that of the original model parameters is selected when the FPGA performs fixed-pointing: the parameters of the original model are 32-bit floating point numbers (single precision) or 16-bit floating point numbers (half precision), and the FPGA fixed-points the parameters to 8 bits or 12 bits. This brings a slight decrease in accuracy, but real-time performance is ensured; overall, a balance is struck between real-time performance and accuracy.
The method specifically comprises the following steps: 1. Selecting the bit width: the greater the bit width, the higher the quantization accuracy, but the more logic resources are consumed. 2. Splitting each parameter into an integer part and a fractional part. 3. Determining the integer fixed-point value according to the bit width of the fractional part; for example, when the fractional part is 8 bits, the scale factor is 256 and the fractional part can be quantized to the range 0 to 255 (integers).
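A sketch of the fixed-pointing described above, assuming an 8-bit fractional part (scale factor 256); the worst-case rounding error is half a quantization step:

```python
import numpy as np

def to_fixed_point(params: np.ndarray, frac_bits: int = 8):
    """Quantize floating-point parameters to integer fixed point:
    value ~ integer / 2**frac_bits."""
    scale = 1 << frac_bits                  # 8 fractional bits -> scale 256
    return np.round(params * scale).astype(np.int32), scale

def from_fixed_point(q: np.ndarray, scale: int) -> np.ndarray:
    """Recover the approximate floating-point value for comparison."""
    return q.astype(np.float32) / scale

w = np.array([0.125, -0.7, 1.5], dtype=np.float32)
q, scale = to_fixed_point(w, frac_bits=8)
err = float(np.max(np.abs(from_fixed_point(q, scale) - w)))
# the worst-case rounding error is half a quantization step: 1 / (2 * 256)
assert err <= 1 / (2 * scale)
```

Raising `frac_bits` shrinks the quantization error but widens every multiplier and accumulator on the FPGA, which is the accuracy/resource trade-off the text describes.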
(2) FPGA design and optimization: the model is converted into hardware circuitry on the FPGA. The method specifically comprises the following steps:
1) Converting operations such as convolution, pooling and activation functions in the model into hardware circuits, and performing parallel computation and pipeline optimization to improve computational efficiency.
2) Storage and interface design: since the parameters of the segmentation model and the super-resolution model, the input image and the output image are very large, an external memory such as DDR is used to store these data.
3) Verification and testing: and verifying and testing the model realized by the FPGA, and ensuring the consistency of the model and the original model in terms of output quality and performance.
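As an illustration of steps 1) and 3), the sketch below performs a valid 2-D convolution entirely in integer arithmetic, the way an FPGA multiply-accumulate array would (fixed-point operands, a wide accumulator, then a right shift to drop the extra fractional bits), and then checks the result against a reference, as the verification step requires. The loop form is for clarity, not speed; the tolerance of one quantization step is an assumed acceptance criterion:

```python
import numpy as np

def fixed_point_conv2d(image_q: np.ndarray, kernel_q: np.ndarray,
                       frac_bits: int = 8) -> np.ndarray:
    """Valid 2-D convolution in pure integer arithmetic (FPGA-style MAC)."""
    h, w = image_q.shape
    kh, kw = kernel_q.shape
    out = np.zeros((h - kh + 1, w - kw + 1), dtype=np.int64)
    for y in range(out.shape[0]):
        for x in range(out.shape[1]):
            acc = 0                                     # wide accumulator
            for i in range(kh):
                for j in range(kw):
                    acc += int(image_q[y + i, x + j]) * int(kernel_q[i, j])
            out[y, x] = acc >> frac_bits                # rescale back
    return out

# Verification: an identity kernel whose centre tap is the fixed-point
# representation of 1.0 (256 at 8 fractional bits) must return the input.
img = np.arange(16, dtype=np.int64).reshape(4, 4)       # already fixed-point
kernel = np.array([[0, 0, 0], [0, 256, 0], [0, 0, 0]])
out_q = fixed_point_conv2d(img, kernel, frac_bits=8)
ref = img[1:3, 1:3]                                     # reference result
assert np.max(np.abs(out_q - ref)) <= 1                 # within one step
```

On hardware, the inner multiply-accumulate loop would be unrolled into parallel DSP slices and pipelined, which is where the speed-up over floating point comes from.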
In the embodiment, according to the design requirement of the FPGA chip, the image reconstruction model is subjected to fixed-point processing to obtain an integer fixed-point value; determining a hardware circuit design scheme of the FPGA chip according to a model structure and integer fixed point values of the image reconstruction model; determining a memory model and an interface model according to model parameters of the image reconstruction model; according to the hardware circuit design scheme, the memory model and the interface model, the FPGA chip design scheme is obtained, and the image reconstruction model can be transplanted to the FPGA for implementation.
In an exemplary embodiment, as shown in fig. 6, an image reconstruction model training and FPGA design method includes:
Acquiring an intravascular ultrasound image, determining a non-blood area in the intravascular ultrasound image, and acquiring an intravascular ultrasound image of which the non-blood area is segmented as a segmented sample image; downsampling the intravascular ultrasound image to obtain a sample image, and downsampling the segmented sample image to obtain a first label image; multiplying the intravascular ultrasound image with the corresponding segmented sample image to obtain a second label image; obtaining an initial training set according to the sample image, the first label image and the second label image; and carrying out random flipping, random cropping, random scaling and image enhancement on the images in the initial training set to obtain a training set. The training set comprises a plurality of training samples, each training sample comprises a sample image, and a first label image and a second label image which correspond to the sample image; the first label image comprises a target area which needs to be segmented in the sample image, and the second label image is a corresponding high-resolution image of the sample image.
Inputting a training sample into an image reconstruction model to be trained, processing the sample image through an image segmentation sub-model to obtain a first image output by the image segmentation sub-model, and processing the sample image and the first image through a super-resolution sub-model to obtain a second image output by the super-resolution sub-model; acquiring a first loss function according to the first image and the first label image, and acquiring a second loss function according to the second image and the second label image; summing the first loss function and the second loss function to obtain a third loss function; according to the third loss function, model parameters of the image segmentation sub-model and the super-resolution sub-model are adjusted to complete one-time iterative training; and carrying out repeated iterative training on the image reconstruction model to be trained according to the training set to obtain the image reconstruction model. The image reconstruction model comprises an image segmentation sub-model and a super-resolution sub-model; the output layer of the image segmentation sub-model is connected with the middle layer of the super-resolution sub-model through the connecting layer of the super-resolution sub-model, and the output resolution of the middle layer of the super-resolution sub-model is the same as that of the output layer of the image segmentation sub-model. The image reconstruction model is used for processing the image to be reconstructed through the image segmentation sub-model to obtain a segmentation image output by the image segmentation sub-model; and processing the image to be reconstructed and the segmentation image through the super-resolution submodel to obtain a super-resolution image output by the super-resolution submodel, wherein the super-resolution image is used as a reconstruction image corresponding to the image to be reconstructed.
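The data flow of one training iteration can be sketched with stand-in sub-models. The real patent uses Attention U-Net and SRGAN; the thresholding segmenter, mask-weighted upscaler and MSE losses below are placeholders that only show how the first image feeds the super-resolution branch and how the two losses are summed into the third loss:

```python
import numpy as np

rng = np.random.default_rng(0)

def segment(img: np.ndarray) -> np.ndarray:
    # Stand-in for the image segmentation sub-model: a per-pixel mask
    # (1 = non-blood, 0 = blood), here a simple mean threshold.
    return (img > img.mean()).astype(np.float32)

def super_resolve(img: np.ndarray, mask: np.ndarray, factor: int = 2) -> np.ndarray:
    # Stand-in for the super-resolution sub-model: the segmentation map
    # enters as an extra input (here a brightness weighting) before a
    # nearest-neighbour upscaling by `factor`.
    return np.kron(img * (0.5 + 0.5 * mask), np.ones((factor, factor)))

def train_step(sample: np.ndarray, label1: np.ndarray, label2: np.ndarray) -> float:
    first = segment(sample)                     # first image
    second = super_resolve(sample, first)       # second image
    loss1 = np.mean((first - label1) ** 2)      # stands in for CE + Dice
    loss2 = np.mean((second - label2) ** 2)     # stands in for the SRGAN loss
    return float(loss1 + loss2)                 # third (total) loss
```

Because the total loss is a plain sum, backpropagating it adjusts the parameters of both sub-models in the same iteration, which is the joint training the paragraph describes.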
Carrying out fixed-point processing on the image reconstruction model according to the design requirement of the FPGA chip to obtain an integer fixed-point value; determining a hardware circuit design scheme of the FPGA chip according to a model structure and integer fixed point values of the image reconstruction model; determining a memory model and an interface model according to model parameters of the image reconstruction model; acquiring an FPGA chip design scheme according to the hardware circuit design scheme, the memory model and the interface model; the FPGA chip is used for reconstructing an input image.
It should be understood that, although the steps in the flowcharts related to the embodiments described above are shown sequentially as indicated by the arrows, these steps are not necessarily performed in the order indicated. Unless explicitly stated herein, the steps are not strictly limited to that order and may be executed in other orders. Moreover, at least some of the steps in the flowcharts described in the above embodiments may include a plurality of sub-steps or stages, which are not necessarily performed at the same moment but may be performed at different moments, and these sub-steps or stages are not necessarily performed sequentially but may be performed in turn with, or alternately with, at least part of the other steps or stages.
Based on the same inventive concept, the embodiment of the application also provides an image reconstruction device for realizing the image reconstruction method. The implementation of the solution provided by the apparatus is similar to the implementation described in the above method, so the specific limitation in the embodiments of the image reconstruction apparatus provided in the following may be referred to the limitation of the image reconstruction method hereinabove, and will not be repeated here.
In one exemplary embodiment, as shown in fig. 7, there is provided an image reconstruction apparatus 700 including: an image input module 701, a first processing module 702, and a second processing module 703, wherein:
An image input module 701, configured to input an image to be reconstructed into an image segmentation sub-model and a super-resolution sub-model in an image reconstruction model; the output layer of the image segmentation sub-model is connected with the middle layer of the super-resolution sub-model through the connecting layer of the super-resolution sub-model, and the output resolution of the middle layer of the super-resolution sub-model is the same as that of the output layer of the image segmentation sub-model;
A first processing module 702, configured to process an image to be reconstructed through the image segmentation sub-model, and obtain a segmented image output by the image segmentation sub-model;
The second processing module 703 is configured to process the to-be-reconstructed image and the segmented image through the super-resolution sub-model, and obtain a super-resolution image output by the super-resolution sub-model, which is used as a reconstructed image corresponding to the to-be-reconstructed image.
In one embodiment, the apparatus further comprises:
A model training module 704, configured to obtain a training set; the training set comprises a plurality of training samples, each training sample comprises a sample image, and a first label image and a second label image which correspond to the sample image; the first label image comprises a target area which needs to be segmented in the sample image, and the second label image is a corresponding high-resolution image of the sample image; inputting a training sample into an image reconstruction model to be trained, processing the sample image through an image segmentation sub-model to obtain a first image output by the image segmentation sub-model, and processing the sample image and the first image through a super-resolution sub-model to obtain a second image output by the super-resolution sub-model; acquiring a first loss function according to the first image and the first label image, and acquiring a second loss function according to the second image and the second label image; according to the first loss function and the second loss function, model parameters of the image segmentation sub-model and the super-resolution sub-model are adjusted to finish one iteration training; and carrying out repeated iterative training on the image reconstruction model to be trained according to the training set to obtain the image reconstruction model.
In one embodiment, model training module 704 is further configured to acquire an intravascular ultrasound image, determine a non-blood region in the intravascular ultrasound image, and acquire an intravascular ultrasound image of which the non-blood region is segmented as a segmented sample image; downsample the intravascular ultrasound image to obtain a sample image, and downsample the segmented sample image to obtain a first label image; multiply the intravascular ultrasound image with the corresponding segmented sample image to obtain a second label image; and obtain a training set according to the sample image, the first label image and the second label image.
In one embodiment, model training module 704 is further configured to obtain an initial training set based on the sample image, the first label image, and the second label image; and carry out random flipping, random cropping, random scaling and image enhancement on the images in the initial training set to obtain a training set.
In one embodiment, model training module 704 is further configured to sum the first loss function and the second loss function to obtain a third loss function; and according to the third loss function, adjusting model parameters of the image segmentation sub-model and the super-resolution sub-model.
In one embodiment, the apparatus further comprises:
The hardware design module 705 is configured to perform fixed-point processing on the image reconstruction model according to the design requirement of the FPGA chip, so as to obtain an integer fixed-point value; determining a hardware circuit design scheme of the FPGA chip according to a model structure and integer fixed point values of the image reconstruction model; determining a memory model and an interface model according to model parameters of the image reconstruction model; acquiring an FPGA chip design scheme according to the hardware circuit design scheme, the memory model and the interface model; the FPGA chip is used for reconstructing an input image.
The respective modules in the above-described image reconstruction apparatus may be implemented in whole or in part by software, hardware, and combinations thereof. The above modules may be embedded in hardware or may be independent of a processor in the computer device, or may be stored in software in a memory in the computer device, so that the processor may call and execute operations corresponding to the above modules.
In one exemplary embodiment, a computer device is provided, which may be a server, and the internal structure thereof may be as shown in fig. 8. The computer device includes a processor, a memory, an Input/Output interface (I/O) and a communication interface. The processor, the memory and the input/output interface are connected through a system bus, and the communication interface is connected to the system bus through the input/output interface. Wherein the processor of the computer device is configured to provide computing and control capabilities. The memory of the computer device includes a non-volatile storage medium and an internal memory. The non-volatile storage medium stores an operating system, computer programs, and a database. The internal memory provides an environment for the operation of the operating system and computer programs in the non-volatile storage media. The database of the computer device is for storing image data. The input/output interface of the computer device is used to exchange information between the processor and the external device. The communication interface of the computer device is used for communicating with an external terminal through a network connection. The computer program is executed by a processor to implement a method of image reconstruction.
It will be appreciated by those skilled in the art that the structure shown in FIG. 8 is merely a block diagram of some of the structures associated with the present inventive arrangements and is not limiting of the computer device to which the present inventive arrangements may be applied, and that a particular computer device may include more or fewer components than shown, or may combine some of the components, or have a different arrangement of components.
In one exemplary embodiment, a computer device is provided comprising a memory and a processor, the memory having stored therein a computer program, the processor when executing the computer program performing the steps of: inputting an image to be reconstructed into an image segmentation sub-model and a super-resolution sub-model in an image reconstruction model; the output layer of the image segmentation sub-model is connected with the middle layer of the super-resolution sub-model through the connecting layer of the super-resolution sub-model, and the output resolution of the middle layer of the super-resolution sub-model is the same as that of the output layer of the image segmentation sub-model; processing the image to be reconstructed through the image segmentation sub-model to obtain a segmentation image output by the image segmentation sub-model; and processing the image to be reconstructed and the segmentation image through the super-resolution submodel to obtain a super-resolution image output by the super-resolution submodel, wherein the super-resolution image is used as a reconstruction image corresponding to the image to be reconstructed.
In one embodiment, the processor when executing the computer program further performs the steps of: acquiring a training set; the training set comprises a plurality of training samples, each training sample comprises a sample image, and a first label image and a second label image which correspond to the sample image; the first label image comprises a target area which needs to be segmented in the sample image, and the second label image is a corresponding high-resolution image of the sample image; inputting a training sample into an image reconstruction model to be trained, processing the sample image through an image segmentation sub-model to obtain a first image output by the image segmentation sub-model, and processing the sample image and the first image through a super-resolution sub-model to obtain a second image output by the super-resolution sub-model; acquiring a first loss function according to the first image and the first label image, and acquiring a second loss function according to the second image and the second label image; according to the first loss function and the second loss function, model parameters of the image segmentation sub-model and the super-resolution sub-model are adjusted to finish one iteration training; and carrying out repeated iterative training on the image reconstruction model to be trained according to the training set to obtain the image reconstruction model.
In one embodiment, the processor when executing the computer program further performs the steps of: acquiring an intravascular ultrasound image, determining a non-blood area in the intravascular ultrasound image, and acquiring an intravascular ultrasound image of which the non-blood area is segmented as a segmented sample image; downsampling the intravascular ultrasound image to obtain a sample image, and downsampling the segmented sample image to obtain a first label image; multiplying the intravascular ultrasound image with the corresponding segmented sample image to obtain a second label image; and obtaining a training set according to the sample image, the first label image and the second label image.
In one embodiment, the processor when executing the computer program further performs the steps of: obtaining an initial training set according to the sample image, the first label image and the second label image; and carrying out random flipping, random cropping, random scaling and image enhancement on the images in the initial training set to obtain a training set.
In one embodiment, the processor when executing the computer program further performs the steps of: summing the first loss function and the second loss function to obtain a third loss function; and according to the third loss function, adjusting model parameters of the image segmentation sub-model and the super-resolution sub-model.
In one embodiment, the processor when executing the computer program further performs the steps of: carrying out fixed-point processing on the image reconstruction model according to the design requirement of the FPGA chip to obtain an integer fixed-point value; determining a hardware circuit design scheme of the FPGA chip according to a model structure and integer fixed point values of the image reconstruction model; determining a memory model and an interface model according to model parameters of the image reconstruction model; acquiring an FPGA chip design scheme according to the hardware circuit design scheme, the memory model and the interface model; the FPGA chip is used for reconstructing an input image.
In one embodiment, a computer readable storage medium is provided having a computer program stored thereon, which when executed by a processor, performs the steps of: inputting an image to be reconstructed into an image segmentation sub-model and a super-resolution sub-model in an image reconstruction model; the output layer of the image segmentation sub-model is connected with the middle layer of the super-resolution sub-model through the connecting layer of the super-resolution sub-model, and the output resolution of the middle layer of the super-resolution sub-model is the same as that of the output layer of the image segmentation sub-model; processing the image to be reconstructed through the image segmentation sub-model to obtain a segmentation image output by the image segmentation sub-model; and processing the image to be reconstructed and the segmentation image through the super-resolution submodel to obtain a super-resolution image output by the super-resolution submodel, wherein the super-resolution image is used as a reconstruction image corresponding to the image to be reconstructed.
In one embodiment, the computer program when executed by the processor further performs the steps of: acquiring a training set; the training set comprises a plurality of training samples, each training sample comprises a sample image, and a first label image and a second label image which correspond to the sample image; the first label image comprises a target area which needs to be segmented in the sample image, and the second label image is a corresponding high-resolution image of the sample image; inputting a training sample into an image reconstruction model to be trained, processing the sample image through an image segmentation sub-model to obtain a first image output by the image segmentation sub-model, and processing the sample image and the first image through a super-resolution sub-model to obtain a second image output by the super-resolution sub-model; acquiring a first loss function according to the first image and the first label image, and acquiring a second loss function according to the second image and the second label image; according to the first loss function and the second loss function, model parameters of the image segmentation sub-model and the super-resolution sub-model are adjusted to finish one iteration training; and carrying out repeated iterative training on the image reconstruction model to be trained according to the training set to obtain the image reconstruction model.
In one embodiment, the computer program when executed by the processor further performs the steps of: acquiring an intravascular ultrasound image, determining a non-blood area in the intravascular ultrasound image, and acquiring an intravascular ultrasound image of which the non-blood area is segmented as a segmented sample image; downsampling the intravascular ultrasound image to obtain a sample image, and downsampling the segmented sample image to obtain a first label image; multiplying the intravascular ultrasound image with the corresponding segmented sample image to obtain a second label image; and obtaining a training set according to the sample image, the first label image and the second label image.
In one embodiment, the computer program when executed by the processor further performs the steps of: obtaining an initial training set according to the sample image, the first label image and the second label image; and carrying out random flipping, random cropping, random scaling and image enhancement on the images in the initial training set to obtain a training set.
In one embodiment, the computer program when executed by the processor further performs the steps of: summing the first loss function and the second loss function to obtain a third loss function; and according to the third loss function, adjusting model parameters of the image segmentation sub-model and the super-resolution sub-model.
In one embodiment, the computer program when executed by the processor further performs the steps of: carrying out fixed-point processing on the image reconstruction model according to the design requirement of the FPGA chip to obtain an integer fixed-point value; determining a hardware circuit design scheme of the FPGA chip according to a model structure and integer fixed point values of the image reconstruction model; determining a memory model and an interface model according to model parameters of the image reconstruction model; acquiring an FPGA chip design scheme according to the hardware circuit design scheme, the memory model and the interface model; the FPGA chip is used for reconstructing an input image.
In one embodiment, a computer program product is provided comprising a computer program which, when executed by a processor, performs the steps of: inputting an image to be reconstructed into an image segmentation sub-model and a super-resolution sub-model in an image reconstruction model; the output layer of the image segmentation sub-model is connected with the middle layer of the super-resolution sub-model through the connecting layer of the super-resolution sub-model, and the output resolution of the middle layer of the super-resolution sub-model is the same as that of the output layer of the image segmentation sub-model; processing the image to be reconstructed through the image segmentation sub-model to obtain a segmentation image output by the image segmentation sub-model; and processing the image to be reconstructed and the segmentation image through the super-resolution submodel to obtain a super-resolution image output by the super-resolution submodel, wherein the super-resolution image is used as a reconstruction image corresponding to the image to be reconstructed.
In one embodiment, the computer program when executed by the processor further performs the steps of: acquiring a training set; the training set comprises a plurality of training samples, each training sample comprises a sample image, and a first label image and a second label image which correspond to the sample image; the first label image comprises a target area which needs to be segmented in the sample image, and the second label image is a corresponding high-resolution image of the sample image; inputting a training sample into an image reconstruction model to be trained, processing the sample image through an image segmentation sub-model to obtain a first image output by the image segmentation sub-model, and processing the sample image and the first image through a super-resolution sub-model to obtain a second image output by the super-resolution sub-model; acquiring a first loss function according to the first image and the first label image, and acquiring a second loss function according to the second image and the second label image; according to the first loss function and the second loss function, model parameters of the image segmentation sub-model and the super-resolution sub-model are adjusted to finish one iteration training; and carrying out repeated iterative training on the image reconstruction model to be trained according to the training set to obtain the image reconstruction model.
In one embodiment, the computer program when executed by the processor further performs the steps of: acquiring an intravascular ultrasound image, determining a non-blood area in the intravascular ultrasound image, and acquiring an intravascular ultrasound image of which the non-blood area is segmented as a segmented sample image; downsampling the intravascular ultrasound image to obtain a sample image, and downsampling the segmented sample image to obtain a first label image; multiplying the intravascular ultrasound image with the corresponding segmented sample image to obtain a second label image; and obtaining a training set according to the sample image, the first label image and the second label image.
In one embodiment, the computer program when executed by the processor further performs the steps of: obtaining an initial training set according to the sample image, the first label image and the second label image; and performing random flipping, random cropping, random scaling and image enhancement on the images in the initial training set to obtain the training set.
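A sketch of the joint augmentation, restricted to random flips for brevity (random cropping and scaling of paired low-/high-resolution images must additionally keep crop coordinates consistent across resolutions, which is omitted here):

```python
import numpy as np

def augment(sample, first_label, second_label, rng):
    # Apply the same random flip decision to all three images so the labels
    # stay aligned with the sample image. Flips are resolution-independent,
    # so the high-resolution second label may differ in size from the sample.
    if rng.random() < 0.5:  # horizontal flip
        sample, first_label, second_label = (
            sample[:, ::-1], first_label[:, ::-1], second_label[:, ::-1])
    if rng.random() < 0.5:  # vertical flip
        sample, first_label, second_label = (
            sample[::-1, :], first_label[::-1, :], second_label[::-1, :])
    return sample, first_label, second_label
```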
In one embodiment, the computer program when executed by the processor further performs the steps of: summing the first loss function and the second loss function to obtain a third loss function; and according to the third loss function, adjusting model parameters of the image segmentation sub-model and the super-resolution sub-model.
In one embodiment, the computer program when executed by the processor further performs the steps of: performing fixed-point processing on the image reconstruction model according to the design requirements of the FPGA chip to obtain integer fixed-point values; determining a hardware circuit design scheme of the FPGA chip according to the model structure of the image reconstruction model and the integer fixed-point values; determining a memory model and an interface model according to the model parameters of the image reconstruction model; and obtaining an FPGA chip design scheme according to the hardware circuit design scheme, the memory model and the interface model; the FPGA chip is used for performing image reconstruction on an input image.
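The fixed-point step can be illustrated as follows. The 8-bit word width and 6 fractional bits are hypothetical, as the patent leaves the bit widths to the design requirements of the FPGA chip:

```python
import numpy as np

def quantize_fixed_point(weights, total_bits=8, frac_bits=6):
    # Convert floating-point model parameters to signed integer fixed-point
    # values for the FPGA, and return the dequantized floats alongside so the
    # quantization error can be inspected. Bit widths are illustrative only.
    scale = 1 << frac_bits
    lo = -(1 << (total_bits - 1))        # e.g. -128 for 8-bit
    hi = (1 << (total_bits - 1)) - 1     # e.g. +127 for 8-bit
    q = np.clip(np.round(weights * scale), lo, hi).astype(np.int32)
    return q, q.astype(np.float64) / scale
```

The integer values feed the hardware circuit design, while the dequantized copy lets the designer check that accuracy loss stays acceptable before committing to a bit width.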
It should be noted that the user information (including but not limited to user equipment information, user personal information, etc.) and data (including but not limited to data for analysis, stored data, displayed data, etc.) involved in the present application are information and data authorized by the user or fully authorized by all parties, and the collection, use and processing of the related data must comply with relevant regulations.
Those skilled in the art will appreciate that implementing all or part of the above-described methods may be accomplished by way of a computer program stored on a non-transitory computer-readable storage medium, which, when executed, may comprise the steps of the embodiments of the methods described above. Any reference to memory, database, or other medium used in the embodiments provided herein may include at least one of non-volatile and volatile memory. The non-volatile memory may include Read-Only Memory (ROM), magnetic tape, floppy disk, flash memory, optical memory, high-density embedded non-volatile memory, Resistive Random Access Memory (ReRAM), Magnetoresistive Random Access Memory (MRAM), Ferroelectric Random Access Memory (FRAM), Phase Change Memory (PCM), graphene memory, and the like. Volatile memory can include Random Access Memory (RAM) or external cache memory, and the like. By way of illustration, and not limitation, RAM can take various forms such as Static Random Access Memory (SRAM) or Dynamic Random Access Memory (DRAM). The databases referred to in the embodiments provided herein may include at least one of a relational database and a non-relational database. The non-relational database may include, but is not limited to, a blockchain-based distributed database, and the like. The processor referred to in the embodiments provided in the present application may be a general-purpose processor, a central processing unit, a graphics processor, a digital signal processor, a programmable logic unit, a data processing logic unit based on quantum computing, or the like, but is not limited thereto.
The technical features of the above embodiments may be combined arbitrarily. For brevity of description, not all possible combinations of the technical features in the above embodiments are described; however, as long as there is no contradiction in a combination of technical features, it should be considered to fall within the scope of this specification.
The foregoing examples illustrate only a few embodiments of the application, which are described in detail, but are not thereby to be construed as limiting the scope of the application. It should be noted that several variations and modifications can be made by those skilled in the art without departing from the spirit of the application, all of which fall within the scope of the application. Accordingly, the protection scope of the application shall be subject to the appended claims.

Claims (10)

1. A method of image reconstruction, the method comprising:
Inputting an image to be reconstructed into an image segmentation sub-model and a super-resolution sub-model in an image reconstruction model; the output layer of the image segmentation sub-model is connected with the middle layer of the super-resolution sub-model through the connecting layer of the super-resolution sub-model, and the output resolution of the middle layer of the super-resolution sub-model is the same as that of the output layer of the image segmentation sub-model;
Processing an image to be reconstructed through the image segmentation sub-model to obtain a segmentation image output by the image segmentation sub-model;
and processing the image to be reconstructed and the segmentation image through the super-resolution submodel to obtain a super-resolution image output by the super-resolution submodel as a reconstruction image corresponding to the image to be reconstructed.
2. The method according to claim 1, wherein the image reconstruction model is obtained by a method comprising:
Acquiring a training set; the training set comprises a plurality of training samples, each training sample comprises a sample image, and a first label image and a second label image corresponding to the sample image; the first label image comprises a target area which needs to be segmented in the sample image, and the second label image is a high-resolution image corresponding to the sample image;
and executing at least one training operation on the image reconstruction model to be trained based on the training set to obtain the image reconstruction model.
3. The method of claim 2, wherein the acquiring the training set comprises:
acquiring an intravascular ultrasound image, determining a non-blood area in the intravascular ultrasound image, and acquiring the intravascular ultrasound image in which the non-blood area has been segmented as a segmented sample image;
Downsampling the intravascular ultrasound image to obtain the sample image, and downsampling the segmented sample image to obtain the first label image;
Multiplying the intravascular ultrasound image by the corresponding segmented sample image to obtain the second label image;
and obtaining the training set according to the sample image, the first label image and the second label image.
4. The method of claim 3, wherein the deriving the training set from the sample image, the first label image, and the second label image comprises:
obtaining an initial training set according to the sample image, the first label image and the second label image; preprocessing the images in the initial training set, and obtaining the training set according to the preprocessed sample image, first label image and second label image; the preprocessing includes at least one of random flipping, random cropping, random scaling, and image enhancement.
5. The method of claim 2, wherein performing a training operation on the image reconstruction model to be trained based on the training set comprises:
Inputting any training sample into the image reconstruction model to be trained, processing a sample image through the image segmentation sub-model to obtain a first image output by the image segmentation sub-model, and processing the sample image and the first image through the super-resolution sub-model to obtain a second image output by the super-resolution sub-model;
Acquiring a first loss function according to the first image and the first label image, and acquiring a second loss function according to the second image and the second label image;
summing the first loss function and the second loss function to obtain a third loss function;
And according to the third loss function, adjusting model parameters of the image segmentation sub-model and the super-resolution sub-model to complete one training operation.
6. The method according to claim 1, wherein the method further comprises:
carrying out fixed-point processing on the image reconstruction model according to the design requirement of the FPGA chip to obtain an integer fixed-point value;
determining a hardware circuit design scheme of the FPGA chip according to the model structure of the image reconstruction model and the integer fixed point value;
Determining a memory model and an interface model according to model parameters of the image reconstruction model;
acquiring an FPGA chip design scheme according to the hardware circuit design scheme, the memory model and the interface model; the FPGA chip is used for performing image reconstruction on an input image.
7. An image reconstruction apparatus, the apparatus comprising:
The image input module is used for inputting the image to be reconstructed into an image segmentation sub-model and a super-resolution sub-model in the image reconstruction model; the output layer of the image segmentation sub-model is connected with the middle layer of the super-resolution sub-model through the connecting layer of the super-resolution sub-model, and the output resolution of the middle layer of the super-resolution sub-model is the same as that of the output layer of the image segmentation sub-model;
The first processing module is used for processing the image to be reconstructed through the image segmentation sub-model to obtain a segmented image output by the image segmentation sub-model;
The second processing module is used for processing the image to be reconstructed and the segmentation image through the super-resolution submodel to obtain a super-resolution image output by the super-resolution submodel, and the super-resolution image is used as a reconstruction image corresponding to the image to be reconstructed.
8. A computer device comprising a memory and a processor, the memory storing a computer program, characterized in that the processor implements the steps of the method of any of claims 1 to 6 when the computer program is executed.
9. A computer readable storage medium, on which a computer program is stored, characterized in that the computer program, when being executed by a processor, implements the steps of the method of any of claims 1 to 6.
10. A computer program product comprising a computer program, characterized in that the computer program, when being executed by a processor, implements the steps of the method of any of claims 1 to 6.
CN202410351059.9A 2024-03-25 2024-03-25 Image reconstruction method, image reconstruction device, computer equipment and storage medium Pending CN118052715A (en)

Priority Applications (1)

Application Number Priority Date Filing Date Title
CN202410351059.9A CN118052715A (en) 2024-03-25 2024-03-25 Image reconstruction method, image reconstruction device, computer equipment and storage medium


Publications (1)

Publication Number Publication Date
CN118052715A true CN118052715A (en) 2024-05-17

Family

ID=91050372




Legal Events

Date Code Title Description
PB01 Publication
SE01 Entry into force of request for substantive examination