CN113888404A - Remote reconstruction method of organ image model - Google Patents

Remote reconstruction method of organ image model

Info

Publication number
CN113888404A
Authority
CN
China
Prior art keywords
resolution
image
tile
pixel
convolution
Prior art date
Legal status (The legal status is an assumption and is not a legal conclusion. Google has not performed a legal analysis and makes no representation as to the accuracy of the status listed.)
Pending
Application number
CN202010623742.5A
Other languages
Chinese (zh)
Inventor
张丹
石钊
Current Assignee (The listed assignees may be inaccurate. Google has not performed a legal analysis and makes no representation or warranty as to the accuracy of the list.)
Chengdu Zhixueyi Digital Technology Co ltd
Original Assignee
Chengdu Zhixueyi Digital Technology Co ltd
Priority date (The priority date is an assumption and is not a legal conclusion. Google has not performed a legal analysis and makes no representation as to the accuracy of the date listed.)
Filing date
Publication date
Application filed by Chengdu Zhixueyi Digital Technology Co., Ltd.
Priority to CN202010623742.5A
Publication of CN113888404A
Legal status: Pending

Classifications

    • G PHYSICS
    • G06 COMPUTING; CALCULATING OR COUNTING
    • G06T IMAGE DATA PROCESSING OR GENERATION, IN GENERAL
    • G06T3/00 Geometric image transformation in the plane of the image
    • G06T3/40 Scaling the whole image or part thereof
    • G06T3/4053 Super resolution, i.e. output image resolution higher than sensor resolution
    • G PHYSICS
    • G06 COMPUTING; CALCULATING OR COUNTING
    • G06N COMPUTING ARRANGEMENTS BASED ON SPECIFIC COMPUTATIONAL MODELS
    • G06N3/00 Computing arrangements based on biological models
    • G06N3/02 Neural networks
    • G06N3/04 Architecture, e.g. interconnection topology
    • G06N3/045 Combinations of networks
    • G PHYSICS
    • G06 COMPUTING; CALCULATING OR COUNTING
    • G06N COMPUTING ARRANGEMENTS BASED ON SPECIFIC COMPUTATIONAL MODELS
    • G06N3/00 Computing arrangements based on biological models
    • G06N3/02 Neural networks
    • G06N3/08 Learning methods
    • G06N3/084 Backpropagation, e.g. using gradient descent
    • G PHYSICS
    • G06 COMPUTING; CALCULATING OR COUNTING
    • G06T IMAGE DATA PROCESSING OR GENERATION, IN GENERAL
    • G06T3/00 Geometric image transformation in the plane of the image
    • G06T3/40 Scaling the whole image or part thereof
    • G06T3/4007 Interpolation-based scaling, e.g. bilinear interpolation
    • G PHYSICS
    • G06 COMPUTING; CALCULATING OR COUNTING
    • G06T IMAGE DATA PROCESSING OR GENERATION, IN GENERAL
    • G06T7/00 Image analysis
    • G06T7/50 Depth or shape recovery
    • G06T7/55 Depth or shape recovery from multiple images
    • G PHYSICS
    • G06 COMPUTING; CALCULATING OR COUNTING
    • G06T IMAGE DATA PROCESSING OR GENERATION, IN GENERAL
    • G06T2207/00 Indexing scheme for image analysis or image enhancement
    • G06T2207/20 Special algorithmic details
    • G06T2207/20081 Training; Learning
    • G PHYSICS
    • G06 COMPUTING; CALCULATING OR COUNTING
    • G06T IMAGE DATA PROCESSING OR GENERATION, IN GENERAL
    • G06T2207/00 Indexing scheme for image analysis or image enhancement
    • G06T2207/20 Special algorithmic details
    • G06T2207/20084 Artificial neural networks [ANN]
    • G PHYSICS
    • G06 COMPUTING; CALCULATING OR COUNTING
    • G06T IMAGE DATA PROCESSING OR GENERATION, IN GENERAL
    • G06T2207/00 Indexing scheme for image analysis or image enhancement
    • G06T2207/30 Subject of image; Context of image processing
    • G06T2207/30004 Biomedical image processing

Landscapes

  • Engineering & Computer Science (AREA)
  • Physics & Mathematics (AREA)
  • Theoretical Computer Science (AREA)
  • General Physics & Mathematics (AREA)
  • General Health & Medical Sciences (AREA)
  • Molecular Biology (AREA)
  • Biophysics (AREA)
  • Computational Linguistics (AREA)
  • Data Mining & Analysis (AREA)
  • Evolutionary Computation (AREA)
  • Artificial Intelligence (AREA)
  • Biomedical Technology (AREA)
  • Computing Systems (AREA)
  • General Engineering & Computer Science (AREA)
  • Life Sciences & Earth Sciences (AREA)
  • Mathematical Physics (AREA)
  • Software Systems (AREA)
  • Health & Medical Sciences (AREA)
  • Computer Vision & Pattern Recognition (AREA)
  • Image Processing (AREA)

Abstract

The invention provides a remote reconstruction method for an organ image model, comprising the following steps: a medical image scanning device acquires high-resolution image blocks and low-resolution image blocks; a deep learning engine computes a plurality of candidate high-resolution pixel sets and a reference high-resolution pixel set; the deep learning engine generates post-processing residual values; a pixel summation unit performs a pixel summation operation on the post-processing residual values and the low-resolution image block to obtain forward-predicted and backward-predicted high-resolution image blocks; and a bidirectional predictor generates a super-resolution image block from the forward-predicted and backward-predicted high-resolution image blocks. The method effectively improves three-dimensional image quality and computational efficiency at low software and hardware cost, while addressing the image distortion of existing reconstruction methods and the limited bandwidth and storage space of existing remote reconstruction methods.

Description

Remote reconstruction method of organ image model
Technical Field
The invention relates to three-dimensional image reconstruction, in particular to a remote reconstruction method of an organ image model.
Background
High-resolution three-dimensional images offer higher quality and clearer detail, which is critical for application fields such as medical imaging. A medical image scanning system needs to record a model file, compress the images, and transmit them to the cloud; the cloud decompresses the model file and forwards the images to a medical terminal for real-time monitoring by the user. The cloud also stores the model files collected by the image scanning device so that users can retrieve them at any time. However, smoother playback and richer image detail inevitably require a higher resolution and frame rate for the model file, which increases its size and places a heavier burden on cloud storage and caching. If conventional super-resolution reconstruction is used, distortion such as jagged edges appears at image boundaries. In addition, existing reconstruction schemes suffer from low signal-to-noise ratio and cannot provide a suitable image enhancement mechanism for the medical image scanning system. A high-resolution three-dimensional image reconstruction method that balances cloud storage space, cache size, network speed, and image quality is therefore needed.
Disclosure of Invention
To solve the problems in the prior art, the invention provides a remote reconstruction method for an organ image model, comprising the following steps:
acquiring, by a medical image scanning device, a plurality of frames of high-resolution image blocks and a plurality of frames of low-resolution image blocks;
performing, by a deep learning engine, multiple convolution operations on the high-resolution image block to compute a plurality of candidate high-resolution pixel sets, and performing at least one further convolution operation on each candidate high-resolution pixel set to generate a reference high-resolution pixel set;
generating a post-processing residual value after the deep learning engine performs at least one convolution operation on the reference high-resolution pixel set and the low-resolution image block;
performing, by a pixel summation unit, a pixel summation operation on the post-processing residual value and the low-resolution image block to compute a forward-predicted high-resolution image block and a backward-predicted high-resolution image block with the same timestamp; and
after a bidirectional predictor receives the forward-predicted high-resolution image block and the backward-predicted high-resolution image block, performing at least one convolution operation on them to generate a super-resolution image block;
wherein the backward-predicted high-resolution image block is generated after one frame of the high-resolution image block and one frame of the low-resolution image block with a given timestamp sequentially undergo the steps of generating the candidate and reference pixel sets, generating the post-processing residual value, and generating the predicted high-resolution image block.
Preferably, the deep learning engine has an input layer, an embedding layer, and an output layer; the input layer receives the image blocks or residual values on which the convolution operation is to be performed, the embedding layer stores a plurality of parameters that determine the convolution kernels used by the convolution operation, and the output layer outputs the result of the convolution operation.
Preferably, according to the plurality of parameters, the input layer may use different convolution kernels to perform multiple convolution operations on the high-resolution image block to compute the plurality of candidate high-resolution pixel sets, and may then apply the smallest convolution kernel to each candidate high-resolution pixel set simultaneously to compute the reference high-resolution pixel set.
Preferably, according to the plurality of parameters, the input layer may use the convolution kernel to perform the convolution operation on the reference high-resolution pixel set and the low-resolution image block simultaneously to compute an image superposition residual value.
Compared with the prior art, the invention has the following advantages:
the invention provides a remote reconstruction method of an organ image model, which effectively improves the three-dimensional image quality and the operation efficiency with lower software and hardware cost and simultaneously solves the problems of image distortion of the existing reconstruction method and the limited bandwidth and the limited storage space of the existing remote reconstruction method.
Drawings
Fig. 1 is a flowchart of a method for remote reconstruction of an organ image model according to an embodiment of the present invention.
Detailed Description
A detailed description of one or more embodiments of the invention is provided below along with accompanying figures that illustrate the principles of the invention. The invention is described in connection with such embodiments, but the invention is not limited to any embodiment. The scope of the invention is limited only by the claims and the invention encompasses numerous alternatives, modifications and equivalents. Numerous specific details are set forth in the following description in order to provide a thorough understanding of the invention. These details are provided for the purpose of example and the invention may be practiced according to the claims without some or all of these specific details.
Aspects of the invention provide a method for remote reconstruction of an organ image model. The method is implemented by a medical image scanning and reconstruction system comprising a multi-layer cache module, a deep learning engine, a rendering reconstruction engine, and a bidirectional predictor. The multi-layer cache module receives the lower-frame-rate high-resolution image blocks and the higher-frame-rate low-resolution image blocks acquired by an image scanning device; the deep learning engine, the rendering reconstruction engine, and the bidirectional predictor then perform super-resolution reconstruction model rendering to compute super-resolution image blocks, which are transmitted in sequence to a medical terminal so that the user can view the three-dimensional image.
Fig. 1 is a flowchart of a method for remote reconstruction of an organ image model according to an embodiment of the present invention. The medical image scanning and reconstruction system is communicatively connected over a network to both the image scanning device and the medical terminal. The image scanning device acquires a high-resolution model file and a low-resolution model file and transmits them to the medical image scanning and reconstruction system for caching, decoding, and super-resolution reconstruction; the frame rate of the acquired high-resolution model file may be lower than or equal to that of the low-resolution model file. After the system completes super-resolution reconstruction model rendering, it generates super-resolution image blocks with a high frame rate and high resolution; after caching and encoding, these blocks are transmitted in sequence to the medical terminal so that the user can view the three-dimensional image at super-resolution quality.
After the medical image scanning and reconstruction system receives the lower-frame-rate high-resolution image blocks and higher-frame-rate low-resolution image blocks acquired by the image scanning device, the number of image blocks acquired is determined by the sampling frame count T. If T = 2, the image scanning device acquires 1 frame of high-resolution image block and 1 frame of low-resolution image block; if T = 5, it acquires 1 frame of high-resolution image block and 4 frames of low-resolution image blocks. The value of T therefore determines how many image blocks the image scanning device acquires consecutively, and the acquired blocks always include at least 1 frame of high-resolution image block. In the preferred embodiment, the remaining blocks are low-resolution image blocks, which alleviates the limited storage space and transmission bandwidth of the image scanning device. When acquisition is complete, the image blocks are passed to an image compression module, which reduces the model file size by lowering the bit rate, and then to a first transmission module. The first transmission module splits the compressed image blocks into a plurality of packets and transmits them to the medical image scanning and reconstruction system.
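A minimal sketch of the sampling schedule described above (1 high-resolution frame plus T - 1 low-resolution frames per group); the function and field names are illustrative assumptions, not part of the patent:

    from dataclasses import dataclass

    @dataclass
    class SamplingPlan:
        high_res_frames: int   # frames acquired at full resolution
        low_res_frames: int    # remaining frames acquired at low resolution

    def sampling_plan(T: int) -> SamplingPlan:
        """For a sampling frame count T, acquire 1 high-resolution frame and
        T - 1 low-resolution frames, as described in the embodiment."""
        if T < 2:
            raise ValueError("T must be at least 2 so that both kinds of frame are acquired")
        return SamplingPlan(high_res_frames=1, low_res_frames=T - 1)

    # Example: T = 5 gives 1 high-resolution frame and 4 low-resolution frames.
    print(sampling_plan(5))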
After the medical image scanning and reconstruction system receives the packets transmitted by the first transmission module, the packets are first stored temporarily in a model cache. The model cache is communicatively connected to an image decoder, which decompresses the packets and decodes the image blocks; the image decoder then stores the decoded blocks temporarily in the multi-layer cache module, which is in turn connected to a storage module that keeps decoded three-dimensional images that do not need long-term storage. In a preferred embodiment, the model cache may be connected directly to the storage module to save storage space. Once the multi-layer cache module has received the decoded image blocks, it passes them to the deep learning engine, which performs super-resolution reconstruction model rendering on the image blocks acquired by the image scanning device. Before rendering, the rendering reconstruction engine enlarges each low-resolution image block to the same size as the high-resolution image block using interpolation such as nearest-neighbor or bilinear interpolation (see the sketch below), computes the super-resolution image block, and stores it temporarily in the multi-layer cache module. The super-resolution image block is then encoded by the image encoder, stored temporarily in the model cache, and finally transmitted by the model cache to the medical terminal.
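A minimal NumPy sketch of the interpolation step, assuming single-channel image blocks stored as 2-D arrays; the helper names are illustrative, and a production system would more likely use an optimized library routine:

    import numpy as np

    def upscale_nearest(lr: np.ndarray, hr_shape: tuple) -> np.ndarray:
        """Enlarge a low-resolution block to the high-resolution block size
        by nearest-neighbor interpolation."""
        H, W = hr_shape
        rows = (np.arange(H) * lr.shape[0] / H).astype(int)
        cols = (np.arange(W) * lr.shape[1] / W).astype(int)
        return lr[rows][:, cols]

    def upscale_bilinear(lr: np.ndarray, hr_shape: tuple) -> np.ndarray:
        """Enlarge a low-resolution block by separable bilinear interpolation."""
        H, W = hr_shape
        y = np.linspace(0, lr.shape[0] - 1, H)
        x = np.linspace(0, lr.shape[1] - 1, W)
        y0, x0 = np.floor(y).astype(int), np.floor(x).astype(int)
        y1 = np.minimum(y0 + 1, lr.shape[0] - 1)
        x1 = np.minimum(x0 + 1, lr.shape[1] - 1)
        wy, wx = (y - y0)[:, None], (x - x0)[None, :]
        top = lr[y0][:, x0] * (1 - wx) + lr[y0][:, x1] * wx        # blend along columns
        bottom = lr[y1][:, x0] * (1 - wx) + lr[y1][:, x1] * wx
        return top * (1 - wy) + bottom * wy                        # blend along rows

    lr_block = np.random.rand(64, 64)
    hr_sized = upscale_bilinear(lr_block, (256, 256))  # now matches the high-resolution block size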
The deep learning engine comprises an input layer, an embedding layer, and an output layer. The input layer receives the image blocks or residual values on which the convolution operation is to be performed; the embedding layer stores a plurality of parameters that determine the pixel filtering unit (convolution kernel) used by the deep learning engine; and the output layer outputs the result of the convolution operation. The offset calculation unit of the rendering reconstruction engine performs pixel offset calculation to reduce the offset between consecutive image blocks and screens out the pixel set with the smallest pixel offset from the low-resolution image block. The pixel summation unit of the rendering reconstruction engine performs the pixel summation operation to generate the image blocks received by the bidirectional predictor. The bidirectional predictor receives the forward-predicted and backward-predicted high-resolution image blocks with the same timestamp so that, after at least one further convolution operation by the deep learning engine, the super-resolution image block can be computed.
The plurality of parameters stored by the embedding layer of the deep learning engine may specify the following. In the forward prediction mode, with a sampling frame count T = 3, the input layer may perform multiple convolution operations on the high-resolution image block using different convolution kernels, so that from the high-resolution image block of the T-th frame it computes multiple candidate high-resolution pixel sets for the (T+1)-th frame; then, to reduce the number of candidates in each candidate high-resolution pixel set, the input layer may perform at least one convolution operation on each candidate high-resolution pixel set of the (T+1)-th frame with a smaller convolution kernel to generate a reference high-resolution pixel set. In a preferred embodiment, among the pixel sets obtained after the input layer convolves each candidate high-resolution pixel set, the pixel set with the smallest pixel offset from the low-resolution image block is selected as the reference high-resolution pixel set. In the backward prediction mode, the input layer likewise performs multiple convolution operations on the high-resolution image block using different convolution kernels to compute multiple candidate high-resolution pixel sets, performs at least one convolution operation on these candidate sets with a smaller kernel to generate a reference high-resolution pixel set, and again selects, among the convolved candidate sets, the pixel set with the smallest pixel offset from the low-resolution image block as the reference high-resolution pixel set. A sketch of this candidate-generation and selection step follows.
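A minimal sketch of candidate generation and reference-set selection, assuming single-channel blocks and untrained example kernels; the kernels, the mean-absolute-difference pixel-offset measure, and the helper names are illustrative assumptions rather than the patent's trained parameters:

    import numpy as np
    from scipy.signal import convolve2d

    def candidate_sets(hr_block: np.ndarray, kernels: list) -> list:
        """Convolve the high-resolution block with several different kernels
        to obtain candidate high-resolution pixel sets."""
        return [convolve2d(hr_block, k, mode="same", boundary="symm") for k in kernels]

    def reference_set(candidates: list, small_kernel: np.ndarray,
                      lr_upscaled: np.ndarray) -> np.ndarray:
        """Refine each candidate with a smaller kernel, then keep the one with the
        smallest pixel offset (here: mean absolute difference) from the upscaled
        low-resolution block."""
        refined = [convolve2d(c, small_kernel, mode="same", boundary="symm") for c in candidates]
        offsets = [np.mean(np.abs(r - lr_upscaled)) for r in refined]
        return refined[int(np.argmin(offsets))]

    hr = np.random.rand(256, 256)
    lr_up = np.random.rand(256, 256)                              # low-resolution block already enlarged to HR size
    big_kernels = [np.random.rand(5, 5) / 25 for _ in range(4)]   # stand-ins for learned 5x5 kernels
    small = np.full((3, 3), 1 / 9)                                # stand-in for a smaller learned kernel
    ref = reference_set(candidate_sets(hr, big_kernels), small, lr_up)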
In a preferred embodiment, after the pixel set with the smallest pixel offset from the low-resolution image block has been selected from the pixel sets produced by convolving each candidate high-resolution pixel set, the deep learning engine performs a convolution operation on the reference high-resolution pixel set and the low-resolution image block simultaneously to generate an image superposition residual value. To adjust the image quality of this residual, the deep learning engine then convolves the image superposition residual value with the same or a different convolution kernel to generate a first post-processing residual value, convolves that to generate a second post-processing residual value, and so on; each post-processing step filters out unnecessary image information, while the learning mechanism of deep learning adds acquired image detail. The deep learning engine finally takes the last post-processing residual values as the forward post-processing residual value and the backward post-processing residual value and passes them to the rendering reconstruction engine.
Optionally, the deep learning engine may also use the image superposition residual value directly as the final post-processing residual value and pass it to the rendering reconstruction engine. After the rendering reconstruction engine receives the forward and backward post-processing residual values, it performs, in the forward prediction mode, a pixel summation operation on the forward post-processing residual value and the low-resolution image block to generate a forward-predicted high-resolution image block; in the backward prediction mode, it performs a pixel summation operation on the backward post-processing residual value and the low-resolution image block to generate a backward-predicted high-resolution image block. The forward-predicted and backward-predicted high-resolution image blocks with the same timestamp are then passed to the bidirectional predictor, which continues the step of generating the super-resolution image block, i.e. they are submitted to the deep learning engine for at least one further convolution operation to generate the super-resolution image block.
When generating the post-processing residual values in the forward prediction mode, after the deep learning engine obtains the first and second post-processing residual values, the output layer takes the final post-processing residual value as the forward post-processing residual value and submits it to the pixel summation unit, which performs a pixel summation operation on the forward post-processing residual value and the low-resolution image block to generate the forward-predicted high-resolution image block. Conversely, in the backward prediction mode, to adjust the image detail of the image superposition residual value, the input layer may convolve the image superposition residual value again with the same or a different pixel filtering unit, compute the first and second post-processing residual values by the same process, and so on; the final post-processing residual value is taken as the backward post-processing residual value, and a pixel summation operation on it and the low-resolution image block generates the backward-predicted high-resolution image block.
In both the forward and backward prediction modes, the input layer may convolve the second post-processing residual value any number of further times to generate a third, …, or N-th post-processing residual value, and the N-th post-processing residual value is used as the forward or backward post-processing residual value. If the input layer convolves with the same convolution kernel each time, more unnecessary image noise is filtered out and the peak signal-to-noise ratio increases.
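For reference, peak signal-to-noise ratio can be computed as below; this is the standard definition, shown here as an illustration rather than text from the patent:

    import numpy as np

    def psnr(reference: np.ndarray, reconstructed: np.ndarray, peak: float = 1.0) -> float:
        """Peak signal-to-noise ratio in dB for images scaled to [0, peak]."""
        mse = np.mean((reference.astype(np.float64) - reconstructed.astype(np.float64)) ** 2)
        if mse == 0:
            return float("inf")   # identical images
        return 10.0 * np.log10(peak ** 2 / mse)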
Finally, the pixel summation unit transmits the forward-predicted and backward-predicted high-resolution image blocks with the same timestamp to the bidirectional predictor to continue generating the super-resolution image block; a sketch of the residual-and-summation flow follows.
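A minimal sketch of the residual refinement, pixel summation, and bidirectional fusion described above, continuing the helpers from the earlier sketches; the kernels and the simple averaging used inside the bidirectional step are illustrative assumptions:

    import numpy as np
    from scipy.signal import convolve2d

    def post_process_residual(ref_set: np.ndarray, lr_upscaled: np.ndarray,
                              kernel: np.ndarray, n_steps: int = 2) -> np.ndarray:
        """Superpose the reference set and the low-resolution block, then refine the
        residual with n_steps further convolutions (same kernel each time)."""
        residual = convolve2d(ref_set - lr_upscaled, kernel, mode="same", boundary="symm")
        for _ in range(n_steps):                      # first, second, ..., N-th post-processing residual
            residual = convolve2d(residual, kernel, mode="same", boundary="symm")
        return residual

    def predicted_hr(residual: np.ndarray, lr_upscaled: np.ndarray) -> np.ndarray:
        """Pixel summation: add the post-processing residual back onto the block."""
        return lr_upscaled + residual

    def bidirectional_fuse(forward_hr: np.ndarray, backward_hr: np.ndarray,
                           kernel: np.ndarray) -> np.ndarray:
        """Fuse the forward- and backward-predicted blocks of the same timestamp and
        apply one more convolution to produce the super-resolution block."""
        fused = 0.5 * (forward_hr + backward_hr)      # simple averaging as a stand-in
        return convolve2d(fused, kernel, mode="same", boundary="symm")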
If the medical terminal needs to view a non-real-time three-dimensional image, the multi-layer cache module first collects the decoded high-resolution and low-resolution image blocks from the storage module and passes them to the deep learning engine, or collects undecoded packets from the storage module, has them decoded by the image decoder, and then passes them to the deep learning engine, the rendering reconstruction engine, and the bidirectional predictor to compute the super-resolution image blocks. The super-resolution image blocks are then cached in the multi-layer cache module, encoded by the image encoder, and stored in the model cache. After the caching and encoding steps, the second transmission module transmits each super-resolution image block to the medical terminal so that the non-real-time three-dimensional image can be viewed there.
Throughout the steps of generating the candidate high-resolution pixel sets, generating the post-processing residual values, and finally generating the super-resolution image block, the convolution operations performed by the deep learning engine recover increasingly accurate image detail as training and learning continue.
To further improve the accuracy of image quantification, the invention uses the product of a volume factor tb, related to the volume of the organ or tissue, and a position attenuation factor ta, related to position, as the model variable t required for reconstruction, and interpolates with the model variable t to perform a moderated (compensated) reconstruction.
The model variable t describes the difference in attenuation factor at different positions, so a relationship between the reconstruction process and the position-dependent attenuation factors can be established for fast reconstruction. A preferred embodiment includes: determining an attenuation factor h(x, y) for each scanning position (x, y) in the scanning plane; determining, from the attenuation factor h(x, y), a position model variable ta corresponding to the attenuation distance of each scanning position (x, y) relative to a reference position in the scanning plane; generating a projected image g(x, y) with at least one light source on the scanning plane, where each pixel of the projected image corresponds to a scanning position (x, y) in the scanning plane and one pixel corresponds to the reference position; determining the attenuation distance of each pixel from its distance to the pixel at the reference position, and thus the position model variable ta for each pixel; performing a reconstruction operation on each pixel of the projected image g(x, y) according to g(x, y) and the attenuation factor h(x, y) to generate an initial reconstructed image v(x, y); and performing iterative operations on the scanning position (x, y) corresponding to each pixel of g(x, y), forming after k iterations the iteratively reconstructed image v^{k+1}(x, y), i.e.:
(the iterative update formula for v^{k+1}(x, y) in terms of g(x, y), h(x, y), and the model variable t is given as a formula image, BDA0002563976100000081, in the original publication)
where t is the product of the position model variable ta and the volume factor tb.
The attenuation factor may be the full width at half maximum of the resolution as a function of the distance of each scanning position from the reference position, with the reference position as the center. The reference position is the center of the scanning plane, and the attenuation distance is the radial distance of the scanning position (x, y) from the reference position in the scanning plane. Each pixel v^{k+1}(x, y) corresponds to a different attenuation factor during reconstruction.
For convenience of calculation, the attenuation factor of each pixel of the reconstructed image is taken to be the attenuation factor of the reference position. To compensate moderately each reconstructed pixel v^{k+1}(x, y) obtained after multiple iterations, the model variable t is introduced. Since the model variable t is the product of ta and the volume factor tb, determining the volume factor tb first requires the attenuation factor h(x, y), which relates the distance of each scanning position (x, y) from the center of the scanning plane to the full width at half maximum of the resolution of the medical image scanning device. When the attenuation distance exceeds a specific multiple of the full width at half maximum of the system resolution, the volume factor tb is given a value that decreases from 2 to 1; within that multiple of the full width at half maximum, the volume factor tb is 1.
Following the above example, the iteratively reconstructed image v^{k+1}(x, y) is the optimized reconstructed image obtained with the model of the invention. The number of iterations k can be chosen according to requirements and the state of the image reconstruction. The product of the number of iterations k and the model variable t is constant, so if the image is over-reconstructed after iteration, the excessive effect of the iteration count can be suppressed by reducing the value of the model variable t. In this way, over-reconstruction caused by the iterative operation can be controlled by adjusting the iteration count k and the model variable t.
In another embodiment, the light source volume may be taken into account, i.e. the value of the volume factor tb is determined from it, and the product of the position model variable ta and the volume factor tb is used as the adjusted model variable t for image reconstruction. A small-volume light source then yields a sharp reconstructed image, and even when several light sources of different volumes are used, the compensation provided by the volume factor tb gives a reconstructed image with better resolving power.
If the product makes the model variable t larger than 2, the constant relationship between the model variable t and the iteration count k is used to increase the number of iterations and thereby reduce the value of t, avoiding divergence of the iteration. A sketch of this model-variable computation and iteration control follows.
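A minimal sketch of the model-variable bookkeeping and a relaxed iterative update, under stated assumptions: the exact update formula is given only as an image in the patent, so a Richardson-Lucy-style multiplicative update raised to the power t is used here purely as an illustrative stand-in, and the FWHM threshold shape and the kernel are hypothetical values:

    import numpy as np
    from scipy.signal import convolve2d

    def volume_factor(radial_dist: np.ndarray, fwhm: float, multiple: float = 2.0) -> np.ndarray:
        """tb = 1 within `multiple` x FWHM of the center; beyond that it tapers from 2 down to 1
        (illustrative shape for the piecewise behaviour described in the text)."""
        tb = np.ones_like(radial_dist)
        outside = radial_dist > multiple * fwhm
        tb[outside] = 1.0 + np.exp(-(radial_dist[outside] - multiple * fwhm) / fwhm)
        return tb

    def model_variable(ta: np.ndarray, tb: np.ndarray, k: int):
        """t = ta * tb; the product k * t is kept roughly constant, so if the maximum t
        exceeds 2, the iteration count k is increased and t is scaled down accordingly."""
        t = ta * tb
        t_max = float(np.max(t))
        if t_max > 2.0:
            scale = t_max / 2.0
            k = int(np.ceil(k * scale))   # more iterations ...
            t = t / scale                 # ... with a proportionally smaller model variable
        return t, k

    def iterative_reconstruct(g: np.ndarray, h: np.ndarray, t: np.ndarray, k: int) -> np.ndarray:
        """Illustrative relaxed multiplicative update; NOT the patent's exact formula."""
        v = np.full_like(g, g.mean())           # initial reconstructed image v(x, y)
        for _ in range(k):
            blurred = convolve2d(v, h, mode="same", boundary="symm")
            ratio = g / np.maximum(blurred, 1e-8)
            correction = convolve2d(ratio, h[::-1, ::-1], mode="same", boundary="symm")
            v = v * correction ** t             # model variable t moderates each update
        return v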
In addition, medical images such as MRI are originally large amounts of two-dimensional gray-scale data, and the three-dimensional image model is formed by rendering and reconstructing these two-dimensional images. However, a three-dimensional image reconstructed from two-dimensional images may not fully capture the geometry and structure of the measured organ, and because the amount of input two-dimensional image data is so large, reconstructing the original three-dimensional volume by feeding in every frame of two-dimensional image information one by one is computationally expensive. A further aspect of the invention therefore uses quantifiable symmetry values to establish an optimal mid-axial plane, which helps medical personnel identify and correct skewed three-dimensional images in clinical diagnosis while still providing accurate quantitative symmetry values.
Because different tissues or organs of the human body produce different gray-value ranges after scanning, a threshold range (i.e. a gray-value range for a specific region of the image) is first set after the medical image data and related parameters are obtained. An error model based on gray values is then evaluated to find the optimal parameters for judging the similarity of the mirrored binary medical image; the input parameters are compared iteratively until the optimal parameters are found, and these are substituted into the mid-axial-plane model or the mid-axis model to obtain the optimal mid-axial plane or mid-axis. Restricting the input with the threshold effectively reduces the amount of input data, and comparing the input images with the gray-value error model allows the three-dimensional image to be reconstructed quickly, or the obtained optimal mid-axial plane or mid-axis to be recomputed by interpolation and used to correct images skewed by the scanning instrument. A sketch of this thresholding and symmetry search follows.
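A minimal sketch of the gray-value thresholding and mirrored-similarity search for a single 2-D slice, assuming a vertical candidate mid-axis; the Dice-style similarity score and the exhaustive column search are illustrative assumptions, not the patent's error model:

    import numpy as np

    def binarize(slice_2d: np.ndarray, lo: float, hi: float) -> np.ndarray:
        """Keep only pixels whose gray value lies in the threshold range [lo, hi]."""
        return ((slice_2d >= lo) & (slice_2d <= hi)).astype(np.uint8)

    def mirror_similarity(mask: np.ndarray, axis_col: int) -> float:
        """Dice-style overlap between the left half and the mirrored right half
        about a candidate vertical mid-axis at column axis_col."""
        width = min(axis_col, mask.shape[1] - axis_col)
        if width == 0:
            return 0.0
        left = mask[:, axis_col - width:axis_col]
        right = np.fliplr(mask[:, axis_col:axis_col + width])
        inter = np.logical_and(left, right).sum()
        total = left.sum() + right.sum()
        return 2.0 * inter / total if total else 0.0

    def best_mid_axis(slice_2d: np.ndarray, lo: float, hi: float) -> int:
        """Search candidate columns and return the one with the highest symmetry score."""
        mask = binarize(slice_2d, lo, hi)
        cols = range(1, mask.shape[1] - 1)
        return max(cols, key=lambda c: mirror_similarity(mask, c))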
In summary, the invention provides a remote reconstruction method for an organ image model that effectively improves three-dimensional image quality and computational efficiency at low software and hardware cost, while addressing the image distortion of existing reconstruction methods and the limited bandwidth and storage space of existing remote reconstruction methods.
It will be apparent to those skilled in the art that the modules or steps of the present invention described above may be implemented in a general purpose computing system, centralized on a single computing system, or distributed across a network of computing systems, and optionally implemented in program code that is executable by the computing system, such that the program code is stored in a storage system and executed by the computing system. Thus, the present invention is not limited to any specific combination of hardware and software.
It is to be understood that the above-described embodiments of the present invention are merely illustrative of or explaining the principles of the invention and are not to be construed as limiting the invention. Therefore, any modification, equivalent replacement, improvement and the like made without departing from the spirit and scope of the present invention should be included in the protection scope of the present invention. Further, it is intended that the appended claims cover all such variations and modifications as fall within the scope and boundaries of the appended claims or the equivalents of such scope and boundaries.

Claims (4)

1. A method for remote reconstruction of an organ image model, comprising:
acquiring, by a medical image scanning device, a plurality of frames of high-resolution image blocks and a plurality of frames of low-resolution image blocks;
performing, by a deep learning engine, multiple convolution operations on the high-resolution image block to compute a plurality of candidate high-resolution pixel sets, and performing at least one further convolution operation on each candidate high-resolution pixel set to generate a reference high-resolution pixel set;
generating a post-processing residual value after the deep learning engine performs at least one convolution operation on the reference high-resolution pixel set and the low-resolution image block;
performing, by a pixel summation unit, a pixel summation operation on the post-processing residual value and the low-resolution image block to compute a forward-predicted high-resolution image block and a backward-predicted high-resolution image block with the same timestamp; and
after a bidirectional predictor receives the forward-predicted high-resolution image block and the backward-predicted high-resolution image block, performing at least one convolution operation on them to generate a super-resolution image block;
wherein the backward-predicted high-resolution image block is generated after one frame of the high-resolution image block and one frame of the low-resolution image block with a given timestamp sequentially undergo the steps of generating the candidate and reference pixel sets, generating the post-processing residual value, and generating the predicted high-resolution image block.
2. The method of claim 1, wherein the deep learning engine has an input layer, an embedding layer, and an output layer; the input layer receives the image blocks or residual values on which the convolution operation is to be performed, the embedding layer stores a plurality of parameters that determine the convolution kernels used by the convolution operation, and the output layer outputs the result of the convolution operation.
3. The method of claim 1, wherein, according to the plurality of parameters, the input layer can use different convolution kernels to perform multiple convolution operations on the high-resolution image block to compute the plurality of candidate high-resolution pixel sets, and can apply the smallest convolution kernel to each candidate high-resolution pixel set simultaneously to compute the reference high-resolution pixel set.
4. The method of claim 1, wherein, according to the plurality of parameters, the input layer can use the convolution kernel to perform the convolution operation on the reference high-resolution pixel set and the low-resolution image block simultaneously to compute an image superposition residual value.
CN202010623742.5A 2020-07-01 2020-07-01 Remote reconstruction method of organ image model Pending CN113888404A (en)

Priority Applications (1)

Application Number Priority Date Filing Date Title
CN202010623742.5A CN113888404A (en) 2020-07-01 2020-07-01 Remote reconstruction method of organ image model

Applications Claiming Priority (1)

Application Number Priority Date Filing Date Title
CN202010623742.5A CN113888404A (en) 2020-07-01 2020-07-01 Remote reconstruction method of organ image model

Publications (1)

Publication Number Publication Date
CN113888404A true CN113888404A (en) 2022-01-04

Family

ID=79012750

Family Applications (1)

Application Number Title Priority Date Filing Date
CN202010623742.5A Pending CN113888404A (en) 2020-07-01 2020-07-01 Remote reconstruction method of organ image model

Country Status (1)

Country Link
CN (1) CN113888404A (en)

Similar Documents

Publication Publication Date Title
US11153566B1 (en) Variable bit rate generative compression method based on adversarial learning
Mentzer et al. Conditional probability models for deep image compression
JP4187514B2 (en) Method and apparatus for transmission and display of compressed digitized images
Bairagi et al. ROI-based DICOM image compression for telemedicine
US11412225B2 (en) Method and apparatus for image processing using context-adaptive entropy model
CN113259676B (en) Image compression method and device based on deep learning
US20230300354A1 (en) Method and System for Image Compressing and Coding with Deep Learning
JP2001525622A (en) Image compression method
JP2000299863A (en) Image compressing device
CN112348936B (en) Low-dose cone-beam CT image reconstruction method based on deep learning
EP2618309A1 (en) Methods and devices for pixel-prediction for compression of visual data
CN111641826B (en) Method, device and system for encoding and decoding data
CN112396672B (en) Sparse angle cone-beam CT image reconstruction method based on deep learning
EP3841528A1 (en) Data compression using integer neural networks
CN111797891A (en) Unpaired heterogeneous face image generation method and device based on generation countermeasure network
CN115984117A (en) Variational self-coding image super-resolution method and system based on channel attention
CN103688544B (en) Method for being encoded to digital image sequence
Zebang et al. Densely connected AutoEncoders for image compression
CN112135136B (en) Ultrasonic remote medical treatment sending method and device and receiving method, device and system
Poggi et al. Pruned tree-structured vector quantization of medical images with segmentation and improved prediction
JP2021150955A (en) Training method, image coding method, image decoding method, and device
CN113888404A (en) Remote reconstruction method of organ image model
Perlmutter et al. Medical image compression and vector quantization
CN111815515B (en) Object three-dimensional drawing method based on medical education
CN113949880B (en) Extremely-low-bit-rate man-machine collaborative image coding training method and coding and decoding method

Legal Events

Date Code Title Description
PB01 Publication
SE01 Entry into force of request for substantive examination