CN113660444A - Image generation method and device - Google Patents


Info

Publication number
CN113660444A
CN113660444A
Authority
CN
China
Prior art keywords
image, frequency, low-frequency, sub-image
Prior art date
Legal status
Pending
Application number
CN202111217070.9A
Other languages
Chinese (zh)
Inventor
蔡鑫
崔亚轩
Current Assignee
Hangzhou Taimei Xingcheng Pharmaceutical Technology Co Ltd
Original Assignee
Hangzhou Taimei Xingcheng Pharmaceutical Technology Co Ltd
Priority date
Filing date
Publication date
Application filed by Hangzhou Taimei Xingcheng Pharmaceutical Technology Co Ltd filed Critical Hangzhou Taimei Xingcheng Pharmaceutical Technology Co Ltd
Priority to CN202111217070.9A priority Critical patent/CN113660444A/en
Publication of CN113660444A publication Critical patent/CN113660444A/en
Pending legal-status Critical Current

Links

Images

Classifications

    • HELECTRICITY
    • H04ELECTRIC COMMUNICATION TECHNIQUE
    • H04NPICTORIAL COMMUNICATION, e.g. TELEVISION
    • H04N7/00Television systems
    • H04N7/01Conversion of standards, e.g. involving analogue television standards or digital television standards processed at pixel level
    • H04N7/0127Conversion of standards, e.g. involving analogue television standards or digital television standards processed at pixel level by changing the field or frame frequency of the incoming video signal, e.g. frame rate converter
    • GPHYSICS
    • G16INFORMATION AND COMMUNICATION TECHNOLOGY [ICT] SPECIALLY ADAPTED FOR SPECIFIC APPLICATION FIELDS
    • G16HHEALTHCARE INFORMATICS, i.e. INFORMATION AND COMMUNICATION TECHNOLOGY [ICT] SPECIALLY ADAPTED FOR THE HANDLING OR PROCESSING OF MEDICAL OR HEALTHCARE DATA
    • G16H30/00ICT specially adapted for the handling or processing of medical images
    • G16H30/20ICT specially adapted for the handling or processing of medical images for handling medical images, e.g. DICOM, HL7 or PACS

Abstract

The application provides an image generation method and device. The method comprises: acquiring, from a medical image sequence, a first image located before a target position and a second image located after the target position; performing wavelet decomposition on the first image and the second image to obtain a low-frequency sub-image and a high-frequency sub-image of each; computing a low-frequency interpolation result based on the low-frequency sub-images of the first and second images; computing a high-frequency interpolation result based on the high-frequency sub-images of the first and second images; performing wavelet reconstruction on the low-frequency and high-frequency interpolation results to obtain a target image corresponding to the target position; and inserting the target image into the medical image sequence to obtain an expanded medical image sequence. The method improves the resolution of the medical image sequence along the image arrangement direction while effectively preventing the high-frequency information from being damaged during interpolation, so the image information at the target position is restored with high precision and a more accurate target image is obtained.

Description

Image generation method and device
Technical Field
The present application relates to the field of image processing, and in particular, to an image generation method and apparatus.
Background
By acquiring a series of sectional images of an object along a specific direction, three-dimensional information about the interior of the object (e.g., a human body, an animal, or a building) can be obtained; this is especially common in medical scenarios. For example, in clinical diagnosis, tomographic imaging techniques such as CT and MRI are often used to scan a patient's body layer by layer, yielding a medical image sequence that contains image information from inside the patient.
However, practical constraints make it infeasible to acquire sectional images very densely. For example, when acquiring a CT image sequence of a patient, the number of scan layers must be kept small to avoid excessive damage to the patient's body from the imaging process. As a result, the spatial positions corresponding to two adjacent images in the sequence can be far apart, image information between adjacent images cannot be acquired, and the resolution of the image sequence in that direction is low.
Disclosure of Invention
In order to solve the technical problem, the present application provides an image generation method and apparatus.
In a first aspect, an image generation method is provided, comprising: acquiring, from a medical image sequence, a first image located before a target position and a second image located after the target position; performing wavelet decomposition on the first image and the second image respectively, to obtain a low-frequency sub-image and a high-frequency sub-image corresponding to the first image and a low-frequency sub-image and a high-frequency sub-image corresponding to the second image; computing a low-frequency interpolation result based on the low-frequency sub-image of the first image and the low-frequency sub-image of the second image; computing a high-frequency interpolation result based on the high-frequency sub-image of the first image and the high-frequency sub-image of the second image; performing wavelet reconstruction on the low-frequency interpolation result and the high-frequency interpolation result to obtain a target image corresponding to the target position; and inserting the target image into the medical image sequence according to the target position, to obtain an expanded medical image sequence.
In some embodiments of the first aspect, computing the low-frequency interpolation result based on the low-frequency sub-image of the first image and the low-frequency sub-image of the second image comprises: computing the low-frequency interpolation result through a first interpolation algorithm based on those low-frequency sub-images; and computing the high-frequency interpolation result based on the high-frequency sub-image of the first image and the high-frequency sub-image of the second image comprises: computing the high-frequency interpolation result through a second interpolation algorithm based on those high-frequency sub-images.
Further, in some embodiments, the wavelet reconstructing the low frequency interpolation result and the high frequency interpolation result to obtain the target image corresponding to the target position includes: respectively filtering the low-frequency interpolation result and the high-frequency interpolation result to obtain a low-frequency filtering result and a high-frequency filtering result; and performing wavelet reconstruction on the low-frequency filtering result and the high-frequency filtering result to obtain the target image.
In some embodiments of the first aspect, the calculating a low-frequency interpolation result based on the low-frequency sub-image of the first image and the low-frequency sub-image of the second image includes: respectively performing wavelet decomposition on the low-frequency sub-image of the first image and the low-frequency sub-image of the second image to obtain a secondary low-frequency sub-image and a secondary high-frequency sub-image of each low-frequency sub-image; obtaining a secondary low-frequency interpolation result through a first interpolation algorithm based on a secondary low-frequency sub-image corresponding to the first image and a secondary low-frequency sub-image corresponding to the second image; obtaining a secondary high-frequency interpolation result through a second interpolation algorithm based on a secondary high-frequency sub-image corresponding to the first image and a secondary high-frequency sub-image corresponding to the second image; and performing wavelet reconstruction on the secondary low-frequency interpolation result and the secondary high-frequency interpolation result to obtain the low-frequency interpolation result.
Further, in some embodiments, the performing wavelet reconstruction on the secondary low-frequency interpolation result and the secondary high-frequency interpolation result to obtain the low-frequency interpolation result includes: respectively carrying out filtering processing on the secondary low-frequency interpolation result and the secondary high-frequency interpolation result to obtain a secondary low-frequency filtering result and a secondary high-frequency filtering result; and performing wavelet reconstruction on the secondary low-frequency filtering result and the secondary high-frequency filtering result to obtain the low-frequency interpolation result.
Further, in some embodiments, the wavelet reconstructing the low frequency interpolation result and the high frequency interpolation result to obtain the target image corresponding to the target position includes: filtering the high-frequency interpolation result to obtain a high-frequency filtering result; and performing wavelet reconstruction on the low-frequency interpolation result and the high-frequency filtering result to obtain the target image.
In some embodiments of the first aspect, the number of first images is N and the number of second images is M, where N is equal to or greater than 2 and M is equal to or greater than 2.
Further, in certain embodiments, N = M = 2.
In certain embodiments of the first aspect, the first interpolation algorithm is a three-dimensional interpolation algorithm and the second interpolation algorithm is a two-dimensional interpolation algorithm.
In a second aspect, there is provided an image generating apparatus comprising: an acquisition module for acquiring a first image located before a target position and a second image located after the target position from a medical image sequence; the decomposition module is used for respectively carrying out wavelet decomposition on the first image and the second image to obtain a low-frequency sub-image and a high-frequency sub-image which correspond to the first image and a low-frequency sub-image and a high-frequency sub-image which correspond to the second image; the low-frequency computing module is used for computing to obtain a low-frequency interpolation result based on the low-frequency sub-image of the first image and the low-frequency sub-image of the second image; the high-frequency calculation module is used for calculating to obtain a high-frequency interpolation result based on the high-frequency sub-image of the first image and the high-frequency sub-image of the second image; the reconstruction module is used for performing wavelet reconstruction on the low-frequency interpolation result and the high-frequency interpolation result to obtain a target image corresponding to the target position; and the inserting module is used for inserting the target image into the medical image sequence according to the target position so as to obtain an expanded medical image sequence.
In certain embodiments of the second aspect, the low frequency calculation module is to: calculating to obtain a low-frequency interpolation result through a first interpolation algorithm based on the low-frequency sub-image of the first image and the low-frequency sub-image of the second image; the high frequency calculation module is used for: and calculating to obtain the high-frequency interpolation result through a second interpolation algorithm based on the high-frequency sub-image of the first image and the high-frequency sub-image of the second image.
Further, in some embodiments, the reconstruction module is to: respectively filtering the low-frequency interpolation result and the high-frequency interpolation result to obtain a low-frequency filtering result and a high-frequency filtering result; and performing wavelet reconstruction on the low-frequency filtering result and the high-frequency filtering result to obtain the target image.
In certain embodiments of the second aspect, the low frequency calculation module comprises: the secondary decomposition unit is used for respectively performing wavelet decomposition on the low-frequency sub-image of the first image and the low-frequency sub-image of the second image to obtain a secondary low-frequency sub-image and a secondary high-frequency sub-image of each low-frequency sub-image; the secondary low-frequency computing unit is used for obtaining a secondary low-frequency interpolation result through a first interpolation algorithm based on a secondary low-frequency sub-image corresponding to the first image and a secondary low-frequency sub-image corresponding to the second image; a secondary high-frequency calculating unit, configured to obtain a secondary high-frequency interpolation result through a second interpolation algorithm based on a secondary high-frequency sub-image corresponding to the first image and a secondary high-frequency sub-image corresponding to the second image; and the secondary reconstruction unit is used for performing wavelet reconstruction on the secondary low-frequency interpolation result and the secondary high-frequency interpolation result to obtain the low-frequency interpolation result.
Further, in some embodiments, the secondary reconstruction unit is configured to: respectively carrying out filtering processing on the secondary low-frequency interpolation result and the secondary high-frequency interpolation result to obtain a secondary low-frequency filtering result and a secondary high-frequency filtering result; and performing wavelet reconstruction on the secondary low-frequency filtering result and the secondary high-frequency filtering result to obtain the low-frequency interpolation result.
Further, in some embodiments, the reconstruction module is to: filtering the high-frequency interpolation result to obtain a high-frequency filtering result; and performing wavelet reconstruction on the low-frequency interpolation result and the high-frequency filtering result to obtain the target image.
In certain embodiments of the second aspect, the number of first images is N and the number of second images is M, where N is equal to or greater than 2 and M is equal to or greater than 2.
Further, in certain embodiments, N = M = 2.
In certain embodiments of the second aspect, the first interpolation algorithm is a three-dimensional interpolation algorithm and the second interpolation algorithm is a two-dimensional interpolation algorithm.
In a third aspect, an image generating apparatus is provided, including: a memory for storing computer instructions; a processor for executing the computer instructions stored in the memory, the processor being configured to perform the image generation method provided by the first aspect when the computer instructions stored in the memory are executed.
In a fourth aspect, a computer-readable storage medium is provided, in which a program is stored, which, when executed, implements the image generating method provided by the first aspect.
In a fifth aspect, a computer program product containing instructions is provided, which when run on a computer, causes the computer to perform the image generation method provided by the first aspect.
According to the image generation method and device provided by the embodiments of the application, a target image corresponding to the target position can be obtained through interpolation based on several adjacent images in the actually acquired medical image sequence, thereby improving the resolution of the medical image sequence along the image arrangement direction. At the same time, the method and device take both the high-frequency and low-frequency information in the images into account and effectively prevent the high-frequency information from being damaged during interpolation, so the image information at the target position is restored with high precision and a more accurate target image is obtained.
Drawings
Fig. 1 is a flowchart of an image generation method according to an embodiment of the present application.
Fig. 2 is a flowchart of an image generation method according to another embodiment of the present application.
Fig. 3 is a flowchart of an image generation method according to another embodiment of the present application.
Fig. 4 is a flowchart of an image generation method according to another embodiment of the present application.
Fig. 5 is a flowchart of an image generation method according to another embodiment of the present application.
Fig. 6 is a schematic diagram of performing wavelet decomposition on a two-dimensional image in the embodiment of the present application.
Fig. 7 is a schematic diagram of an implementation process of an image generation method according to an embodiment of the present application.
Fig. 8 is an example of a medical image.
Fig. 9 is an example of a plurality of sub-images generated after the example medical image shown in fig. 8 is subjected to a secondary wavelet decomposition.
Fig. 10 is an example of a target image generated based on an image generation method provided in an embodiment of the present application.
Fig. 11 is a schematic structural diagram of an image generating apparatus according to an embodiment of the present application.
Fig. 12 is a schematic structural diagram of an image generating apparatus according to another embodiment of the present application.
Fig. 13 is a schematic structural diagram of an image generating apparatus according to an embodiment of the present application.
Detailed Description
The technical solution in the present application will be described below with reference to the accompanying drawings.
As described above, when acquiring internal information about a specific object, practical factors such as cost and adverse effects prevent the sectional images from being acquired too densely. The resolution of the acquired image sequence along the acquisition direction is therefore low, and image information at positions that were not actually scanned cannot be read directly from the sequence.
The above technical problem is described below, taking the CT images in a medical image sequence as an example.
In many clinical diagnostic scenarios, a doctor needs to assess the patient's condition from CT images. However, CT imaging exposes the patient to radiation, with a risk of cell damage and even induced cancer. Therefore, to protect the patient's health, the number of scanning layers must be kept as small as possible during imaging. As a result, the resolution of the acquired CT image sequence in the direction perpendicular to the slice plane is low, and the doctor may be unable to assess the patient's condition accurately. This problem also directly affects downstream uses of the CT image sequence; for example, a low-resolution sequence greatly reduces the quality of 3D reconstruction.
For example, in MPR (Multi-Planar Reconstruction) performed on an axial DICOM (Digital Imaging and Communications in Medicine) sequence, treated as a three-dimensional image, the spacing between two pixels in the x and y directions is given by the PixelSpacing tag, while the spacing between two pixels in the z direction is given by the spatial distance between the ImagePositionPatient values of two adjacent images. That is, the resolution of the sequence in the x and y directions may differ from its resolution in the z direction.
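As a sketch of this spacing computation (with hypothetical tag values, since no concrete DICOM data is given here):

```python
import numpy as np

# Hypothetical tag values from two adjacent slices of an axial DICOM series.
pixel_spacing = (0.7, 0.7)                # PixelSpacing tag: x/y spacing in mm
pos_a = np.array([-120.0, -120.0, 35.0])  # ImagePositionPatient of slice k
pos_b = np.array([-120.0, -120.0, 40.0])  # ImagePositionPatient of slice k+1

# z spacing is the spatial distance between the two position vectors.
z_spacing = float(np.linalg.norm(pos_b - pos_a))
print(z_spacing)  # 5.0 mm, much coarser than the 0.7 mm in-plane spacing
```

With values like these, the z resolution is several times worse than the in-plane resolution, which is exactly the gap the interpolation below tries to close.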
To address this problem, image interpolation can be used: based on the acquired image sequence, an interpolated image between two adjacent images is generated and made to approximate, as closely as possible, the image information at the real but unscanned position, thereby improving the resolution of the image sequence along the image arrangement direction.
Existing image interpolation techniques usually apply a single two-dimensional interpolation algorithm directly to the input images to obtain the interpolated image. Three two-dimensional interpolation algorithms are commonly used: the nearest-neighbor interpolation algorithm, the bilinear interpolation algorithm, and the bicubic interpolation algorithm.
However, whichever two-dimensional interpolation algorithm is chosen, conventional image interpolation techniques have certain defects and cannot accurately restore the image information at the target position. In fields such as medicine, if the accuracy of image interpolation cannot be improved, then not only does the resolution of the image sequence along the image arrangement direction fail to improve, but the overall reliability of the sequence is weakened, which can affect the doctor's judgment of the patient's condition and bring more serious hidden dangers.
The conventional image interpolation technique and its drawbacks are explained below.
Nearest neighbor interpolation algorithm
The nearest-neighbor interpolation algorithm is the simplest interpolation algorithm, also called the zeroth-order interpolation algorithm: the gray value of the input pixel closest to the target pixel's position is taken as the interpolation result for the target pixel.
When interpolating a two-dimensional image with the nearest-neighbor algorithm, the 4 input pixels adjacent to the target pixel are first obtained; the 1 input pixel closest to the target pixel is selected from these 4, and its gray value is taken as the gray value of the target pixel.
Understandably, this interpolation method is likely to produce images with discontinuous gray levels, with obvious jaggedness wherever the gray level changes, resulting in very low image accuracy.
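A minimal sketch of this rule, assuming a grayscale image stored as a 2-D NumPy array (the function name is illustrative):

```python
import numpy as np

def nearest_neighbor(img, y, x):
    """Gray value at fractional position (y, x): copy the nearest input pixel."""
    h, w = img.shape
    iy = min(int(round(y)), h - 1)   # round to the nearest row index,
    ix = min(int(round(x)), w - 1)   # and column index, clamped to the border
    return img[iy, ix]

img = np.array([[10.0, 20.0],
                [30.0, 40.0]])
print(nearest_neighbor(img, 0.3, 0.8))  # closest input pixel is (0, 1) -> 20.0
```

Because the output simply copies whichever input pixel is nearest, adjacent target pixels can jump between source pixels, which is the gray-level discontinuity described above.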
Bilinear interpolation algorithm
The bilinear interpolation algorithm, also called the first-order interpolation algorithm, obtains the final interpolation result through three first-order linear interpolation operations and is more accurate than the nearest-neighbor interpolation algorithm.
When interpolating a two-dimensional image with the bilinear algorithm, the 4 input pixels adjacent to the target pixel are again required. The 4 input pixels are first divided into 2 pairs along the row direction (or the column direction); first-order linear interpolation is performed within each pair, based on the target pixel's position along the row direction, giving 2 intermediate results. First-order linear interpolation is then performed on these 2 intermediate results, based on the target pixel's position along the column direction (or the row direction), finally giving the interpolation result at the target pixel's position.
Understandably, the bilinear algorithm is computationally more involved than nearest-neighbor interpolation; it does not produce images with discontinuous gray levels, improves accuracy, and largely overcomes the defect of the nearest-neighbor algorithm. However, during the first-order interpolation the high-frequency components of the input image participate in the operation together with the low-frequency components, so the high-frequency components are weakened. In other words, bilinear interpolation inherently behaves as a low-pass filter: high-frequency components in the image are damaged, and contour regions of the result may be blurred.
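The three first-order interpolations can be sketched as follows, again for a grayscale 2-D NumPy array (names illustrative):

```python
import numpy as np

def bilinear(img, y, x):
    """Gray value at fractional position (y, x) via three linear interpolations."""
    h, w = img.shape
    y0, x0 = int(np.floor(y)), int(np.floor(x))
    y1, x1 = min(y0 + 1, h - 1), min(x0 + 1, w - 1)
    fy, fx = y - y0, x - x0
    top = (1 - fx) * img[y0, x0] + fx * img[y0, x1]  # first pair, along the row
    bot = (1 - fx) * img[y1, x0] + fx * img[y1, x1]  # second pair, along the row
    return (1 - fy) * top + fy * bot                 # third interpolation, across rows

img = np.array([[0.0, 1.0],
                [2.0, 3.0]])
print(bilinear(img, 0.5, 0.5))  # 1.5, the mean of the 4 neighbors
```

The averaging visible in the last three lines is precisely the low-pass behavior noted above: every output value is a weighted mean of its 4 neighbors, so sharp transitions are smoothed.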
Bicubic interpolation algorithm
The bicubic interpolation algorithm, also called the cubic convolution interpolation algorithm, is a more complex interpolation method and a further improvement over the bilinear algorithm. It uses the 16 input pixels adjacent to the target pixel. In addition, unlike the previous two algorithms, which consider only the gray values of the input pixels, bicubic interpolation also takes into account the rate of change of the gray values between input pixels, so the gray value corresponding to the target pixel's position can be obtained more accurately.
In the bicubic algorithm, to capture the gray-value variation trend of the 16 input pixels, cubic interpolation is performed on their gray values. Specifically, the bicubic algorithm uses the cubic polynomial s(x) shown in Formula 1 as its basis function, considering not only the gray value of each input pixel but also the rate of change of the gray values between input pixels, so as to obtain an interpolation function that is theoretically closest to the actual variation trend.
s(x) = 1 - 2|x|^2 + |x|^3,         0 ≤ |x| < 1
s(x) = 4 - 8|x| + 5|x|^2 - |x|^3,  1 ≤ |x| < 2
s(x) = 0,                          |x| ≥ 2        (Formula 1)
However, although bicubic interpolation is more accurate than bilinear interpolation, the problem of high-frequency damage is still not completely solved. The high-frequency components are still weakened while the gray-value variation trend is computed, so contours in the image may still lack sharpness.
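Formula 1 can be transcribed directly; evaluating it at the sample distances for a target point midway between input samples shows the 4 per-direction weights of the 16-point scheme:

```python
def s(x):
    """Basis function of Formula 1 (a cubic convolution kernel)."""
    ax = abs(x)
    if ax < 1:
        return 1 - 2 * ax**2 + ax**3
    if ax < 2:
        return 4 - 8 * ax + 5 * ax**2 - ax**3
    return 0.0

# For a target point midway between samples, the 4 neighbors along one
# direction sit at signed distances -1.5, -0.5, 0.5, 1.5.
weights = [s(-1.5), s(-0.5), s(0.5), s(1.5)]
print(weights)  # [-0.125, 0.625, 0.625, -0.125]; the weights sum to 1
```

The negative outer weights are what lets the kernel track the gray-value change rate between pixels, rather than merely averaging them as bilinear interpolation does.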
Hereinafter, the scheme provided in the present application will be described in detail based on examples of the present application.
To overcome the low accuracy of generated interpolation images caused by high-frequency damage in the prior art, embodiments of the present application provide an image generation method in which each input image is first decomposed by wavelet decomposition (wavelet transform) into sub-images in different frequency bands; interpolation is then performed band by band over the sub-images of the several input images, yielding an interpolation result for each band; and finally wavelet reconstruction (inverse wavelet transform) is applied to the per-band interpolation results to obtain the target image.
The image generation method provided by the embodiments of the application improves the resolution of the medical image sequence along the image arrangement direction while taking both the high-frequency and low-frequency information in the images into account; the high-frequency information is effectively protected from damage during interpolation, so the image information at the target position is restored with high precision and a more accurate target image is obtained.
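As a concrete illustration of this band-wise pipeline, the sketch below implements the N = M = 2 case with a single-level Haar decomposition: the low-frequency sub-images of the four neighboring slices are combined with the four-point midpoint weights of Formula 1, while the high-frequency sub-images are simply averaged over the two nearest slices. The patent does not fix the concrete interpolation algorithms, so the weights and function names here are illustrative assumptions, not the claimed implementation.

```python
import numpy as np

def haar_dwt2(img):
    """Single-level 2D Haar decomposition into LL, LH, HL, HH (even-sized image)."""
    r = np.sqrt(2)
    a = (img[:, 0::2] + img[:, 1::2]) / r          # row low-pass
    d = (img[:, 0::2] - img[:, 1::2]) / r          # row high-pass
    return ((a[0::2] + a[1::2]) / r, (a[0::2] - a[1::2]) / r,
            (d[0::2] + d[1::2]) / r, (d[0::2] - d[1::2]) / r)

def haar_idwt2(ll, lh, hl, hh):
    """Inverse of haar_dwt2 (wavelet reconstruction)."""
    r = np.sqrt(2)
    a = np.empty((2 * ll.shape[0], ll.shape[1]))
    d = np.empty_like(a)
    a[0::2], a[1::2] = (ll + lh) / r, (ll - lh) / r
    d[0::2], d[1::2] = (hl + hh) / r, (hl - hh) / r
    img = np.empty((a.shape[0], 2 * a.shape[1]))
    img[:, 0::2], img[:, 1::2] = (a + d) / r, (a - d) / r
    return img

def generate_target(i0, i1, i2, i3):
    """Target image midway between i1 and i2, from N = M = 2 neighboring slices."""
    bands = [haar_dwt2(i) for i in (i0, i1, i2, i3)]
    # Low-frequency band: 4-point cubic midpoint weights taken from Formula 1.
    ll = (-0.125 * bands[0][0] + 0.625 * bands[1][0]
          + 0.625 * bands[2][0] - 0.125 * bands[3][0])
    # High-frequency bands: plain average of the two nearest slices.
    highs = [(bands[1][k] + bands[2][k]) / 2 for k in (1, 2, 3)]
    return haar_idwt2(ll, *highs)

slices = [np.full((8, 8), float(v)) for v in (0, 1, 2, 3)]  # toy constant slices
mid = generate_target(*slices)
print(round(float(mid.mean()), 6))  # 1.5: midway between slices 1 and 2
```

On these toy constant slices the result is the expected constant 1.5; on real images, the point of the structure is that the detail bands never pass through the smoothing low-frequency interpolation.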
Hereinafter, embodiments of the present application will be described in more detail with reference to the accompanying drawings.
Fig. 1 is a flowchart of an image generation method according to an embodiment of the present application. The method may be performed by an electronic device (e.g., the image generation device shown in fig. 13).
As shown in fig. 1, the image generation method of the present embodiment includes the steps of:
S110: Acquire, from the medical image sequence, a first image located before the target position and a second image located after the target position.
The medical image sequence is obtained through an actual acquisition operation and includes multiple images, each acquired at a cross-section of a specific object; the position corresponding to each image is the position of that cross-section.
To improve the resolution of the medical image sequence along the image arrangement direction, it is necessary to obtain image information at positions (hereinafter, target positions) between the acquired sectional images, where no actual image acquisition was performed. That is, a target image corresponding to the target position must be obtained.
Here, the first image located before the target position refers to an image in the medical image sequence whose corresponding position is before the target position. Similarly, the second image located after the target position refers to an image in the medical image sequence whose corresponding position is after the target position. It should be understood that "before" and "after" represent relative positional relationships in space, and are used only for distinguishing the orientation relationship between the position corresponding to each image in the spatial sequence of images and the target position.
In the embodiments of the present application, there may be one or more first images, and likewise one or more second images. The numbers of first and second images may be equal or different.
In a preferred embodiment, the number of the first images is N, and the number of the second images is M, where N is greater than or equal to 2, and M is greater than or equal to 2. That is, the target image may be obtained based on the plurality of first images and the plurality of second images. By adopting a plurality of images before and after the target position and generating the target image based on the information in the plurality of images before and after, more spatial information can be introduced into the operation process, so that the obtained target image is more accurate.
In a preferred embodiment, N and M may be equal, thereby making the calculation process simpler. For example, N = M =2, that is, the target image corresponding to the target position may be obtained based on two first images located before the target position and two second images located after the target position. When N = M =2 is set, the image generation method provided in the embodiment of the present application can reduce the amount of calculation and the calculation time while ensuring high accuracy of the generated target image, thereby taking into account the quality and cost of image generation.
S120: and respectively carrying out wavelet decomposition on the first image and the second image to obtain a low-frequency sub-image and a high-frequency sub-image corresponding to the first image and a low-frequency sub-image and a high-frequency sub-image corresponding to the second image.
Wavelet decomposition, also called wavelet transform, refers to a signal analysis method that shifts and scales a wavelet basis with finite length and attenuation characteristics to realize signal representation. The wavelet transform is mainly classified into Discrete Wavelet Transform (DWT) and Continuous Wavelet Transform (CWT), wherein the discrete wavelet transform is commonly used for frequency domain decomposition processing of an image. In an embodiment of the present application, a Haar wavelet transform may be employed to perform wavelet decomposition on the first image and the second image. The Haar wavelet transform is one of the simplest wavelet transforms and was also the earliest to be proposed.
For a two-dimensional image signal, a two-dimensional discrete wavelet decomposition may be implemented by filtering in the horizontal direction and/or the vertical direction, respectively, so as to obtain a plurality of sub-images of the image signal. For example, if a two-dimensional image is filtered in the horizontal direction, a low-frequency component and a high-frequency component of the two-dimensional image in the horizontal direction, that is, a low-frequency sub-image and a high-frequency sub-image, can be obtained; if a two-dimensional image is filtered in the vertical direction, a low-frequency sub-image and a high-frequency sub-image of the two-dimensional image in the vertical direction can be obtained. Further, if the two-dimensional image is filtered in both the horizontal direction and the vertical direction, four sub-images of the two-dimensional image can be obtained, including one low-frequency sub-image and three high-frequency sub-images. In the sub-images obtained by wavelet decomposition, the low-frequency sub-image is an approximate representation of the original image, and the high-frequency sub-image retains the detail characteristics of the original image.
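If the PyWavelets library is available, the single-level two-dimensional decomposition described above can be sketched as follows (a minimal illustration using `pywt.dwt2`, not the patented implementation; PyWavelets orders the detail sub-images as horizontal, vertical, diagonal):

```python
import numpy as np
import pywt  # PyWavelets

# A toy 8x8 "image": a smooth gradient plus one sharp vertical edge.
img = np.tile(np.arange(8, dtype=float), (8, 1))
img[:, 4:] += 10.0

# Single-level 2D Haar decomposition: filtering in both the horizontal
# and vertical directions yields one low-frequency sub-image (the
# approximation) and three high-frequency (detail) sub-images, each
# half the size of the input.
LL, (H, V, D) = pywt.dwt2(img, 'haar')

print(LL.shape)  # (4, 4)
```

The decomposition is exactly invertible: `pywt.idwt2((LL, (H, V, D)), 'haar')` recovers the original image.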
Fig. 6 is a schematic diagram of performing wavelet decomposition on a two-dimensional image in the embodiment of the present application. In a preferred embodiment, the two-dimensional image may be subjected to wavelet decomposition by filtering in both horizontal and vertical directions, and the specific formula is shown in formula 2:
f_LL^r(x, y) = h_V ∗ (h_H ∗ f^{r−1}(2x, 2y))
f_HL^r(x, y) = g_V ∗ (h_H ∗ f^{r−1}(2x, 2y))
f_LH^r(x, y) = h_V ∗ (g_H ∗ f^{r−1}(2x, 2y))
f_HH^r(x, y) = g_V ∗ (g_H ∗ f^{r−1}(2x, 2y)) (formula 2)
Wherein x and y are pixel coordinates, f is the image function, r is the wavelet decomposition scale, g_H is the high-pass filter in the horizontal direction, g_V is the high-pass filter in the vertical direction, h_H is the low-pass filter in the horizontal direction, h_V is the low-pass filter in the vertical direction, and ∗ denotes the convolution operation.
Specifically, as shown in fig. 6, four sub-images corresponding to each two-dimensional image may be respectively represented by LL, HL, LH, and HH, where the LL sub-image is a low-frequency sub-image (representing an approximation of the image), and the HL sub-image (representing a horizontal direction singular characteristic of the image), the LH sub-image (representing a vertical direction singular characteristic of the image), and the HH sub-image (representing a diagonal edge characteristic of the image) are all high-frequency sub-images.
In an embodiment of the present application, each of the first images and the second images may be processed according to the above formula to generate the four sub-images corresponding to each image. For example, as shown in fig. 7, when N = M = 2, wavelet decomposition generates 16 sub-images in total: 4 low-frequency sub-images (four LL sub-images) and 12 high-frequency sub-images (four HL sub-images, four LH sub-images, and four HH sub-images).
S130: and calculating to obtain a low-frequency interpolation result based on the low-frequency sub-image of the first image and the low-frequency sub-image of the second image.
S140: and calculating to obtain a high-frequency interpolation result based on the high-frequency sub-image of the first image and the high-frequency sub-image of the second image.
Here, S130 and S140 may be executed simultaneously or sequentially, and the order of S130 and S140 is not limited in the embodiments of the present application.
Still taking N = M =2 as an example, as shown in fig. 7, after obtaining the low-frequency sub-image and the high-frequency sub-image corresponding to each of the first image and the second image, the sub-images are subdivided by frequency bands, that is, the LL sub-image of the first image and the LL sub-image of the second image are grouped together, the HL sub-image of the first image and the HL sub-image of the second image are grouped together, the LH sub-image of the first image and the LH sub-image of the second image are grouped together, and the HH sub-image of the first image and the HH sub-image of the second image are grouped together.
Further, by performing image interpolation processing for each group separately, one low-frequency interpolation result (LL interpolation result) and three high-frequency interpolation results (HL interpolation result, LH interpolation result, and HH interpolation result) can be obtained, that is, an LL sub-image, an HL sub-image, an LH sub-image, and an HH sub-image corresponding to the target position can be obtained.
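The band-wise grouping and per-band interpolation can be sketched as follows (a hypothetical midpoint average stands in for whichever interpolation algorithm is actually chosen; the band names follow PyWavelets' horizontal/vertical/diagonal ordering):

```python
import numpy as np
import pywt

# Two first images and two second images (toy 8x8 slices).
slices = [np.random.RandomState(i).rand(8, 8) for i in range(4)]

# Decompose each slice: 4 slices x 4 sub-images = 16 sub-images,
# collected into one group per frequency band.
bands = {'LL': [], 'H': [], 'V': [], 'D': []}
for s in slices:
    LL, (H, V, D) = pywt.dwt2(s, 'haar')
    for name, sub in zip(('LL', 'H', 'V', 'D'), (LL, H, V, D)):
        bands[name].append(sub)

# Interpolate within each group; averaging the two middle slices is a
# placeholder for the real per-band interpolation algorithm.
interp = {name: (subs[1] + subs[2]) / 2 for name, subs in bands.items()}
```

The four entries of `interp` correspond to the LL, HL, LH, and HH sub-images of the target position.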
In one embodiment, the same interpolation algorithm may be used for the calculation of the low frequency interpolation result and the high frequency interpolation result.
Preferably, as shown in fig. 2, in some embodiments, the low-frequency interpolation result may be calculated by using a first interpolation algorithm, and the high-frequency interpolation result may be calculated by using a second interpolation algorithm. That is, S130 may include:
s230: and calculating to obtain a low-frequency interpolation result through a first interpolation algorithm based on the low-frequency sub-image of the first image and the low-frequency sub-image of the second image.
Meanwhile, S140 may include:
s240: and calculating to obtain a high-frequency interpolation result through a second interpolation algorithm based on the high-frequency sub-image of the first image and the high-frequency sub-image of the second image.
Here, S230 and S240 may be executed simultaneously or sequentially, and the order of S230 and S240 is not limited in the embodiments of the present application.
In the embodiment, the low-frequency sub-image and the high-frequency sub-image are respectively processed by adopting different interpolation algorithms, so that interpolation operation can be performed more specifically according to different characteristics of the low-frequency sub-image and the high-frequency sub-image, and a more accurate interpolation result can be obtained.
Preferably, the first interpolation algorithm may be a three-dimensional interpolation algorithm, for example, a Tricubic interpolation algorithm or a Trilinear interpolation algorithm, so that the low-frequency sub-images, which carry the larger amount of information, are processed with higher accuracy, improving the restoration accuracy; the second interpolation algorithm may be a two-dimensional interpolation algorithm, for example, a Lagrange interpolation algorithm, a bicubic interpolation algorithm, or a bilinear interpolation algorithm, so that the high-frequency sub-images, which carry relatively little information, are processed with a reduced amount of calculation.
Unlike a two-dimensional interpolation algorithm, a three-dimensional interpolation algorithm is not limited to the information in the plane where the target pixel is located: information on several planes near the target pixel also participates in the interpolation operation, further improving the accuracy of the interpolation result.
As an example, the Tricubic interpolation algorithm is briefly described below.
In short, the Tricubic interpolation algorithm may be understood as the three-dimensional counterpart of the Bicubic interpolation algorithm. As described above, the bicubic interpolation algorithm uses 16 (4 × 4) input pixels in a plane, takes a cubic polynomial as its basis function, and considers both the gray value of each input pixel and its rate of change, thereby obtaining an interpolation function f(x, y). By analogy, in the three-dimensional case, the Tricubic interpolation algorithm obtains an interpolation function f(x, y, z) by fitting 64 (4 × 4 × 4) input pixels on 4 planes in space, so that the interpolation result is derived from the spatial information around the target position.
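A tricubic-style interpolation over a stack of four slices can be sketched with SciPy's cubic-spline resampling (`scipy.ndimage.map_coordinates` with `order=3` is used here as a stand-in for the Tricubic polynomial fit described above, and the toy data are an assumption):

```python
import numpy as np
from scipy.ndimage import map_coordinates

# Stack of four low-frequency sub-images along the slice axis z
# (toy data; real sub-images would come from the wavelet decomposition).
rng = np.random.RandomState(0)
stack = rng.rand(4, 16, 16)

# Sample the plane midway between slices 1 and 2 (z = 1.5) with cubic
# splines (order=3), so that all four slices contribute to the result.
yy, xx = np.mgrid[0:16, 0:16]
zz = np.full(yy.shape, 1.5)
coords = np.stack([zz, yy.astype(float), xx.astype(float)])
mid_plane = map_coordinates(stack, coords, order=3, mode='nearest')
```

`mid_plane` is a 16 × 16 interpolated plane whose every pixel draws on all four surrounding slices rather than on a single plane.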
It can be understood that, in the embodiment of the present application, interpolation processing is performed on the low-frequency sub-image by using the Tricubic interpolation algorithm, so that not only can information on multiple planes near the target pixel point participate in the interpolation operation, but also a nonlinear interpolation function in a three-dimensional space can be obtained, so that the nonlinear interpolation function is closer to a real gray value change trend, and a more accurate interpolation result is obtained.
Because the low-frequency sub-image contains a large amount of information, generating its interpolated image with a two-dimensional interpolation algorithm tends to ignore the pixel relationships in three-dimensional space and easily introduces artifacts into the generated image. For example, in a medical image sequence, if an organ that appears in one of two adjacent CT images does not appear in the other, the edge of the organ lies at some position between the two images. That is, if 10 further images were acquired at positions between the two images, there might be 5 images showing the organ and 5 images not showing it. However, if 10 interpolated images are generated from the two images with a two-dimensional interpolation algorithm, the pixel relationships in three-dimensional space are ignored and no spatial information participates in the operation, so the exact position of the organ's edge is difficult to determine; the result may be, for instance, that 9 of the 10 images show the organ. In that case, 4 of those 9 images display an artifact.
In order to solve the above problems, based on the image generation method provided by the embodiment of the present application, the three-dimensional interpolation algorithm is adopted to perform interpolation processing on the low-frequency sub-images, so that the surrounding space information can be comprehensively considered, and the edge information in the image can be effectively restored, thereby greatly reducing the probability of occurrence of artifacts, and improving the authenticity of the generated image. Meanwhile, a two-dimensional interpolation algorithm can be adopted to perform interpolation processing on the high-frequency sub-images with relatively small information quantity, so that the problems of overlarge calculated quantity and increased cost are avoided.
S150: and performing wavelet reconstruction on the low-frequency interpolation result and the high-frequency interpolation result to obtain a target image corresponding to the target position.
Wavelet reconstruction, also known as inverse wavelet transform, is the inverse process of wavelet decomposition used to recombine sub-images to generate a complete image.
As described above, the low-frequency interpolation result and the high-frequency interpolation result are the low-frequency sub-image and the high-frequency sub-image corresponding to the target position. Therefore, the low-frequency sub-image and the high-frequency sub-image are subjected to wavelet reconstruction, and the target image corresponding to the target position can be generated.
Still taking N = M =2 as an example, as shown in fig. 7, LL 'is the interpolation result of four LL sub-images, HL' is the interpolation result of four HL sub-images, LH 'is the interpolation result of four LH sub-images, and HH' is the interpolation result of four HH sub-images. And performing wavelet reconstruction on the four interpolation results, wherein the generated image is an interpolation image between the first image and the second image, namely the target image corresponding to the target position.
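The reconstruction step can be sketched with PyWavelets' inverse transform (a minimal illustration: the four interpolated sub-images are recombined into one full-resolution image; the toy arrays stand in for real interpolation results):

```python
import numpy as np
import pywt

# Interpolated sub-images LL', HL', LH', HH' for the target position
# (toy 4x4 arrays standing in for the real interpolation results).
rng = np.random.RandomState(1)
LLp, HLp, LHp, HHp = (rng.rand(4, 4) for _ in range(4))

# The inverse 2D Haar transform recombines the four sub-bands into the
# 8x8 target image.
target = pywt.idwt2((LLp, (HLp, LHp, HHp)), 'haar')
```

Because the Haar basis is orthogonal, decomposing `target` again with `pywt.dwt2` recovers the four sub-images exactly.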
S160: the target image is inserted into the sequence of medical images according to the target position to obtain an augmented sequence of medical images.
Optionally, after the target image is obtained, it may be inserted into the medical image sequence (i.e. between the first image and the second image) according to its corresponding target position, thereby obtaining an expanded medical image sequence with a higher resolution in the image arrangement direction. The expanded medical image sequence can provide more accurate and comprehensive information in subsequent application scenarios; for example, when the medical image sequence containing the inserted target image is used for 3D reconstruction, the resolution of the reconstructed 3D model can be greatly improved.
The image generation method provided by the embodiment of the application improves the resolution of the medical image sequence in the image arrangement direction while taking both the high-frequency information and the low-frequency information in the images into account, effectively preventing the high-frequency information from being damaged during interpolation, so that the image information at the target position is restored with high precision and a more accurate target image is obtained. Further, the image generation method provided by the embodiment of the application can effectively utilize the spatial information near the target position, reducing the probability of artifacts appearing in the generated target image and further improving the authenticity of the target image.
Fig. 3 is a schematic flowchart of an image generation method according to another embodiment of the present application. The method may be performed by an electronic device (e.g., the image generation device shown in fig. 13).
Optionally, as shown in fig. 3, in this embodiment, S150 in the embodiment shown in fig. 1 or fig. 2 may specifically include:
s351: and respectively carrying out filtering processing on the low-frequency interpolation result and the high-frequency interpolation result to obtain a low-frequency filtering result and a high-frequency filtering result.
S352: and performing wavelet reconstruction on the low-frequency filtering result and the high-frequency filtering result to obtain a target image.
The filtering process may use Gaussian filtering. Gaussian filtering is a linear smoothing filter that can be used to smooth an image.
In the embodiment of the present application, by performing filtering processing on the low-frequency interpolation result and the high-frequency interpolation result respectively, and performing wavelet reconstruction based on the low-frequency filtering result (i.e., the filtered low-frequency interpolation result) and the high-frequency filtering result (i.e., the filtered high-frequency interpolation result), it is possible to effectively filter out a singular value that may be generated in the process of combining the low-frequency signal feature and the high-frequency signal feature, and at the same time, smooth an artifact that may remain in a target image, and further reduce the range of the artifact.
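The filter-then-reconstruct step can be sketched as follows (the `sigma` value is an assumption, since the patent does not specify one, and the toy sub-images stand in for real interpolation results):

```python
import numpy as np
import pywt
from scipy.ndimage import gaussian_filter

# Toy interpolation results for the four sub-bands.
rng = np.random.RandomState(2)
LLp, HLp, LHp, HHp = (rng.rand(4, 4) for _ in range(4))

# Smooth each interpolation result before recombining, suppressing
# singular values introduced during the band-wise interpolation.
sigma = 0.8  # assumed smoothing strength; the patent specifies none
filtered = [gaussian_filter(s, sigma) for s in (LLp, HLp, LHp, HHp)]

# Wavelet reconstruction of the filtered results yields the target image.
target = pywt.idwt2((filtered[0], tuple(filtered[1:])), 'haar')
```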
Fig. 4 is a schematic flowchart of an image generation method according to another embodiment of the present application. The method may be performed by an electronic device (e.g., the image generation device shown in fig. 13).
As shown in fig. 4, in the present embodiment, S130 in the foregoing embodiment may further include:
s431: and respectively carrying out wavelet decomposition on the low-frequency sub-image of the first image and the low-frequency sub-image of the second image to obtain a secondary low-frequency sub-image and a secondary high-frequency sub-image of each low-frequency sub-image.
As described above, the low-frequency sub-image is an approximate representation of the original image, and therefore contains more information. In this embodiment, in order to further improve the authenticity of the target image, the wavelet decomposition process may be performed on the low-frequency sub-images of the first image and the second image again, and more abundant information is extracted from the low-frequency sub-images for the interpolation operation.
Specifically, by performing wavelet decomposition on the low-frequency sub-image of the first image and the low-frequency sub-image of the second image, a secondary low-frequency sub-image and a secondary high-frequency sub-image of the low-frequency sub-image corresponding to the first image, and a secondary low-frequency sub-image and a secondary high-frequency sub-image of the low-frequency sub-image corresponding to the second image can be obtained.
See the examples in fig. 8 and fig. 9. Fig. 8 shows an example of a medical image (for example, a first image), and fig. 9 shows the result of performing a first wavelet decomposition (in the horizontal and vertical directions) on the image shown in fig. 8 and then performing a second wavelet decomposition on the resulting low-frequency sub-image. Here, the low-frequency sub-image and the high-frequency sub-images obtained by the first wavelet decomposition are denoted LL1 and HL1, LH1, HH1, respectively, and the secondary low-frequency sub-image and secondary high-frequency sub-images obtained by decomposing the LL1 sub-image again are denoted LL2 and HL2, LH2, HH2, respectively. Through the two wavelet decompositions, 7 sub-images corresponding to the image shown in fig. 8 can be obtained, including 1 low-frequency sub-image LL2 and 6 high-frequency sub-images HL1, LH1, HH1, HL2, LH2, and HH2.
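The two-level decomposition described above (7 sub-images: LL2 plus six detail sub-images) can be sketched with PyWavelets' multilevel transform (a toy image stands in for the medical image of fig. 8; the sub-image names follow the patent's LL/HL/LH/HH convention, while PyWavelets itself orders each detail triple as horizontal, vertical, diagonal):

```python
import numpy as np
import pywt

# Toy 16x16 image standing in for the medical image of fig. 8.
img = np.random.RandomState(3).rand(16, 16)

# Level-2 Haar decomposition: the level-1 LL sub-image is decomposed
# again, giving LL2 plus the level-2 and level-1 detail triples.
LL2, (HL2, LH2, HH2), (HL1, LH1, HH1) = pywt.wavedec2(img, 'haar', level=2)

# 1 low-frequency sub-image (LL2) + 6 high-frequency sub-images = 7 total.
```

`pywt.waverec2` applied to the same coefficient list reconstructs the original image exactly.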
S432: and obtaining a secondary low-frequency interpolation result through a first interpolation algorithm based on the secondary low-frequency sub-image corresponding to the first image and the secondary low-frequency sub-image corresponding to the second image.
S433: and obtaining a secondary high-frequency interpolation result through a second interpolation algorithm based on the secondary high-frequency sub-image corresponding to the first image and the secondary high-frequency sub-image corresponding to the second image.
Here, S432 and S433 may be executed simultaneously or sequentially, and the order of S432 and S433 is not limited in the embodiments of the present application.
Similarly to S230 and S240 in the foregoing embodiment, after obtaining the secondary sub-images corresponding to the first image and the secondary sub-images corresponding to the second image, the secondary sub-images may be grouped by frequency band and interpolated separately, so as to obtain the secondary low-frequency interpolation result and the secondary high-frequency interpolation results of the (primary) low-frequency sub-image corresponding to the target image.
For example, when a scheme of performing wavelet decomposition simultaneously in the horizontal direction and the vertical direction is adopted, an LL2 sub-image, an HL2 sub-image, an LH2 sub-image, and an HH2 sub-image corresponding to each of the first images, and an LL2 sub-image, an HL2 sub-image, an LH2 sub-image, and an HH2 sub-image corresponding to each of the second images can be obtained. These secondary sub-images are then grouped, that is, the LL2 sub-images of the first images and the LL2 sub-images of the second images are grouped together, the HL2 sub-images of the first images and the HL2 sub-images of the second images are grouped together, the LH2 sub-images of the first images and the LH2 sub-images of the second images are grouped together, and the HH2 sub-images of the first images and the HH2 sub-images of the second images are grouped together. Further, by performing image interpolation processing for each group separately, one low-frequency interpolation result (LL2 interpolation result) and three high-frequency interpolation results (HL2, LH2, and HH2 interpolation results) can be obtained, that is, the LL2 sub-image, HL2 sub-image, LH2 sub-image, and HH2 sub-image corresponding to the LL1 sub-image of the target image can be obtained.
In the embodiment, the secondary low-frequency sub-image and the secondary high-frequency sub-image are respectively processed by adopting different interpolation algorithms, so that interpolation operation can be performed more specifically according to different characteristics of low frequency and high frequency, and a more accurate secondary interpolation result is obtained. Preferably, the first interpolation algorithm is a three-dimensional interpolation algorithm, and the second interpolation algorithm is a two-dimensional interpolation algorithm.
S434: and performing wavelet reconstruction on the secondary low-frequency interpolation result and the secondary high-frequency interpolation result to obtain a low-frequency interpolation result.
Similarly to S150, after obtaining the secondary low-frequency sub-image and the secondary high-frequency sub-image corresponding to the low-frequency sub-image of the target image, performing wavelet reconstruction on the secondary low-frequency sub-image and the secondary high-frequency sub-image, so as to obtain a low-frequency interpolation result of the target image. The low-frequency interpolation result obtained by the image generation method provided by the embodiment retains richer information, and the accuracy of the target image can be further improved.
Further, S150 is executed, and wavelet reconstruction is performed on the low-frequency interpolation result obtained after the second decomposition reconstruction and the high-frequency interpolation result obtained through S140, so as to obtain the target image.
According to the image generation method provided by the embodiment of the application, the first image and the second image in the medical image sequence are subjected to secondary wavelet decomposition, more information near the target position can be obtained, a more accurate target image is obtained through interpolation processing, and the authenticity of the target image is further improved.
Optionally, as shown in fig. 5, in some embodiments, S434 may specifically include:
s5341: and respectively carrying out filtering processing on the secondary low-frequency interpolation result and the secondary high-frequency interpolation result to obtain a secondary low-frequency filtering result and a secondary high-frequency filtering result.
S5342: and performing wavelet reconstruction on the secondary low-frequency filtering result and the secondary high-frequency filtering result to obtain a low-frequency interpolation result.
The filtering process may use Gaussian filtering.
In the embodiment of the present application, by performing filtering processing on the secondary low-frequency interpolation result and the secondary high-frequency interpolation result respectively, and performing wavelet reconstruction based on the secondary low-frequency filtering result (i.e., the filtered secondary low-frequency interpolation result) and the secondary high-frequency filtering result (i.e., the filtered secondary high-frequency interpolation result), it is possible to effectively filter out a singular value that may be generated during the combination process of the low-frequency signal characteristic and the high-frequency signal characteristic, and at the same time, smooth an artifact that may remain in the low-frequency interpolation result, and further reduce the range of the artifact.
Meanwhile, in this embodiment, S150 may specifically include:
s551: and filtering the high-frequency interpolation result to obtain a high-frequency filtering result.
S552: and performing wavelet reconstruction on the low-frequency interpolation result and the high-frequency filtering result to obtain a target image.
Since the low-frequency interpolation result has been filtered before the second reconstruction, in this embodiment, the high-frequency interpolation result may be filtered only to obtain a high-frequency filtering result, and the low-frequency interpolation result and the high-frequency filtering result may be wavelet reconstructed to obtain the target image.
As an example, fig. 10 shows a target image generated based on the image generation method provided in the embodiment of the present application. As shown in fig. 10, in this example, two first images before the target position and two second images after the target position in the medical image sequence are used, and the image generation method provided in the above embodiment of the present application is applied to obtain the target image corresponding to the target position. It can be seen that in the target image the boundaries of the displayed objects are clear and artifact-free, and the transition regions in the image are relatively smooth. This example demonstrates that the image generation method provided by the embodiment of the application can effectively obtain the target image, and that the target image has good accuracy and authenticity.
According to the image generation method provided by the embodiment of the application, the existing image in the medical image sequence is decomposed into the low-frequency sub-image and the high-frequency sub-image in a wavelet decomposition mode, the low-frequency sub-image and the high-frequency sub-image are subjected to interpolation operation respectively, and then the obtained low-frequency interpolation result and the obtained high-frequency interpolation result are subjected to wavelet reconstruction to obtain the target image, so that the problem that the target image is low in accuracy due to loss of high-frequency components in the interpolation process is effectively solved; meanwhile, by respectively adopting a three-dimensional interpolation algorithm and a two-dimensional interpolation algorithm for the low-frequency sub-image and the high-frequency sub-image, the global information in the space around the target position can be effectively utilized, the generation of artifacts is reduced, the generated target image has higher authenticity, and the overhigh calculation cost can be avoided.
It should be understood that the image generation method provided by the embodiment of the present application can be used in image interpolation scenarios for various spatial image sequences, and is especially promising for medical image sequences; for example, it can be used in imaging technologies such as MRI and CT, and can also provide accurate assistance in application scenarios such as PET-CT fusion, 3D reconstruction, organ segmentation, and image registration.
The following describes embodiments of the apparatus of the present application, and the image generating apparatus in the following embodiments can be used to execute the image generating method provided by the above embodiments, and therefore, portions not described in detail in the following embodiments can be referred to the above method embodiments.
Fig. 11 is a schematic structural diagram of an image generating apparatus 1100 according to an embodiment of the present application. The image generation apparatus 1100 may be used to perform the image generation method shown in fig. 1.
As shown in fig. 11, the image generation apparatus 1100 provided by the present embodiment may include:
an acquisition module 1110 for acquiring a first image located before a target position and a second image located after the target position from a sequence of medical images;
a decomposition module 1120, configured to perform wavelet decomposition on the first image and the second image respectively to obtain a low-frequency sub-image and a high-frequency sub-image corresponding to the first image and a low-frequency sub-image and a high-frequency sub-image corresponding to the second image;
a low-frequency calculating module 1130, configured to calculate a low-frequency interpolation result based on the low-frequency sub-image of the first image and the low-frequency sub-image of the second image;
a high frequency calculation module 1140, configured to calculate a high frequency interpolation result based on the high frequency sub-image of the first image and the high frequency sub-image of the second image;
a reconstruction module 1150, configured to perform wavelet reconstruction on the low-frequency interpolation result and the high-frequency interpolation result to obtain a target image corresponding to a target position;
an insertion module 1160 is configured to insert the target image into the medical image sequence according to the target position to obtain an augmented medical image sequence.
According to the image generation device provided by the embodiment of the application, the resolution of the medical image sequence in the image arrangement direction is improved, meanwhile, the high-frequency information and the low-frequency information in the image can be considered, the high-frequency information is effectively prevented from being damaged in the interpolation process, and therefore the image information on the target position is restored with high precision, and a more accurate target image is obtained. Further, the image generation device provided by the embodiment of the application can effectively utilize the spatial information near the target position, reduce the probability of the occurrence of artifacts in the generated target image, and further improve the authenticity of the target image.
Optionally, in some embodiments, the low frequency calculation module 1130 may be specifically configured to: calculating to obtain a low-frequency interpolation result through a first interpolation algorithm based on the low-frequency sub-image of the first image and the low-frequency sub-image of the second image; the high frequency calculation module 1140 may be specifically configured to: and calculating to obtain a high-frequency interpolation result through a second interpolation algorithm based on the high-frequency sub-image of the first image and the high-frequency sub-image of the second image.
Preferably, in some embodiments, the low-frequency interpolation result may be calculated by using a first interpolation algorithm, the high-frequency interpolation result may be calculated by using a second interpolation algorithm, and the low-frequency sub-image and the high-frequency sub-image are processed by using different interpolation algorithms, so that the interpolation operation can be performed more specifically according to different characteristics of the low-frequency sub-image and the high-frequency sub-image, and a more accurate interpolation result can be obtained.
Optionally, in some embodiments, the reconfiguration module 1150 may be specifically configured to: respectively filtering the low-frequency interpolation result and the high-frequency interpolation result to obtain a low-frequency filtering result and a high-frequency filtering result; and performing wavelet reconstruction on the low-frequency filtering result and the high-frequency filtering result to obtain a target image.
Fig. 12 is a schematic structural diagram of an image generating apparatus 1200 according to another embodiment of the present application. The image generation apparatus 1200 may be used to perform the image generation method shown in fig. 4.
As shown in fig. 12, the image generation apparatus 1200 is different from the image generation apparatus 1100 of the foregoing embodiment in that the low frequency calculation module 1130 may include:
the secondary decomposition unit 1231 is configured to perform wavelet decomposition on the low-frequency sub-image of the first image and the low-frequency sub-image of the second image, respectively, to obtain a secondary low-frequency sub-image and a secondary high-frequency sub-image of each low-frequency sub-image;
a secondary low frequency calculation unit 1232, configured to obtain a secondary low frequency interpolation result through a first interpolation algorithm based on the secondary low frequency sub-image corresponding to the first image and the secondary low frequency sub-image corresponding to the second image;
a secondary high frequency calculation unit 1233, configured to obtain a secondary high frequency interpolation result through a second interpolation algorithm based on the secondary high frequency sub-image corresponding to the first image and the secondary high frequency sub-image corresponding to the second image;
the secondary reconstruction unit 1234 is configured to perform wavelet reconstruction on the secondary low-frequency interpolation result and the secondary high-frequency interpolation result to obtain a low-frequency interpolation result.
According to the image generation device provided by this embodiment of the application, applying a second level of wavelet decomposition to the low-frequency sub-images of the first and second images in the medical image sequence gathers more information near the target position, so that the interpolation yields a more accurate target image and its fidelity is further improved.
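The secondary decomposition performed by units 1231–1234 can be sketched in 1-D: the low-frequency band from the first level is decomposed again, producing a secondary low-frequency band, a secondary high-frequency band, and the first-level high-frequency band. The Haar basis and the sample values are illustrative assumptions.

```python
def haar_decompose(signal):
    """One-level Haar wavelet decomposition of an even-length signal."""
    low = [(signal[i] + signal[i + 1]) / 2 for i in range(0, len(signal), 2)]
    high = [(signal[i] - signal[i + 1]) / 2 for i in range(0, len(signal), 2)]
    return low, high

def two_level_decompose(signal):
    """Secondary decomposition: decompose the first-level low band again."""
    low1, high1 = haar_decompose(signal)  # first level
    low2, high2 = haar_decompose(low1)    # secondary decomposition of low band
    return low2, high2, high1

low2, high2, high1 = two_level_decompose([4.0, 6.0, 10.0, 8.0])
print(low2, high2, high1)  # [7.0] [-2.0] [-1.0, 1.0]
```

Each secondary coefficient summarizes four original samples instead of two, which is how the second level widens the neighbourhood used around the target position.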
Optionally, in some embodiments, the quadratic reconstruction unit 1234 may be specifically configured to: respectively carrying out filtering processing on the secondary low-frequency interpolation result and the secondary high-frequency interpolation result to obtain a secondary low-frequency filtering result and a secondary high-frequency filtering result; and performing wavelet reconstruction on the secondary low-frequency filtering result and the secondary high-frequency filtering result to obtain a low-frequency interpolation result.
Optionally, in some embodiments, the reconstruction module 1150 may be specifically configured to: filter the high-frequency interpolation result to obtain a high-frequency filtering result; and perform wavelet reconstruction on the low-frequency interpolation result and the high-frequency filtering result, to obtain the target image.
In this embodiment of the application, the secondary low-frequency interpolation result and the secondary high-frequency interpolation result are each filtered, and wavelet reconstruction is performed on the secondary low-frequency filtering result (i.e., the filtered secondary low-frequency interpolation result) and the secondary high-frequency filtering result (i.e., the filtered secondary high-frequency interpolation result). This effectively filters out singular values that may arise when the low-frequency and high-frequency signal characteristics are combined, while also smoothing any artifacts remaining in the low-frequency interpolation result and further reducing their extent.
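A standard choice for suppressing isolated outliers ("singular values") of the kind described here is a small median filter; using one is an assumption for illustration, since the embodiment does not name a specific filter.

```python
def median3(band):
    """3-tap median filter with edge replication; removes isolated spikes."""
    padded = [band[0]] + list(band) + [band[-1]]
    return [sorted(padded[i:i + 3])[1] for i in range(len(band))]

band = [10.0, 11.0, 95.0, 12.0, 13.0]  # 95.0 is a spurious spike
print(median3(band))  # [10.0, 11.0, 12.0, 13.0, 13.0]
```

Unlike an averaging filter, the median removes the spike entirely instead of spreading it into neighbouring coefficients, which suits the goal of limiting artifact extent.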
Fig. 13 is a schematic structural diagram of an image generating apparatus according to an embodiment of the present application. As shown in fig. 13, the image generating apparatus includes: a memory 1310 for storing computer instructions; a processor 1320 for executing the computer instructions stored in the memory 1310, wherein when the computer instructions are executed, the processor 1320 is configured to execute the image generation method provided by any of the above embodiments.
Other embodiments of the present application also provide a computer-readable storage medium on which a program is stored, which when executed, implements the image generation method according to any of the above embodiments. It is understood that the storage medium can be any tangible medium, such as: floppy disks, CD-ROMs, DVDs, hard drives, network media, or the like.
Other embodiments of the present application also provide a computer program product which, when run on a computer, causes the computer to perform the image generation method as described in any of the above embodiments.
It can be clearly understood by those skilled in the art that, for convenience and brevity of description, the specific working processes of the above-mentioned apparatuses, devices, modules and units may refer to the corresponding processes in the foregoing method embodiments, and are not described herein again.
In the several embodiments provided in the present application, it should be understood that the disclosed system, apparatus, and method may be implemented in other ways. For example, the apparatus embodiments described above are merely illustrative: the division into units is only a logical division, and other divisions may be used in practice; for example, multiple units or modules may be combined or integrated into another system, or some features may be omitted or not executed. The couplings or communication connections between the devices or units may be indirect, and may be electrical, mechanical, or in other forms.
It should also be noted that in the devices, apparatuses, and methods of the present application, the modules or steps may be decomposed and/or recombined. These decompositions and/or recombinations are to be considered as equivalents of the present application.
Units described as separate parts may or may not be physically separate, and parts displayed as units may or may not be physical units, may be located in one place, or may be distributed on a plurality of network units. Some or all of the units can be selected according to actual needs to achieve the purpose of the solution of the embodiment.
In addition, functional units in the embodiments of the present application may be integrated into one processing unit, or each unit may exist alone physically, or two or more units are integrated into one unit.
The functions, if implemented in the form of software functional units and sold or used as a stand-alone product, may be stored in a computer-readable storage medium. Based on such understanding, the technical solution of the present application, or the portions thereof that substantially contribute to the prior art, may be embodied in the form of a software product stored in a storage medium and including instructions for causing a computer device (which may be a personal computer, a server, or a network device) to execute all or part of the steps of the methods according to the embodiments of the present application. The aforementioned storage medium includes any medium capable of storing program code, such as a USB flash drive, a removable hard disk, a read-only memory (ROM), a random access memory (RAM), a magnetic disk, or an optical disk.
The block diagrams of devices, apparatuses, modules, and units referred to in this application are given as illustrative examples only and are not intended to require or imply that the connections, arrangements, and configurations must be made in the manner shown. Those skilled in the art will appreciate that such devices, apparatuses, and systems may be connected, arranged, or configured in any manner. Words such as "comprising," "including," and "having" are open-ended terms meaning "including but not limited to," and may be used interchangeably with that phrase unless the context clearly dictates otherwise. The word "such as" is likewise used herein to mean, and interchangeably with, "such as but not limited to."
The previous description is provided to enable any person skilled in the art to make or use the present application. Various modifications to the above-described aspects will be readily apparent to those skilled in the art, and the generic principles defined herein may be applied to other aspects without departing from the scope of the application. Thus, the present application is not intended to be limited to the above aspects but is to be accorded the widest scope consistent with the principles and novel features disclosed herein.
The above description is intended to be illustrative and descriptive of the present technology. Furthermore, the description is not intended to limit embodiments of the application to the form disclosed above. While a number of exemplary aspects and embodiments have been discussed above, other variations, modifications, changes, additions, and sub-combinations will readily occur to those skilled in the art based on the foregoing. Such other variations, modifications, changes, additions, sub-combinations and the like, which fall within the spirit and principles of the present application, are intended to be included within the scope of the present application.

Claims (11)

1. An image generation method, comprising:
acquiring a first image located in front of the target position and a second image located behind the target position from the medical image sequence;
respectively performing wavelet decomposition on the first image and the second image to obtain a low-frequency sub-image and a high-frequency sub-image corresponding to the first image and a low-frequency sub-image and a high-frequency sub-image corresponding to the second image;
calculating to obtain a low-frequency interpolation result based on the low-frequency sub-image of the first image and the low-frequency sub-image of the second image;
calculating to obtain a high-frequency interpolation result based on the high-frequency sub-image of the first image and the high-frequency sub-image of the second image;
performing wavelet reconstruction on the low-frequency interpolation result and the high-frequency interpolation result to obtain a target image corresponding to the target position;
inserting the target image into the sequence of medical images according to the target position to obtain an augmented sequence of medical images,
wherein, the calculating to obtain a low-frequency interpolation result based on the low-frequency sub-image of the first image and the low-frequency sub-image of the second image comprises:
respectively performing wavelet decomposition on the low-frequency sub-image of the first image and the low-frequency sub-image of the second image to obtain a secondary low-frequency sub-image and a secondary high-frequency sub-image of each low-frequency sub-image;
obtaining a secondary low-frequency interpolation result through a first interpolation algorithm based on a secondary low-frequency sub-image corresponding to the first image and a secondary low-frequency sub-image corresponding to the second image;
obtaining a secondary high-frequency interpolation result through a second interpolation algorithm based on a secondary high-frequency sub-image corresponding to the first image and a secondary high-frequency sub-image corresponding to the second image;
and performing wavelet reconstruction on the secondary low-frequency interpolation result and the secondary high-frequency interpolation result to obtain the low-frequency interpolation result.
2. The image generation method according to claim 1,
the calculating to obtain a high-frequency interpolation result based on the high-frequency sub-image of the first image and the high-frequency sub-image of the second image includes:
and calculating to obtain the high-frequency interpolation result through the second interpolation algorithm based on the high-frequency sub-image of the first image and the high-frequency sub-image of the second image.
3. The image generation method according to claim 1, wherein the performing wavelet reconstruction on the low-frequency interpolation result and the high-frequency interpolation result to obtain a target image corresponding to the target position includes:
respectively filtering the low-frequency interpolation result and the high-frequency interpolation result to obtain a low-frequency filtering result and a high-frequency filtering result;
and performing wavelet reconstruction on the low-frequency filtering result and the high-frequency filtering result to obtain the target image.
4. The image generation method according to claim 1, wherein the performing wavelet reconstruction on the secondary low-frequency interpolation result and the secondary high-frequency interpolation result to obtain the low-frequency interpolation result includes:
respectively carrying out filtering processing on the secondary low-frequency interpolation result and the secondary high-frequency interpolation result to obtain a secondary low-frequency filtering result and a secondary high-frequency filtering result;
and performing wavelet reconstruction on the secondary low-frequency filtering result and the secondary high-frequency filtering result to obtain the low-frequency interpolation result.
5. The image generation method according to claim 4,
the wavelet reconstruction of the low-frequency interpolation result and the high-frequency interpolation result to obtain a target image corresponding to the target position includes:
filtering the high-frequency interpolation result to obtain a high-frequency filtering result;
and performing wavelet reconstruction on the low-frequency interpolation result and the high-frequency filtering result to obtain the target image.
6. The image generation method according to any one of claims 1 to 5, characterized in that:
the number of the first images is N, the number of the second images is M, wherein N is larger than or equal to 2, and M is larger than or equal to 2.
7. The image generation method according to claim 6, characterized in that: n = M = 2.
8. The image generation method according to any one of claims 1 to 5, characterized in that: the first interpolation algorithm is a three-dimensional interpolation algorithm, and the second interpolation algorithm is a two-dimensional interpolation algorithm.
9. An image generation apparatus, comprising:
an acquisition module for acquiring a first image located before a target position and a second image located after the target position from a medical image sequence;
the decomposition module is used for respectively carrying out wavelet decomposition on the first image and the second image to obtain a low-frequency sub-image and a high-frequency sub-image which correspond to the first image and a low-frequency sub-image and a high-frequency sub-image which correspond to the second image;
the low-frequency computing module is used for computing to obtain a low-frequency interpolation result based on the low-frequency sub-image of the first image and the low-frequency sub-image of the second image;
the high-frequency calculation module is used for calculating to obtain a high-frequency interpolation result based on the high-frequency sub-image of the first image and the high-frequency sub-image of the second image;
the reconstruction module is used for performing wavelet reconstruction on the low-frequency interpolation result and the high-frequency interpolation result to obtain a target image corresponding to the target position;
an insertion module for inserting the target image into the sequence of medical images according to the target position to obtain an augmented sequence of medical images,
wherein the low frequency calculation module comprises:
the secondary decomposition unit is used for respectively performing wavelet decomposition on the low-frequency sub-image of the first image and the low-frequency sub-image of the second image to obtain a secondary low-frequency sub-image and a secondary high-frequency sub-image of each low-frequency sub-image;
the secondary low-frequency computing unit is used for obtaining a secondary low-frequency interpolation result through a first interpolation algorithm based on a secondary low-frequency sub-image corresponding to the first image and a secondary low-frequency sub-image corresponding to the second image;
a secondary high-frequency calculating unit, configured to obtain a secondary high-frequency interpolation result through a second interpolation algorithm based on a secondary high-frequency sub-image corresponding to the first image and a secondary high-frequency sub-image corresponding to the second image;
and the secondary reconstruction unit is used for performing wavelet reconstruction on the secondary low-frequency interpolation result and the secondary high-frequency interpolation result to obtain the low-frequency interpolation result.
10. An image generation device characterized by comprising:
a memory for storing computer instructions;
a processor for executing computer instructions stored in the memory, the processor for performing the image generation method of any of claims 1-8 when the memory-stored computer instructions are executed.
11. A computer-readable storage medium having a program stored thereon, wherein the program, when executed, implements the image generation method according to any one of claims 1 to 8.
CN202111217070.9A 2021-10-19 2021-10-19 Image generation method and device Pending CN113660444A (en)

Priority Applications (1)

Application Number Priority Date Filing Date Title
CN202111217070.9A CN113660444A (en) 2021-10-19 2021-10-19 Image generation method and device


Publications (1)

Publication Number Publication Date
CN113660444A true CN113660444A (en) 2021-11-16


Citations (3)

* Cited by examiner, † Cited by third party
Publication number Priority date Publication date Assignee Title
US20170301095A1 (en) * 2015-12-31 2017-10-19 Shanghai United Imaging Healthcare Co., Ltd. Methods and systems for image processing
CN108171654A (en) * 2017-11-20 2018-06-15 西北大学 Chinese character image super resolution ratio reconstruction method with AF panel
CN109658354A (en) * 2018-12-20 2019-04-19 上海联影医疗科技有限公司 A kind of image enchancing method and system


Non-Patent Citations (2)

* Cited by examiner, † Cited by third party
Title
Wu, Shixiang: "Research on Inter-slice Interpolation Methods for Medical Images", China Master's Theses Full-text Database, Information Science and Technology Series *
Huang, Haiyun: "Wavelet-based Medical Image Interpolation", Acta Automatica Sinica *


Legal Events

Date Code Title Description
PB01 Publication
SE01 Entry into force of request for substantive examination
RJ01 Rejection of invention patent application after publication (application publication date: 20211116)