CN111275635A - Image processing method and device - Google Patents


Info

Publication number
CN111275635A
Authority
CN
China
Prior art keywords
image
optimized
displacement field
images
machine learning
Prior art date
Legal status (The legal status is an assumption and is not a legal conclusion. Google has not performed a legal analysis and makes no representation as to the accuracy of the status listed.)
Granted
Application number
CN202010034019.3A
Other languages
Chinese (zh)
Other versions
CN111275635B (en)
Inventor
王军搏 (Wang Junbo)
韩冬 (Han Dong)
Current Assignee (The listed assignees may be inaccurate. Google has not performed a legal analysis and makes no representation or warranty as to the accuracy of the list.)
Neusoft Medical Systems Co Ltd
Original Assignee
Shenyang Advanced Medical Equipment Technology Incubation Center Co Ltd
Priority date (The priority date is an assumption and is not a legal conclusion. Google has not performed a legal analysis and makes no representation as to the accuracy of the date listed.)
Filing date
Publication date
Application filed by Shenyang Advanced Medical Equipment Technology Incubation Center Co Ltd
Priority to CN202010034019.3A, granted as CN111275635B
Publication of CN111275635A
Application granted
Publication of CN111275635B
Active legal status
Anticipated expiration legal status

Classifications

    • G PHYSICS
    • G06 COMPUTING; CALCULATING OR COUNTING
    • G06T IMAGE DATA PROCESSING OR GENERATION, IN GENERAL
    • G06T 5/00 Image enhancement or restoration
    • G06T 5/70 Denoising; Smoothing
    • G06T 5/50 Image enhancement or restoration using two or more images, e.g. averaging or subtraction
    • G06T 2207/00 Indexing scheme for image analysis or image enhancement
    • G06T 2207/10 Image acquisition modality
    • G06T 2207/10072 Tomographic images
    • G06T 2207/10081 Computed x-ray tomography [CT]
    • G06T 2207/10116 X-ray image
    • G06T 2207/20 Special algorithmic details
    • G06T 2207/20081 Training; Learning

Landscapes

  • Physics & Mathematics (AREA)
  • General Physics & Mathematics (AREA)
  • Engineering & Computer Science (AREA)
  • Theoretical Computer Science (AREA)
  • Image Analysis (AREA)
  • Image Processing (AREA)

Abstract

Embodiments of the invention provide an image processing method and apparatus. An image to be optimized is determined; it is one frame of a sequence of consecutive frames of the same imaging target. From the sequence, at least one frame within a preset range is selected as a first image. For each first image, a trained machine learning model produces a displacement field between the first image and the image to be optimized, and the first image is spatially transformed according to that displacement field to obtain a second image in which every pixel point is aligned with the corresponding pixel point of the image to be optimized. Finally, the second images corresponding to all the first images are fused with the image to be optimized to obtain the target image.

Description

Image processing method and device
Technical Field
The present invention relates to the field of image processing technologies, and in particular, to an image processing method and apparatus.
Background
The medical imaging technology is a technology for collecting internal tissue structure or physiological metabolic information of a human body in a non-invasive manner by using various medical imaging devices, and related imaging devices include a CT (Computed Tomography) device, a magnetic resonance device, a PET (Positron Emission Tomography) device, an X-ray machine, an ultrasound device, and the like. Due to certain constraints, the medical images generated during or after image acquisition may suffer from reduced image quality.
Disclosure of Invention
In order to overcome the problems in the related art, the invention provides an image processing method and device, which can improve the image quality.
According to a first aspect of embodiments of the present invention, there is provided an image processing method, the method including:
determining an image to be optimized, wherein the image to be optimized is one frame image in continuous multi-frame images of the same imaging target;
selecting at least one frame image in a preset range from the multiple frame images as a first image;
for each first image, acquiring a displacement field between the first image and the image to be optimized by using a trained machine learning model;
according to the displacement field, performing spatial transformation on the first image to obtain a second image in which each pixel point is aligned with a corresponding pixel point of the image to be optimized;
and fusing the second images corresponding to all the first images with the image to be optimized to obtain a target image.
According to a second aspect of the embodiments of the present invention, there is provided an image processing apparatus including:
a determining module, configured to determine an image to be optimized, where the image to be optimized is one frame of consecutive multi-frame images of the same imaging target;
the selection module is used for selecting at least one frame image in a preset range from the multi-frame images as a first image;
a displacement field obtaining module, configured to obtain, for each first image, a displacement field between the first image and the image to be optimized by using a trained machine learning model;
the transformation module is used for carrying out space transformation on the first image according to the displacement field to obtain a second image in which each pixel point is aligned with the corresponding pixel point of the image to be optimized;
and the fusion module is used for fusing the second images corresponding to all the first images with the image to be optimized to obtain a target image.
The technical scheme provided by the embodiment of the invention has the following beneficial effects:
in the embodiment of the invention, the image to be optimized is determined as one frame of a sequence of consecutive frames of the same imaging target, and at least one frame within a preset range is selected from the sequence as a first image. For each first image, a trained machine learning model yields a displacement field between the first image and the image to be optimized, and the first image is spatially transformed according to that field into a second image whose pixel points are aligned with the corresponding pixel points of the image to be optimized. The second images corresponding to all the first images are then fused with the image to be optimized to obtain the target image, so that noise is suppressed, motion-induced blur is reduced, and image quality is improved.
It is to be understood that both the foregoing general description and the following detailed description are exemplary and explanatory only and are not restrictive of the specification.
Drawings
The accompanying drawings, which are incorporated in and constitute a part of this specification, illustrate embodiments consistent with the present specification and together with the description, serve to explain the principles of the specification.
Fig. 1 is a flowchart illustrating an image processing method according to an embodiment of the present invention.
FIG. 2 is an exemplary diagram of a displacement field provided by an embodiment of the present invention.
Fig. 3 is a functional block diagram of an image processing apparatus according to an embodiment of the present invention.
Fig. 4 is a hardware structure diagram of an electronic device according to an embodiment of the present invention.
Detailed Description
Reference will now be made in detail to the exemplary embodiments, examples of which are illustrated in the accompanying drawings. When the following description refers to the accompanying drawings, like numbers in different drawings represent the same or similar elements unless otherwise indicated. The embodiments described in the following exemplary embodiments do not represent all embodiments consistent with the present invention. Rather, they are merely examples of apparatus and methods consistent with certain aspects of embodiments of the invention, as detailed in the following claims.
The terminology used in the embodiments of the invention is for the purpose of describing particular embodiments only and is not intended to be limiting of embodiments of the invention. As used in the examples of the present invention and the appended claims, the singular forms "a," "an," and "the" are intended to include the plural forms as well, unless the context clearly indicates otherwise. It should also be understood that the term "and/or" as used herein refers to and encompasses any and all possible combinations of one or more of the associated listed items.
It should be understood that although the terms first, second, third, etc. may be used to describe various information in embodiments of the present invention, the information should not be limited by these terms. These terms are only used to distinguish one type of information from another. For example, first information may also be referred to as second information and, similarly, second information as first information, without departing from the scope of embodiments of the present invention. The word "if" as used herein may be interpreted as "upon", "when", or "in response to determining", depending on the context.
In the process of acquiring medical images, many factors can degrade image quality. Two cases are typical.
One is degradation caused by low dose. For example, in CT or X-ray scanning, the radiation dose is often reduced, by lowering the tube voltage or current, to limit the patient's radiation exposure; this increases image noise and lowers the signal-to-noise ratio, degrading image quality.
The other is degradation caused by motion. The patient or the device usually undergoes active or passive motion during data acquisition, producing image blur or artifacts and thus reduced image quality.
In the related art, image quality degradation caused by low dose is addressed by post-processing, for example filtering and denoising the image with a constructed filter function, or building a more accurate noise model to identify a specific noise distribution. Such methods usually compute on the current image alone, achieve only limited noise reduction, and can introduce image distortion, loss of detail, and similar problems.
For image quality degradation caused by motion, the related art adds sensors on the device side to track the motion and performs motion compensation during imaging based on the recorded motion information. This compensates simple rigid motion well, but cannot accurately compensate elastic motion such as patient peristalsis, leaving ghosting and blur, and it places high demands on device precision.
The image processing method of the present invention will be described below by way of examples.
Fig. 1 is a flowchart illustrating an image processing method according to an embodiment of the present invention. As shown in fig. 1, in this embodiment, the image processing method may include:
s101, determining an image to be optimized, wherein the image to be optimized is one frame image in continuous multi-frame images of the same imaging target.
S102, selecting at least one frame image in a preset range from the multiple frame images as a first image.
S103, for each first image, obtaining a displacement field between the first image and the image to be optimized by using a trained machine learning model.
And S104, performing spatial transformation on the first image according to the displacement field to obtain a second image in which each pixel point is aligned with the corresponding pixel point of the image to be optimized.
And S105, fusing the second images corresponding to all the first images with the image to be optimized to obtain a target image.
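As a concrete sketch of steps S101 to S105, the following Python fragment shows how the pieces fit together. The helper names `warp` and `optimize_frame` are hypothetical, the `model` callable stands in for the trained machine learning model, and nearest-neighbour resampling plus pixel-wise averaging are one simple choice of spatial transformation and fusion among those the description allows:

```python
import numpy as np

def warp(image, field_x, field_y):
    # S104: gather each output pixel from its displaced source position
    # (nearest-neighbour for brevity).
    h, w = image.shape
    ys, xs = np.mgrid[0:h, 0:w]
    src_y = np.clip(np.rint(ys + field_y).astype(int), 0, h - 1)
    src_x = np.clip(np.rint(xs + field_x).astype(int), 0, w - 1)
    return image[src_y, src_x]

def optimize_frame(frames, t, model, m=2, n=0):
    target = frames[t]                        # S101: image to be optimized
    first_images = [frames[i]                 # S102: frames in [t-m, t+n]
                    for i in range(t - m, t + n + 1)
                    if i != t and 0 <= i < len(frames)]
    # S103 + S104: estimate a displacement field per first image and
    # warp it into alignment with the target.
    aligned = [warp(img, *model(img, target)) for img in first_images]
    # S105: fuse the aligned second images with the target.
    return np.mean(aligned + [target], axis=0)
```

With m = 2 and n = 0, the two frames preceding the newest frame are aligned to it and averaged with it, matching the real-time case discussed below.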
It should be noted that, in the present embodiment, the multi-frame image may be a temporally continuous multi-frame image or a spatially continuous multi-frame image. The image to be optimized needs to be one of continuous multi-frame images, for example, one of continuous multi-frame images acquired at different times at the same position, or one of continuous multi-frame images acquired at adjacent spatial positions.
In a multi-frame sequence, adjacent frames overlap in content, and optimizing the image to be optimized operates on these overlapping portions.
The images in the multi-frame sequence may be images produced by an imaging device in the usual sense, or intermediate images generated during data acquisition or processing.
The image here may be a medical image or an image in another field.
In application, one frame of image can be selected from multiple frames of scanned images according to actual requirements to serve as an image to be optimized. For example, in a real-time imaging system, the image acquired most recently in time may be selected as the image to be optimized.
In step S102, the preset range may be a time period T in which the image to be optimized is located or a spatial range S in which the position of the image to be optimized is located.
As an example, let m be an integer greater than 1 and n a non-negative integer.
In the case where the multi-frame images are temporally continuous, the time point of the image to be optimized I_t is t, and the preset range may be the time interval [t-m, t+n], whose length is T. Besides the image to be optimized I_t, this interval contains the m+n frames at time points t-m, ..., t-2, t-1, t+1, t+2, ..., t+n, and these can be selected as first images. When n = 0, the image to be optimized I_t is the newest image; when n > 0, there is a delay of n frames.
In the case where the multi-frame images are spatially continuous, the position of the image to be optimized I_p is p, and the preset range may be the position interval [p-m, p+n], whose size is S. Besides the image to be optimized I_p, the spatial range S contains the m+n frames at positions p-m, ..., p-2, p-1, p+1, p+2, ..., p+n, and these can be selected as first images.
The above are merely examples of preset ranges and are not intended to limit them. In other embodiments, other preset ranges may be set as needed; for example, the preset range may cover the m+n frames that precede the image to be optimized in the sequence, or the m+n frames that follow it.
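The index bookkeeping for step S102 under such a preset range can be sketched as follows; `first_image_indices` is a hypothetical helper name, and clamping to the sequence boundaries for frames near the start or end is an assumption not spelled out in the description:

```python
def first_image_indices(t, m, n, num_frames):
    # Frames in the interval [t - m, t + n], excluding the image to be
    # optimized itself and any index outside the acquired sequence.
    return [i for i in range(t - m, t + n + 1)
            if i != t and 0 <= i < num_frames]
```

With n = 0 the newest frame is optimized with no delay; with n > 0 the selection reaches n frames into the future, matching the delay noted above.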
The second image is obtained by spatially transforming the first image, and each of its pixel points is aligned with the corresponding pixel point of the image to be optimized; that is, pixels with the same content structure occupy the same positions in the second image and in the image to be optimized.
Because the pixel positions of the image content structure in the second image match those in the image to be optimized, fusing the two superposes and amplifies the signal components while reducing the intensity of random noise. This is equivalent to raising the radiation dose or lengthening the scan time: it achieves more accurate noise suppression, improves the signal-to-noise ratio under low-dose scanning, and effectively suppresses or even eliminates motion-induced blur and artifacts, thereby improving image quality.
The fusion may use any fusion method from the related art. For example, one possible method adds the pixel values of the spatially transformed second images and the image to be optimized position by position and then averages them to obtain the fused target image.
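A small numerical check of the noise behaviour of this averaging fusion; the 64 x 64 frame size, the unit noise level, and the nine-frame count are arbitrary illustrative choices. Averaging k aligned frames carrying independent noise reduces the noise standard deviation by a factor of about sqrt(k):

```python
import numpy as np

rng = np.random.default_rng(0)
signal = np.zeros((64, 64))
# Nine perfectly aligned frames of the same scene, each corrupted by
# independent unit-variance Gaussian noise.
frames = [signal + rng.normal(0.0, 1.0, signal.shape) for _ in range(9)]
fused = np.mean(frames, axis=0)

# Averaging 9 frames scales the noise standard deviation by 1/sqrt(9),
# i.e. roughly from 1.0 down to about 1/3.
noise_before = float(np.std(frames[0]))
noise_after = float(np.std(fused))
```

This is the sense in which fusion is equivalent to raising the dose or lengthening the scan: the signal adds coherently while the noise averages out.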
In step S104, according to the displacement field, spatial transformation may be performed on all pixel points on the first image to obtain a second image.
The displacement field between the two frames of images refers to a matrix formed by the relative displacement of pixel points with consistent content structures in the two frames of images. The displacement field is described herein with reference to the accompanying drawings.
FIG. 2 is an exemplary diagram of a displacement field provided by an embodiment of the present invention. As shown in FIG. 2, the triangles in FIG. 2(a) and FIG. 2(b) are images of the same object captured at different times or at different positions. FIG. 2(a) shows the image to be optimized, I_t or I_p; each small square approximately represents one pixel, and the image contains a triangle whose vertices A, B and C lie at pixel positions i_13, i_51 and i_56 respectively. FIG. 2(b) shows a frame adjacent to the image to be optimized within the time period T or the spatial range S; it contains a triangle whose vertices A', B' and C' lie at pixel positions i'_02, i'_52 and i'_57 respectively. For simplicity of description, only these six pixel points are discussed. Relative to the points A, B and C of the image to be optimized, the displacements of A', B' and C' in the x direction are -1, 1 and 1, and in the y direction are -1, 0 and 0. These displacements can be represented by two image matrices (the displacement fields) of the same size as the image to be optimized: the x-direction field shown in FIG. 2(c) and the y-direction field shown in FIG. 2(d). Each value in these matrices indicates how far the corresponding pixel point must be shifted along x or y so that the adjacent frame is brought into the same form as the image to be optimized.
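The FIG. 2 example can be reproduced numerically as follows, assuming (row, column) pixel indexing and a gather convention in which each output pixel reads the neighbouring frame at its displaced position; the 6 x 8 frame size and the marker values are illustrative choices, not from the figure:

```python
import numpy as np

# Adjacent frame as in FIG. 2(b): vertices A', B', C' (marked with
# distinct values) at row/column positions (0,2), (5,2), (5,7).
neighbour = np.zeros((6, 8))
neighbour[0, 2] = 1.0   # A'
neighbour[5, 2] = 2.0   # B'
neighbour[5, 7] = 3.0   # C'

# Displacement fields, one matrix per axis, the same size as the image:
# nonzero entries are the shifts that carry A, B, C onto A', B', C'.
dx = np.zeros((6, 8))
dy = np.zeros((6, 8))
dy[1, 3], dx[1, 3] = -1, -1   # A at (1,3) corresponds to A' at (0,2)
dx[5, 1] = 1                  # B at (5,1) corresponds to B' at (5,2)
dx[5, 6] = 1                  # C at (5,6) corresponds to C' at (5,7)

# Gather: each output pixel reads the neighbour frame at its displaced
# position, pulling A', B', C' back onto the positions of A, B, C.
ys, xs = np.mgrid[0:6, 0:8]
src_y = np.clip((ys + dy).astype(int), 0, 5)
src_x = np.clip((xs + dx).astype(int), 0, 7)
aligned = neighbour[src_y, src_x]
```

After the gather, the three markers sit at the vertex positions of the image to be optimized, which is exactly the alignment the second image must satisfy.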
In this embodiment, the number of the obtained second images is the same as the number of the first images, and each frame of the first images corresponds to one frame of the second images. In addition, in this embodiment, the number of displacement fields to be acquired is the same as the number of first images, that is, when there are m + n frames of first images (where m frames of first images are images acquired before the image to be optimized, and n frames of first images are images acquired after the image to be optimized), m + n times of displacement fields need to be calculated.
In an exemplary implementation, obtaining a displacement field between the first image and the image to be optimized by using a trained machine learning model may include:
inputting the first image and the image to be optimized into a trained first machine learning model, and outputting a displacement field between the first image and the image to be optimized by the first machine learning model.
The size of the input image of the first machine learning model and the size of the image matrix corresponding to the output displacement field are the same as the size of the image to be optimized.
When the first machine learning model is trained with multiple groups of sample data, each group comprises an image to be optimized, one frame adjacent to it, and the displacement field between the two frames; the two images serve as the model's input and the displacement field serves as the label.
In the embodiment, the trained first machine learning model is used for acquiring the displacement field between the two frames of images, so that the speed is high and the efficiency is high.
In one exemplary implementation, obtaining a displacement field between the first image and the image to be optimized by using a trained machine learning model includes:
respectively carrying out down-sampling processing on the first image and the image to be optimized to obtain a first down-sampled image corresponding to the first image and a second down-sampled image corresponding to the image to be optimized;
inputting the first downsampled image and the second downsampled image into a trained second machine learning model to obtain an intermediate displacement field output by the second machine learning model;
and interpolating the intermediate displacement field to obtain a displacement field between the first image and the image to be optimized.
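A minimal sketch of that interpolation step with a hand-rolled bilinear upsampler; `upsample_field` is a hypothetical name, and rescaling the displacement values by the zoom factor is an assumption (the description only says the intermediate field is interpolated, but a field measured in downsampled pixels must be rescaled to full-resolution pixels):

```python
import numpy as np

def upsample_field(field_small, out_shape):
    # Bilinearly interpolate an intermediate displacement field up to
    # the full image size, assuming isotropic downsampling.
    h_s, w_s = field_small.shape
    h, w = out_shape
    ys = np.linspace(0.0, h_s - 1.0, h)
    xs = np.linspace(0.0, w_s - 1.0, w)
    y0 = np.clip(np.floor(ys).astype(int), 0, h_s - 2)
    x0 = np.clip(np.floor(xs).astype(int), 0, w_s - 2)
    wy = (ys - y0)[:, None]
    wx = (xs - x0)[None, :]
    f00 = field_small[y0][:, x0]
    f01 = field_small[y0][:, x0 + 1]
    f10 = field_small[y0 + 1][:, x0]
    f11 = field_small[y0 + 1][:, x0 + 1]
    up = ((1 - wy) * (1 - wx) * f00 + (1 - wy) * wx * f01
          + wy * (1 - wx) * f10 + wy * wx * f11)
    # Rescale shifts from downsampled pixels to full-resolution pixels.
    return up * (h / h_s)
```

For a 50 x 50 intermediate field and a 100 x 100 image to be optimized, the returned field is 100 x 100 with all displacement values doubled.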
In this embodiment, the size of the input images of the second machine learning model and the size of the image matrix corresponding to the output intermediate displacement field are both smaller than the size of the image to be optimized. The image matrix of the intermediate displacement field has the same size as the downsampled image to be optimized, and the displacement field between the first image and the image to be optimized is obtained by interpolating the intermediate displacement field.
When the second machine learning model is trained with multiple groups of sample data, each group comprises the downsampled image to be optimized, the downsampled adjacent frame, and the intermediate displacement field between the two downsampled frames.
For example, if the image to be optimized is 100 × 100 and becomes 50 × 50 after downsampling, then the input images of the second machine learning model are 50 × 50 and the image matrix corresponding to the intermediate displacement field it outputs is also 50 × 50.
By downsampling the images, this approach reduces the computation required of the second machine learning model, speeds up the model, accelerates the whole optimization process, and further improves efficiency.
In an exemplary implementation, obtaining a displacement field between the first image and the image to be optimized by using a trained machine learning model may include:
inputting the first image and the image to be optimized into a trained third machine learning model to obtain a sparse displacement field output by the third machine learning model;
and interpolating the sparse displacement field to obtain a displacement field between the first image and the image to be optimized.
And the size of the image matrix corresponding to the sparse displacement field is smaller than that of the image to be optimized.
In this embodiment, the size of the input image of the third machine learning model is the same as the size of the image to be optimized, and the size of the image matrix corresponding to the sparse displacement field output by the third machine learning model is smaller than the size of the image to be optimized.
When the third machine learning model is trained by using multiple groups of sample data, each group of sample data comprises an image to be optimized, a frame of image adjacent to the image to be optimized, and a sparse displacement field between the two frames of images.
Assuming the image to be optimized is 100 × 100, the input images of the third machine learning model are 100 × 100, and the image matrix corresponding to the sparse displacement field it outputs is smaller than 100 × 100.
According to the method, the third machine learning model for outputting the sparse displacement field is adopted, so that the calculated amount is reduced, the calculation speed of the model is increased, the processing speed of the whole optimization process is increased, and the efficiency is further improved.
In an exemplary implementation, obtaining a displacement field between the first image and the image to be optimized by using a trained machine learning model may include:
acquiring a first displacement field between the first image and an intermediate image, and acquiring a second displacement field between the intermediate image and the image to be optimized;
performing spatial transformation on the first image according to the displacement field to obtain a second image in which each pixel point is aligned with a corresponding pixel point of the image to be optimized, which may include:
acquiring an intermediate transformation image obtained by performing space transformation on all pixel points on the first image according to the first displacement field;
and according to the second displacement field, performing space transformation on all pixel points on the intermediate transformation image to obtain a second image.
This embodiment simplifies and accelerates the displacement-field computation by accumulation, improving processing speed. In the accumulation method, once a first image's displacement field has been computed and the image has been spatially transformed with it, subsequent displacement-field computation and spatial transformation are performed on the already-transformed image rather than on the original.
Taking an image sequence acquired in time order as an example: when the first image to be optimized, I_t, is processed, the displacement fields P_{t-m}, ..., P_{t-2}, P_{t-1}, P_{t+1}, P_{t+2}, ..., P_{t+n} are computed separately for all images I_{t-m}, ..., I_{t-2}, I_{t-1}, I_{t+1}, I_{t+2}, ..., I_{t+n} in the time period T, and each image is spatially transformed to obtain the transformed images T_{t-m}, ..., T_{t-2}, T_{t-1}, T_{t+1}, T_{t+2}, ..., T_{t+n}. The first optimization therefore requires computing the displacement field m+n times.
When the second image to be optimized, I_{t+1}, is processed, the spatially transformed images T_{t-m+1}, ..., T_{t-2}, T_{t-1} have already been aligned with I_t, so only the displacement fields P_t, P_{t+2}, P_{t+3}, ..., P_{t+n+1} between the current image to be optimized I_{t+1} and the images I_t, I_{t+2}, I_{t+3}, ..., I_{t+n+1} need to be computed. Applying the displacement field P_t to the previously transformed images T_{t-m+1}, ..., T_{t-2}, T_{t-1} and to the image I_t, and applying P_{t+2}, P_{t+3}, ..., P_{t+n+1} to the images I_{t+2}, I_{t+3}, ..., I_{t+n+1} respectively, yields the second images needed for the current image to be optimized I_{t+1}. The second optimization therefore requires computing the displacement field only 1+n times.
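The composition underlying this accumulation can be sketched numerically: a frame already aligned to I_t needs only one further warp by P_t to become aligned to I_{t+1}. For constant displacement fields the two gathers compose into the warp by their sum (`warp` is a hypothetical helper using nearest-neighbour gathering, and the single bright pixel is an illustrative stand-in for image content):

```python
import numpy as np

def warp(image, dx, dy):
    # Gather each output pixel from its displaced source position.
    h, w = image.shape
    ys, xs = np.mgrid[0:h, 0:w]
    return image[np.clip((ys + dy).astype(int), 0, h - 1),
                 np.clip((xs + dx).astype(int), 0, w - 1)]

img = np.zeros((5, 5))
img[2, 2] = 1.0
ones = np.ones((5, 5))
zeros = np.zeros((5, 5))

# Warping the already-warped frame again (the accumulation step) gives
# the same result as a single warp by the summed field.
step_by_step = warp(warp(img, ones, zeros), ones, zeros)
combined = warp(img, 2 * ones, zeros)
```

For general non-constant fields the composition requires resampling the second field along the first, which is exactly why reusing the already-transformed images T is cheaper than recomputing every field from scratch.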
In an exemplary implementation process, the method may further include:
obtaining a first optimized image obtained after the first image is optimized;
performing spatial transformation on the first optimized image to obtain a third image with each pixel point aligned with the corresponding pixel point of the image to be optimized;
fusing the second images corresponding to all the first images with the image to be optimized to obtain a target image, wherein the fusing comprises the following steps:
and fusing the second images corresponding to all the first images and the third images corresponding to all the first images with the image to be optimized to obtain a target image.
For example, suppose I_1 to I_10 form a temporally continuous acquired image sequence, and that image I_4 has already been optimized to obtain the optimized image M_4. When optimizing image I_5, images I_2, I_3, I_4 and I_6 are selected as first images, so displacement field 2 between I_5 and I_2, displacement field 3 between I_5 and I_3, displacement field 4 between I_5 and I_4, and displacement field 6 between I_5 and I_6 must each be computed. Then image I_2 is spatially transformed according to displacement field 2 to obtain image T_2; image I_3 according to displacement field 3 to obtain T_3; image I_4 and the optimized image M_4 according to displacement field 4 to obtain T_4 and F_4; and image I_6 according to displacement field 6 to obtain T_6. Fusing the images T_2, T_3, T_4, F_4 and T_6 with image I_5 yields the target image corresponding to I_5.
In this embodiment, previously optimized high-quality images are fused together with the image to be optimized, which can further improve image quality.
The image processing method provided by the embodiment of the invention determines an image to be optimized, one frame of a sequence of consecutive frames of the same imaging target, and selects at least one frame within a preset range from the sequence as a first image. For each first image, a trained machine learning model yields the displacement field between the first image and the image to be optimized, and the first image is spatially transformed according to that field into a second image whose pixel points are aligned with the corresponding pixel points of the image to be optimized. The second images corresponding to all the first images are fused with the image to be optimized to obtain the target image. In this way, noise in the image is weakened, the signal-to-noise ratio of low-dose scans is improved, image blur and distortion caused by motion are effectively suppressed, and image quality is improved.
For the condition of low-dose scanning, the image processing method provided by the embodiment of the invention can effectively improve the signal-to-noise ratio of the image, simultaneously avoid the introduction of image distortion and detail loss, and improve the image quality.
The image processing method provided by the embodiment of the invention effectively suppresses artifacts and blurring caused by motion, and is suitable not only for rigid motion but also for elastic motion. In addition, the method places no additional requirements on the precision of the equipment.
Based on the above method embodiment, the embodiment of the present invention further provides corresponding apparatus, device, and storage medium embodiments.
Fig. 3 is a functional block diagram of an image processing apparatus according to an embodiment of the present invention. As shown in fig. 3, in the present embodiment, the image processing apparatus may include:
a determining module 310, configured to determine an image to be optimized, where the image to be optimized is one of consecutive multi-frame images of the same imaging target;
a selecting module 320, configured to select, from the multiple frames of images, at least one frame of image within a preset range as a first image;
a displacement field obtaining module 330, configured to, for each first image, obtain a displacement field between the first image and the image to be optimized by using a trained machine learning model;
a transformation module 340, configured to perform spatial transformation on the first image according to the displacement field, so as to obtain a second image in which each pixel point is aligned with a corresponding pixel point of the image to be optimized;
and a fusion module 350, configured to fuse the second images corresponding to all the first images with the image to be optimized to obtain a target image.
In one exemplary implementation, obtaining a displacement field between the first image and the image to be optimized by using a trained machine learning model includes:
inputting the first image and the image to be optimized into a trained first machine learning model, and outputting a displacement field between the first image and the image to be optimized by the first machine learning model.
In one exemplary implementation, obtaining a displacement field between the first image and the image to be optimized by using a trained machine learning model includes:
respectively carrying out down-sampling processing on the first image and the image to be optimized to obtain a first down-sampled image corresponding to the first image and a second down-sampled image corresponding to the image to be optimized;
inputting the first downsampled image and the second downsampled image into a trained second machine learning model to obtain an intermediate displacement field output by the second machine learning model, wherein the size of the image matrix corresponding to the intermediate displacement field is the same as that of the downsampled image to be optimized;
and interpolating the intermediate displacement field to obtain a displacement field between the first image and the image to be optimized.
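The downsample-then-interpolate scheme above can be sketched as follows. Bilinear interpolation via `scipy.ndimage.zoom` is an illustrative assumption, as is the premise that the intermediate displacement field is expressed in downsampled-pixel units, so that its values must also be multiplied by the upsampling factor to be valid at full resolution.

```python
import numpy as np
from scipy.ndimage import zoom

def upsample_field(field_lr, factor):
    """Interpolate a low-resolution displacement field up to full resolution.

    `field_lr` has shape (2, h, w): per-pixel (dy, dx) offsets on the
    downsampled grid. Each component is interpolated bilinearly, and the
    offset values are scaled by `factor` so that they remain valid in
    full-resolution pixel units (an assumption about the field's units).
    """
    field_hr = np.stack([zoom(component, factor, order=1)
                         for component in field_lr])
    return field_hr * factor
```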
In one exemplary implementation, obtaining a displacement field between the first image and the image to be optimized by using a trained machine learning model includes:
inputting the first image and the image to be optimized into a trained third machine learning model to obtain a sparse displacement field output by the third machine learning model, wherein the size of an image matrix corresponding to the sparse displacement field is smaller than that of the image to be optimized;
and interpolating the sparse displacement field to obtain a displacement field between the first image and the image to be optimized.
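Interpolating the sparse displacement field up to a per-pixel field might look like the following sketch. It assumes the sparse field samples the full image on a regular coarse grid and that its offsets are already expressed in full-resolution pixel units, so only the grid itself needs densifying.

```python
import numpy as np
from scipy.interpolate import RegularGridInterpolator

def densify_sparse_field(sparse_field, out_shape):
    """Interpolate a sparse displacement field to a dense per-pixel field.

    `sparse_field` has shape (2, h, w), sampling the full image on a
    regular coarse grid; `out_shape` is the (H, W) size of the image to
    be optimized. Linear interpolation is used between grid samples.
    """
    _, h, w = sparse_field.shape
    H, W = out_shape
    # Coarse-grid sample positions expressed in full-resolution coordinates.
    ys = np.linspace(0, H - 1, h)
    xs = np.linspace(0, W - 1, w)
    yy, xx = np.meshgrid(np.arange(H), np.arange(W), indexing="ij")
    pts = np.stack([yy.ravel(), xx.ravel()], axis=1)
    dense = [RegularGridInterpolator((ys, xs), comp)(pts).reshape(H, W)
             for comp in sparse_field]
    return np.stack(dense)
```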
In one exemplary implementation, obtaining a displacement field between the first image and the image to be optimized by using a trained machine learning model includes:
acquiring a first displacement field between the first image and an intermediate image, and acquiring a second displacement field between the intermediate image and the image to be optimized;
the transformation module 340 may be specifically configured to:
acquiring an intermediate transformation image obtained by performing space transformation on all pixel points on the first image according to the first displacement field;
and according to the second displacement field, performing space transformation on all pixel points on the intermediate transformation image to obtain a second image.
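The two-stage transformation described above can be sketched as two successive warps. This is a hedged illustration: `warp` is a hypothetical helper using bilinear resampling, not part of the claimed method.

```python
import numpy as np
from scipy.ndimage import map_coordinates

def warp(image, field):
    """Resample `image` by a dense (2, H, W) field of (dy, dx) offsets."""
    h, w = image.shape
    yy, xx = np.meshgrid(np.arange(h), np.arange(w), indexing="ij")
    coords = np.stack([yy + field[0], xx + field[1]])
    return map_coordinates(image, coords, order=1, mode="nearest")

def two_stage_warp(image, first_field, second_field):
    """First image -> intermediate transformation image (first displacement
    field), then intermediate image -> second image (second field)."""
    intermediate = warp(image, first_field)
    return warp(intermediate, second_field)
```

One design consideration: since each warp resamples the image, a possible refinement is to compose the two displacement fields into a single field and resample only once, avoiding accumulated interpolation blur.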
In an exemplary implementation process, the apparatus may further include:
the acquisition module is used for acquiring a first optimized image obtained after the first image is optimized;
the optimization transformation module is used for carrying out space transformation on the first optimization image to obtain a third image with each pixel point aligned with the corresponding pixel point of the image to be optimized;
the fusion module 350 is specifically configured to: and fusing the second images corresponding to all the first images and the third images corresponding to all the first images with the image to be optimized to obtain a target image.
In an exemplary implementation process, the preset range is a time period T in which the image to be optimized is located or a spatial range S in which a position of the image to be optimized is located.
In one exemplary implementation, the multi-frame image is a temporally continuous multi-frame image or a spatially continuous multi-frame image.
An embodiment of the present invention further provides an electronic device. Fig. 4 is a hardware structure diagram of an electronic device according to an embodiment of the present invention. As shown in fig. 4, the electronic device includes: an internal bus 401, and a memory 402, a processor 403, and an external interface 404 connected through the internal bus, wherein,
the processor 403 is configured to read the machine-readable instructions in the memory 402 and execute the instructions to implement the following operations:
determining an image to be optimized, wherein the image to be optimized is one frame image in continuous multi-frame images of the same imaging target;
selecting at least one frame image in a preset range from the multiple frame images as a first image;
for each first image, acquiring a displacement field between the first image and the image to be optimized by using a trained machine learning model;
according to the displacement field, performing spatial transformation on the first image to obtain a second image in which each pixel point is aligned with a corresponding pixel point of the image to be optimized;
and fusing the second images corresponding to all the first images with the image to be optimized to obtain the target image.
In one exemplary implementation, obtaining a displacement field between the first image and the image to be optimized by using a trained machine learning model includes:
inputting the first image and the image to be optimized into a trained first machine learning model, and outputting a displacement field between the first image and the image to be optimized by the first machine learning model.
In one exemplary implementation, obtaining a displacement field between the first image and the image to be optimized by using a trained machine learning model includes:
respectively carrying out down-sampling processing on the first image and the image to be optimized to obtain a first down-sampled image corresponding to the first image and a second down-sampled image corresponding to the image to be optimized;
inputting the first downsampled image and the second downsampled image into a trained second machine learning model to obtain an intermediate displacement field output by the second machine learning model, wherein the size of the image matrix corresponding to the intermediate displacement field is the same as that of the downsampled image to be optimized;
and interpolating the intermediate displacement field to obtain a displacement field between the first image and the image to be optimized.
In one exemplary implementation, obtaining a displacement field between the first image and the image to be optimized by using a trained machine learning model includes:
inputting the first image and the image to be optimized into a trained third machine learning model to obtain a sparse displacement field output by the third machine learning model, wherein the size of an image matrix corresponding to the sparse displacement field is smaller than that of the image to be optimized;
and interpolating the sparse displacement field to obtain a displacement field between the first image and the image to be optimized.
In one exemplary implementation, obtaining a displacement field between the first image and the image to be optimized by using a trained machine learning model includes:
acquiring a first displacement field between the first image and an intermediate image, and acquiring a second displacement field between the intermediate image and the image to be optimized;
according to the displacement field, performing spatial transformation on the first image to obtain a second image in which each pixel point is aligned with a corresponding pixel point of the image to be optimized, including:
acquiring an intermediate transformation image obtained by performing space transformation on all pixel points on the first image according to the first displacement field;
and according to the second displacement field, performing space transformation on all pixel points on the intermediate transformation image to obtain a second image.
In an exemplary implementation, the method further includes:
obtaining a first optimized image obtained after the first image is optimized;
performing spatial transformation on the first optimized image to obtain a third image with each pixel point aligned with the corresponding pixel point of the image to be optimized;
fusing the second images corresponding to all the first images with the image to be optimized to obtain a target image, wherein the fusing comprises the following steps:
and fusing the second images corresponding to all the first images and the third images corresponding to all the first images with the image to be optimized to obtain a target image.
In an exemplary implementation process, the preset range is a time period T in which the image to be optimized is located or a spatial range S in which a position of the image to be optimized is located.
In one exemplary implementation, the multi-frame image is a temporally continuous multi-frame image or a spatially continuous multi-frame image.
An embodiment of the present invention further provides a computer-readable storage medium, on which a computer program is stored, where the program, when executed by a processor, implements the following operations:
determining an image to be optimized, wherein the image to be optimized is one frame image in continuous multi-frame images of the same imaging target;
selecting at least one frame image in a preset range from the multiple frame images as a first image;
for each first image, acquiring a displacement field between the first image and the image to be optimized by using a trained machine learning model;
according to the displacement field, performing spatial transformation on the first image to obtain a second image in which each pixel point is aligned with a corresponding pixel point of the image to be optimized;
and fusing the second images corresponding to all the first images with the image to be optimized to obtain a target image.
In one exemplary implementation, obtaining a displacement field between the first image and the image to be optimized by using a trained machine learning model includes:
inputting the first image and the image to be optimized into a trained first machine learning model, and outputting a displacement field between the first image and the image to be optimized by the first machine learning model.
In one exemplary implementation, obtaining a displacement field between the first image and the image to be optimized by using a trained machine learning model includes:
respectively carrying out down-sampling processing on the first image and the image to be optimized to obtain a first down-sampled image corresponding to the first image and a second down-sampled image corresponding to the image to be optimized;
inputting the first downsampled image and the second downsampled image into a trained second machine learning model to obtain an intermediate displacement field output by the second machine learning model, wherein the size of the image matrix corresponding to the intermediate displacement field is the same as that of the downsampled image to be optimized;
and interpolating the intermediate displacement field to obtain a displacement field between the first image and the image to be optimized.
In one exemplary implementation, obtaining a displacement field between the first image and the image to be optimized by using a trained machine learning model includes:
inputting the first image and the image to be optimized into a trained third machine learning model to obtain a sparse displacement field output by the third machine learning model, wherein the size of an image matrix corresponding to the sparse displacement field is smaller than that of the image to be optimized;
and interpolating the sparse displacement field to obtain a displacement field between the first image and the image to be optimized.
In one exemplary implementation, obtaining a displacement field between the first image and the image to be optimized by using a trained machine learning model includes:
acquiring a first displacement field between the first image and an intermediate image, and acquiring a second displacement field between the intermediate image and the image to be optimized;
according to the displacement field, performing spatial transformation on the first image to obtain a second image in which each pixel point is aligned with a corresponding pixel point of the image to be optimized, including:
acquiring an intermediate transformation image obtained by performing space transformation on all pixel points on the first image according to the first displacement field;
and according to the second displacement field, performing space transformation on all pixel points on the intermediate transformation image to obtain a second image.
In an exemplary implementation, the method further includes:
obtaining a first optimized image obtained after the first image is optimized;
performing spatial transformation on the first optimized image to obtain a third image with each pixel point aligned with the corresponding pixel point of the image to be optimized;
fusing the second images corresponding to all the first images with the image to be optimized to obtain a target image, wherein the fusing comprises the following steps:
and fusing the second images corresponding to all the first images and the third images corresponding to all the first images with the image to be optimized to obtain a target image.
In an exemplary implementation process, the preset range is a time period T in which the image to be optimized is located or a spatial range S in which a position of the image to be optimized is located.
In one exemplary implementation, the multi-frame image is a temporally continuous multi-frame image or a spatially continuous multi-frame image.
For the apparatus and device embodiments, since they substantially correspond to the method embodiments, reference may be made to the description of the method embodiments for relevant details. The apparatus embodiments described above are merely illustrative: the modules described as separate parts may or may not be physically separate, and the parts displayed as modules may or may not be physical modules; they may be located in one place or distributed over a plurality of network units. Some or all of the modules may be selected according to actual needs to achieve the purpose of the solution in this specification. Persons of ordinary skill in the art can understand and implement the embodiments without inventive effort.
The foregoing description has been directed to specific embodiments of this disclosure. Other embodiments are within the scope of the following claims. In some cases, the actions or steps recited in the claims may be performed in a different order than in the embodiments and still achieve desirable results. In addition, the processes depicted in the accompanying figures do not necessarily require the particular order shown, or sequential order, to achieve desirable results. In some embodiments, multitasking and parallel processing may also be possible or may be advantageous.
Other embodiments of the present description will be apparent to those skilled in the art from consideration of the specification and practice of the invention disclosed herein. This specification is intended to cover any variations, uses, or adaptations of the specification following, in general, the principles of the specification and including such departures from the present disclosure as come within known or customary practice within the art to which the specification pertains. It is intended that the specification and examples be considered as exemplary only, with a true scope and spirit of the specification being indicated by the following claims.
It will be understood that the present description is not limited to the precise arrangements described above and shown in the drawings, and that various modifications and changes may be made without departing from the scope thereof. The scope of the present description is limited only by the appended claims.
The above description is only a preferred embodiment of the present disclosure, and should not be taken as limiting the present disclosure, and any modifications, equivalents, improvements, etc. made within the spirit and principle of the present disclosure should be included in the scope of the present disclosure.

Claims (9)

1. An image processing method, characterized in that the method comprises:
determining an image to be optimized, wherein the image to be optimized is one frame image in continuous multi-frame images of the same imaging target;
selecting at least one frame image in a preset range from the multiple frame images as a first image;
for each first image, acquiring a displacement field between the first image and the image to be optimized by using a trained machine learning model;
according to the displacement field, performing spatial transformation on the first image to obtain a second image in which each pixel point is aligned with a corresponding pixel point of the image to be optimized;
and fusing the second images corresponding to all the first images with the image to be optimized to obtain a target image.
2. The method of claim 1, wherein obtaining a displacement field between the first image and the image to be optimized using a trained machine learning model comprises:
inputting the first image and the image to be optimized into a trained first machine learning model, and outputting a displacement field between the first image and the image to be optimized by the first machine learning model.
3. The method of claim 1, wherein obtaining a displacement field between the first image and the image to be optimized using a trained machine learning model comprises:
respectively carrying out down-sampling processing on the first image and the image to be optimized to obtain a first down-sampled image corresponding to the first image and a second down-sampled image corresponding to the image to be optimized;
inputting the first downsampled image and the second downsampled image into a trained second machine learning model to obtain an intermediate displacement field output by the second machine learning model, wherein the size of the image matrix corresponding to the intermediate displacement field is the same as that of the downsampled image to be optimized;
and interpolating the intermediate displacement field to obtain a displacement field between the first image and the image to be optimized.
4. The method of claim 1, wherein obtaining a displacement field between the first image and the image to be optimized using a trained machine learning model comprises:
inputting the first image and the image to be optimized into a trained third machine learning model to obtain a sparse displacement field output by the third machine learning model, wherein the size of an image matrix corresponding to the sparse displacement field is smaller than that of the image to be optimized;
and interpolating the sparse displacement field to obtain a displacement field between the first image and the image to be optimized.
5. The method of claim 1, wherein obtaining a displacement field between the first image and the image to be optimized using a trained machine learning model comprises:
acquiring a first displacement field between the first image and an intermediate image, and acquiring a second displacement field between the intermediate image and the image to be optimized;
according to the displacement field, performing spatial transformation on the first image to obtain a second image in which each pixel point is aligned with a corresponding pixel point of the image to be optimized, including:
acquiring an intermediate transformation image obtained by performing space transformation on all pixel points on the first image according to the first displacement field;
and according to the second displacement field, performing space transformation on all pixel points on the intermediate transformation image to obtain a second image.
6. The method of claim 1, further comprising:
obtaining a first optimized image obtained after the first image is optimized;
performing spatial transformation on the first optimized image to obtain a third image with each pixel point aligned with the corresponding pixel point of the image to be optimized;
fusing the second images corresponding to all the first images with the image to be optimized to obtain a target image, wherein the fusing comprises the following steps:
and fusing the second images corresponding to all the first images and the third images corresponding to all the first images with the image to be optimized to obtain a target image.
7. The method according to claim 1, wherein the preset range is a time period T in which the image to be optimized is located or a spatial range S in which a position of the image to be optimized is located.
8. The method according to claim 1, wherein the multi-frame image is a temporally continuous multi-frame image or a spatially continuous multi-frame image.
9. An image processing apparatus, characterized in that the apparatus comprises:
the device comprises a determining module, a judging module and a judging module, wherein the determining module is used for determining an image to be optimized, and the image to be optimized is one frame image in continuous multi-frame images with the same imaging target;
the selection module is used for selecting at least one frame image in a preset range from the multi-frame images as a first image;
a displacement field obtaining module, configured to obtain, for each first image, a displacement field between the first image and the image to be optimized by using a trained machine learning model;
the transformation module is used for carrying out space transformation on the first image according to the displacement field to obtain a second image in which each pixel point is aligned with the corresponding pixel point of the image to be optimized;
and the fusion module is used for fusing the second images corresponding to all the first images with the image to be optimized to obtain a target image.
CN202010034019.3A 2020-01-13 2020-01-13 Image processing method and device Active CN111275635B (en)

Priority Applications (1)

Application Number Priority Date Filing Date Title
CN202010034019.3A CN111275635B (en) 2020-01-13 2020-01-13 Image processing method and device

Applications Claiming Priority (1)

Application Number Priority Date Filing Date Title
CN202010034019.3A CN111275635B (en) 2020-01-13 2020-01-13 Image processing method and device

Publications (2)

Publication Number Publication Date
CN111275635A true CN111275635A (en) 2020-06-12
CN111275635B CN111275635B (en) 2024-03-08

Family

ID=71000189

Family Applications (1)

Application Number Title Priority Date Filing Date
CN202010034019.3A Active CN111275635B (en) 2020-01-13 2020-01-13 Image processing method and device

Country Status (1)

Country Link
CN (1) CN111275635B (en)


Family Cites Families (3)

* Cited by examiner, † Cited by third party
Publication number Priority date Publication date Assignee Title
CN106909904B (en) * 2017-03-02 2020-06-02 中科视拓(北京)科技有限公司 Human face obverse method based on learnable deformation field
US10699410B2 * 2017-08-17 2020-06-30 Siemens Healthcare GmbH Automatic change detection in medical images
CN110503619B (en) * 2019-06-27 2021-09-03 北京奇艺世纪科技有限公司 Image processing method, device and readable storage medium

Cited By (2)

* Cited by examiner, † Cited by third party
Publication number Priority date Publication date Assignee Title
CN112330729A (en) * 2020-11-27 2021-02-05 中国科学院深圳先进技术研究院 Image depth prediction method and device, terminal device and readable storage medium
CN112330729B (en) * 2020-11-27 2024-01-12 中国科学院深圳先进技术研究院 Image depth prediction method, device, terminal equipment and readable storage medium

Also Published As

Publication number Publication date
CN111275635B (en) 2024-03-08


Legal Events

Date Code Title Description
PB01 Publication
PB01 Publication
SE01 Entry into force of request for substantive examination
SE01 Entry into force of request for substantive examination
TA01 Transfer of patent application right
TA01 Transfer of patent application right

Effective date of registration: 20240204

Address after: 110167 No. 177-1 Innovation Road, Hunnan District, Shenyang City, Liaoning Province

Applicant after: Shenyang Neusoft Medical Systems Co.,Ltd.

Country or region after: China

Address before: Room 336, 177-1, Chuangxin Road, Hunnan New District, Shenyang City, Liaoning Province

Applicant before: Shenyang advanced medical equipment Technology Incubation Center Co.,Ltd.

Country or region before: China

GR01 Patent grant
GR01 Patent grant