WO2023082306A1 - Image processing method and apparatus, and electronic device and computer-readable storage medium - Google Patents


Info

Publication number
WO2023082306A1
WO2023082306A1 (PCT/CN2021/131635; CN2021131635W)
Authority
WO
WIPO (PCT)
Prior art keywords
image
matrix
initial
feature points
dimensional feature
Prior art date
Application number
PCT/CN2021/131635
Other languages
French (fr)
Chinese (zh)
Inventor
李鑫宇
肖鹏
兰嘉友
董超群
谢庆国
Original Assignee
苏州瑞派宁科技有限公司
Priority date
Filing date
Publication date
Application filed by 苏州瑞派宁科技有限公司 filed Critical 苏州瑞派宁科技有限公司
Publication of WO2023082306A1 publication Critical patent/WO2023082306A1/en

Classifications

    • GPHYSICS
    • G06COMPUTING; CALCULATING OR COUNTING
    • G06TIMAGE DATA PROCESSING OR GENERATION, IN GENERAL
    • G06T7/00Image analysis
    • G06T7/0002Inspection of images, e.g. flaw detection
    • G06T7/0012Biomedical image inspection
    • GPHYSICS
    • G06COMPUTING; CALCULATING OR COUNTING
    • G06TIMAGE DATA PROCESSING OR GENERATION, IN GENERAL
    • G06T11/00 2D [Two Dimensional] image generation
    • G06T11/003Reconstruction from projections, e.g. tomography
    • G06T11/006Inverse problem, transformation from projection-space into object-space, e.g. transform methods, back-projection, algebraic methods
    • GPHYSICS
    • G06COMPUTING; CALCULATING OR COUNTING
    • G06TIMAGE DATA PROCESSING OR GENERATION, IN GENERAL
    • G06T7/00Image analysis
    • G06T7/30Determination of transform parameters for the alignment of images, i.e. image registration
    • G06T7/33Determination of transform parameters for the alignment of images, i.e. image registration using feature-based methods
    • GPHYSICS
    • G06COMPUTING; CALCULATING OR COUNTING
    • G06TIMAGE DATA PROCESSING OR GENERATION, IN GENERAL
    • G06T7/00Image analysis
    • G06T7/60Analysis of geometric attributes
    • G06T7/62Analysis of geometric attributes of area, perimeter, diameter or volume
    • GPHYSICS
    • G06COMPUTING; CALCULATING OR COUNTING
    • G06TIMAGE DATA PROCESSING OR GENERATION, IN GENERAL
    • G06T2207/00Indexing scheme for image analysis or image enhancement
    • G06T2207/10Image acquisition modality
    • G06T2207/10072Tomographic images
    • GPHYSICS
    • G06COMPUTING; CALCULATING OR COUNTING
    • G06TIMAGE DATA PROCESSING OR GENERATION, IN GENERAL
    • G06T2207/00Indexing scheme for image analysis or image enhancement
    • G06T2207/10Image acquisition modality
    • G06T2207/10072Tomographic images
    • G06T2207/10104Positron emission tomography [PET]
    • GPHYSICS
    • G06COMPUTING; CALCULATING OR COUNTING
    • G06TIMAGE DATA PROCESSING OR GENERATION, IN GENERAL
    • G06T2207/00Indexing scheme for image analysis or image enhancement
    • G06T2207/30Subject of image; Context of image processing
    • G06T2207/30004Biomedical image processing
    • G06T2207/30101Blood vessel; Artery; Vein; Vascular
    • GPHYSICS
    • G06COMPUTING; CALCULATING OR COUNTING
    • G06TIMAGE DATA PROCESSING OR GENERATION, IN GENERAL
    • G06T2211/00Image generation
    • G06T2211/40Computed tomography
    • G06T2211/404Angiography
    • GPHYSICS
    • G06COMPUTING; CALCULATING OR COUNTING
    • G06TIMAGE DATA PROCESSING OR GENERATION, IN GENERAL
    • G06T2211/00Image generation
    • G06T2211/40Computed tomography
    • G06T2211/424Iterative

Definitions

  • the present application relates to the field of image processing, and in particular, relates to an image processing method, device, electronic device and computer-readable storage medium for rigid body motion.
  • non-rigid motion mainly refers to small-amplitude motion, for example, the movement of the mouse's trunk
  • rigid motion mainly refers to large-amplitude motion, such as the movement of the mouse's head.
  • For the correction of rigid motion, the following two methods are mainly used: the motion correction method based on the projection domain and the motion correction method based on the image domain.
  • Motion correction methods based on the image domain usually use the Motion Incorporated Reconstruction (MIR) algorithm combined with motion information.
  • the MIR algorithm was proposed by Feng Qiao and Tinsu Pan et al. in 2006. It is a voxel-based motion correction algorithm.
  • the MIR algorithm is based on two assumptions: first, that the imaging target moves only between adjacent data frames and remains stationary within each data frame; second, that the imaging target is composed of countless spatial points of infinitesimal size, and that the radioactivity concentration at these points does not change as the points move. That is, during data acquisition the radioactivity concentration in each part of the imaging target is constant, and the concentration is merely redistributed in space as the imaging target moves.
  • the image processing process using the motion correction method based on the image domain includes the following steps.
  • the mapping relationship matrix M_t between image voxels is obtained, also called the motion matrix.
  • the registration of CT images is realized by using a non-rigid transformation model based on cubic B-splines and a mean square error similarity criterion.
  • the motion matrix M_t and the distribution of the radioactivity concentration in the field of view (Field of View, FOV) area satisfy formula (1).
  • X_t represents the distribution of the radioactivity concentration in the FOV area at time t
  • X_0 represents the distribution of the radioactivity concentration in the FOV area at time 0.
  • Y is a one-dimensional vector representing the projection data
  • E(Y) is the expected value of the projection data Y
  • X is a one-dimensional vector composed of image voxels
  • G is the system response matrix.
  • two-dimensional and three-dimensional images can likewise be flattened into one-dimensional vectors for the above-mentioned processing.
  • the system response matrix G can be calculated by actual measurement, Monte Carlo (MC) simulation or mathematical analysis.
  • Calculating the system response matrix G using the actual measurement method includes: measuring the same point source at different positions of the PET system imaging field of view, and then constructing the entire system response matrix G by processing the responses in projection space; these responses include the geometric information of the system and the physics of the detection process, and the responses are parameterized to correct for point-source locations and to smooth projection noise.
  • Calculating the system response matrix G using the Monte Carlo simulation method includes: simulating each voxel in the field of view according to its size and position in space, and taking the ratio of the coincidence count to the total number of emitted photons as the probability that the voxel is detected by the current line of response (LOR); this yields the system response matrix G.
  • Line tracing is a simple and effective method for calculating the system response matrix G. This method abstracts the response line as an actual ray, and takes the length of the ray passing through each voxel as the probability that the photon emitted by the voxel is detected by the response line, so as to obtain the system response matrix G.
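  • The line-tracing method can be illustrated with a minimal two-dimensional sketch (hypothetical Python/NumPy code; the unit-pixel grid and the function name are assumptions, not part of the disclosure). Each entry of the resulting row is the chord length of the response line through one pixel, i.e. one row of a line-traced system matrix:

```python
import numpy as np

def line_trace_row(n, p0, p1):
    """Length of the segment p0->p1 inside each cell of an n x n unit-pixel grid.

    Returns a dense (n, n) array: one row of a line-traced system matrix,
    where each entry is the chord length of the response line through that pixel.
    """
    p0, p1 = np.asarray(p0, float), np.asarray(p1, float)
    d = p1 - p0
    # Parameter values where the ray crosses vertical and horizontal grid lines.
    ts = [0.0, 1.0]
    for axis in (0, 1):
        if d[axis] != 0.0:
            t = (np.arange(n + 1) - p0[axis]) / d[axis]
            ts.extend(t[(t > 0.0) & (t < 1.0)])
    ts = np.unique(ts)
    row = np.zeros((n, n))
    seg_len = np.linalg.norm(d)
    for a, b in zip(ts[:-1], ts[1:]):
        mid = p0 + 0.5 * (a + b) * d        # segment midpoint identifies the pixel
        i, j = int(mid[1]), int(mid[0])     # row = y index, col = x index
        if 0 <= i < n and 0 <= j < n:
            row[i, j] = (b - a) * seg_len
    return row
```

For a horizontal ray through the middle of the grid, every traversed pixel receives a chord length equal to the pixel width, and the row sums to the total ray length inside the field of view.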
  • the solid angle model is a more accurate and more complex model than the line tracing method.
  • the solid angle formed by each voxel in space with a pair of detectors is calculated and taken as the probability that the photons emitted by that voxel are detected by the pair of detectors, so as to obtain the system response matrix G.
  • the solid angle model is more in line with the actual PET imaging process.
  • formula (2) is modified into formula (3).
  • Equation (2) can be modified to the discrete form shown in Equation (4).
  • G* = [(GM_0)^T, (GM_1)^T, ..., (GM_{m-1})^T]^T    formula (6)
  • Equation (4) can be rewritten as Equation (7), which is consistent with the form of Equation (2) of PET imaging model for stationary objects.
  • the 4D projection data Y* of formula (5), composed of the data y* of each data frame, and the motion-incorporating system response matrix G* of formula (6) replace the system response matrix and the projection data in formula (2). Therefore, the voxel-based iterative reconstruction algorithm MLEM, which is applied to the stationary-object PET imaging model of formula (2), is also applicable to formula (7).
  • Y_i is the actually measured coincidence-event count on the i-th line of response (LOR)
  • G_ij represents the probability that a gamma photon emitted by voxel j is detected by the scintillation crystals at both ends of the i-th LOR
  • M represents the total number of lines of response (LOR)
  • N represents the total number of voxels
  • j′ is the voxel index of the inner summation
  • X^k is the estimate of the reference-position image obtained at the k-th iteration
  • S is the sensitivity image, which can be calculated by formula (10).
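  • The MLEM update described above can be sketched in matrix form as follows (an illustrative Python/NumPy sketch; the function name, the dense system matrix, and the uniform initialization are assumptions, not the disclosed implementation):

```python
import numpy as np

def mlem(G, Y, n_iter=50, eps=1e-12):
    """MLEM iteration for the stationary-object model E(Y) = G X.

    Update: X_j <- X_j / S_j * sum_i G_ij * Y_i / sum_j' G_ij' X_j',
    with sensitivity image S_j = sum_i G_ij.
    """
    n_vox = G.shape[1]
    X = np.ones(n_vox)                  # uniform initial estimate
    S = G.sum(axis=0)                   # sensitivity image
    for _ in range(n_iter):
        proj = G @ X                    # forward projection, sum_j' G_ij' X_j'
        ratio = Y / np.maximum(proj, eps)
        X = X / np.maximum(S, eps) * (G.T @ ratio)
    return X
```

For the motion-incorporated model, the per-frame matrices GM_t and the frame data are stacked as in formulas (5) and (6), and the same update is run on G* and Y*.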
  • the disadvantage of the existing image-correction technology is that, when obtaining the mapping relationship matrix M_t, a single medical imaging device is not sufficient: an additional 4D CT device is required for the CT scan, and the movement of the object in the two devices needs to be consistent. This not only increases the cost of imaging, but also limits the algorithm to regular small-amplitude movements such as breathing and heartbeat. It is also difficult to ensure that the movement of the imaging target is completely consistent across the two devices, and image registration usually involves an optimization process with a large number of parameters. Therefore, the method of obtaining mapping information between voxels through image registration is only suitable for small-amplitude motion correction, and the accuracy of the obtained motion information cannot be guaranteed when the motion range is large. In addition, registration using CT images requires a large amount of computation and its accuracy is not high.
  • This application proposes an image processing method, device, electronic device and computer-readable storage medium for rigid body motion, to solve the problems of high acquisition cost, complex calculation and low accuracy when acquiring motion information of a moving imaging target.
  • an image processing method includes: photographing an imaging target and acquiring an initial frame image and continuous frame images; calculating initial three-dimensional feature points of the initial frame image; calculating continuous-frame three-dimensional feature points of the continuous frame images; matching the initial three-dimensional feature points with the continuous-frame three-dimensional feature points to obtain an RT matrix; and using the RT matrix to correct the image to obtain a reconstructed image.
  • the image recording device is used to shoot the imaging target and obtain the initial frame image and continuous frame images.
  • the image recording device includes any device that can record a planar image or a 3D image of the imaging target, such as a camera, DV, camcorder, or 3D scanner.
  • a pair of image recording devices of the same model are placed in parallel on both sides of the imaging target to form a binocular vision system to photograph the imaging target.
  • the image recording device is used to record the motion of the imaging target in the form of a video
  • the video includes an initial frame image at the initial moment and continuous frame images at non-initial moments, wherein the initial frame image and the continuous frame images each comprise images captured by the pair of image recording devices at the same moment.
  • the step of calculating the initial three-dimensional feature points of the initial frame image includes: using a scale-invariant feature transform method to extract two-dimensional feature points of the initial frame image; removing the background from the two-dimensional feature points of the initial frame image; matching the background-removed two-dimensional feature points of the initial frame image; and triangulating the matching information, thereby obtaining the initial three-dimensional feature points of the initial frame image.
  • the step of calculating the continuous-frame three-dimensional feature points of the continuous frame images includes: using a scale-invariant feature transform method to extract two-dimensional feature points of the continuous frame images; removing the background from the two-dimensional feature points of the continuous frame images; matching the background-removed two-dimensional feature points of the continuous frame images; and triangulating the matching information, thereby obtaining the continuous-frame three-dimensional feature points of the continuous frame images.
  • the step of triangulating the matching information includes: calculating the depth Z of any point P on the imaging target in the image-recording-device coordinates by formula (7-1); and calculating the abscissa X and ordinate Y of the point P in the camera coordinates by formula (7-2);
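  • For a rectified parallel binocular setup such as the one described above, the triangulation step can be sketched as follows. This is an illustrative Python sketch using the standard parallel-stereo relations Z = f·b/(x_l − x_r), X = Z·x_l/f, Y = Z·y_l/f; since formulas (7-1) and (7-2) are not reproduced here, these relations and the function name are assumptions:

```python
def triangulate(xl, yl, xr, f, b):
    """Depth and 3-D position of a feature for a rectified parallel stereo pair.

    (xl, yl): image coordinates of the feature in the left camera,
    xr: its abscissa in the right camera (pixels, relative to the principal
    point; for a rectified pair the ordinates coincide);
    f: focal length in pixels; b: baseline between the two cameras.
    """
    disparity = xl - xr
    if disparity == 0:
        raise ValueError("zero disparity: point at infinity")
    Z = f * b / disparity        # depth along the optical axis
    X = Z * xl / f               # abscissa in the left-camera frame
    Y = Z * yl / f               # ordinate in the left-camera frame
    return X, Y, Z
```

Placing the cameras in parallel is what makes this closed form possible; arbitrary camera poses would require the coordinate-system conversion mentioned later in the text.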
  • the step of obtaining the RT matrix includes: minimizing errors between the initial 3D feature points and the 3D feature points of consecutive frames by using a singular value decomposition method to obtain the RT matrix.
  • the error is minimized by formula (9-1):
  • R and T represent the rotation matrix and translation matrix respectively
  • J represents the error
  • P and Q represent the sets of initial and continuous-frame three-dimensional feature points, respectively
  • the singular value decomposition (Singular Value Decomposition, referred to as SVD) method is carried out by formula (10-1):
  • U, S, and V respectively represent the first unitary matrix, diagonal matrix and second unitary matrix after singular value decomposition
  • SVD represents the singular value decomposition of the physical quantities in brackets
  • H represents the covariance matrix between the re-centered point sets; the covariance matrix H is determined by formula (10-2):
  • the rotation matrix R and the translation matrix T are respectively determined by formulas (11-1), (11-2):
  • μ_P and μ_Q are the central positions of set P and set Q respectively.
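  • The SVD-based minimization of formulas (10-1) through (11-2) corresponds to the well-known Kabsch/Umeyama procedure. A minimal sketch (illustrative Python/NumPy; the function name and the reflection-correction matrix D are assumptions beyond what the formulas state):

```python
import numpy as np

def rigid_fit(P, Q):
    """Least-squares rotation R and translation T with Q ≈ R P + T, via SVD.

    P, Q: (n, 3) arrays of matched 3-D feature points
    (initial-frame and continuous-frame points respectively).
    """
    mu_P, mu_Q = P.mean(axis=0), Q.mean(axis=0)   # centroids μ_P, μ_Q
    H = (P - mu_P).T @ (Q - mu_Q)                 # covariance matrix of re-centered sets
    U, S, Vt = np.linalg.svd(H)                   # H = U S V^T
    # Guard against an improper rotation (reflection) when det(V U^T) < 0.
    D = np.diag([1.0, 1.0, np.sign(np.linalg.det(Vt.T @ U.T))])
    R = Vt.T @ D @ U.T                            # rotation matrix
    T = mu_Q - R @ mu_P                           # translation matrix
    return R, T
```

With noise-free matches the fit recovers the ground-truth motion exactly; with noisy matches it minimizes the error J of formula (9-1) in the least-squares sense.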
  • the images include CT images, MR images, PET images, PET-CT images, PET-MR images, and CT-MR images.
  • using the RT matrix to correct the PET image to obtain a reconstructed image includes: performing a PET scan on the imaging target to obtain sinogram data; and using the RT matrix to correct the sinogram data to reconstruct the image.
  • before using the RT matrix to correct the image, the image processing method further includes correcting the voxel values of the image by formula (14-1):
  • X_t represents the distribution of radioactive-material concentration in the FOV area at time t
  • X_0 represents the distribution of radioactive-material concentration in the FOV area at the initial time
  • W_t is the interpolation weight of all voxel positions at time t.
  • the step of obtaining the RT matrix includes: setting several radioactive point sources on the imaging target, and identifying the radioactive point sources by reconstructing images within the acquisition time frame; the pose of the imaging target is obtained from the plane formed by three non-collinear radioactive point sources, the pose including the position of the imaging target and its rotation angles about the three coordinate axes; and the RT matrix is obtained by calculating the motion between consecutive frames.
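  • The pose computation from three non-collinear point sources can be sketched as follows (illustrative Python/NumPy; the particular orthonormal frame built from the marker triangle is an assumption, since the text does not specify one):

```python
import numpy as np

def frame_from_points(a, b, c):
    """Orthonormal frame (3x3 columns) defined by three non-collinear points."""
    a, b, c = (np.asarray(p, float) for p in (a, b, c))
    x = b - a
    x /= np.linalg.norm(x)
    n = np.cross(b - a, c - a)            # normal of the point-source plane
    n /= np.linalg.norm(n)
    y = np.cross(n, x)                    # completes a right-handed frame
    return np.column_stack([x, y, n])

def pose_change(tri0, tri1):
    """Rotation R and translation T moving marker triangle tri0 onto tri1.

    tri0, tri1: (3, 3) arrays, one marker point per row.
    """
    F0, F1 = frame_from_points(*tri0), frame_from_points(*tri1)
    R = F1 @ F0.T                          # relative rotation between the two poses
    T = np.asarray(tri1[0], float) - R @ np.asarray(tri0[0], float)
    return R, T
```

Accumulating `pose_change` between consecutive frames yields the per-frame RT matrices described above.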
  • an image processing device includes a frame image acquisition unit for capturing an imaging target and acquiring initial frame images and continuous frame images; an initial three-dimensional feature point calculation unit for calculating The initial three-dimensional feature point of the initial frame image; the continuous frame three-dimensional feature point calculation unit, used to calculate the continuous frame three-dimensional feature point of the continuous frame image; the RT matrix calculation unit, used to match the initial three-dimensional feature point and the The three-dimensional feature points of the continuous frames are used to obtain an RT matrix; the image reconstruction unit is used to correct the image by using the RT matrix to obtain a reconstructed image.
  • an electronic device including one or more processors; a storage device for storing a computer program; when the computer program is executed by the one or more processors, the One or more processors implement the method as in any one of the preceding.
  • a computer-readable storage medium on which program instructions are stored, and when the program instructions are executed, the method described in any one of the preceding items is implemented.
  • a simple yet high-precision image recording device is used to obtain the motion information of an object while it is moving, which ensures the accuracy of motion-information acquisition and improves the accuracy of motion-corrected images.
  • a large number of complicated image registration calculations are avoided, and the complexity of calculations is reduced.
  • the provided algorithm is not only applicable to small-range motion correction, but also applicable to larger-range motion correction, which greatly reduces the cost.
  • Fig. 1 shows a flowchart of an image processing method according to an example embodiment of the present application.
  • Fig. 2 shows a schematic diagram of triangulation according to an example embodiment of the present application.
  • Fig. 3 shows a schematic diagram of a mapping process of voxels on a two-dimensional graph according to an exemplary embodiment of the present application.
  • Fig. 4 shows an image correction device for rigid body motion according to an exemplary embodiment of the present application.
  • Figure 5a shows the correction results when the object is stationary.
  • Figure 5b shows the reconstruction results before object motion correction.
  • Fig. 5c shows a reconstruction result after object motion correction according to an exemplary embodiment of the present application.
  • Fig. 6 shows a block diagram of another electronic device for rigid body motion according to an embodiment of the present application.
  • Example embodiments will now be described more fully with reference to the accompanying drawings.
  • Example embodiments may, however, be embodied in many forms and should not be construed as limited to the embodiments set forth herein; rather, these embodiments are provided so that this application will be thorough and complete, and will fully convey the concept of example embodiments to those skilled in the art.
  • the same reference numerals denote the same or similar parts in the drawings, and thus their repeated descriptions will be omitted.
  • Fig. 1 shows a flowchart of an image processing method according to an example embodiment of the present application. An image processing method according to an exemplary embodiment of the present application will be described in detail below with reference to FIG. 1 .
  • the exemplary embodiment shown in FIG. 1 corrects motion artifacts generated during PET scanning of an awake imaging target. Since the final imaging of the instrument is aimed only at the head of the imaging target, the movement of the head of the awake imaging target can be simplified as rigid body motion.
  • step S101 an imaging target is photographed and initial frame images and continuous frame images are acquired.
  • the movement of the imaging target is recorded in the form of video by using a pair of image recording devices placed in parallel on both sides of the imaging target.
  • the image recording device includes any device capable of recording a planar image or a 3D image of the imaging target, such as a camera, DV, camcorder, or 3D scanner.
  • the video captured by a pair of image recording devices includes an initial frame image at an initial moment and continuous frame images at a non-initial moment, wherein the initial frame image and continuous frame images respectively include images captured by a pair of image recording devices at the same moment.
  • the image recording devices are placed in parallel in order to unify the coordinate systems in the subsequent calculations and to reduce computational complexity.
  • the image recording equipment can be set at any angle and in any number, and the corresponding coordinate system conversion can be performed during data processing in the later stage.
  • the initial frame image is the frame image at the beginning moment of the video captured by the image recording device, and the images at other moments in the video are continuous frame images.
  • step S101 a pair of image recording devices of the same model, such as cameras, are placed in parallel on the left and right sides of the imaging target to form a binocular vision system to record information about object movement in the form of video.
  • step S103 the initial three-dimensional feature points of the initial frame image are calculated.
  • the initial three-dimensional feature points of the initial frame image are calculated through the following steps:
  • step S105 three-dimensional feature points of consecutive frames of images are calculated.
  • step S105 the method for calculating the three-dimensional feature points of the continuous frames is the same as the method for calculating the three-dimensional feature points of the initial frame in step S103, and will not be repeated here.
  • The principle of triangulation used in step S103 and step S105 will be described below.
  • the so-called triangulation refers to recovering the three-dimensional position information of an object's feature points from their two-dimensional projection positions in different image recording devices.
  • FIG. 2 is a schematic diagram of triangulation according to an exemplary embodiment of the present application.
  • the depth Z of the point P in the coordinates of the image recording device can be obtained through the formula (11).
  • step S107 the initial 3D feature points are matched with the 3D feature points of consecutive frames to obtain an RT matrix.
  • the step of calculating the RT matrix includes:
  • the error J of the initial 3D feature points and the 3D feature points of consecutive frames is minimized by the Singular value decomposition (SVD) method, as shown in formula (13).
  • J is the error between the initial three-dimensional feature point and the three-dimensional feature point of the continuous frame
  • R is the rotation matrix
  • Rp_i represents the multiplication of the rotation matrix R with the i-th point p_i in the point set P
  • T is the translation matrix
  • p_i, q_i represent the i-th three-dimensional feature point in the sets P and Q respectively
  • i is a natural number less than n
  • n is the number of three-dimensional feature points in the sets P and Q.
  • Singular value decomposition is performed on the covariance matrix H by formula (17).
  • U, S, and V respectively represent the first unitary matrix, the diagonal matrix, and the second unitary matrix after singular value decomposition
  • SVD denotes the singular value decomposition of the quantity in brackets
  • after singular value decomposition, the first unitary matrix U is a unitary matrix of order m×m
  • the diagonal matrix S is of order m×n, and the elements on its diagonal are the singular values of the covariance matrix H
  • the second unitary matrix V is a unitary matrix of order n×n.
  • the translation matrix T is calculated using the rotation matrix R obtained from formula (18).
  • the R and T matrices obtained by formula (18) and formula (19) represent the six-axis motion information of the rigid body motion of the object.
  • the rigid body motion of an object can be decomposed into translational and rotational motions around three axes in space, namely x, y, and z axes, and the obtained motion of the object around the three axes can form an RT matrix.
  • the RT matrix is also called a rotation-translation (Rotation-Translation, RT for short) matrix, and is used to describe six-axis motion information of an object.
  • the six axes are translational and rotational movements in the directions of x, y and z axes.
  • the R and T matrices obtained by formula (18) and formula (19) are combined to obtain the RT matrix.
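  • Combining R and T into a single RT matrix is commonly done with a 4×4 homogeneous transformation; a minimal sketch (the homogeneous representation and function names are assumptions, as the text does not fix a storage format):

```python
import numpy as np

def rt_matrix(R, T):
    """Combine rotation R (3x3) and translation T (3,) into a 4x4 RT matrix."""
    M = np.eye(4)
    M[:3, :3] = R
    M[:3, 3] = np.asarray(T, float)
    return M

def apply_rt(M, points):
    """Apply a 4x4 RT matrix to an (n, 3) array of points."""
    pts = np.asarray(points, float)
    homo = np.hstack([pts, np.ones((pts.shape[0], 1))])  # homogeneous coordinates
    return (homo @ M.T)[:, :3]
```

In this form, the six-axis motion (three translations and three rotations about the x, y and z axes) is carried by a single matrix, and correcting a frame back to the initial position amounts to applying the inverse RT matrix.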
  • the step of obtaining the RT matrix includes: setting several radioactive point sources on the imaging target, and identifying the radioactive point sources by reconstructing images within the acquisition time frame; the pose of the imaging target is obtained from the plane formed by three non-collinear radioactive point sources, the pose including the position of the imaging target and its rotation angles about the three coordinate axes; and the RT matrix is obtained by calculating the motion between consecutive frames.
  • step S109 the image is corrected using the RT matrix to obtain a reconstructed image.
  • the step of correcting the PET image using the RT matrix includes:
  • A PET scan is performed on the live target to obtain sinogram data. The sinogram data are divided according to time frames, and the RT matrix from step S107 is used to correct the data in the current time frame back to the initial position through formula (9); that is, the RT matrix replaces the mapping relationship matrix M_t in formula (9), thereby obtaining a corrected reconstructed image.
  • step S109 the voxel value of the image needs to be corrected by formula (20).
  • X_t represents the distribution of radioactive-material concentration in the FOV area at time t
  • X_0 represents the distribution of radioactive-material concentration in the FOV area at time 0
  • W_t is the interpolation weight of all voxel positions at time t.
  • the spatial coordinate transformation of the center point of each voxel can be performed according to the motion information to obtain its corresponding voxel position. Since the point obtained after the spatial coordinate transformation is not necessarily a voxel center of the image, in order to improve positioning accuracy, the voxel position at time t is transformed back to the corresponding voxel position at the initial time according to the motion information, and the voxel value at the transformed position is calculated from the surrounding voxel values by an interpolation algorithm, as shown in formula (21).
  • the activity value f(r_i, 0) located at the spatial point r_i is regarded as the weighted sum of the voxel values of all voxels adjacent to the spatial point r_i.
  • the weight ω_ij is taken as the ratio of the overlapping area or volume between the voxel centered at the position of the spatial point r_i and a nearby voxel to the total area or volume of the voxel.
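  • For unit-sized voxels on a two-dimensional grid, the overlap-ratio weights ω_ij reduce to bilinear interpolation weights. A minimal sketch (illustrative Python; the function names and the 2D unit-pixel simplification are assumptions):

```python
import numpy as np

def overlap_weights_2d(x, y):
    """Weights ω for a unit pixel centered at (x, y) over the integer grid.

    Returns {(ix, iy): w}: the overlap area of the shifted pixel with each
    grid pixel, divided by the total pixel area, so the weights sum to 1.
    For unit pixels this is exactly the bilinear interpolation weight set.
    """
    ix, iy = int(np.floor(x)), int(np.floor(y))
    fx, fy = x - ix, y - iy
    w = {
        (ix,     iy):     (1 - fx) * (1 - fy),
        (ix + 1, iy):     fx * (1 - fy),
        (ix,     iy + 1): (1 - fx) * fy,
        (ix + 1, iy + 1): fx * fy,
    }
    return {k: v for k, v in w.items() if v > 0}

def interpolate(image, x, y):
    """Voxel value at a transformed position as a weighted sum of neighbours."""
    return sum(wt * image[iy, ix] for (ix, iy), wt in overlap_weights_2d(x, y).items())
```

This mirrors the mapping in Fig. 3: the value at the transformed position B is obtained by interpolating the voxels around B at time 0.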
  • Fig. 3 shows a schematic diagram of a mapping process of voxels on a two-dimensional graph according to an exemplary embodiment of the present application.
  • the gray dot in the figure indicates a voxel center A at time t mapped to its corresponding original position B at time 0; the voxel value at the original position B can be obtained by interpolating the voxels near B at time 0.
  • the RT matrix may be acquired using point labeling techniques.
  • the specific steps include: attaching small, lightweight radioactive point sources to the head of the experimental subject (such as a mouse), reconstructing images within the acquisition time frame, and identifying the bright points in the image as point sources; the point-source positions are assumed to remain fixed relative to the head of the mouse.
  • the head pose of the mouse can then be obtained.
  • the head pose includes the position of the object and its three rotation angles around the x, y and z axes, which represent the orientation of the object.
  • the motion matrix RT can be obtained by calculating the motion between consecutive frames.
  • a simple yet high-precision external camera is used to obtain the motion information of a moving object while it is moving, thereby ensuring the accuracy of motion-information acquisition and improving the accuracy of motion-corrected images.
  • a large number of complicated image registration calculations are avoided, and the complexity of calculations is reduced.
  • the provided algorithm is not only applicable to small-range motion correction, but also applicable to larger-range motion correction, which greatly reduces the cost.
  • Fig. 4 shows an image processing device according to an exemplary embodiment of the present application.
  • the image processing device shown in FIG. 4 includes a frame image acquisition unit 401 , an initial 3D feature point calculation unit 403 , a continuous frame 3D feature point calculation unit 405 , an RT matrix calculation unit 407 and an image reconstruction unit 409 .
  • the frame image acquiring unit 401 uses a camera to capture an imaging target and acquire initial frame images and continuous frame images.
  • the method described in the embodiment in FIG. 1 may be used for camera setting and image acquisition, and details are not repeated here.
  • the initial three-dimensional feature point calculating unit 403 is used to calculate the initial three-dimensional feature point of the initial frame image.
  • the calculation of the initial three-dimensional feature points can use the method described in step S103 in the embodiment of FIG. 1 , which will not be repeated here.
  • the continuous frame three-dimensional feature point calculation unit 405 is used for calculating continuous frame three-dimensional feature points of the continuous frame images.
  • the calculation of the three-dimensional feature points of consecutive frames can use the method described in step S105 in the embodiment of FIG. 1 , which will not be repeated here.
  • the RT matrix calculation unit 407 is used to match the initial 3D feature points with the 3D feature points of consecutive frames to obtain an RT matrix.
  • the calculation of the RT matrix can use the method described in step S107 in the embodiment of FIG. 1 , which will not be repeated here.
  • the image reconstruction unit 409 is configured to use the RT matrix to correct the PET image to obtain a reconstructed image.
  • the method described in step S109 in the embodiment of FIG. 1 can be used for image reconstruction, which will not be repeated here.
  • Figure 5a shows the correction results when the object is stationary.
  • Fig. 5b shows a reconstruction result before object motion correction
  • Fig. 5c shows a reconstruction result after object motion correction according to an exemplary embodiment of the present application. It can be seen from Figs. 5a to 5c that, by using the image processing method provided by the present application, artifacts caused by object motion can be basically eliminated, thereby improving the quality of the reconstructed image.
  • the PET image is corrected by the RT matrix in the above embodiment.
  • the above method can also be applied to motion correction of one or more other medical images, such as CT images, MR images, PET-CT images, PET-MR images, and CT-MR images.
  • in the correction process of these medical images, it is only necessary to replace the corresponding M_t matrix with the RT matrix, which can be easily realized by those skilled in the art according to the teachings of the present invention and will not be repeated here.
  • Fig. 6 shows a block diagram of another electronic device for rigid body motion according to an embodiment of the present application.
  • the electronic device shown in FIG. 6 is only an example, and should not limit the functions and scope of use of this embodiment of the present application.
  • the electronic device takes the form of a general-purpose computing device.
  • Components of the electronic device may include, but are not limited to: at least one processor 210, at least one memory 220, a bus 230 connecting different system components (including the memory 220 and the processor 210), a display unit 240, and the like.
  • the memory 220 stores program codes, and the program codes can be executed by the processor 210, so that the processor 210 executes the methods described in this specification according to various exemplary embodiments of the present application.
  • the processor 210 may execute the method as shown in FIG. 1 .
  • the memory 220 may include a readable medium in the form of a volatile storage unit, such as a random access storage unit (RAM) 2201 and/or a cache storage unit 2202 , and may further include a read-only storage unit (ROM) 2203 .
  • Memory 220 may also include a program/utility 2204 having a set of (at least one) program modules 2205, including but not limited to an operating system, one or more application programs, other program modules, and program data; each or some combination of these examples may include an implementation of a network environment.
  • Bus 230 may represent one or more of several types of bus structures, including a memory unit bus or memory unit controller, a peripheral bus, an accelerated graphics port, a processor, or a local bus using any of a variety of bus architectures.
  • the electronic device can also communicate with one or more external devices 300 (such as keyboards, pointing devices, Bluetooth devices, etc.), with one or more devices that allow the user to interact with the electronic device, and/or with any device (e.g., a router, a modem, etc.) that enables the electronic device to communicate with one or more other computing devices. Such communication may occur through input/output (I/O) interface 250.
  • the electronic device can also communicate with one or more networks (eg, a local area network (LAN), a wide area network (WAN) and/or a public network such as the Internet) through the network adapter 260 .
  • the network adapter 260 can communicate with other modules of the electronic device through the bus 230. It should be understood that, although not shown in the figures, other hardware and/or software modules may be used in conjunction with the electronic device, including but not limited to: microcode, device drivers, redundant processing units, external disk drive arrays, RAID systems, tape drives, data backup storage systems, and the like.
  • a software product may utilize any combination of one or more readable media.
  • the readable medium may be a readable signal medium or a readable storage medium.
  • the readable storage medium may be, for example, but not limited to, an electrical, magnetic, optical, electromagnetic, infrared, or semiconductor system, apparatus, or device, or any combination thereof. More specific examples (a non-exhaustive list) of readable storage media include: an electrical connection having one or more wires, a portable disk, a hard disk, random access memory (RAM), read-only memory (ROM), erasable programmable read-only memory (EPROM or flash memory), optical fiber, portable compact disk read-only memory (CD-ROM), an optical storage device, a magnetic storage device, or any suitable combination of the foregoing.
  • a readable signal medium may include a data signal propagated in baseband or as part of a carrier wave, which carries readable program code. Such a propagated data signal may take many forms, including but not limited to an electromagnetic signal, an optical signal, or any suitable combination of the foregoing.
  • a readable signal medium may also be any readable medium other than a readable storage medium that can send, propagate, or transport a program for use by or in connection with an instruction execution system, apparatus, or device.
  • the program code contained on the readable medium may be transmitted by any suitable medium, including but not limited to wireless, wired, optical cable, RF, etc., or any suitable combination of the above.
  • Program code for performing the operations of the present application can be written in any combination of one or more programming languages, including object-oriented programming languages such as Java and C++, as well as conventional procedural programming languages such as C or similar programming languages.
  • the program code may execute entirely on the user's computing device, partly on the user's device as a stand-alone software package, partly on the user's computing device and partly on a remote computing device, or entirely on the remote computing device or server.
  • the remote computing device may be connected to the user's computing device through any kind of network, including a local area network (LAN) or a wide area network (WAN), or may be connected to an external computing device (for example, through the Internet using an Internet service provider).
  • the above-mentioned computer-readable medium carries one or more program instructions, and when the above-mentioned one or more program instructions are executed by a device, the device is enabled to realize the above-mentioned functions.
  • the modules in the above embodiments can be distributed in the device as described in the embodiment, or can be correspondingly changed and located in one or more devices different from this embodiment. Multiple modules in the above embodiments may be combined into one module, and one module may also be further split into multiple sub-modules.
  • a simple but high-precision external camera is used to obtain motion information of a moving object while it moves, thereby ensuring the accuracy of motion-information acquisition and improving the accuracy of motion-corrected images.
  • a large number of complex image registration calculations are avoided, reducing computational complexity.
  • the provided algorithm is applicable not only to small-range motion correction but also to larger-range motion correction, and greatly reduces the cost.

Landscapes

  • Engineering & Computer Science (AREA)
  • Physics & Mathematics (AREA)
  • Theoretical Computer Science (AREA)
  • General Physics & Mathematics (AREA)
  • Computer Vision & Pattern Recognition (AREA)
  • Mathematical Optimization (AREA)
  • Algebra (AREA)
  • Mathematical Analysis (AREA)
  • Geometry (AREA)
  • Mathematical Physics (AREA)
  • Pure & Applied Mathematics (AREA)
  • Health & Medical Sciences (AREA)
  • General Health & Medical Sciences (AREA)
  • Medical Informatics (AREA)
  • Nuclear Medicine, Radiotherapy & Molecular Imaging (AREA)
  • Radiology & Medical Imaging (AREA)
  • Quality & Reliability (AREA)
  • Image Processing (AREA)

Abstract

Provided in the present application are an image processing method and apparatus, an electronic device, and a computer-readable storage medium. The image correction method comprises: photographing an imaging target and acquiring an initial image frame and continuous image frames; calculating initial three-dimensional feature points of the initial image frame; calculating continuous-frame three-dimensional feature points of the continuous image frames; matching the initial three-dimensional feature points and the continuous-frame three-dimensional feature points to obtain an RT matrix; and correcting an image by using the RT matrix to obtain a reconstructed image. According to the example embodiments of the present application, motion information of an object is acquired by using a simple but high-precision image recording device, thereby improving the accuracy of the motion-corrected image and reducing the computational complexity.

Description

Image processing method, device, electronic device, and computer-readable storage medium

Technical Field
The present application relates to the field of image processing, and in particular to an image processing method, a device, an electronic device, and a computer-readable storage medium for rigid body motion.
Background Art
At present, when correcting images acquired from an awake imaging target, two types of motion are mainly corrected: non-rigid motion and rigid motion. Non-rigid motion mainly refers to smaller-range motion, for example, movement of a mouse's torso, while rigid motion mainly refers to larger-range motion, for example, movement of a mouse's head.
For the correction of rigid motion, the following two methods are mainly used: projection-domain-based motion correction and image-domain-based motion correction.
Image-domain motion correction methods usually adopt the Motion Incorporated Reconstruction (MIR) algorithm, which incorporates motion information into reconstruction. The MIR algorithm, proposed by Feng Qiao, Tinsu Pan et al. in 2006, is a voxel-based motion correction algorithm. The MIR algorithm is based on two assumptions: first, the imaging target moves only between adjacent data frames and remains stationary within each data frame; second, the imaging target is composed of innumerable spatial points of infinitesimal size, and the concentration of radioactive substance at these points does not change as the points move. That is, during data acquisition, the concentration of radioactive substance at each part of the imaging target is constant, and the radioactive substance is only redistributed in space as the imaging target moves.
The image processing procedure using the image-domain motion correction method includes the following steps.
First, the mapping relationship matrix M_t between image voxels, also called the motion matrix, is obtained through image registration between the CT images corresponding to the data frames of the PET (Positron Emission Tomography) image. The registration of the CT images is implemented using a non-rigid transformation model based on cubic B-splines together with a mean-square-error similarity criterion.
The motion matrix M_t and the distribution of the radioactive substance concentration in the field of view (FOV) region satisfy formula (1).
X_t = M_t X_0    Formula (1)
where X_t denotes the distribution of the radioactive substance concentration in the FOV region at time t, and X_0 denotes the distribution in the FOV region at time 0.
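The mapping of formula (1) can be illustrated with a minimal numerical sketch. Here M_t is a hypothetical permutation matrix (a one-voxel circular shift of a 1-D activity vector); in practice M_t would come from the image registration described above.

```python
import numpy as np

# Sketch of formula (1): the motion matrix M_t maps the activity
# distribution at time 0 to the distribution at time t. M_t here is a
# hypothetical permutation (a 1-voxel circular shift of a 1-D "image");
# a real M_t would be obtained by image registration.
n_vox = 5
X0 = np.array([0.0, 10.0, 50.0, 10.0, 0.0])  # activity at time 0

M_t = np.zeros((n_vox, n_vox))
for j in range(n_vox):
    M_t[(j + 1) % n_vox, j] = 1.0  # activity in voxel j moves to voxel j+1

X_t = M_t @ X0  # formula (1): X_t = M_t X_0

print(X_t)                  # activity shifted by one voxel
print(X_t.sum(), X0.sum())  # total activity is preserved
```

The column-stochastic structure of M_t reflects the second MIR assumption: activity is only redistributed in space, never created or destroyed.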
Next, the imaging model of the PET system for a stationary object is constructed, as shown in formula (2).
E(Y) = GX    Formula (2)
where Y is a one-dimensional vector representing the projection data, E(Y) is the expected value of the projection data Y, X is a one-dimensional vector composed of the image voxels, and G is the system response matrix. Two-dimensional and three-dimensional images can also be converted into one-dimensional vectors for the above processing.
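The stationary-object model of formula (2) can be sketched on a toy system. The matrix G below is purely illustrative, not a measured response matrix; its entry G[i, j] plays the role of the detection probability of voxel j on line of response i.

```python
import numpy as np

# Sketch of formula (2), E(Y) = G X, for a toy 2-voxel, 3-LOR system.
# G[i, j] is a hypothetical probability that a photon pair emitted from
# voxel j is detected on line of response i.
G = np.array([[0.5, 0.0],
              [0.2, 0.3],
              [0.0, 0.4]])
X = np.array([100.0, 200.0])  # activity in each voxel

expected_Y = G @ X                           # E(Y): expected counts per LOR
measured_Y = np.random.poisson(expected_Y)   # PET data are Poisson counts

print(expected_Y)  # [50., 80., 80.]
```

The Poisson draw illustrates why E(Y), rather than Y itself, appears on the left-hand side of formula (2).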
The system response matrix G can be computed by actual measurement, Monte Carlo (MC) simulation, or mathematical analysis.
Computing the system response matrix G by actual measurement includes: measuring the same point source at different positions in the imaging field of view of the PET system, and then constructing the entire system response matrix G by processing the responses in projection space. These responses include the geometric information of the system and the physical information of the detection process, and the responses are parameterized to correct point-source positions and smooth projection noise.
Computing the system response matrix G by Monte Carlo simulation includes: simulating every voxel in the field of view according to its size and position in space, and obtaining the ratio of its coincidence count to the total number of emitted photons; this ratio is the probability that the voxel is detected by the current line of response (LOR), which gives the system response matrix G.
Computation methods based on geometric models, i.e., mathematical analysis methods, fall into many categories depending on the geometric model, such as line-tracing and solid-angle methods. Line tracing is a simple and effective method for computing the system response matrix G. This method abstracts a line of response as an actual ray and takes the length of the ray passing through each voxel as the probability that a photon emitted from that voxel is detected by this line of response, thereby obtaining the system response matrix G. The solid-angle model is more accurate and more complex than the line-tracing method. It takes the solid angle subtended at each voxel by a pair of detectors as the probability that photons emitted from that voxel are detected by that detector pair, thereby obtaining the system response matrix G. Compared with line tracing, the solid-angle model better matches the actual PET imaging process.
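The line-tracing idea can be sketched with a crude sampling approximation. This is not an exact traversal algorithm such as Siddon's; it only illustrates how the length of a line of response inside each voxel yields one row of G.

```python
import numpy as np

# Crude sampling-based sketch of line tracing: approximate the length of
# a 2-D ray (one LOR) inside each pixel by sampling densely along the
# ray. Exact implementations compute the intersection lengths
# analytically; dense sampling is used here only for clarity.
# Grid: n x n pixels of unit size covering [0, n] x [0, n].
def ray_pixel_lengths(p0, p1, n=4, samples=100000):
    p0, p1 = np.asarray(p0, float), np.asarray(p1, float)
    length = np.linalg.norm(p1 - p0)
    t = (np.arange(samples) + 0.5) / samples
    pts = p0 + t[:, None] * (p1 - p0)      # sample points along the ray
    ij = np.floor(pts).astype(int)         # pixel index of each sample
    ok = (ij >= 0).all(axis=1) & (ij < n).all(axis=1)
    lengths = np.zeros((n, n))             # row = y index, col = x index
    np.add.at(lengths, (ij[ok, 1], ij[ok, 0]), length / samples)
    return lengths

# A horizontal LOR through the middle of pixel row y=1 crosses four
# pixels, each for a length of ~1.
L = ray_pixel_lengths((0.0, 1.5), (4.0, 1.5))
print(L[1])  # ~[1., 1., 1., 1.]
```

Repeating this for every LOR and flattening the pixel grid would fill one row of G per LOR (up to normalization).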
Then, based on the assumption that the imaging target remains stationary within each data frame, formula (2) is modified into formula (3).
E{Y_t} = G M_t X_0    Formula (3)
Since the data frames correspond one-to-one to the image sequence X_0, …, X_{m-1}, the projection data Y_0, …, Y_{m-1}, and the motion matrices M_0, …, M_{m-1}, where m is the number of data frames, formula (2) can be modified into the discrete form shown in formula (4).
E([Y_0^T, Y_1^T, …, Y_{m-1}^T]^T) = [(G M_0)^T, (G M_1)^T, …, (G M_{m-1})^T]^T X_0    Formula (4)
where M_0 ≡ I is the identity matrix. Let:

Y* = [Y_0^T, Y_1^T, …, Y_{m-1}^T]^T    Formula (5)
G* = [(G M_0)^T, (G M_1)^T, …, (G M_{m-1})^T]^T    Formula (6)
Formula (4) can then be rewritten as formula (7), which has the same form as the PET imaging model for a stationary target in formula (2).
E(Y*) = G* X_0    Formula (7)
Since the resulting imaging model of the PET system for the moving imaging target has the same form as the imaging model for a static target, the 4D projection data Y* of formula (5) and the system response matrix G* of formula (6), which incorporates the motion information of every data frame, can replace the projection data and the system response matrix in formula (2). Therefore, the voxel-based iterative reconstruction algorithm MLEM that applies to the stationary-target PET imaging model of formula (2) is equally applicable to formula (7).
The iterative update equation of the MLEM algorithm based on formula (2) is shown in formula (8).
X_j^{k+1} = ( X_j^k / Σ_{i=1}^{M} G_{ij} ) · Σ_{i=1}^{M} [ G_{ij} Y_i / Σ_{j0=1}^{N} G_{i j0} X_{j0}^k ]    Formula (8)

where X_j^k is the value of voxel j in the k-th iteration estimate, Y_i is the actually measured coincidence-event count on the i-th line of response (LOR), G_{ij} denotes the probability that a gamma photon emitted from voxel j is detected by the scintillation crystals at the two ends of the i-th LOR, M is the total number of lines of response (LOR), N is the total number of voxels, and j0 indexes the voxels in the inner summation.
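The MLEM update of formula (8) can be written compactly in matrix form. The following is a minimal sketch on a toy system with a hypothetical G and noise-free data, where the iteration should approach the true activity.

```python
import numpy as np

# Minimal MLEM iteration implementing formula (8):
# X_j^{k+1} = X_j^k / (sum_i G_ij) * sum_i G_ij * Y_i / (G X^k)_i
def mlem(G, Y, n_iter=2000):
    X = np.ones(G.shape[1])             # uniform initial estimate
    sens = G.sum(axis=0)                # sum_i G_ij (per-voxel sensitivity)
    for _ in range(n_iter):
        proj = G @ X                    # forward projection (G X^k)_i
        ratio = Y / np.maximum(proj, 1e-12)
        X = X / sens * (G.T @ ratio)    # multiplicative MLEM update
    return X

# Hypothetical toy system; with noise-free data the ML solution is X_true.
G = np.array([[0.5, 0.1],
              [0.2, 0.3],
              [0.1, 0.4]])
X_true = np.array([100.0, 200.0])
Y = G @ X_true                          # ideal (expectation) data
X_hat = mlem(G, Y)
print(X_hat)  # approaches [100., 200.]
```

Note the update is multiplicative, so a positive initial estimate stays positive — a defining property of MLEM.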
Substituting Y* and G* into the MLEM iterative update equation yields the update equation based on formula (7), as shown in formula (9).
X^{k+1} = ( X^k / S ) · Σ_{t=0}^{m-1} (G M_t)^T [ Y_t / (G M_t X^k) ]    Formula (9)
where X^k is the estimate of the reference-position image obtained at the k-th iteration, and S is the sensitivity image, which can be computed by formula (10).
S = Σ_{t=0}^{m-1} (G M_t)^T 1    Formula (10)

where 1 is an all-ones vector whose length equals the number of lines of response.
For ease of understanding, the following convention is adopted in the matrix operations below: if matrix A and matrix B have the same numbers of rows and columns, then AB and A/B denote element-wise multiplication and element-wise division, respectively.
It can be seen from formula (9) that the data of all data frames participate in the reconstruction of the reference image.
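The motion-incorporated update of formula (9), with the sensitivity image of formula (10), can be sketched as follows. The motion matrices below are hypothetical permutations (in the spirit of formula (1)) and the frame data are noise-free, so the iteration should recover the reference image.

```python
import numpy as np

# Sketch of the motion-incorporated MLEM of formula (9): each frame Y_t,
# modeled through G M_t, contributes to reconstructing the reference
# image X_0. S is the sensitivity image of formula (10).
def mir_mlem(G, Ms, Ys, n_iter=2000):
    X = np.ones(G.shape[1])
    ones = np.ones(G.shape[0])
    S = sum((G @ M).T @ ones for M in Ms)     # formula (10)
    for _ in range(n_iter):
        back = np.zeros(G.shape[1])
        for M, Y in zip(Ms, Ys):
            proj = G @ M @ X                  # frame-t forward model
            back += (G @ M).T @ (Y / np.maximum(proj, 1e-12))
        X = X / S * back                      # formula (9)
    return X

G = np.array([[0.5, 0.1],
              [0.2, 0.3],
              [0.1, 0.4]])
I = np.eye(2)
P = np.array([[0.0, 1.0],                    # hypothetical frame-1 motion:
              [1.0, 0.0]])                   # the two voxels swap places
X0_true = np.array([100.0, 200.0])
Ms = [I, P]                                   # M_0 = I, M_1 = swap
Ys = [G @ M @ X0_true for M in Ms]            # noise-free frame data
X0_hat = mir_mlem(G, Ms, Ys)
print(X0_hat)  # approaches the reference image [100., 200.]
```

Both frames' counts drive the same reference image, which is exactly the statement that all data frames participate in the reconstruction.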
A disadvantage of image correction in the prior art is that, when obtaining the mapping relationship matrix M_t, a single medical imaging device is not sufficient: an additional 4D CT device is required for CT scanning, and the motion of the object in the two devices must be kept consistent. This not only increases the imaging cost but also restricts the algorithm to regular, small-range motions such as breathing and heartbeat; it is also difficult to guarantee that the motion of the imaging target in the two devices is exactly the same, and image registration usually involves an optimization process with a large number of parameters. Therefore, the method of obtaining mapping information between voxels through image registration is only suitable for small-range motion correction, and the accuracy of the obtained motion information cannot be guaranteed when the motion range is large. In addition, registration with CT images requires a large amount of computation, and its accuracy is not high.
Summary of the Invention
The present application proposes an image processing method, a device, an electronic device, and a computer-readable storage medium for rigid body motion, so as to solve the problems of high acquisition cost, complex calculation, and low accuracy in acquiring the motion information of a moving imaging target.
According to one aspect of the present application, an image processing method is proposed. The image processing method includes: photographing an imaging target and acquiring an initial frame image and continuous frame images; calculating initial three-dimensional feature points of the initial frame image; calculating continuous-frame three-dimensional feature points of the continuous frame images; matching the initial three-dimensional feature points with the continuous-frame three-dimensional feature points to obtain an RT matrix; and correcting an image by using the RT matrix to obtain a reconstructed image.
According to some embodiments of the present application, an image recording device is used to photograph the imaging target and acquire the initial frame image and the continuous frame images. The image recording device includes any device that can record a planar image or a 3D image of the imaging target, such as a camera, a DV, a video camera, or a 3D scanner.
According to some embodiments of the present application, a pair of image recording devices of the same model are placed in parallel on both sides of the imaging target, forming a binocular vision system to photograph the imaging target.
According to some embodiments of the present application, the image recording devices record the motion of the imaging target in the form of a video. The video includes an initial frame image at the initial moment and continuous frame images at non-initial moments, where the initial frame image and each continuous frame image respectively include the images captured by the pair of image recording devices at the same moment.
According to some embodiments of the present application, the step of calculating the initial three-dimensional feature points of the initial frame image includes: extracting two-dimensional feature points of the initial frame image using the scale-invariant feature transform method; removing the background image from the two-dimensional feature points of the initial frame image; matching the two-dimensional feature points of the background-removed initial frame image; and triangulating the matching information to obtain the initial three-dimensional feature points of the initial frame image.
According to some embodiments of the present application, calculating the continuous-frame three-dimensional feature points of the continuous frame images includes: extracting two-dimensional feature points of the continuous frame images using the scale-invariant feature transform method; removing the background image from the two-dimensional feature points of the continuous frame images; matching the two-dimensional feature points of the background-removed continuous frame images; and triangulating the matching information to obtain the continuous-frame three-dimensional feature points of the continuous frame images.
According to some embodiments of the present application, the step of triangulating the matching information includes: calculating, by formula (7-1), the depth Z of any point P on the imaging target in the coordinates of the image recording device; and calculating, by formula (7-2), the abscissa X and the ordinate Y of the point P in camera coordinates.
Z = f·b / (x1 − x2)    Formula (7-1)

X = x1·Z / f,  Y = y1·Z / f    Formula (7-2)

where f is the focal length of the image recording devices, b is the baseline distance between the two image recording devices, x1 and x2 are respectively the abscissa distances between the projection of point P on the image frames of the two image recording devices and the intersection of the optical axis with the image, and y1 is the ordinate distance between the projection of point P on the image frame of one of the image recording devices and the intersection of the optical axis with the image.
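The triangulation of formulas (7-1) and (7-2) can be sketched as follows. The focal length f and baseline b are hypothetical calibration values for an ideal rectified stereo pair, and the round trip (project, then triangulate) should recover the original point.

```python
import numpy as np

# Sketch of formulas (7-1) and (7-2): depth from disparity in a
# rectified binocular rig. f (focal length, in pixels) and b (baseline)
# are hypothetical calibration values; x1, x2, y1 are image-plane
# offsets of the projections of point P from the principal point.
def triangulate(x1, y1, x2, f=800.0, b=0.1):
    Z = f * b / (x1 - x2)   # formula (7-1): depth from disparity
    X = x1 * Z / f          # formula (7-2)
    Y = y1 * Z / f
    return np.array([X, Y, Z])

# Round trip: project a known point into both cameras, then recover it.
f, b = 800.0, 0.1
P = np.array([0.05, -0.02, 2.0])     # point in left-camera coordinates
x1 = f * P[0] / P[2]                 # left-image projection
y1 = f * P[1] / P[2]
x2 = f * (P[0] - b) / P[2]           # right-image projection
print(triangulate(x1, y1, x2, f, b))  # recovers [0.05, -0.02, 2.0]
```

The disparity x1 − x2 shrinks as Z grows, which is why depth precision degrades for distant points.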
According to some embodiments of the present application, the step of obtaining the RT matrix includes: minimizing, by a singular value decomposition method, the error between the initial three-dimensional feature points and the continuous-frame three-dimensional feature points to obtain the RT matrix.
According to some embodiments of the present application, the error is minimized by formula (9-1):
J = min_{R,T} Σ_{i=1}^{n} || (R p_i + T) − q_i ||^2    Formula (9-1)
where R and T denote the rotation matrix and the translation matrix respectively, J denotes the error, p_i and q_i are the i-th elements of the set of initial three-dimensional feature points P = {p_1, …, p_n} and the set of continuous-frame three-dimensional feature points Q = {q_1, …, q_n} respectively, and i and n are natural numbers with i ≤ n.
According to some embodiments of the present application, the singular value decomposition (SVD) method is performed by formula (10-1):
[U, S, V] = SVD(H)    Formula (10-1)
where U, S, and V respectively denote the first unitary matrix, the diagonal matrix, and the second unitary matrix after singular value decomposition, SVD denotes performing singular value decomposition on the quantity in parentheses, and H is the covariance matrix between the re-centered point sets, which is determined by formula (10-2):
H = Σ_{i=1}^{n} p_i' q_i'^T    Formula (10-2)
其中,p i'和q i'分别表示重新中心化后的三维特征点,p i'=p iP,q i'=q iQWherein, p i ' and q i ' respectively denote the re-centered three-dimensional feature points, p i '=pi P , q i '=q iQ .
According to some embodiments of the present application, the rotation matrix R and the translation matrix T are determined by formulas (11-1) and (11-2), respectively:
R = V U^T    Formula (11-1)
T = μ_Q − R μ_P    Formula (11-2)
where μ_P and μ_Q are the centroids of set P and set Q, respectively.
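The SVD-based solution of formulas (10-1) through (11-2) can be sketched as follows on synthetic matched point sets. The reflection guard (det(R) < 0) is a standard safeguard for this construction that is not spelled out in the text above.

```python
import numpy as np

# Sketch of the SVD-based estimation of the RT matrix (formulas (10-1),
# (10-2), (11-1), (11-2)): given matched 3-D feature point sets P and Q
# (one point per row), recover R and T such that Q ≈ R P + T.
def estimate_rt(P, Q):
    mu_P, mu_Q = P.mean(axis=0), Q.mean(axis=0)
    Pc, Qc = P - mu_P, Q - mu_Q                  # re-centered points
    H = Pc.T @ Qc                                # formula (10-2)
    U, S, Vt = np.linalg.svd(H)                  # formula (10-1)
    R = Vt.T @ U.T                               # formula (11-1): R = V U^T
    if np.linalg.det(R) < 0:                     # guard against reflection
        Vt[-1] *= -1
        R = Vt.T @ U.T
    T = mu_Q - R @ mu_P                          # formula (11-2)
    return R, T

# Verify on synthetic data: rotate and translate a point cloud, then
# recover the transform exactly (noise-free case).
rng = np.random.default_rng(0)
P = rng.normal(size=(10, 3))
angle = np.deg2rad(30)
R_true = np.array([[np.cos(angle), -np.sin(angle), 0],
                   [np.sin(angle),  np.cos(angle), 0],
                   [0, 0, 1]])
T_true = np.array([0.5, -1.0, 2.0])
Q = P @ R_true.T + T_true                        # q_i = R p_i + T
R_est, T_est = estimate_rt(P, Q)
print(np.allclose(R_est, R_true), np.allclose(T_est, T_true))  # True True
```

With noisy feature points the same code returns the least-squares minimizer of formula (9-1), which is why the method is robust to moderate matching error.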
According to some embodiments of the present application, the image includes CT images, MR images, PET images, PET-CT images, PET-MR images, and CT-MR images.
According to some embodiments of the present application, correcting the PET image by using the RT matrix to obtain a reconstructed image includes: performing a PET scan on the imaging target to obtain sinogram data; and correcting the sinogram data by using the RT matrix so as to reconstruct the image.
According to some embodiments of the present application, before the image is corrected by using the RT matrix, the image processing method further includes correcting the voxel values of the image by formula (14-1):
X_t = W_t RT X_0   Formula (14-1)
where X_t denotes the distribution of the radioactive-substance concentration in the FOV region at time t, X_0 denotes the distribution of the radioactive-substance concentration in the FOV region at the initial time, and W_t is the interpolation weight for all voxel positions at time t.
According to some embodiments of the present application, the step of obtaining the RT matrix includes: arranging several radioactive point sources on the imaging target; identifying the radioactive point sources by reconstructing the images within an acquisition time frame; obtaining the pose of the imaging target from the plane formed by three non-collinear radioactive point sources, the pose including the position of the imaging target and its rotation angles about the three coordinate axes; and obtaining the RT matrix by computing the motion between consecutive frames.
According to an aspect of the present application, an image processing apparatus is provided, including: a frame image acquisition unit, configured to photograph an imaging target and acquire an initial frame image and continuous frame images; an initial three-dimensional feature point calculation unit, configured to calculate initial three-dimensional feature points of the initial frame image; a continuous-frame three-dimensional feature point calculation unit, configured to calculate continuous-frame three-dimensional feature points of the continuous frame images; an RT matrix calculation unit, configured to match the initial three-dimensional feature points with the continuous-frame three-dimensional feature points to obtain an RT matrix; and an image reconstruction unit, configured to correct an image by using the RT matrix to obtain a reconstructed image.
According to an aspect of the present application, an electronic device is provided, including one or more processors and a storage apparatus for storing a computer program, where the computer program, when executed by the one or more processors, causes the one or more processors to implement the method as described in any one of the preceding items.
According to an aspect of the present application, a computer-readable storage medium is provided, on which program instructions are stored, and the program instructions, when executed, implement the method as described in any one of the preceding items.
According to some exemplary embodiments of the present application, a simple but highly accurate image recording device is used to acquire the motion information of an object while it is moving, which ensures the accuracy of the acquired motion information and improves the accuracy of the motion-corrected image. At the same time, a large amount of complex image registration computation is avoided, reducing the computational complexity. Moreover, the provided algorithm is applicable not only to small-range motion correction but also to larger-range motion correction, which greatly reduces the cost.
Description of Drawings
In order to illustrate the technical solutions in the embodiments of the present application more clearly, the drawings required for the description of the embodiments are briefly introduced below.
Fig. 1 shows a flowchart of an image processing method according to an exemplary embodiment of the present application.
Fig. 2 shows a schematic diagram of the triangulation principle according to an exemplary embodiment of the present application.
Fig. 3 shows a schematic diagram of the mapping process of a voxel in a two-dimensional image according to an exemplary embodiment of the present application.
Fig. 4 shows an image correction apparatus for rigid-body motion according to an exemplary embodiment of the present application.
Fig. 5a shows the correction result when the object is stationary.
Fig. 5b shows the reconstruction result before object motion correction.
Fig. 5c shows the reconstruction result after object motion correction according to an exemplary embodiment of the present application.
Fig. 6 shows a block diagram of another electronic device for rigid-body motion according to an embodiment of the present application.
Detailed Description
Exemplary embodiments will now be described more fully with reference to the accompanying drawings. The exemplary embodiments, however, can be implemented in various forms and should not be construed as being limited to the embodiments set forth herein; rather, these embodiments are provided so that this application will be thorough and complete and will fully convey the concept of the exemplary embodiments to those skilled in the art. In the drawings, the same reference numerals denote the same or similar parts, and repeated description thereof will therefore be omitted.
The described features, structures or characteristics may be combined in one or more embodiments in any suitable manner. In the following description, numerous specific details are provided to give a thorough understanding of the embodiments of the present disclosure. Those skilled in the art will recognize, however, that the technical solutions of the present disclosure may be practiced without one or more of the specific details, or with other methods, components, materials, apparatuses or operations. In these cases, well-known structures, methods, apparatuses, implementations, materials or operations are not shown or described in detail.
The flowcharts shown in the drawings are merely exemplary illustrations; they do not necessarily include all contents and operations/steps, nor must they be performed in the described order. For example, some operations/steps may be decomposed and some may be combined or partly combined, so the actual order of execution may change according to the actual situation.
The terms "first", "second" and the like in the specification, the claims and the above drawings of the present application are used to distinguish different objects, not to describe a particular order. Furthermore, the terms "include" and "have", as well as any variations thereof, are intended to cover a non-exclusive inclusion. For example, a process, method, system, product or device comprising a series of steps or units is not limited to the listed steps or units, but optionally further includes steps or units that are not listed, or optionally further includes other steps or units inherent to the process, method, product or device.
Specific embodiments of the present application will be described in detail below with reference to the accompanying drawings.
Fig. 1 shows a flowchart of an image processing method according to an exemplary embodiment of the present application. An image processing method according to an exemplary embodiment of the present application is described in detail below with reference to Fig. 1.
According to some exemplary embodiments of the present application, the exemplary embodiment shown in Fig. 1 corrects motion artifacts generated by an awake imaging target during a PET scan. Since the final imaging of the instrument targets only the head of the imaging target, the motion of the head of an active imaging target can be simplified as rigid-body motion.
In step S101, an imaging target is photographed, and an initial frame image and continuous frame images are acquired.
According to some exemplary embodiments of the present application, a pair of image recording devices placed in parallel on both sides of the imaging target records the motion of the imaging target in the form of video. The image recording device includes any device capable of recording a planar image or a 3D image of the imaging target, such as a camera, a DV, a video camera or a 3D scanner. The videos captured by the pair of image recording devices each include an initial frame image at the initial moment and continuous frame images at non-initial moments, where the initial frame image and the continuous frame images each include the images captured by the pair of image recording devices at the same moment. Those skilled in the art should understand that, in this embodiment, the image recording devices are placed in parallel to make it easier to unify the coordinate systems in the subsequent computation and to reduce the computational complexity. In fact, the image recording devices may be arranged at any angle and in any number, and the corresponding coordinate-system conversion can be performed during subsequent data processing.
According to some embodiments, the initial frame image is the frame image at the starting moment of the video captured by the image recording device, and the images at all other moments in the video are continuous frame images.
According to some embodiments, in step S101, a pair of image recording devices of the same model, such as cameras, are placed in parallel on the left and right sides of the imaging target to form a binocular vision system, which records the motion of the object in the form of video.
In step S103, the initial three-dimensional feature points of the initial frame image are calculated.
According to some exemplary embodiments of the present application, the initial three-dimensional feature points of the initial frame image are calculated through the following steps:
First, the two-dimensional feature points of the initial frame image are extracted from the initial frame image by the scale-invariant feature transform (SIFT) method.
Then, the background around the two-dimensional feature points of the initial frame image is removed based on the principle of digital subtraction angiography (DSA), to ensure that the extracted two-dimensional feature points all lie on the image of the imaging target in the initial frame image.
Finally, the two-dimensional feature points in the initial frame images captured by the pair of image recording devices are matched, and the matching information is triangulated, thereby obtaining the initial three-dimensional feature points of the initial frame image.
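The DSA-style background removal in the steps above amounts to subtracting a reference frame that contains only the static background and keeping the pixels that differ from it. A minimal sketch under that assumption follows; the function name and the threshold value are illustrative, not prescribed by the application:

```python
def remove_background(frame, background, threshold=10):
    """Suppress the static background by frame subtraction (DSA principle).

    Pixels whose absolute difference from the background frame falls below
    `threshold` are zeroed, so feature points detected afterwards lie on
    the imaging target only. Images are plain row-major lists of pixel
    intensities.
    """
    out = []
    for row_f, row_b in zip(frame, background):
        out.append([pf if abs(pf - pb) >= threshold else 0
                    for pf, pb in zip(row_f, row_b)])
    return out
```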
In step S105, continuous-frame three-dimensional feature points of the continuous frame images are calculated.
In step S105, the method for calculating the continuous-frame three-dimensional feature points is the same as the method for calculating the initial three-dimensional feature points in step S103, and is not repeated here.
The triangulation principle used in steps S103 and S105 is explained below. Triangulation means recovering the three-dimensional position information of the feature points of an object from their two-dimensional projection positions in different image recording devices. Fig. 2 is a schematic diagram of the triangulation principle according to an exemplary embodiment of the present application.
As shown in Fig. 2, let the coordinates of the intersection of a camera's optical axis with its image be (cx, cy), let the distance between the optical centers of the two cameras be D, and let both focal lengths be f. A feature point P in three-dimensional space is projected onto the left and right images at Pl and Pr, respectively; the horizontal distances of these projections from the optical-axis intersection are xl and xr, and the vertical distances are yl and yr.
Using similar triangles, the depth Z of the point P in the image recording device coordinates is obtained by formula (11).
Z = f·D / (xl − xr)   Formula (11)

X = Z·xl / f,  Y = Z·yl / f   Formula (12)
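Under the parallel-camera assumption above, the triangulation step can be sketched as follows; the function name `triangulate` is our own, and the recovery of X and Y from the depth is a reconstruction of the standard rectified-stereo relations rather than a quote from the application:

```python
def triangulate(xl, yl, xr, f, D):
    """Recover the 3-D coordinates of point P from a rectified stereo pair.

    xl, yl: offsets of the left projection Pl from the principal point;
    xr: horizontal offset of the right projection Pr; f: focal length;
    D: baseline between the two optical centers. By similar triangles the
    disparity d = xl - xr yields the depth Z, which then scales the image
    offsets back into space.
    """
    d = xl - xr          # disparity between the two projections
    Z = f * D / d        # depth, formula (11)
    X = Z * xl / f       # lateral position
    Y = Z * yl / f       # vertical position
    return X, Y, Z
```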
In step S107, the initial three-dimensional feature points are matched with the continuous-frame three-dimensional feature points to obtain the RT matrix.
According to some exemplary embodiments of the present application, the step of calculating the RT matrix includes:
From steps S103 and S105, the set of initial three-dimensional feature points of the initial frame image, P = {p_1, …, p_n}, and the set of continuous-frame three-dimensional feature points, Q = {q_1, …, q_n}, are obtained, respectively.
The error J between the initial three-dimensional feature points and the continuous-frame three-dimensional feature points is minimized by the singular value decomposition (SVD) method, as shown in formula (13).
J = ∑_{i=1}^{n} ‖Rp_i + T − q_i‖²   Formula (13)
where J is the error between the initial three-dimensional feature points and the continuous-frame three-dimensional feature points, R is the rotation matrix, Rp_i denotes the product of the rotation matrix R and the i-th point p_i in the point set P, T is the translation matrix, p_i and q_i denote the i-th three-dimensional feature points in the sets P and Q respectively, i is a natural number with i ≤ n, and n is the number of three-dimensional feature points in the sets P and Q.
The coordinates of the points in the set P and the set Q are averaged by formula (14) to obtain the center positions μ_P and μ_Q of the set P and the set Q:

μ_P = (1/n) ∑_{i=1}^{n} p_i,  μ_Q = (1/n) ∑_{i=1}^{n} q_i   Formula (14)

New point sets P′ and Q′ are then obtained by subtracting the center position from the coordinates of each point, so that the center position coincides with the coordinate origin. These new point sets P′ and Q′ contain the re-centered three-dimensional feature points p_i′ and q_i′, given by formula (15):

p_i′ = p_i − μ_P,  q_i′ = q_i − μ_Q   Formula (15)

The covariance matrix H between the re-centered point sets is formed by formula (16):

H = ∑_{i=1}^{n} p_i′ (q_i′)^T   Formula (16)
Singular value decomposition is performed on the covariance matrix H by formula (17).
[U, S, V] = SVD(H)   Formula (17)
where U, S and V denote the first unitary matrix, the diagonal matrix and the second unitary matrix obtained by the singular value decomposition, respectively, and SVD denotes performing singular value decomposition on the quantity in parentheses. For example, if the covariance matrix H is an m×n matrix, then after singular value decomposition the first unitary matrix U is an m×m unitary matrix, the diagonal matrix S is an m×n diagonal matrix whose diagonal elements are the singular values of the covariance matrix H, and the second unitary matrix V is an n×n unitary matrix.
The rotation matrix R is calculated from the decomposition result by formula (18).
R = VU^T   Formula (18)
The translation matrix T is calculated by formula (19), using the rotation matrix R obtained from formula (18).
T = μ_Q − Rμ_P   Formula (19)
The R and T matrices obtained by formulas (18) and (19) represent the six-axis motion information of the rigid-body motion of the object. The rigid-body motion of an object can be decomposed into translations and rotations about the three spatial axes x, y and z, and the resulting amounts of motion about the three axes form an RT matrix. The RT matrix, also called the rotation-translation (RT) matrix, describes the six-axis motion information of an object, the six axes being the translational and rotational motions along and about the x, y and z directions.
According to some embodiments, the R and T matrices obtained by formulas (18) and (19) are combined to obtain the RT matrix.
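One common way to combine R and T into a single matrix (our illustration; the application does not mandate this exact layout) is the 4×4 homogeneous form, in which the rotation occupies the upper-left 3×3 block and the translation the last column:

```python
def combine_rt(R, T):
    """Merge a 3x3 rotation R and a 3-vector translation T into a 4x4
    homogeneous RT matrix, so that [x', 1] = RT @ [x, 1]."""
    M = [[0.0] * 4 for _ in range(4)]
    for i in range(3):
        for j in range(3):
            M[i][j] = R[i][j]   # rotation block
        M[i][3] = T[i]          # translation column
    M[3][3] = 1.0               # homogeneous row [0, 0, 0, 1]
    return M
```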
According to some embodiments of the present application, the step of obtaining the RT matrix includes: arranging several radioactive point sources on the imaging target; identifying the radioactive point sources by reconstructing the images within an acquisition time frame; obtaining the pose of the imaging target from the plane formed by three non-collinear radioactive point sources, the pose including the position of the imaging target and its rotation angles about the three coordinate axes; and obtaining the RT matrix by computing the motion between consecutive frames.
In step S109, the image is corrected by using the RT matrix to obtain a reconstructed image.
According to some exemplary embodiments of the present application, the step of correcting the PET image by using the RT matrix includes:
A PET scan is performed on the living target to obtain sinogram data. The sinogram data are divided into time frames, and the data within the current time frame are corrected back to the initial position through formula (9) using the RT matrix from step S107. That is, the RT matrix replaces the mapping-relationship matrix M_t in formula (9), thereby yielding the corrected reconstructed image.
According to some embodiments of the present application, before step S109 is performed, the voxel values of the image also need to be corrected by formula (20).
X_t = W_t RT X_0   Formula (20)
where X_t denotes the distribution of the radioactive-substance concentration in the FOV region at time t, X_0 denotes the distribution of the radioactive-substance concentration in the FOV region at time 0, and W_t is the interpolation weight for all voxel positions at time t.
How the voxel values of the PET image are corrected by formula (20) is explained in detail below.
To obtain the mapping between image voxels at different times without a complex image registration process, the center point of each voxel can be transformed in spatial coordinates according to the motion information to obtain its corresponding voxel position. Since the point obtained after the spatial coordinate transformation is not necessarily a voxel center of the image, in order to improve the positioning accuracy, the voxel position at time t is first transformed back to the corresponding voxel position at the initial time according to the motion information, and the voxel value at the transformed position is then computed from the surrounding voxel values by an interpolation algorithm, as shown in formula (21). The activity value f(r_i, 0) at the spatial point r_i is treated as a weighted sum of the voxel values of all voxels adjacent to the spatial point r_i.
f(r_i, 0) = ∑_{j∈N(r_i)} ω_ij x_j(0)   Formula (21)
where N(r_i) denotes the index set of the voxels near the spatial point r_i, and ω_ij is the weight. x_j(0) corresponds to X_0 in formula (20), and f(r_i, 0) corresponds to X_t in formula (20). The matrix form of formula (21) is given by formula (20).
According to some embodiments, the ratio of the overlapping area or volume between the voxel centered at the position of the spatial point r_i and each nearby voxel to the total area or volume of a voxel is used as the weight value ω_ij.
Similarly, the activity values f(r_0, 0), f(r_1, 0), …, f(r_{n−1}, 0) of successive spatial points can be obtained by formula (21).
Fig. 3 shows a schematic diagram of the mapping process of a voxel in a two-dimensional image according to an exemplary embodiment of the present application. As shown in Fig. 3, the gray point indicates that a voxel center A at time t is mapped to its corresponding original position B at time 0; the voxel value at the original position B can be computed by interpolating the voxels near the position of B at time 0.
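The interpolation step of formula (21) illustrated in Fig. 3 can be sketched in the two-dimensional case. For unit-sized voxels, the overlap-area weights described above reduce to the standard bilinear weights, which sum to 1; the helper name `interpolate_at` is ours, and the position is assumed to lie in the interior of the image:

```python
import math

def interpolate_at(image, x, y):
    """Value at a non-grid position (x, y) as the weighted sum of the four
    surrounding voxels (2-D case of formula (21)).

    `image` is a row-major list of lists indexed as image[y][x]; the four
    overlap areas between a unit voxel centered at (x, y) and its
    neighbors play the role of the weights w_ij.
    """
    x0, y0 = math.floor(x), math.floor(y)
    fx, fy = x - x0, y - y0
    w = {(x0, y0): (1 - fx) * (1 - fy),
         (x0 + 1, y0): fx * (1 - fy),
         (x0, y0 + 1): (1 - fx) * fy,
         (x0 + 1, y0 + 1): fx * fy}
    return sum(wij * image[yy][xx] for (xx, yy), wij in w.items())
```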
According to some embodiments, the RT matrix can be acquired by a point-marking technique. The specific steps include: attaching small, light radioactive point sources to the head of the experimental subject (for example, a rat); reconstructing the images within the acquisition time frame and identifying the highlighted points in the images as point sources, the point sources being assumed to remain stationary relative to the head; and obtaining the head pose of the rat from the plane formed by three non-collinear point sources. The head pose includes the position of the object and its three rotation angles about the x, y and z axes, which represent the orientation of the object. The motion matrix RT is then obtained by computing the motion between consecutive frames.
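The plane formed by the three non-collinear point sources determines the head orientation through its normal vector; a minimal sketch of that geometric step follows (the function name is ours, and tracking the normal plus the centroid across frames is one way, not the only way, to derive the pose):

```python
def plane_normal(p1, p2, p3):
    """Unit normal of the plane through three non-collinear point sources.

    Tracking this normal (together with the centroid of the three points)
    from frame to frame gives the orientation part of the head pose used
    to build the motion matrix RT.
    """
    u = [b - a for a, b in zip(p1, p2)]   # edge vector p1 -> p2
    v = [c - a for a, c in zip(p1, p3)]   # edge vector p1 -> p3
    n = [u[1] * v[2] - u[2] * v[1],       # cross product u x v
         u[2] * v[0] - u[0] * v[2],
         u[0] * v[1] - u[1] * v[0]]
    norm = (n[0] ** 2 + n[1] ** 2 + n[2] ** 2) ** 0.5
    return [c / norm for c in n]          # normalize to unit length
```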
According to some exemplary embodiments of the present application, a simple but highly accurate external camera is used to acquire the motion information of a moving object while it is moving, which ensures the accuracy of the acquired motion information and improves the accuracy of the motion-corrected image. At the same time, a large amount of complex image registration computation is avoided, reducing the computational complexity. Moreover, the provided algorithm is applicable not only to small-range motion correction but also to larger-range motion correction, which greatly reduces the cost.
Fig. 4 shows an image processing apparatus according to an exemplary embodiment of the present application. The image processing apparatus shown in Fig. 4 includes a frame image acquisition unit 401, an initial three-dimensional feature point calculation unit 403, a continuous-frame three-dimensional feature point calculation unit 405, an RT matrix calculation unit 407 and an image reconstruction unit 409.
The frame image acquisition unit 401 photographs the imaging target with cameras and acquires the initial frame image and the continuous frame images. The arrangement of the cameras and the acquisition of the images may use the method described in the embodiment of Fig. 1 and are not repeated here.
The initial three-dimensional feature point calculation unit 403 is configured to calculate the initial three-dimensional feature points of the initial frame image. The calculation may use the method described in step S103 of the embodiment of Fig. 1 and is not repeated here.
The continuous-frame three-dimensional feature point calculation unit 405 is configured to calculate the continuous-frame three-dimensional feature points of the continuous frame images. The calculation may use the method described in step S105 of the embodiment of Fig. 1 and is not repeated here.
The RT matrix calculation unit 407 is configured to match the initial three-dimensional feature points with the continuous-frame three-dimensional feature points to obtain the RT matrix. The calculation may use the method described in step S107 of the embodiment of Fig. 1 and is not repeated here.
The image reconstruction unit 409 is configured to correct the PET image by using the RT matrix to obtain a reconstructed image. The image reconstruction may use the method described in step S109 of the embodiment of Fig. 1 and is not repeated here.
Fig. 5a shows the correction result when the object is stationary. Fig. 5b shows the reconstruction result before object motion correction, and Fig. 5c shows the reconstruction result after object motion correction according to an exemplary embodiment of the present application. It can be seen from Figs. 5a to 5c that the image processing method provided in the present application can substantially eliminate the artifacts caused by object motion, thereby improving the quality of the reconstructed image.
Those skilled in the art should note that, although the above embodiments correct a PET image by means of the RT matrix, the above method can in fact also be applied to the motion correction of other medical images such as CT images and MR images, or of one or more of PET-CT images, PET-MR images and CT-MR images; in the correction of these medical images, it is only necessary to replace the corresponding M_t matrix with the RT matrix, which can be easily realized by those skilled in the art in light of the teaching of the present invention and is not repeated here.
Fig. 6 shows a block diagram of another electronic device for rigid-body motion according to an embodiment of the present application. The electronic device shown in Fig. 6 is merely an example and should not impose any limitation on the functions and scope of use of the embodiments of the present application.
As shown in Fig. 6, the electronic device takes the form of a general-purpose computing device. The components of the electronic device may include, but are not limited to, at least one processor 210, at least one memory 220, a bus 230 connecting different system components (including the memory 220 and the processor 210), a display unit 240 and the like. The memory 220 stores program code that can be executed by the processor 210, so that the processor 210 performs the methods described in this specification according to various exemplary embodiments of the present application. For example, the processor 210 may perform the method shown in Fig. 1.
The memory 220 may include a readable medium in the form of a volatile storage unit, such as a random access memory (RAM) unit 2201 and/or a cache storage unit 2202, and may further include a read-only memory (ROM) unit 2203.
存储器220还可以包括具有一组（至少一个）程序模块2205的程序/实用工具2204，这样的程序模块2205包括但不限于：操作系统、一个或者多个应用程序、其它程序模块以及程序数据，这些示例中的每一个或某种组合中可能包括网络环境的实现。Memory 220 may also include a program/utility 2204 having a set of (at least one) program modules 2205, such program modules 2205 including, but not limited to, an operating system, one or more application programs, other program modules, and program data; each of these examples, or some combination thereof, may include an implementation of a network environment.
总线230可以表示几类总线结构中的一种或多种，包括存储单元总线或者存储单元控制器、外围总线、图形加速端口、处理单元或者使用多种总线结构中的任意总线结构的局域总线。Bus 230 may represent one or more of several types of bus structures, including a memory-unit bus or memory-unit controller, a peripheral bus, an accelerated graphics port, a processing unit, or a local bus using any of a variety of bus structures.
电子设备也可以与一个或多个外部设备300（例如键盘、指向设备、蓝牙设备等）通信，还可与一个或者多个使得用户能与该图像校正装置交互的设备通信，和/或与使得该电子设备能与一个或多个其它计算设备进行通信的任何设备（例如路由器、调制解调器等等）通信。这种通信可以通过输入/输出（I/O）接口250进行。而且，电子设备还可以通过网络适配器260与一个或者多个网络（例如局域网（LAN），广域网（WAN）和/或公共网络，例如因特网）通信。网络适配器260可以通过总线230与电子设备的其它模块通信。应当明白，尽管图中未示出，可以结合电子设备使用其它硬件和/或软件模块，包括但不限于：微代码、设备驱动器、冗余处理单元、外部磁盘驱动阵列、RAID系统、磁带驱动器以及数据备份存储系统等。The electronic device may also communicate with one or more external devices 300 (e.g., a keyboard, a pointing device, a Bluetooth device), with one or more devices that enable a user to interact with the image correction apparatus, and/or with any device (e.g., a router or a modem) that enables the electronic device to communicate with one or more other computing devices. Such communication takes place through the input/output (I/O) interface 250. Moreover, the electronic device may communicate with one or more networks (e.g., a local area network (LAN), a wide area network (WAN), and/or a public network such as the Internet) through the network adapter 260, which communicates with the other modules of the electronic device through the bus 230. It should be understood that, although not shown in the figure, other hardware and/or software modules may be used in conjunction with the electronic device, including but not limited to microcode, device drivers, redundant processing units, external disk drive arrays, RAID systems, tape drives, and data backup storage systems.
通过以上的实施方式的描述，本领域的技术人员易于理解，这里描述的示例实施方式可以通过软件实现，也可以通过软件结合必要的硬件的方式来实现。根据本申请实施方式的技术方案可以以软件产品的形式体现出来，该软件产品可以存储在一个计算机可读存储介质（可以是CD-ROM、U盘、移动硬盘等）中或网络上，包括若干计算机程序指令以使得一台计算设备（可以是个人计算机、服务器、或者网络设备等）执行根据本申请实施方式的上述方法。From the description of the above implementations, those skilled in the art will readily understand that the example implementations described here may be implemented in software, or in software combined with the necessary hardware. The technical solutions according to the embodiments of the present application may thus be embodied in the form of a software product, which may be stored on a computer-readable storage medium (e.g., a CD-ROM, USB flash drive, or removable hard disk) or on a network, and which includes computer program instructions that cause a computing device (e.g., a personal computer, a server, or a network device) to perform the above methods according to the embodiments of the present application.
软件产品可以采用一个或多个可读介质的任意组合。可读介质可以是可读信号介质或者可读存储介质。可读存储介质例如可以为但不限于电、磁、光、电磁、红外线、或半导体的系统、装置或器件,或者任意以上的组合。可读存储介质的更具体的例子(非穷举的列表)包括:具有一个或多个导线的电连接、便携式盘、硬盘、随机存取存储器(RAM)、只读存储器(ROM)、可擦式可编程只读存储器(EPROM或闪存)、光纤、便携式紧凑盘只读存储器(CD-ROM)、光存储器件、磁存储器件、或者上述的任意合适的组合。A software product may utilize any combination of one or more readable media. The readable medium may be a readable signal medium or a readable storage medium. The readable storage medium may be, for example, but not limited to, an electrical, magnetic, optical, electromagnetic, infrared, or semiconductor system, device, or device, or any combination thereof. More specific examples (non-exhaustive list) of readable storage media include: electrical connection with one or more conductors, portable disk, hard disk, random access memory (RAM), read only memory (ROM), erasable programmable read-only memory (EPROM or flash memory), optical fiber, portable compact disk read-only memory (CD-ROM), optical storage devices, magnetic storage devices, or any suitable combination of the foregoing.
计算机可读存储介质可以包括在基带中或者作为载波一部分传播的数据信号，其中承载了可读程序代码。这种传播的数据信号可以采用多种形式，包括但不限于电磁信号、光信号或上述的任意合适的组合。可读存储介质还可以是可读存储介质以外的任何可读介质，该可读介质可以发送、传播或者传输用于由指令执行系统、装置或者器件使用或者与其结合使用的程序。可读存储介质上包含的程序代码可以用任何适当的介质传输，包括但不限于无线、有线、光缆、RF等等，或者上述的任意合适的组合。A computer-readable storage medium may include a data signal propagated in baseband or as part of a carrier wave, carrying readable program code. Such a propagated data signal may take many forms, including but not limited to an electromagnetic signal, an optical signal, or any suitable combination thereof. A readable storage medium may also be any readable medium, other than a readable storage medium, that can send, propagate, or transmit a program for use by or in connection with an instruction execution system, apparatus, or device. The program code contained on a readable storage medium may be transmitted over any suitable medium, including but not limited to wireless, wired, optical cable, RF, and the like, or any suitable combination thereof.
可以以一种或多种程序设计语言的任意组合来编写用于执行本申请操作的程序代码，程序设计语言包括面向对象的程序设计语言—诸如Java、C++等，还包括常规的过程式程序设计语言—诸如C语言或类似的程序设计语言。程序代码可以完全地在用户计算设备上执行、部分地在用户设备上执行、作为一个独立的软件包执行、部分在用户计算设备上部分在远程计算设备上执行、或者完全在远程计算设备或服务器上执行。在涉及远程计算设备的情形中，远程计算设备可以通过任意种类的网络，包括局域网（LAN）或广域网（WAN），连接到用户计算设备，或者，可以连接到外部计算设备（例如利用因特网服务提供商来通过因特网连接）。Program code for performing the operations of the present application may be written in any combination of one or more programming languages, including object-oriented programming languages such as Java and C++, as well as conventional procedural programming languages such as C or similar. The program code may execute entirely on the user's computing device, partly on the user's device, as a stand-alone software package, partly on the user's computing device and partly on a remote computing device, or entirely on a remote computing device or server. Where a remote computing device is involved, it may be connected to the user's computing device through any kind of network, including a local area network (LAN) or a wide area network (WAN), or may be connected to an external computing device (for example, over the Internet through an Internet service provider).
上述计算机可读介质承载有一个或者多个程序指令，当上述一个或者多个程序指令被一个设备执行时，使得该设备实现前述功能。The above computer-readable medium carries one or more program instructions which, when executed by a device, cause the device to realize the aforementioned functions.
本领域技术人员可以理解上述各模块可以按照实施例的描述分布于装置中，也可以进行相应变化而位于不同于本实施例的一个或多个装置中。上述实施例的多个模块可以合并为一个模块，也可以进一步将一个模块拆分成多个子模块。Those skilled in the art will appreciate that the above modules may be distributed in an apparatus as described in the embodiments, or may, with corresponding changes, be located in one or more apparatuses different from those of the embodiments. Multiple modules of the above embodiments may be combined into one module, and one module may be further split into multiple sub-modules.
根据本申请的一些示例实施例,利用简单但精度更高的外部相机,在运动物体进行运动的同时获取它的运动信息,保证了运动信息获取的准确性,提升了运动校正图像的准确性。同时,避免了进行大量复杂的图像配准计算,降低了计算的复杂度。而且,所提供的算法不仅只适用于小范围的运动校正,对范围较大的运动校正同样适用,大大降低了成本。According to some exemplary embodiments of the present application, a simple but higher-precision external camera is used to obtain motion information of a moving object while it is moving, thereby ensuring the accuracy of motion information acquisition and improving the accuracy of motion-corrected images. At the same time, a large number of complex image registration calculations are avoided, and the complexity of calculations is reduced. Moreover, the provided algorithm is not only applicable to small-range motion correction, but also applicable to larger-range motion correction, which greatly reduces the cost.
虽然本申请提供了如上述实施例或流程图所述的方法操作步骤，但基于常规或者无需创造性的劳动在所述方法中可以包括更多或者更少的操作步骤。在逻辑性上不存在必要因果关系的步骤中，这些步骤的执行顺序不限于本申请实施例提供的执行顺序。Although the present application provides method operation steps as described in the above embodiments or flowcharts, the method may include more or fewer operation steps based on conventional means or without creative effort. For steps that have no logically necessary causal relationship, the execution order is not limited to that given in the embodiments of the present application.
本说明书中的各个实施例均采用递进的方式描述,各个实施例之间相同相似的部分互相参见即可,每个实施例重点说明的都是与其它实施例的不同之处。Each embodiment in this specification is described in a progressive manner, the same and similar parts of each embodiment can be referred to each other, and each embodiment focuses on the differences from other embodiments.
以上对本申请实施例进行了详细介绍，本文中应用了具体个例对本申请的原理及实施方式进行了阐述，以上实施例的说明仅用于帮助理解本申请的方法及其核心思想。同时，本领域技术人员依据本申请的思想，基于本申请的具体实施方式及应用范围上做出的改变或变形之处，都属于本申请保护的范围。综上所述，本说明书内容不应理解为对本申请的限制。The embodiments of the present application have been described in detail above, and specific examples have been used herein to illustrate the principles and implementations of the present application; the descriptions of the above embodiments are only intended to help understand the method of the present application and its core ideas. Meanwhile, modifications or variations made by those skilled in the art in accordance with the ideas of the present application, within its specific implementations and scope of application, all fall within the protection scope of the present application. In summary, the contents of this specification should not be construed as limiting the present application.

Claims (18)

  1. 一种图像处理方法,其特征在于,所述图像处理方法包括:An image processing method, characterized in that the image processing method comprises:
    拍摄成像目标并获取初始帧图像和连续帧图像;Shoot the imaging target and obtain the initial frame image and continuous frame images;
    计算所述初始帧图像的初始三维特征点;calculating initial three-dimensional feature points of the initial frame image;
    计算所述连续帧图像的连续帧三维特征点;calculating the continuous frame three-dimensional feature points of the continuous frame images;
    匹配所述初始三维特征点和所述连续帧三维特征点,以得到RT矩阵;matching the initial three-dimensional feature points and the three-dimensional feature points of the continuous frames to obtain an RT matrix;
    利用所述RT矩阵对图像进行校正,得到重建图像。The image is corrected by using the RT matrix to obtain a reconstructed image.
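The five steps of claim 1 can be sketched as a minimal Python pipeline skeleton; the callables `compute_3d`, `match_rt`, and `reconstruct` are hypothetical placeholders for the operations of the later claims, not names used by the application:

```python
def process(initial_frame_pair, frame_pairs, compute_3d, match_rt, reconstruct):
    """Flow of claim 1, with the concrete steps injected as callables:
    compute 3-D feature points for the initial and continuous frames,
    match them to obtain one RT matrix per continuous frame, then
    correct and reconstruct the image from those RT matrices."""
    p0 = compute_3d(initial_frame_pair)                         # initial 3-D feature points
    rts = [match_rt(p0, compute_3d(fp)) for fp in frame_pairs]  # one RT per frame
    return reconstruct(rts)                                     # RT-corrected reconstruction
```

The skeleton only fixes the data flow; any implementations of the three steps (e.g., those sketched under claims 7 and 11) can be plugged in.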
  2. 根据权利要求1所述的图像处理方法,其特征在于,利用影像记录设备拍摄成像目标并获取初始帧图像和连续帧图像。The image processing method according to claim 1, wherein an image recording device is used to shoot the imaging target and acquire initial frame images and continuous frame images.
  3. 根据权利要求2所述的图像处理方法,其特征在于,在所述成像目标的两侧平行放置一对型号相同的所述影像记录设备,组成双目视觉系统以拍摄所述成像目标。The image processing method according to claim 2, wherein a pair of image recording devices of the same model are placed in parallel on both sides of the imaging target to form a binocular vision system to photograph the imaging target.
根据权利要求2所述的图像处理方法，其特征在于，利用所述影像记录设备以视频的形式记录所述成像目标的运动，所述视频包括初始时刻的初始帧图像和非初始时刻的连续帧图像，其中，所述初始帧图像和所述连续帧图像分别包括同一时刻一对所述影像记录设备拍摄的图像。The image processing method according to claim 2, wherein the image recording devices record the motion of the imaging target in the form of a video, the video comprising an initial frame image at an initial moment and continuous frame images at non-initial moments, wherein the initial frame image and the continuous frame images each comprise images captured by a pair of the image recording devices at the same moment.
  5. 根据权利要求1所述的图像处理方法,其特征在于,计算所述初始帧图像的初始三维特征点的步骤包括:The image processing method according to claim 1, wherein the step of calculating the initial three-dimensional feature points of the initial frame image comprises:
    利用尺度不变特征变换方法提取所述初始帧图像的二维特征点;Extracting two-dimensional feature points of the initial frame image by using a scale-invariant feature transformation method;
    去除所述初始帧图像的二维特征点的背景图像;removing the background image of the two-dimensional feature points of the initial frame image;
    匹配去除背景后的所述初始帧图像的二维特征点;matching the two-dimensional feature points of the initial frame image after background removal;
    三角化匹配信息,从而得到所述初始帧图像的所述初始三维特征点。Triangulating the matching information, so as to obtain the initial three-dimensional feature points of the initial frame image.
  6. 根据权利要求1所述的图像处理方法,其特征在于,所述计算所述连续帧图像的连续帧三维特征点,包括:The image processing method according to claim 1, wherein the calculating the continuous frame three-dimensional feature points of the continuous frame images comprises:
    利用尺度不变特征变换方法提取所述连续帧图像的二维特征点;Extracting two-dimensional feature points of the continuous frame images by using a scale-invariant feature transformation method;
    去除所述连续帧图像的二维特征点的背景图像;removing the background image of the two-dimensional feature points of the continuous frame images;
    匹配去除背景后的所述连续帧图像的二维特征点;matching the two-dimensional feature points of the continuous frame images after background removal;
三角化匹配信息，从而得到所述连续帧图像的所述连续帧三维特征点。Triangulating the matching information, so as to obtain the continuous frame three-dimensional feature points of the continuous frame images.
  7. 根据权利要求5或6所述的图像处理方法,其特征在于,所述三角化匹配信息的步骤包括:The image processing method according to claim 5 or 6, wherein the step of triangulating matching information comprises:
    通过公式(7-1)计算所述成像目标上任一点在影像记录设备坐标下的深度Z;Calculate the depth Z of any point on the imaging target under the coordinates of the image recording device by formula (7-1);
Z = fT/(x1 − xr)    公式(7-1) Formula (7-1)
    通过公式(7-2)计算所述成像目标上任一点在影像记录设备坐标下的横坐标X与纵坐标Y;Calculate the abscissa X and the ordinate Y of any point on the imaging target under the coordinates of the image recording device by formula (7-2);
X = x1·Z/f ，  Y = y1·Z/f    公式(7-2) Formula (7-2)
其中，T为两个所述影像记录设备的光学距离，f为所述影像记录设备的焦距，x1、xr分别为所述点在两个所述影像记录设备的图像帧上的投影与光轴图像交点的横坐标距离；y1为所述点在其中一个所述影像记录设备的图像帧上的投影与光轴图像交点的纵坐标距离。where T is the optical distance between the two image recording devices, f is the focal length of the image recording devices, x1 and xr are respectively the abscissa distances between the projections of the point on the image frames of the two image recording devices and the intersections of the optical axes with those images, and y1 is the ordinate distance between the projection of the point on the image frame of one of the image recording devices and the intersection of the optical axis with that image.
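The triangulation of claim 7 can be sketched in Python; since the formula images for (7-1) and (7-2) are not reproduced in this text, the sketch assumes the standard rectified parallel-stereo model implied by the definitions of T, f, x1, xr, and y1, and its exact form should be treated as an assumption:

```python
def triangulate(x1, xr, y1, f, T):
    """Recover (X, Y, Z) in the recording device's coordinates from a
    rectified parallel stereo pair: x1 and xr are the horizontal offsets of
    the point's projections from the optical-axis image points, y1 is the
    vertical offset in one image, f the focal length, T the baseline."""
    Z = f * T / (x1 - xr)   # eq. (7-1): depth from disparity
    X = x1 * Z / f          # eq. (7-2): back-project the image offsets
    Y = y1 * Z / f
    return X, Y, Z
```

For example, with f = 500, baseline T = 2, and projections x1 = 50, xr = 25, y1 = 25, this yields the point (4.0, 2.0, 40.0).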
  8. 根据权利要求1所述的图像处理方法,其特征在于,得到所述RT矩阵的步骤包括:The image processing method according to claim 1, wherein the step of obtaining the RT matrix comprises:
    通过奇异值分解方法使所述初始三维特征点和所述连续帧三维特征点的误差最小化,以得到所述RT矩阵。The errors between the initial three-dimensional feature points and the three-dimensional feature points of the consecutive frames are minimized by a singular value decomposition method to obtain the RT matrix.
  9. 根据权利要求8所述的图像处理方法,其特征在于,通过公式(9-1)使所述误差最小化:The image processing method according to claim 8, wherein the error is minimized by formula (9-1):
J = min_{R,T} Σ_{i=1}^{n} ‖(R·p_i + T) − q_i‖²    公式(9-1) Formula (9-1)
其中，R、T分别表示旋转矩阵和平移矩阵，J表示所述误差，p_i、q_i分别为所述初始三维特征点的集合P={p_1,…,p_n}和所述连续帧三维特征点的集合Q={q_1,…,q_n}中的第i个元素，i、n均为自然数，i≤n。where R and T denote the rotation matrix and the translation matrix respectively, J denotes the error, p_i and q_i are the i-th elements of the set P={p_1,…,p_n} of the initial three-dimensional feature points and of the set Q={q_1,…,q_n} of the continuous frame three-dimensional feature points respectively, and i and n are natural numbers with i≤n.
  10. 根据权利要求9所述的图像处理方法,其特征在于,所述奇异值分解方法通过公式(10-1)进行:The image processing method according to claim 9, wherein the singular value decomposition method is carried out by formula (10-1):
    [U,S,V]=SVD(H)    公式(10-1)[U,S,V]=SVD(H) Formula (10-1)
其中，U、S、V分别表示经过奇异值分解后的第一酉矩阵、对角矩阵以及第二酉矩阵，SVD表示奇异值分解，重新中心化的点集之间的协方差矩阵H通过公式(10-2)确定：where U, S, and V respectively denote the first unitary matrix, the diagonal matrix, and the second unitary matrix obtained by the singular value decomposition, SVD denotes singular value decomposition, and the covariance matrix H between the re-centered point sets is determined by formula (10-2):
H = Σ_{i=1}^{n} p_i'·q_i'^T    公式(10-2) Formula (10-2)
其中，p_i'和q_i'分别表示重新中心化后的三维特征点。where p_i' and q_i' denote the re-centered three-dimensional feature points, i.e., the feature points with the centroids of their respective sets subtracted.
  11. 根据权利要求10所述的图像处理方法,其特征在于,所述旋转矩阵R和所述平移矩阵T分别通过公式(11-1)、(11-2)确定:The image processing method according to claim 10, wherein the rotation matrix R and the translation matrix T are respectively determined by formulas (11-1), (11-2):
R = VU^T    公式(11-1) Formula (11-1)
T = −R·μ_P + μ_Q    公式(11-2) Formula (11-2)
其中，μ_P和μ_Q分别为集合P、集合Q的中心位置。where μ_P and μ_Q are the centroids of the set P and the set Q, respectively.
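Claims 8–11 together describe classical SVD-based rigid registration of two point sets. A minimal NumPy sketch, which omits the det(R) reflection check that robust implementations add:

```python
import numpy as np

def estimate_rt(p, q):
    """Find R, T minimising eq. (9-1) for point sets p (initial) and q
    (continuous frame), each of shape (n, 3), via SVD (eq. (10-1))."""
    mu_p, mu_q = p.mean(axis=0), q.mean(axis=0)   # centroids of P and Q
    H = (p - mu_p).T @ (q - mu_q)                 # covariance matrix, eq. (10-2)
    U, S, Vt = np.linalg.svd(H)                   # eq. (10-1): H = U S V^T
    R = Vt.T @ U.T                                # eq. (11-1): R = V U^T
    T = mu_q - R @ mu_p                           # eq. (11-2): T = -R mu_P + mu_Q
    return R, T
```

Given noiseless, correctly matched feature points, the recovered (R, T) reproduces the rigid motion exactly; with noisy matches it is the least-squares optimum of eq. (9-1).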
  12. 根据权利要求1所述的图像处理方法,其特征在于,所述图像包括CT图像、MR图像、PET图像、PET-CT图像、PET-MR图像以及CT-MR图像。The image processing method according to claim 1, wherein the images include CT images, MR images, PET images, PET-CT images, PET-MR images and CT-MR images.
  13. 根据权利要求1所述的图像处理方法,其特征在于,所述利用所述RT矩阵对PET图像进行校正,得到重建图像,包括:The image processing method according to claim 1, wherein said correcting the PET image using said RT matrix to obtain a reconstructed image comprises:
    对所述成像目标进行PET扫描,得到正弦图数据;performing a PET scan on the imaging target to obtain sinogram data;
    利用所述RT矩阵对所述正弦图数据进行校正,以重建图像。Correcting the sinogram data by using the RT matrix to reconstruct an image.
  14. 根据权利要求1所述的图像处理方法,其特征在于,在利用所述RT矩阵对所述图像进行校正之前,所述图像处理方法还包括通过公式(14-1)对所述图像的体素值进行校正:The image processing method according to claim 1, characterized in that before using the RT matrix to correct the image, the image processing method further comprises using the formula (14-1) to correct the voxels of the image The value is corrected:
X_t = W_t·RT·X_0    公式(14-1) Formula (14-1)
其中，X_t表示t时刻FOV区域的放射性物质的浓度的分布，X_0表示初始时刻FOV区域的放射性物质的浓度的分布，W_t为t时刻所有体素位置的插值的权重。where X_t denotes the distribution of the radioactive-substance concentration in the FOV at time t, X_0 denotes that distribution at the initial moment, and W_t is the interpolation weight for all voxel positions at time t.
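Formula (14-1) can be illustrated on a toy volume. In this sketch, nearest-neighbour rounding stands in for the interpolation weights W_t, a deliberate simplification of the claimed interpolation:

```python
import numpy as np

def correct_volume(x0, R, T):
    """Toy version of X_t = W_t * RT * X_0: move each occupied voxel of the
    initial volume x0 by the rigid transform (R, T), rounding to the nearest
    voxel in place of applying the interpolation weights W_t."""
    out = np.zeros_like(x0)
    for src in np.argwhere(x0 != 0):            # occupied voxels only
        dst = np.rint(R @ src + T).astype(int)  # RT applied to voxel coordinates
        if all(0 <= d < s for d, s in zip(dst, x0.shape)):
            out[tuple(dst)] += x0[tuple(src)]
    return out
```

A production implementation would instead compute, for each output voxel, the trilinear weights W_t of its transformed position, which avoids the holes that forward nearest-neighbour mapping can leave.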
  15. 根据权利要求1所述的图像处理方法,其特征在于,得到所述RT矩阵的步骤包括:The image processing method according to claim 1, wherein the step of obtaining the RT matrix comprises:
在所述成像目标上设置若干个放射性点源，通过对采集时间帧内的图像进行重建识别所述放射性点源，根据三个不共线的所述放射性点源组成的面得到所述成像目标的位姿，所述位姿包括所述成像目标的位置和所述成像目标绕三维坐标轴的旋转角度，通过计算连续帧之间的运动得到所述RT矩阵。setting a plurality of radioactive point sources on the imaging target; identifying the radioactive point sources by reconstructing the images within an acquisition time frame; obtaining the pose of the imaging target from a plane formed by three non-collinear radioactive point sources, the pose including the position of the imaging target and its rotation angles about the three-dimensional coordinate axes; and obtaining the RT matrix by calculating the motion between consecutive frames.
一种图像处理装置，其特征在于，所述图像处理装置包括：An image processing apparatus, characterized in that the image processing apparatus comprises:
    帧图像获取单元,用于拍摄成像目标并获取初始帧图像和连续帧图像;A frame image acquisition unit, configured to photograph an imaging target and acquire initial frame images and continuous frame images;
    初始三维特征点计算单元,用于计算所述初始帧图像的初始三维特征点;an initial three-dimensional feature point calculation unit, configured to calculate the initial three-dimensional feature point of the initial frame image;
    连续帧三维特征点计算单元,用于计算所述连续帧图像的连续帧三维特征点;A continuous frame 3D feature point calculation unit, configured to calculate the continuous frame 3D feature points of the continuous frame images;
    RT矩阵计算单元,用于匹配所述初始三维特征点和所述连续帧三维特征点,以得到RT矩阵;An RT matrix calculation unit, configured to match the initial three-dimensional feature points and the three-dimensional feature points of the continuous frames to obtain an RT matrix;
    图像重建单元,用于利用所述RT矩阵对图像进行校正,得到重建图像。An image reconstruction unit, configured to use the RT matrix to correct the image to obtain a reconstructed image.
  17. 一种电子设备,其特征在于,包括:An electronic device, characterized in that it comprises:
    一个或多个处理器;one or more processors;
    存储装置,用于存储计算机程序;storage means for storing computer programs;
    当所述计算机程序被所述一个或多个处理器执行时,使得所述一个或多个处理器实现如权利要求1-15中任一所述的方法。When the computer program is executed by the one or more processors, the one or more processors are made to implement the method according to any one of claims 1-15.
  18. 一种计算机可读存储介质,其特征在于,其上存储有程序指令,所述程序指令被执行时实现如权利要求1-15中任一项所述的方法。A computer-readable storage medium, wherein program instructions are stored thereon, and the method according to any one of claims 1-15 is implemented when the program instructions are executed.
PCT/CN2021/131635 2021-11-12 2021-11-19 Image processing method and apparatus, and electronic device and computer-readable storage medium WO2023082306A1 (en)

Applications Claiming Priority (2)

Application Number Priority Date Filing Date Title
CN202111337384.2 2021-11-12
CN202111337384.2A CN114170146A (en) 2021-11-12 2021-11-12 Image processing method, image processing device, electronic equipment and computer readable storage medium

Publications (1)

Publication Number Publication Date
WO2023082306A1 true WO2023082306A1 (en) 2023-05-19

Family

ID=80479165

Family Applications (1)

Application Number Title Priority Date Filing Date
PCT/CN2021/131635 WO2023082306A1 (en) 2021-11-12 2021-11-19 Image processing method and apparatus, and electronic device and computer-readable storage medium

Country Status (2)

Country Link
CN (1) CN114170146A (en)
WO (1) WO2023082306A1 (en)

Cited By (3)

* Cited by examiner, † Cited by third party
Publication number Priority date Publication date Assignee Title
CN116681732A (en) * 2023-08-03 2023-09-01 南昌工程学院 Target motion recognition method and system based on compound eye morphological vision
CN117281616A (en) * 2023-11-09 2023-12-26 武汉真彩智造科技有限公司 Operation control method and system based on mixed reality
CN117649503A (en) * 2024-01-29 2024-03-05 杭州永川科技有限公司 Image reconstruction method, apparatus, computer device, storage medium, and program product

Families Citing this family (1)

* Cited by examiner, † Cited by third party
Publication number Priority date Publication date Assignee Title
CN116071362B (en) * 2023-03-20 2023-08-15 内蒙古晶环电子材料有限公司 Crystal pulling broken bud detection method, device, computer equipment and storage medium

Citations (4)

* Cited by examiner, † Cited by third party
Publication number Priority date Publication date Assignee Title
US20170048511A1 (en) * 2014-05-28 2017-02-16 Inuitive Ltd. Method for Stereoscopic Reconstruction of Three Dimensional Images
CN107747941A (en) * 2017-09-29 2018-03-02 歌尔股份有限公司 A kind of binocular visual positioning method, apparatus and system
CN110544301A (en) * 2019-09-06 2019-12-06 广东工业大学 Three-dimensional human body action reconstruction system, method and action training system
CN111433818A (en) * 2018-12-04 2020-07-17 深圳市大疆创新科技有限公司 Target scene three-dimensional reconstruction method and system and unmanned aerial vehicle


Also Published As

Publication number Publication date
CN114170146A (en) 2022-03-11

Similar Documents

Publication Publication Date Title
WO2023082306A1 (en) Image processing method and apparatus, and electronic device and computer-readable storage medium
Cai et al. Cine cone beam CT reconstruction using low-rank matrix factorization: algorithm and a proof-of-principle study
US7599540B2 (en) Motion compensated reconstruction technique
US7352386B1 (en) Method and apparatus for recovering a three-dimensional scene from two-dimensional images
US20140161332A1 (en) Image reconstruction from limited or incomplete data
CN111540025B (en) Predicting images for image processing
WO2008024352A2 (en) Methods and systems for registration of images
US10863946B2 (en) Respiratory motion estimation in projection domain in nuclear medical imaging
Olesen et al. Motion tracking in narrow spaces: a structured light approach
WO2023138197A1 (en) Image reconstruction method and apparatus, training method and apparatus, and device and storage medium
CN112150571A (en) Image motion artifact eliminating method, device, equipment and storage medium
CN110555897B (en) Image generation method, device, equipment and storage medium
Bruder et al. Compensation of skull motion and breathing motion in CT using data-based and image-based metrics, respectively
CN111612887B (en) Human body measuring method and device
Van Eyndhoven et al. Combined motion estimation and reconstruction in tomography
GB2541284A (en) Method and computer program product for generating an artefact-reduced voxel data record
JP7459243B2 (en) Image reconstruction by modeling image formation as one or more neural networks
Guo et al. Iterative image reconstruction for limited-angle CT using optimized initial image
CN110853113B (en) TOF-PET image reconstruction algorithm and reconstruction system based on BPF
Maur et al. CBCT auto-calibration by contour registration
Hsiao et al. Bayesian image reconstruction for transmission tomography using deterministic annealing
Maas et al. Nerf for 3d reconstruction from x-ray angiography: Possibilities and limitations
CN114511666A (en) Model generation method, image reconstruction method, device, equipment and medium
Lozenski et al. Neural fields for dynamic imaging
CN113963056B (en) CT image reconstruction method, device, electronic equipment and storage medium

Legal Events

Date Code Title Description
121 Ep: the epo has been informed by wipo that ep was designated in this application

Ref document number: 21963751

Country of ref document: EP

Kind code of ref document: A1