CN113963056A - CT image reconstruction method, device, electronic equipment and storage medium

CT image reconstruction method, device, electronic equipment and storage medium

Info

Publication number
CN113963056A
CN113963056A (application CN202111044738.4A)
Authority
CN
China
Prior art keywords
projection image
projection
frame
image
markers
Prior art date
Legal status
Granted
Application number
CN202111044738.4A
Other languages
Chinese (zh)
Other versions
CN113963056B (en)
Inventor
Yu Liuqing (于留青)
Liu Xiaolin (刘肖琳)
Current Assignee
Shenzhen Ruitiying Medical Equipment Technology Co ltd
Original Assignee
Individual
Priority date
Filing date
Publication date
Application filed by Individual
Priority to CN202111044738.4A
Publication of CN113963056A
Application granted
Publication of CN113963056B
Legal status: Active
Anticipated expiration

Classifications

    • G PHYSICS
    • G06 COMPUTING; CALCULATING OR COUNTING
    • G06T IMAGE DATA PROCESSING OR GENERATION, IN GENERAL
    • G06T17/00 Three dimensional [3D] modelling, e.g. data description of 3D objects
    • G06T11/00 2D [Two Dimensional] image generation
    • G06T11/003 Reconstruction from projections, e.g. tomography
    • G06T7/00 Image analysis
    • G06T7/70 Determining position or orientation of objects or cameras
    • G06T7/97 Determining parameters from multiple pictures
    • G06T2207/00 Indexing scheme for image analysis or image enhancement
    • G06T2207/10 Image acquisition modality
    • G06T2207/10072 Tomographic images
    • G06T2207/10081 Computed x-ray tomography [CT]
    • G06F ELECTRIC DIGITAL DATA PROCESSING
    • G06F17/00 Digital computing or data processing equipment or methods, specially adapted for specific functions
    • G06F17/10 Complex mathematical operations
    • G06F17/11 Complex mathematical operations for solving equations, e.g. nonlinear equations, general mathematical optimization problems
    • G06F17/16 Matrix or vector computation, e.g. matrix-matrix or matrix-vector multiplication, matrix factorization

Landscapes

  • Engineering & Computer Science (AREA)
  • Physics & Mathematics (AREA)
  • General Physics & Mathematics (AREA)
  • Theoretical Computer Science (AREA)
  • Mathematical Physics (AREA)
  • Pure & Applied Mathematics (AREA)
  • Data Mining & Analysis (AREA)
  • Mathematical Analysis (AREA)
  • Mathematical Optimization (AREA)
  • Computational Mathematics (AREA)
  • Software Systems (AREA)
  • Computer Vision & Pattern Recognition (AREA)
  • Algebra (AREA)
  • Databases & Information Systems (AREA)
  • General Engineering & Computer Science (AREA)
  • Computing Systems (AREA)
  • Operations Research (AREA)
  • Computer Graphics (AREA)
  • Geometry (AREA)
  • Apparatus For Radiation Diagnosis (AREA)

Abstract

The embodiment of the application provides a CT image reconstruction method, a CT image reconstruction device, electronic equipment and a storage medium. The method comprises the following steps: fixedly attaching a plurality of markers to the detected object; acquiring a plurality of projection images of the detected object with its markers at different, arbitrary angles; establishing the relation between pixel coordinates and marker space coordinates from the pixel coordinates of the markers in each frame of image, and calculating for each frame the imaging geometric relation between the actual imaging system and the detected object; improving the precision of the geometric relation by mathematical optimization; and performing three-dimensional image reconstruction with the resulting high-precision geometric relations under the actual working conditions, to obtain a CT image of the detected object. The imaging geometric relation of the invention is more precise than one guaranteed by machining and motion-control precision alone; CT image reconstruction precision is improved without requiring high-precision control of the imaging system, and low-precision CT machines and DR machines can conveniently be upgraded into high-precision CBCT equipment.

Description

CT image reconstruction method, device, electronic equipment and storage medium
Technical Field
The present application relates to the field of image processing technologies, and in particular, to a CT image reconstruction method, apparatus, electronic device, and storage medium.
Background
In recent years, various CT machines have developed rapidly and are widely used. Exploiting the physical fact that rays are absorbed to different degrees when penetrating different organs or tissues, a human body is scanned with rays from multiple angles and an image is reconstructed according to the ray attenuation law (using iterative methods, back-projection methods and other CT three-dimensional image reconstruction algorithms). Practical CT image reconstruction algorithms require the parameters of the imaging geometric relationship between the imaging system and the detected object at the moment each projection image is taken. To obtain these parameters, current CT systems generally require the radiation source and the radiation detector to move along a predetermined trajectory: the source emits X-rays at a number of predetermined positions to irradiate the detected object, the detector records a series of projection data at the corresponding source positions to yield the corresponding imaging geometric parameters, and the projection data are then used for three-dimensional reconstruction of the detected object.
However, the accuracy of the above reconstruction algorithms depends on the accuracy of the imaging geometric relationship between the detected object and the imaging system, which at present is mainly guaranteed by the precision of the imaging-system hardware and its mechanical control. This way of controlling and acquiring the imaging geometric relationship makes CT image reconstruction costly yet low in accuracy.
Disclosure of Invention
The embodiment of the application provides a CT image reconstruction method, a CT image reconstruction device, an electronic device and a storage medium, which can reduce the manufacturing cost of a CT machine and improve the reconstruction precision of a CT image.
In a first aspect, an embodiment of the present application provides a CT image reconstruction method, including:
acquiring a plurality of frames of projection images, wherein the plurality of frames of projection images are obtained by an imaging system emitting X rays to a detected object under a plurality of arbitrary imaging angles for imaging, and the detected object is provided with a plurality of markers;
acquiring an imaging geometric relationship corresponding to each frame of projection image according to the pixel coordinates of the marker in each frame of projection image, wherein the imaging geometric relationship corresponding to each frame of projection image is the imaging geometric relationship between the detected object and the imaging system when each frame of projection image is acquired;
and performing three-dimensional image reconstruction according to the multi-frame projection images and the imaging geometric relationship corresponding to each frame of projection image, to obtain the CT image of the detected object.
In a second aspect, an embodiment of the present application provides a CT image reconstruction apparatus, including:
the system comprises an acquisition unit, a processing unit and a display unit, wherein the acquisition unit is used for acquiring multi-frame projection images, the multi-frame projection images are obtained by an imaging system emitting X rays to a detected object under a plurality of arbitrary imaging angles for imaging, and a plurality of markers are arranged on the detected object;
the processing unit is used for acquiring the imaging geometric relationship corresponding to each frame of projection image according to the pixel coordinates of the markers in each frame of projection image, wherein the imaging geometric relationship corresponding to each frame of projection image is the imaging geometric relationship between the detected object and the imaging system when that frame was acquired; and for performing three-dimensional image reconstruction according to the multi-frame projection images and the imaging geometric relationship corresponding to each frame of projection image, to obtain the CT image of the detected object.
In a third aspect, an embodiment of the present application provides an electronic device, including: a processor coupled to a memory, the memory configured to store a computer program, the processor configured to execute the computer program stored in the memory to cause the electronic device to perform the method of the first aspect.
In a fourth aspect, embodiments of the present application provide a computer-readable storage medium, which stores a computer program, where the computer program makes a computer execute the method according to the first aspect.
In a fifth aspect, embodiments of the present application provide a computer program product comprising a non-transitory computer-readable storage medium storing a computer program, the computer program being operable to cause a computer to perform the method according to the first aspect.
The embodiment of the application has the following beneficial effects:
In various traditional CT devices, the imaging geometric relation between the detected object and the imaging system is guaranteed by a mechanical device or by device hardware; here, instead, the imaging geometric relation between the imaging system and the detected object at the moment of imaging is determined by photogrammetric and similar algorithms, and the imaging geometry can additionally be optimized to high accuracy. The global coordinate system of the CT imaging system is fixed to the measured object, and the pixel coordinates of the markers in the projection images directly reflect the position and attitude of the imaging system at imaging time, including errors such as vibration and shaking under actual working conditions, so the imaging geometric relation obtained by marker-based photogrammetry reflects the geometry required for CT reconstruction more accurately. With this method, the imaging geometric relation can be acquired with high precision without high-precision control of a mechanical device, which improves the quality and precision of three-dimensional CT image reconstruction of the detected object and reduces the cost of manufacturing the CT machine. Meanwhile, a DR system can conveniently be converted into a cone-beam CT system by calibrating with the imaging geometric relationship.
Drawings
In order to more clearly illustrate the technical solutions in the embodiments of the present application, the drawings needed to be used in the description of the embodiments are briefly introduced below, and it is obvious that the drawings in the following description are some embodiments of the present application, and it is obvious for those skilled in the art to obtain other drawings based on these drawings without creative efforts.
Fig. 1 is a schematic flowchart of a CT image reconstruction method according to an embodiment of the present disclosure;
FIG. 2 is a schematic diagram of acquiring a plurality of frames of projection images according to an embodiment of the present disclosure;
FIG. 3 is a schematic diagram of intrinsic and extrinsic parameters of an imaging system provided in accordance with an embodiment of the present application;
FIG. 4 is a schematic diagram of acquiring projection images at position 1 and position 2 according to an embodiment of the present disclosure;
fig. 5 is a block diagram illustrating functional units of a CT reconstruction apparatus according to an embodiment of the present disclosure;
fig. 6 is a schematic structural diagram of an electronic device according to an embodiment of the present application.
Detailed Description
The technical solutions in the embodiments of the present application will be clearly and completely described below with reference to the drawings in the embodiments of the present application, and it is obvious that the described embodiments are some, but not all, embodiments of the present application. All other embodiments, which can be derived by a person skilled in the art from the embodiments given herein without making any creative effort, shall fall within the protection scope of the present application.
The terms "first," "second," "third," and "fourth," etc. in the description and claims of this application and in the accompanying drawings are used for distinguishing between different objects and not for describing a particular order. Furthermore, the terms "include" and "have," as well as any variations thereof, are intended to cover non-exclusive inclusions. For example, a process, method, system, article, or apparatus that comprises a list of steps or elements is not limited to only those steps or elements listed, but may alternatively include other steps or elements not listed, or inherent to such process, method, article, or apparatus.
Reference herein to "an embodiment" means that a particular feature, result, or characteristic described in connection with the embodiment can be included in at least one embodiment of the application. The appearances of the phrase in various places in the specification are not necessarily all referring to the same embodiment, nor are separate or alternative embodiments mutually exclusive of other embodiments. It is explicitly and implicitly understood by one skilled in the art that the embodiments described herein can be combined with other embodiments.
To facilitate an understanding of the embodiments of the present application, a description will first be given of relevant knowledge to which the present application relates.
At present, when various types of CT perform three-dimensional reconstruction, a radiation source and a radiation detector are generally required to move along a preset running track and acquire images at a relatively fixed position. The radiation source emits X-rays at a plurality of set positions on the motion trail to irradiate the detected object, then the radiation detector records projection data at the corresponding direction of the radiation source, and the three-dimensional reconstruction of the detected object is realized by combining the obtained projection data with a reconstruction algorithm.
For example, in Cone Beam Computed Tomography (CBCT), the projection imaging process can be expressed by formula (1):
p = s·f formula (1);
where s represents the imaging geometric relationship between the imaging system and the detected object and can be expressed as a matrix, f is the detected object expressed in spatial coordinates (its attenuation distribution), and p is the projection data of the detected object.
Therefore, the tomographic image reconstruction process can be expressed by formula (2):
f = s^(-1)·p formula (2);
Since finite-angle reconstruction is ill-posed, and the ill-posedness is handled by the generalized inverse, the estimate of the reconstructed image can be represented by formula (3):
f̂ = s^(+)·p formula (3);
where s^(+) denotes the generalized (Moore-Penrose) inverse of s.
as can be seen from the formula (1), the formula (2) and the formula (3), the imaging geometric relationship between the imaging system and the detected object is a precondition for various CT three-dimensional image reconstruction algorithms, and the accuracy of the imaging geometric relationship is one of the key factors influencing the quality of the CT three-dimensional image reconstruction result. However, the imaging geometric relationship between the imaging system of various CT machines and the detected object is ensured by the hardware processing precision and the mechanical motion control precision of the imaging system, for example, the FDK reconstruction algorithm of the three-dimensional cone-beam CT system usually requires that the cone-beam central ray is perpendicular to the ray detector and projected on the center of the ray detector. Therefore, when the machining and control accuracy of the imaging system is not sufficient, other problems such as artifacts and blurring are brought to the reconstruction result, and the reconstruction quality effect is reduced, so that the CT image reconstruction cost is high and the accuracy is low.
How to reduce the cost of CT image reconstruction and improve the reconstruction accuracy is a problem to be solved urgently at present.
In order to facilitate understanding of the technical solutions of the present application, the related concepts of the present application are explained and illustrated.
The imaging system of the present application may be an imaging system for DR imaging, a helical CT imaging system whose large number of detector rows forms cone-beam CT imaging, an imaging system for Cone Beam Computed Tomography (CBCT), a C-arm CT imaging system, or an imaging system for X-ray tomosynthesis. In each of these imaging modes the imaging system comprises a radiation source and a radiation detector, and in this application the relative distance between the radiation source and the radiation detector is kept unchanged.
The coordinate systems to be established in this application comprise a global coordinate system, a measured object coordinate system, an imaging system coordinate system and an image coordinate system. The coordinate origin of the measured object coordinate system is established on the measured object or on a bearing device fixedly connected to it, and for convenience of calculation the measured object coordinate system is taken as the global coordinate system in this application. The coordinate origin of the imaging system coordinate system is established at the center of the ray source, with the z axis coinciding with the optical axis of the imaging system. The space coordinates of a marker are its three-dimensional coordinates in the measured object coordinate system, and the pixel coordinates of a marker are its two-dimensional coordinates in the image coordinate system.
The present application identifies and extracts each marker on a projection image by image processing, obtains the pixel coordinates of each marker, and matches and numbers the same-name markers (same-name points, i.e., the same markers) between different images by image-processing and mathematical methods, for example algorithms such as RANSAC.
Optionally, in order to facilitate automatic matching of the same marker (the same name point) between different images, a plurality of specific markers may be made into different specific shapes, such as a triangle, a cross, and the like. For each frame of projection images, when the marker in the specific shape is detected by adopting an image processing and pattern recognition algorithm, the marker can be automatically matched and numbered, and the rest markers can be matched and numbered through the relative position relation with the specific marker, so that the markers contained in each frame of projection images and the numbers of the markers are quickly and accurately recognized, and the homonymy points in the two frames of projection images are quickly obtained.
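As one hedged sketch of the same-name-point matching step, the snippet below assumes marker centroids have already been detected and tentatively paired in two projection images, and uses OpenCV's RANSAC-based fundamental-matrix estimation to reject mismatches; the function name and the pre-pairing step are illustrative assumptions, not the patent's prescribed procedure.

```python
import numpy as np
import cv2

def match_markers(pts_a: np.ndarray, pts_b: np.ndarray):
    """pts_a, pts_b: (N, 2) float32 arrays of tentatively paired marker
    centroids from two projection images (N >= 8 for FM_RANSAC)."""
    F, mask = cv2.findFundamentalMat(pts_a, pts_b, cv2.FM_RANSAC, 1.0)
    keep = mask.ravel().astype(bool)          # RANSAC inliers = true pairs
    return F, pts_a[keep], pts_b[keep]
```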
Optionally, in order to reduce the influence of the ray blocking effect of the markers, after the markers of each image are extracted by the image processing algorithm and matched with the numbers, the image at the marker position may be interpolated by using the gray level of the image around the marker position to replace the original gray level of the marker, so as to suppress the influence of the ray-sensitive markers on the radiographic image. The numerical value of the position of the marker can also be set to be null, namely, the image gray scale of the marker does not participate in the calculation of the three-dimensional image reconstruction, so that the disturbance influence of the ray sensitive material of the marker can be reduced, and the quality of the three-dimensional image reconstruction is not influenced.
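A minimal sketch of this marker-suppression idea under the stated interpolation approach; cv2.inpaint is used as one readily available interpolator (the patent does not prescribe a specific one), and the circular mask radius is an assumed parameter.

```python
import numpy as np
import cv2

def suppress_markers(proj: np.ndarray, centers, radius: int = 6) -> np.ndarray:
    """proj: 8-bit projection image (rescale 16-bit data first);
    centers: iterable of (x, y) marker pixel coordinates."""
    mask = np.zeros(proj.shape[:2], dtype=np.uint8)
    for x, y in centers:
        cv2.circle(mask, (int(round(x)), int(round(y))), radius, 255, -1)
    # Fill the masked marker pixels from the surrounding gray levels.
    return cv2.inpaint(proj, mask, radius, cv2.INPAINT_TELEA)
```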
Alternatively, the material of the marker may be any material sensitive to x-rays, such as lead, steel, or a bone material, and the material of the marker is not limited in this application. The shape of the marker may be in the form of a bead or a line, and the shape of the marker is not limited in the present application.
Optionally, the marker is disposed on the detected object, the marker may be directly disposed on the detected object, or the marker may be indirectly disposed on the detected object, for example, the marker may be disposed on the accessory, the accessory may be disposed on the detected object, and the accessory may be an accessory that can be freely detached. Therefore, the present application does not limit the manner in which the marker is provided on the object to be detected. In addition, the positional relationship among the m markers in the present application may be arbitrary. For example, the m markers can be randomly and fixedly connected to the detected object and move along with the detected object; alternatively, the relationship may be fixed, for example, m markers may be placed on a shelf, and the shelf may be attached to the object to be detected.
Optionally, in order to accurately obtain the pixel coordinates of each marker in each frame of projection image, the markers in each frame of projection image are first extracted and positioned with high precision by using an image processing algorithm, for example, quadratic surface fitting is performed on a gray scale surface or a correlation coefficient matrix of a spherical marker, then an extreme value is obtained, and the extreme value point is used as the pixel coordinates of the marker.
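The quadratic-surface fit can be sketched as follows: a small gray-level (or correlation-coefficient) patch around the integer-pixel peak is fitted by least squares, and the analytic extremum of the fitted surface gives the sub-pixel coordinate. The patch size and function name are illustrative assumptions.

```python
import numpy as np

def subpixel_peak(patch: np.ndarray):
    """patch: small 2-D gray-level (or correlation) array around the
    integer-pixel peak. Returns the sub-pixel (x, y) offset in the patch."""
    h, w = patch.shape
    ys, xs = np.mgrid[0:h, 0:w]
    x, y, z = xs.ravel(), ys.ravel(), patch.ravel().astype(float)
    # Fit z = a x^2 + b y^2 + c xy + d x + e y + g by least squares.
    A = np.column_stack([x * x, y * y, x * y, x, y, np.ones_like(z)])
    a, b, c, d, e, g = np.linalg.lstsq(A, z, rcond=None)[0]
    # Zero gradient: 2a x + c y = -d  and  c x + 2b y = -e.
    return tuple(np.linalg.solve([[2 * a, c], [c, 2 * b]], [-d, -e]))
```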
Referring to fig. 1, fig. 1 is a schematic flowchart of a CT image reconstruction method according to an embodiment of the present disclosure. The method is applied to a CT image reconstruction device. The method comprises the following steps:
101: acquiring a plurality of frames of projection images.
Illustratively, the multi-frame projection images are obtained by the imaging system emitting X-rays toward the detected object at a plurality of imaging angles; the frames correspond one-to-one to the imaging angles, and the imaging angles may be arbitrary.
In an embodiment of the present application, the above-mentioned implementation process of acquiring n frames of projection images may be: controlling the detected object to rotate and/or translate for multiple times, wherein each rotation and/or translation forms an imaging angle; and under n imaging angles obtained by rotating and/or translating the detected object for n times, controlling the ray source to emit X rays to the detected object to obtain n frames of projection images.
As shown in fig. 2, the object to be detected is disposed on a carrier device, which can rotate. Controlling the bearing device to rotate to drive the detected object to rotate together; after the bearing device is rotated each time, the ray source is controlled to emit X rays to the detected object, the ray detector records a series of projection data in the corresponding direction of the ray source, a projection image corresponding to each rotation is obtained, and n frames of projection images are obtained.
Of course, in practical applications, the detected object may also rotate autonomously, for example, when the detected object is a person, the person may rotate autonomously for multiple times. After each rotation, the ray source is controlled to emit X rays to the detected object, and n frames of projection images are obtained.
In another embodiment of the present application, the above implementation process of acquiring n frames of projection images may further be: the imaging system is controlled to rotate and/or translate for multiple times, and the ray source is controlled to emit X rays to the detected object after the imaging system is rotated and/or translated every time, so that n frames of projection images are obtained. The imaging system may rotate and/or translate on a predetermined track, or may not rotate and/or translate on a predetermined track, for example, may rotate and/or translate along an arbitrary trajectory. The imaging geometry at each imaging position may be unknown while moving on a predetermined trajectory.
In another embodiment of the present application, the above implementation process of acquiring n frames of projection images may further include: and controlling the imaging system and the detected object to perform multiple rotations and/or translations simultaneously, and controlling the ray source to emit X rays to the detected object after each rotation and/or translation to obtain n frames of projection images.
Generally, n imaging angles are formed between an imaging system and a detected object through a rotating and/or translating mode, and then the ray source is controlled to respectively emit X rays to the detected object under the n imaging angles to obtain n frames of projection images. Moreover, the n imaging angles may be arbitrary, and it is not necessary to emit X-rays to the detected object when the radiation source and/or the radiation detector reach a fixed position.
102: and acquiring an imaging geometric relation corresponding to each frame of projection image according to the pixel coordinates of the marker in each frame of projection image.
It should be understood that the imaging geometric relationship of each frame of projection image referred to in this application is the imaging geometric relationship between the measured object coordinate system and the imaging system coordinate system, i.e., the attitude and position relationship of the imaging system coordinate system relative to the measured object coordinate system, when each frame of projection image is acquired, and they are essentially identical.
Illustratively, the imaging geometric relationship corresponding to each frame of projection image includes the internal and external parameters of the imaging system for that frame. As shown in FIG. 3, the internal parameters of each frame include the focal length, i.e. the distance D_SD from the ray source C to the ray detector, and the coordinates (C_x, C_y) of the principal point; the principal point is the foot of the perpendicular from the ray source C to the imaging surface of the ray detector, the line between the ray source C and the principal point is the optical axis, and the principal point coordinates are taken as the pixel coordinates of the central pixel of each frame of projection image. The external parameters are the attitude and position of the imaging system coordinate system relative to the measured object coordinate system at imaging time, where the attitude is represented by a rotation matrix and the position by a translation matrix.
Further, the imaging geometric relationship among the measured object coordinate system, the imaging system coordinate system and the image coordinate system can be expressed by formula (4):

Z_C·(u, v, 1)^T = K·[R_0 | T_0]·(X, Y, Z, 1)^T = M·(X, Y, Z, 1)^T formula (4);

where Z_C is a constant factor; (X, Y, Z) are the space coordinates of the marker P in the measured object coordinate system; (u, v) are the pixel coordinates of the image point obtained by central perspective projection of the marker P, i.e. the pixel coordinates of the marker P in the projection image; K is the internal-parameter matrix formed from the source-to-detector distance D_SD, the detector pixel sizes du and dv, and the principal point (C_x, C_y) (its explicit form is given in formula (8) below); R_0 is the rotation matrix, a 3 x 3 unit orthogonal matrix whose elements are trigonometric-function combinations of the rotation angles, the rotation angles being the Euler angles through which the measured object coordinate system is rotated about its three coordinate axes to bring its attitude into agreement with that of the imaging system coordinate system; and T_0 = (T_x, T_y, T_z) is the translation matrix, whose components are the translations along the three coordinate axes that carry the origin of the measured object coordinate system to the origin of the imaging system coordinate system. T_0 may thus also be read as the coordinates of the origin of the measured object coordinate system in the imaging system coordinate system, which equivalently fixes the position of the imaging system origin in the measured object coordinate system.
As can be known from equation (4), the imaging geometry corresponding to each frame of projection image can also be represented by a projection matrix M formed by the internal and external parameters of the imaging system, which are consistent in nature. Therefore, when the imaging geometric relationship in the adopted CT three-dimensional image reconstruction algorithm uses different mathematical expression forms such as a projection matrix or an attitude, a position component, and the like, the corresponding solving and expression forms in the imaging geometric relationship solving process can be adopted, and details are not repeated below.
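A sketch of formula (4) in code, under the assumption of a Z-Y-X Euler-angle convention (the patent does not fix one): the projection matrix M = K·[R_0 | T_0] is composed from the internal parameters and the extrinsic pose, and a marker's space coordinates are mapped to pixel coordinates.

```python
import numpy as np

def rotation_from_euler(rx, ry, rz):
    """R0 as a Z-Y-X composition of the three Euler angles (radians)."""
    cx, sx = np.cos(rx), np.sin(rx)
    cy, sy = np.cos(ry), np.sin(ry)
    cz, sz = np.cos(rz), np.sin(rz)
    Rx = np.array([[1, 0, 0], [0, cx, -sx], [0, sx, cx]])
    Ry = np.array([[cy, 0, sy], [0, 1, 0], [-sy, 0, cy]])
    Rz = np.array([[cz, -sz, 0], [sz, cz, 0], [0, 0, 1]])
    return Rz @ Ry @ Rx

def project_marker(P, D_SD, du, dv, Cx, Cy, euler, T0):
    """Map marker space coordinates P=(X, Y, Z) to pixel coordinates (u, v)."""
    K = np.array([[D_SD / du, 0.0, Cx],
                  [0.0, D_SD / dv, Cy],
                  [0.0, 0.0, 1.0]])
    M = K @ np.hstack([rotation_from_euler(*euler),
                       np.reshape(T0, (3, 1)).astype(float)])
    uvw = M @ np.append(np.asarray(P, float), 1.0)
    return uvw[:2] / uvw[2]
```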
The imaging geometric relationship to be obtained for each frame of projection image thus consists of the principal point coordinates, the source-to-detector distance, the rotation matrix and the translation matrix for that frame. In this application the principal point coordinates (C_x, C_y) of each frame are set to the pixel coordinates of the central pixel of the projection image, so they can be determined directly from each frame and are the same for all frames. Furthermore, without loss of generality, the embodiments of the present application assume that the distance D_SD from the ray source C to the ray detector is unchanged throughout the imaging process, so the internal parameters of all frames are the same and are collectively referred to as the internal parameters of the imaging system; the description below takes these as the example. Hence, once the internal parameters (chiefly D_SD) have been determined for the first time, the internal parameters of all projection images are determined. For simplicity, the nominal source-to-detector distance D_SD of the CT system can be used directly. The emphasis of the present application therefore shifts to how to determine the external parameters corresponding to each frame of projection image.
Several methods for determining the external parameters corresponding to each frame of projection image are described below; the implementation of acquiring the external parameters of the imaging system is, however, not limited to these.
Method 1: there are multiple frames of projection images in which the spatial coordinates of the markers are known.
It should be understood that the spatial coordinates of the plurality of markers are known, that is, the relative positional relationship between the plurality of markers with respect to the origin of coordinates of the measured object coordinate system is fixed, and the relative positional relationship does not change when the markers are placed on the measured object. Therefore, after the measured object coordinate system is established, the space coordinates of the markers can be directly obtained.
When any frame of projection image contains markers with known spatial coordinates whose number exceeds the first threshold, the spatial coordinates of those markers and their pixel coordinates in that frame can be used directly, according to formula (4), to form a system of linear equations in the elements of the projection matrix of the frame. Solving this system, and then decomposing the projection matrix according to its relation to the internal and external parameters of the imaging system, yields the internal and external parameters of the frame, i.e. its imaging geometric relationship.
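The linear solve just described is, in essence, a standard direct linear transformation (DLT); a minimal sketch, assuming the last entry of the projection matrix is normalized to 1 and at least six markers with known space coordinates are visible in the frame:

```python
import numpy as np

def dlt_projection_matrix(XYZ: np.ndarray, uv: np.ndarray) -> np.ndarray:
    """XYZ: (n, 3) known marker space coordinates; uv: (n, 2) their pixel
    coordinates in one frame; n >= 6. Returns the 3x4 projection matrix."""
    rows, rhs = [], []
    for (X, Y, Z), (u, v) in zip(XYZ, uv):
        # u = (m11 X + m12 Y + m13 Z + m14) / (m31 X + m32 Y + m33 Z + 1), etc.
        rows.append([X, Y, Z, 1, 0, 0, 0, 0, -u * X, -u * Y, -u * Z])
        rows.append([0, 0, 0, 0, X, Y, Z, 1, -v * X, -v * Y, -v * Z])
        rhs += [u, v]
    m = np.linalg.lstsq(np.asarray(rows, float), np.asarray(rhs, float),
                        rcond=None)[0]
    return np.append(m, 1.0).reshape(3, 4)
```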
Optionally, when the CT three-dimensional reconstruction algorithm is directly implemented based on the projection matrix, the internal parameters and the external parameters of the imaging system do not need to be further decomposed from the projection matrix, and the solved projection matrix can be directly used to complete the CT three-dimensional image reconstruction.
Alternatively, when the distribution of the markers with known three-dimensional coordinates is specially designed to satisfy certain constraint conditions, only 3 markers with known spatial coordinates are needed to acquire the internal and external parameters of the frame of projection image.
It should be noted that if a frame of projection image is used to calculate the internal parameters of the imaging system for the first time together with the external parameters of that frame, 6 markers with known spatial coordinates are required in the frame; in some cases fewer than 6 are available. If the internal parameters of the imaging system have already been calculated, for example from other frames before calculating the external parameters of this frame, or if the nominal source-to-detector distance of the imaging system is used directly as the internal parameter, then only 3 markers with known spatial coordinates are needed in the frame to determine its external parameters. Therefore, when the number of markers with known spatial coordinates in each frame exceeds the first threshold, a frame containing 6 or more markers with known spatial coordinates can be found first and used to calculate the internal parameters of the imaging system.
Method 2: the spatial coordinates of the markers are not known in any frame of projection image, or the number of markers with known spatial coordinates is less than the first threshold.
Illustratively, a first projection image and a second projection image are selected from the plurality of projection images to form a first projection image pair, the first projection image and the second projection image have k identical markers, and the k identical markers are referred to as k first markers, and k is greater than or equal to a second threshold (the second threshold may be set to be 7 in this application). In a possible implementation, in order to ensure sufficient intersection measurement accuracy, two projection images whose imaging angle difference has an absolute value greater than a threshold value may be selected from the plurality of projection images.
Then, according to the pixel coordinates of each first marker in the first projection image and the second projection image respectively, determining the space coordinates of each first marker; and determining the imaging geometrical relationship corresponding to each frame of projection image according to the space coordinate of the first marker and the pixel coordinate of the marker in each frame of projection image.
The process of acquiring the spatial coordinates of the first marker is described below with reference to the drawings.
As shown in fig. 4, when the first projection image and the second projection image are acquired at the position 1 and the position 2, respectively, and the imaging angle between the first projection image and the second projection image is greater than the threshold value, the first projection image and the second projection image may be combined into a first projection image pair.
Illustratively, according to epipolar geometry, the pixel coordinates of the i-th of the k first markers in the first projection image and the second projection image satisfy the constraint of formula (5):

x'_i^T·F·x_i = 0 formula (5);

where x_i is the pixel coordinate of the i-th first marker in the first projection image, x'_i is the pixel coordinate of the i-th first marker in the second projection image, and F is the fundamental matrix, which can be represented by formula (6):

F = K^(-T)·E·K^(-1) formula (6);

where E is the essential matrix and K is the internal-parameter matrix formed by the internal parameters of the imaging system. The essential matrix E can be represented by formula (7):

E = [t]_x·R formula (7);

where R and t are the rotation matrix and translation matrix required to transform the imaging system coordinate system from the attitude and position of position 1 to the attitude and position of position 2, i.e. the relative rotation-translation relationship between the first projection image and the second projection image. Further, the internal-parameter matrix K can be represented by formula (8):

K = [[D_SD/du, 0, C_x], [0, D_SD/dv, C_y], [0, 0, 1]] formula (8);
further, acquiring pixel coordinates of the ith first marker in the first projection image and the second projection image respectively; and constructing a constraint equation according to the pixel coordinates of the ith first marker in the first projection image and the second projection image and the basic matrix.
Illustratively, the constraint equation can be expressed by formula (9):

(x'_i, y'_i, 1)·F·(x_i, y_i, 1)^T = 0 formula (9);

where (x_i, y_i) are the pixel coordinates of the i-th first marker in the first projection image and (x'_i, y'_i) are its pixel coordinates in the second projection image.
In addition, the fundamental matrix F itself satisfies the relationship shown in formula (10):

det(F) = 0 formula (10);

where det denotes the determinant. Without loss of generality, take f_33 = 1. The fundamental matrix F then contains 8 unknown parameters to be determined, so determining F requires the pixel coordinates of at least 7 first markers in both the first and the second projection image; accordingly, at least 7 markers are disposed on the detected object in this application, and the number of first markers in the first projection image pair is greater than or equal to 7. The present application takes 7 markers as the example. For each first marker, an explicit constraint equation as shown in formula (11) can be constructed from formula (9):

x_i·x'_i·f_11 + y_i·x'_i·f_12 + x'_i·f_13 + x_i·y'_i·f_21 + y_i·y'_i·f_22 + y'_i·f_23 + x_i·f_31 + y_i·f_32 + 1 = 0 formula (11);

For the 7 first markers, 7 explicit constraint equations of the form of formula (11) can be constructed; combined with formula (10), this yields 8 polynomial equations in the 8 unknown parameters (f_11, f_12, f_13, f_21, f_22, f_23, f_31, f_32). Solving these 8 equations gives the 8 unknown parameters, and hence the fundamental matrix F.
Further, substituting the determined fundamental matrix F and the internal-parameter matrix K into formula (6) yields the essential matrix E.
further, the intrinsic matrix E is subjected to Singular Value Decomposition (SVD) Decomposition, and the obtained intrinsic matrix E can also be represented by formula (12):
Figure BDA0003250752060000111
wherein, UE,VEIs an orthogonal matrix, sigmaEIs a matrix of singular values, ΣE=diag(σ1,σ1,0),UE3Is a matrix UEThe 3 rd column vector of (1).
Further, there are four possibilities to coexist when decomposing the essential matrix E into the rotation matrix and the translation matrix required to transform the imaging system coordinate system from the pose and position of position 1 to the pose and position of position 2, which can be represented by equation (13):
Figure BDA0003250752060000112
wherein:
Figure BDA0003250752060000121
according to the criterion that the space point corresponding to the image matching inner point set (namely, the matching point set formed by the same-name points, namely, the first marker) in the stereo image pair is necessarily in front of the two-view sight line (namely, in front of the sight lines of the first projection image and the second projection image), three possible situations in the formula (13) can be eliminated, and the rotation matrix R and the translation matrix t with the scaling coefficient, which are only needed for reasonably converting the imaging system coordinate system from the posture and the position of the position 1 to the posture and the position of the position 2, can be screened out.
Illustratively, this t is not yet the final translation matrix; in fact the final translation matrix is T = s·t, where s is a scale factor determined by

s = L_i / L'_i,

with L_i the true distance between any pair of first markers and L'_i the reconstructed distance between the same pair. Therefore, after obtaining the translation matrix t, the spatial coordinates of the k first markers can be determined by the principle of triangular intersection from the obtained R and t, the distance L'_i of any pair of first markers computed, the scale factor s determined, and the final translation matrix T obtained from it.
Therefore, for the first projection image pair, according to the rotation matrix R and the final translation matrix T between the determined first projection image and the second projection image, and the pixel coordinates of the k first markers in the first projection image and the second projection image respectively, the spatial coordinates of the k first markers in the imaging system coordinate system are determined by using the principle of triangular intersection.
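A sketch of the triangular-intersection and rescaling steps, assuming 3x4 projection matrices for the two views are available; the linear (DLT-style) triangulation shown is one standard choice, not necessarily the patent's exact formulation.

```python
import numpy as np

def triangulate_point(M1, M2, uv1, uv2):
    """Linear triangulation: M1, M2 are 3x4 projection matrices of the two
    views; uv1, uv2 the marker's pixel coordinates. Returns (X, Y, Z)."""
    A = np.vstack([uv1[0] * M1[2] - M1[0], uv1[1] * M1[2] - M1[1],
                   uv2[0] * M2[2] - M2[0], uv2[1] * M2[2] - M2[1]])
    Xh = np.linalg.svd(A)[2][-1]
    return Xh[:3] / Xh[3]

def apply_scale(points: np.ndarray, i: int, j: int, true_dist: float):
    """Rescale so markers i and j are the known true distance L_i apart."""
    s = true_dist / np.linalg.norm(points[i] - points[j])   # s = L_i / L'_i
    return s * points, s
```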
The space coordinate of the marker obtained according to the pixel coordinate of the marker is the space coordinate of the marker in the imaging system coordinate system, and a global coordinate system fixedly connected with the measured object is appointed and established according to the space coordinate of the marker in the imaging system coordinate system and application requirements, so that the global space coordinate of each marker is obtained.
Since the internal parameters corresponding to each projection image are the same, obtaining the internal parameters of the first projection image or the second projection image corresponds to obtaining the internal parameters of all projection images. And subsequently, determining the corresponding external parameters of each frame of projection image according to the space coordinates of not less than 3 markers and the pixel coordinates of the markers in each frame of projection image.
Illustratively, N pose estimation operations are performed according to the external parameters corresponding to the first projection image or the second projection image respectively and the pixel coordinates of the markers in each frame of projection image, so as to obtain the external parameters of each frame of projection image, wherein the ith pose estimation operation includes the following steps:
a projection image a is selected from the projection image set h_(i-1) and a projection image b from the projection image set w_(i-1), where the projection images a and b share p identical markers, taken as the p second markers; the projection image sets h_(i-1) and w_(i-1) are the sets used when performing the (i-1)-th pose estimation operation. Here, the projection image set h_i consists of those frames among the multi-frame projection images whose imaging geometric relationship is known, and the projection image set w_i consists of all the remaining frames, i.e. those whose imaging geometric relationship (mainly the external parameters) is unknown.
It should be noted that the projection images of known imaging geometry in the set h_i may have been obtained by any of the methods of the present application.
When the number of second markers with known spatial coordinates among the p second markers is greater than or equal to a third threshold (which may, for example, be set to 3), the external parameters of projection image b are determined by the 2D-3D pose estimation method from the internal parameters of the imaging system, the spatial coordinates of those second markers with known spatial coordinates, and their pixel coordinates in projection image b.
The imaging geometric relationship of projection image b is thus acquired, so projection image b is moved out of the projection image set w_(i-1) and placed into the projection image set h_(i-1), giving the projection image sets h_i and w_i for the i-th pose estimation operation; that is, projection images of known imaging geometry are moved into the set h_i. It should be understood that after the N pose estimation operations have been performed, the projection image set w_N is empty and the imaging geometric relationship of every frame of projection image has been solved.
Further, for a second marker of which the spatial coordinates are unknown, among the p second markers, the spatial coordinates of the second marker can be determined based on the imaging geometric relationship between the projection image a and the projection image b, so as to determine the spatial coordinates of the p second markers. Therefore, after each pose estimation operation is performed, if a marker with unknown spatial coordinates exists in the same marker on two frames of projection images participating in the pose estimation operation, the spatial coordinates of the marker with unknown spatial coordinates can be calculated based on the imaging geometric relationship obtained by the pose estimation operation, and thus, as the number of times of performing the pose estimation operation increases, more and more spatial coordinates of the marker are calculated on each frame of projection image. Therefore, if any marker appears in at least two frames of projection images, the spatial coordinates of the marker on each frame of projection image can be calculated after a plurality of pose estimation operations are performed.
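One pass of the pose-estimation operation can be sketched with a 2D-3D (PnP) solve; OpenCV's solvePnP is used here for illustration. Note that its default iterative solver requires at least 4 point pairs; the minimal 3-marker case mentioned above needs a dedicated P3P variant such as cv2.SOLVEPNP_AP3P.

```python
import numpy as np
import cv2

def estimate_extrinsics(K: np.ndarray, XYZ: np.ndarray, uv: np.ndarray):
    """K: 3x3 internal-parameter matrix; XYZ: (n, 3) known marker space
    coordinates; uv: (n, 2) their pixel coordinates in projection image b."""
    ok, rvec, tvec = cv2.solvePnP(XYZ.astype(np.float64),
                                  uv.astype(np.float64), K, None)
    if not ok:
        raise RuntimeError("pose estimation failed for this projection image")
    R, _ = cv2.Rodrigues(rvec)         # rotation matrix (external parameters)
    return R, tvec.reshape(3)
```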
When p is smaller than the third threshold, the projection images a and b may instead be treated as a new projection image pair, and the spatial coordinates of the p second markers and the external parameters of projection images a and b obtained in the fundamental-matrix manner described above, which is not repeated here.
Method 3: the spatial coordinates of some of the markers (at least six) are known.
Illustratively, when the spatial coordinates of some of the markers are known, the number of markers with known spatial coordinates is greater than or equal to the first threshold in part of the projection images (or, once the internal parameters of the imaging system have been obtained, greater than or equal to the third threshold), while in the other projection images it is smaller than the first threshold. For this partially known case, a target projection image can therefore be found among the multi-frame projection images, namely one in which the number of markers with known spatial coordinates is greater than or equal to the first threshold, without solving the essential and fundamental matrices. The imaging geometric relationship of the target projection image is solved directly from the spatial coordinates and pixel coordinates of its markers with known spatial coordinates. The target projection image is then moved into the projection image set of known imaging geometry, the remaining projection images are moved into the set of unknown imaging geometry, and the pose estimation operation described above is performed, i.e. new projection image pairs are formed from one image of known and one image of unknown imaging geometry. Finally, the imaging geometric relationships of the projection images in the unknown set are estimated by the 2D-3D pose estimation method, yielding the imaging geometric relationship of every frame of projection image.
In the embodiment of the application, after determining the imaging geometric relationship between the detected object and the imaging system when each image is acquired and the spatial coordinates of the markers in each frame of the projection image, the imaging geometric relationship and the spatial coordinates of the markers may be further optimized. The following provides a process for optimizing the imaging geometry and spatial coordinates of the markers using beam adjustment and like optimization algorithms for the present application.
After the imaging geometric relationship of each frame of projection image is determined, the beam adjustment (bundle adjustment) optimization method can be used to obtain, with high precision, the imaging geometric relationship of each frame and the spatial coordinates, in the measured object coordinate system, of the markers in each frame.
Specifically, in the beam adjustment optimization method, the projection of each marker in each frame (its pixel coordinates in the projection image) is recalculated from the imaging model using the internal parameters (i.e. those of the imaging system), the external parameters of each frame and the spatial coordinates of the markers; this is the image-side re-projected image point. Minimizing the sum of distances between each marker's re-projection and its actual projection is taken as the optimization objective. To solve this objective optimally, the observation and constraint equations can be linearized by Taylor expansion and the corrections to the adjustment parameters computed by step-wise iteration. If the algorithm converges, the iteratively computed corrections tend to zero; when the final correction falls below a threshold, the imaging relationship is considered well satisfied, and the optimized imaging geometric relationships and spatial coordinates are obtained.
Optionally, the re-projection of the marker in each frame of the projection image may also be performed on the object side, and the processing methods are similar and are not described again. In the following, image-side reprojection is taken as an example.
In particular, let x_ij be the actual projection result of the i-th marker in the j-th frame projection image. For convenience of notation, it is assumed that every marker is visible in all projection images. The objective function can then be chosen to minimize the sum of the squares of the distances between the re-projected image points and the actual projected image points of all markers: for each frame, the squared distances between each marker's re-projected and actual image points are computed, and the sum over all frames is taken as the objective function. Alternatively, the sum of the absolute differences between re-projected and actual image points over all markers and frames can be used instead. The present application mainly takes the sum of squared distances as the example.
The objective function can be represented by formula (15):

min over (k, a_j, b_i) of Σ_(i=1..m) Σ_(j=1..n) d(x_ij, x̂(k, a_j, b_i))^2 formula (15);

where m is the number of markers, n is the number of frames of projection images, x_ij and x̂(k, a_j, b_i) are respectively the actual projected image point and the re-projected image point of the i-th marker in the j-th frame, d(x, y) is the Euclidean distance between two image points x and y expressed in inhomogeneous coordinates, k is the column vector of internal parameters corresponding to the j-th frame, a_j is the column vector of external parameters corresponding to the j-th frame, and b_i is the column vector of the spatial coordinates of the i-th marker.
Furthermore, since the n frames of projection images are all produced by the same imaging system, the internal parameters k are the same for every frame; the objective function therefore has a special partitioned parameter form, and the corresponding normal equations have a special sparse block structure. Sparse Levenberg-Marquardt (LM) bundle-method indirect adjustment for multiple views sharing the same internal parameters and common object points can thus be used to optimize the internal and external parameters of the imaging system and the spatial coordinates of each marker. The internal and external parameters of each frame and the spatial coordinates of each marker at the minimum of the objective function are taken as the optimized values.
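A compact sketch of the adjustment of formula (15), assuming shared internal parameters K, a pose parameterized by 6 numbers per frame, and a project() helper implementing formula (4); a general-purpose trust-region least-squares solver stands in here for the dedicated sparse-LM formulation described above.

```python
import numpy as np
from scipy.optimize import least_squares

def residuals(params, K, n_frames, n_markers, observed_uv, project):
    """params stacks 6 pose numbers per frame, then 3 coordinates per marker.
    observed_uv[j, i]: measured pixel coordinates x_ij of marker i in frame j.
    project(K, pose, point) re-projects a marker, cf. formula (4)."""
    poses = params[:6 * n_frames].reshape(n_frames, 6)
    pts = params[6 * n_frames:].reshape(n_markers, 3)
    res = []
    for j in range(n_frames):
        for i in range(n_markers):
            res.extend(project(K, poses[j], pts[i]) - observed_uv[j, i])
    return np.asarray(res)

# x0 stacks the initial external parameters and marker coordinates, e.g. the
# photogrammetric solution obtained above:
# sol = least_squares(residuals, x0, method="trf",
#                     args=(K, n_frames, n_markers, observed_uv, project))
# (for large problems, the jac_sparsity argument lets the solver exploit the
#  sparse block structure of the normal equations noted above)
```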
In the actual optimization, the parameters participating in the iteration may differ; for example, when the internal parameters and/or the spatial coordinates of the markers are already accurate, they need not participate in the optimization. The optimization objective function may also take different specific forms; for example, the sum of squared distances may be replaced by the sum of absolute distances.
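As a hedged illustration of the optimization just described, the following Python sketch builds the residual vector of equation (15) and hands it to a Levenberg-Marquardt solver. It assumes a simple pinhole-style model with intrinsics k = (f, cx, cy); the function names, the parameter packing, and the use of scipy.optimize.least_squares are illustrative choices, not the patent's prescribed implementation (which exploits the sparse block structure noted above).

```python
import numpy as np
from scipy.optimize import least_squares

def project(k, a_j, b_i):
    """Re-project marker coordinates b_i (3,) into frame j, given the
    internal parameters k = (f, cx, cy) and the external parameters
    a_j = (rotation vector (3,), translation (3,))."""
    f, cx, cy = k
    rvec, t = a_j[:3], a_j[3:]
    theta = np.linalg.norm(rvec)
    if theta < 1e-12:
        Xc = b_i + t
    else:  # Rodrigues rotation of b_i about the axis rvec/theta
        axis = rvec / theta
        Xc = (b_i * np.cos(theta)
              + np.cross(axis, b_i) * np.sin(theta)
              + axis * np.dot(axis, b_i) * (1.0 - np.cos(theta))) + t
    return np.array([f * Xc[0] / Xc[2] + cx, f * Xc[1] / Xc[2] + cy])

def residuals(params, m, n, x_obs):
    """Stack the (u, v) differences between re-projected and actual image
    points; their squared sum is exactly the objective of equation (15)."""
    k = params[:3]
    a = params[3:3 + 6 * n].reshape(n, 6)   # external parameters per frame
    b = params[3 + 6 * n:].reshape(m, 3)    # spatial coordinates per marker
    res = []
    for j in range(n):
        for i in range(m):
            res.extend(project(k, a[j], b[i]) - x_obs[i, j])
    return np.asarray(res)

# x_obs has shape (m, n, 2); params0 packs initial k, a_j and b_i values.
# method='lm' runs the Levenberg-Marquardt iteration described above:
# sol = least_squares(residuals, params0, method='lm', args=(m, n, x_obs))
```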
It should be understood that, although the above description performs the optimization over all of the projection images, in practical applications the optimization may also be performed using only a subset of the multiple frames of projection images; the procedure is similar and is not repeated here.
103: performing three-dimensional image reconstruction according to the multiple frames of projection images and the imaging geometric relationship corresponding to each frame of projection image, to obtain the CT image of the detected object.
It should be understood that, since the imaging geometric relationship corresponding to each frame of projection image has been determined, each frame of projection image together with its imaging geometric relationship can serve as input data to a reconstruction algorithm, and a CT reconstruction algorithm completes the processing of each frame of projection image to realize three-dimensional reconstruction of the CT image of the detected object.
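The patent leaves the choice of CT reconstruction algorithm open. As one hedged sketch of how the per-frame imaging geometry can drive reconstruction, the following voxel-driven backprojection projects every voxel through each frame's projection matrix P_j = K[R_j | t_j] and accumulates detector values; the filtering step of FDK-type cone beam reconstruction and ray weighting are omitted, and all names are illustrative.

```python
import numpy as np

def backproject(projections, P_list, vol_shape, voxel_size, origin):
    """projections: list of 2-D detector images; P_list: per-frame 3x4
    projection matrices P_j = K [R_j | t_j]; vol_shape: (nz, ny, nx) tuple."""
    vol = np.zeros(vol_shape)
    zz, yy, xx = np.meshgrid(*[np.arange(s) for s in vol_shape], indexing='ij')
    X = np.stack([xx, yy, zz], axis=-1) * voxel_size + origin  # voxel centres
    Xh = np.concatenate([X, np.ones(vol_shape + (1,))], axis=-1)  # homogeneous
    for img, P in zip(projections, P_list):
        uvw = Xh @ P.T                                 # project all voxels
        u = np.round(uvw[..., 0] / uvw[..., 2]).astype(int)
        v = np.round(uvw[..., 1] / uvw[..., 2]).astype(int)
        ok = (u >= 0) & (u < img.shape[1]) & (v >= 0) & (v < img.shape[0])
        vol[ok] += img[v[ok], u[ok]]                   # nearest-neighbour lookup
    return vol / len(projections)
```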
It can be seen that implementing the technical solution of the embodiments of the present application has the following advantages:
in various traditional CT machines, the imaging geometric relationship between the detected object and the imaging system is established, and its accuracy guaranteed, by mechanical devices, equipment hardware, and motion control precision. In the present solution, by contrast, the imaging geometric relationship between the imaging system and the detected object at the moment each frame of projection image is acquired can be obtained and optimized with high precision. Therefore, this relationship can be acquired accurately without high-precision control of a mechanical device, which improves the precision and quality of three-dimensional CT image reconstruction of the detected object and reduces the cost of manufacturing the CT machine. In addition, since no high-precision synchronous motion control mechanism is required, the method greatly reduces the technical difficulty of similar instruments; it can be used to develop new instruments, and can also conveniently transform an existing DR machine into cone beam CT equipment, greatly reducing the structural complexity and machining precision required of the projection acquisition hardware and saving hardware cost.
In addition, the method can upgrade the imaging quality of existing equipment. For example, a traditional DR machine can be upgraded into a CBCT machine, making it possible to obtain a three-dimensional image of the interior of the detected object and any tomographic slice. Because the spatial resolution of a DR machine is higher than that of a CT machine, a DR machine upgraded into cone beam CT equipment by this method can produce CT images with high spatial resolution, yielding clearer detail and facilitating accurate diagnosis of small lesions. Moreover, traditional CT requires projection data from more angles to reconstruct an object, whereas CBCT can reconstruct with relatively less projection data; transforming a traditional DR machine into CBCT equipment for CT image reconstruction and examination therefore greatly reduces the radiation dose received by the patient.
Referring to fig. 5, fig. 5 is a block diagram illustrating functional units of a CT image reconstruction apparatus according to an embodiment of the present disclosure. The CT image reconstruction apparatus 500 includes: an acquisition unit 501 and a processing unit 502, wherein:
the acquiring unit 501 is configured to acquire a plurality of frames of projection images, where the plurality of frames of projection images are obtained by an imaging system emitting X-rays to a detected object at a plurality of arbitrary imaging angles for imaging, and the detected object is provided with a plurality of markers;
the processing unit 502 is configured to obtain an imaging geometric relationship corresponding to each frame of the projection images according to the pixel coordinates of the markers in each frame of the projection images, where the imaging geometric relationship corresponding to each frame of the projection images is an imaging geometric relationship between the detected object and the imaging system when each frame of the projection images is obtained.
In some possible embodiments, the imaging system includes a radiation source and a radiation detector; the imaging geometrical relationship corresponding to each frame of projection image comprises an internal parameter of the imaging system and an external parameter corresponding to each frame of projection image; the internal parameters of the imaging system comprise a focal length and coordinates of a principal point, wherein the focal length is the distance from the ray source to the ray detector, and the principal point is a central pixel point of each frame of projection image.
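As a small hedged sketch, the internal parameters described above map to the usual 3x3 intrinsic matrix of the projection model as follows; here f is the source-to-detector distance expressed in detector pixel units, (cx, cy) is the principal point, and the names are chosen for illustration only.

```python
import numpy as np

def intrinsic_matrix(f, cx, cy):
    """f: focal length (source-to-detector distance) in detector pixel units;
    (cx, cy): pixel coordinates of the principal point."""
    return np.array([[f,   0.0, cx],
                     [0.0, f,   cy],
                     [0.0, 0.0, 1.0]])
```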
In some possible embodiments, when the number of markers with known spatial coordinates in the multiple frames of projection images is greater than or equal to a first threshold, in terms of acquiring the imaging geometric relationship corresponding to each frame of projection image according to the pixel coordinates of the markers in each frame of projection image, the processing unit 502 is specifically configured to: determine the internal parameters of the imaging system and the external parameters corresponding to each frame of projection image according to the spatial coordinates and pixel coordinates of the markers with known spatial coordinates in each frame of projection image.
In some possible embodiments, when the number of markers with known spatial coordinates in partial projection images of the multiple frames of projection images is greater than or equal to a first threshold, in terms of acquiring the imaging geometric relationship corresponding to each frame of projection image according to the pixel coordinates of the markers in each frame of projection image, the processing unit 502 is specifically configured to: determine the internal parameters of the imaging system and the external parameters corresponding to the partial projection images according to the spatial coordinates and pixel coordinates of the markers with known spatial coordinates in the partial projection images; and perform pose estimation operations on first residual projection images multiple times according to the internal parameters of the imaging system and the external parameters of the partial projection images, to obtain the external parameters of the first residual projection images, wherein the first residual projection images are the projection images among the multiple frames of projection images other than the partial projection images.
In some possible embodiments, in terms of obtaining the imaging geometric relationship corresponding to each frame of projection image according to the pixel coordinates of the markers in each frame of projection image, the processing unit 502 is specifically configured to: select a first projection image and a second projection image from the multiple frames of projection images, wherein the first projection image and the second projection image have k identical markers, the k identical markers are taken as k first markers, and k is greater than or equal to a second threshold; determine the internal parameters of the imaging system according to the pixel coordinates of the k first markers in the first projection image and the second projection image, respectively, and the constraints between the fundamental matrix and the epipoles; determine the spatial coordinates of each first marker according to the pixel coordinates of each first marker in the first projection image and the second projection image, respectively; obtain the external parameters corresponding to the first projection image and the external parameters corresponding to the second projection image according to the spatial coordinates of the k first markers and the pixel coordinates of the k first markers in the first projection image and the second projection image, respectively; and perform pose estimation operations on second residual projection images multiple times according to the internal parameters of the imaging system, the external parameters corresponding to the first projection image or the second projection image, and the pixel coordinates of the markers in each frame of projection image, to obtain the external parameters of the second residual projection images, wherein the second residual projection images are the projection images among the multiple frames of projection images other than the first projection image and the second projection image.
In some possible embodiments, in terms of determining the spatial coordinates of each first marker according to the pixel coordinates of each first marker in the first projection image and the second projection image, respectively, the processing unit 502 is specifically configured to: construct an essential matrix between the first projection image and the second projection image according to the pixel coordinates of each first marker in the first projection image and the second projection image, respectively, and the internal parameters of the imaging system; perform singular value decomposition on the essential matrix to obtain the relative rotation-translation relationship of the imaging system between the imaging of the first projection image and the imaging of the second projection image; and determine the spatial coordinates of each first marker based on a triangular intersection algorithm and the relative rotation-translation relationship.
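A hedged sketch of these two steps, using OpenCV's standard two-view routines in place of the patent's own derivation: cv2.recoverPose internally performs the singular value decomposition of the essential matrix and resolves its four-fold ambiguity, and cv2.triangulatePoints carries out the linear triangulation ("triangular intersection"). The function name and point layout are illustrative.

```python
import cv2
import numpy as np

def pose_and_points(E, pts1, pts2, K):
    """pts1, pts2: (k, 2) pixel coordinates of the k first markers in the
    first and second projection images; K: 3x3 intrinsic matrix."""
    # recoverPose decomposes E by SVD and picks, among the four candidate
    # (R, t) pairs, the one that places the points in front of both views
    _, R, t, _ = cv2.recoverPose(E, pts1, pts2, K)
    P1 = K @ np.hstack([np.eye(3), np.zeros((3, 1))])
    P2 = K @ np.hstack([R, t])
    # linear triangulation of every marker from the two views
    Xh = cv2.triangulatePoints(P1, P2, pts1.T.astype(np.float64),
                               pts2.T.astype(np.float64))
    return R, t, (Xh[:3] / Xh[3]).T  # (k, 3) spatial coordinates
```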
In some possible embodiments, in terms of constructing the essential matrix between the first projection image and the second projection image according to the pixel coordinates of each first marker in the first projection image and the second projection image, respectively, and the internal parameters of the imaging system, the processing unit 502 is specifically configured to: establish the constraint equations corresponding to the k first markers according to the epipolar geometric relationship; determine the fundamental matrix according to the pixel coordinates of the k first markers in the first projection image and the second projection image, respectively, and the constraint equations; and determine the essential matrix according to the fundamental matrix and the intrinsic parameter matrix, wherein the intrinsic parameter matrix is the matrix formed by the internal parameters of the imaging system.
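A hedged sketch of this construction, assuming at least 8 matched first markers: the fundamental matrix is estimated from the epipolar constraint equations with the normalized 8-point algorithm, and the essential matrix is then obtained by combining it with the intrinsic parameter matrix as E = K^T F K. OpenCV's solver stands in for whichever estimator an implementation uses.

```python
import cv2

def essential_from_matches(pts1, pts2, K):
    """pts1, pts2: (k, 2) matched pixel coordinates, k >= 8; K: intrinsics."""
    # F satisfies the epipolar constraint x2^T F x1 = 0 for every match
    F, mask = cv2.findFundamentalMat(pts1, pts2, cv2.FM_8POINT)
    E = K.T @ F @ K   # essential matrix from fundamental and intrinsic matrices
    return E, F, mask
```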
In some possible embodiments, in terms of performing the i-th pose estimation operation of the multiple pose estimation operations, the processing unit 502 is specifically configured to: select a projection image a from a projection image set h_{i-1} and a projection image b from a projection image set w_{i-1}, wherein the projection image a and the projection image b have p identical markers, the projection image set h_{i-1} and the projection image set w_{i-1} are obtained after the (i-1)-th pose estimation operation is performed, the projection image set h_{i-1} comprises the projection images with known external parameters among the multiple frames of projection images, and the projection image set w_{i-1} comprises the projection images with unknown external parameters among the multiple frames of projection images; and when i = 1, if the projection image set h_0 comprises the partial projection images, the projection image set w_0 comprises the first residual projection images, and if the projection image set h_0 comprises the first projection image and the second projection image, the projection image set w_0 comprises the second residual projection images; take the p identical markers as second markers; when the number of second markers with known spatial coordinates among the p second markers is greater than or equal to a third threshold, determine, based on a 2D-3D pose estimation method, the external parameters corresponding to the projection image b according to the internal parameters of the imaging system, the spatial coordinates of the second markers with known spatial coordinates among the p second markers, and the pixel coordinates of the second markers with known spatial coordinates in the projection image b; determine the spatial coordinates of the second markers with unknown spatial coordinates according to the internal parameters of the imaging system, the pixel coordinates of the second markers with unknown spatial coordinates in the projection image a and the projection image b, the external parameters corresponding to the projection image a, and the external parameters corresponding to the projection image b; and move the projection image b out of the projection image set w_{i-1} and into the projection image set h_{i-1}, to obtain the projection image set h_i and the projection image set w_i corresponding to the i-th pose estimation operation.
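A hedged sketch of the 2D-3D pose estimation step for projection image b, with OpenCV's solvePnP standing in for whichever 2D-3D method an implementation uses; it assumes at least 4 second markers with known spatial coordinates, and the names are illustrative.

```python
import cv2

def estimate_extrinsics(obj_pts, img_pts, K):
    """obj_pts: (N, 3) known spatial coordinates of second markers, N >= 4;
    img_pts: (N, 2) their pixel coordinates in projection image b."""
    ok, rvec, tvec = cv2.solvePnP(obj_pts.astype(float), img_pts.astype(float),
                                  K, distCoeffs=None)
    R, _ = cv2.Rodrigues(rvec)  # rotation part of the external parameters
    return R, tvec
```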
In some possible embodiments, after acquiring the imaging geometric relationship corresponding to each frame of projection image according to the pixel coordinates of the markers in each frame of projection image, the processing unit 502 is further configured to: re-project the spatial coordinates corresponding to the markers in each frame of projection image according to the internal parameters and external parameters of each frame of projection image, to obtain re-projection points of the markers in each frame of projection image, wherein the internal parameters of each frame of projection image are the internal parameters of the imaging system; construct an optimization objective function according to the sum of the distances between the re-projection points of the markers and the pixel coordinates of the markers in each frame of projection image; minimize the optimization objective function to obtain optimized internal parameters and external parameters of each frame of projection image and optimized spatial coordinates of the markers; and take the optimized internal parameters and external parameters of each frame of projection image as the internal parameters and external parameters of each frame of projection image during CT image reconstruction, respectively.
In some possible embodiments, in terms of acquiring the multiple frames of projection images, the acquiring unit 501 is specifically configured to: control the detected object to rotate and/or translate multiple times, and control the ray source to emit X-rays to the detected object after each rotation and/or translation of the detected object, to obtain the multiple frames of projection images; or control the imaging system to rotate and/or translate multiple times, and control the ray source to emit X-rays to the detected object after each rotation and/or translation of the imaging system, to obtain the multiple frames of projection images; or control both the detected object and the imaging system to rotate and/or translate multiple times, and control the ray source to emit X-rays to the detected object after each rotation and/or translation of the detected object and the imaging system, to obtain the multiple frames of projection images.
Referring to fig. 6, fig. 6 is a schematic structural diagram of an electronic device according to an embodiment of the present disclosure; the electronic device may be the CT image reconstruction apparatus described above, or an apparatus including the CT image reconstruction apparatus. As shown in fig. 6, the electronic device 600 includes a transmitter 601, a processor 602, and a memory 603, which are connected to each other by a bus 604. The memory 603 is used to store computer programs and data, and can transfer the stored data to the processor 602.
The processor 602 is configured to read the computer program in the memory 603 to perform the following operations:
controlling a transmitter 601 to transmit X-rays to the detected object to obtain a plurality of frames of projection images, wherein the plurality of frames of projection images are obtained by an imaging system transmitting X-rays to the detected object at a plurality of arbitrary imaging angles for imaging, and a plurality of markers are arranged on the detected object;
acquiring an imaging geometric relationship corresponding to each frame of projection image according to the pixel coordinates of the marker in each frame of projection image, wherein the imaging geometric relationship corresponding to each frame of projection image is the imaging geometric relationship between the detected object and the imaging system when each frame of projection image is acquired;
and performing three-dimensional image reconstruction according to the multiple frames of projection images and the imaging geometric relationship corresponding to each frame of projection image, to obtain the CT image of the detected object.
More specific functions of the processor 602 may be found in the description of the processing unit in fig. 5 and are not repeated here.
Embodiments of the present application further provide a computer-readable storage medium, which stores a computer program, where the computer program is executed by a processor to implement some or all of the steps in any one of the CT image reconstruction methods described in the above method embodiments.
Embodiments of the present application further provide a computer program product, which includes a non-transitory computer-readable storage medium storing a computer program, and the computer program is operable to cause a computer to execute some or all of the steps of any one of the CT image reconstruction methods as described in the above method embodiments.
It should be noted that, for simplicity of description, the above method embodiments are described as a series of combined actions, but those skilled in the art will recognize that the present application is not limited by the order of actions described, as some steps may be performed in other orders or concurrently in accordance with the present application. Further, those skilled in the art should also appreciate that the embodiments described in the specification are exemplary embodiments, and that the actions and modules involved are not necessarily required by the present application.
In the foregoing embodiments, the descriptions of the respective embodiments have respective emphasis, and for parts that are not described in detail in a certain embodiment, reference may be made to related descriptions of other embodiments.
In the embodiments provided in the present application, it should be understood that the disclosed apparatus may be implemented in other manners. For example, the apparatus embodiments described above are merely illustrative; the division of the units is only one kind of logical function division, and there may be other division manners in actual implementation, for example, multiple units or components may be combined or integrated into another system, or some features may be omitted or not implemented. In addition, the mutual coupling or direct coupling or communication connection shown or discussed may be an indirect coupling or communication connection through some interfaces, devices, or units, and may be in electrical or other forms.
The units described as separate parts may or may not be physically separate, and parts displayed as units may or may not be physical units, may be located in one place, or may be distributed on a plurality of network units. Some or all of the units can be selected according to actual needs to achieve the purpose of the solution of the embodiment.
In addition, functional units in the embodiments of the present application may be integrated into one processing unit, or each unit may exist alone physically, or two or more units are integrated into one unit. The integrated unit may be implemented in the form of hardware, or may be implemented in the form of a software program module.
If the integrated units are implemented in the form of software program modules and sold or used as stand-alone products, they may be stored in a computer-readable memory. Based on this understanding, the technical solution of the present application, in essence, or the part contributing to the prior art, or all or part of the technical solution, may be embodied in the form of a software product stored in a memory, including several instructions for causing a computer device (which may be a personal computer, a server, or a network device) to execute all or part of the steps of the methods described in the embodiments of the present application. The aforementioned memory includes various media capable of storing program code, such as a USB flash drive, a read-only memory (ROM), a random access memory (RAM), a removable hard disk, or a magnetic or optical disk.
Those skilled in the art will appreciate that all or part of the steps in the methods of the above embodiments may be implemented by a program instructing associated hardware; the program may be stored in a computer-readable memory, which may include a flash memory disk, a read-only memory (ROM), a random access memory (RAM), a magnetic disk, an optical disk, or the like.
The embodiments of the present application have been described in detail above, and specific examples are used herein to illustrate the principles and implementations of the present application; the above description of the embodiments is only intended to help understand the method and core idea of the present application. Meanwhile, for those skilled in the art, there may be changes in the specific implementations and the application scope according to the idea of the present application. In summary, the content of this specification should not be construed as limiting the present application.

Claims (13)

1. A CT image reconstruction method, comprising:
acquiring a plurality of frames of projection images, wherein the plurality of frames of projection images are obtained by an imaging system emitting X rays to a detected object under a plurality of arbitrary imaging angles for imaging, and the detected object is provided with a plurality of markers;
acquiring an imaging geometric relationship corresponding to each frame of projection image according to the pixel coordinates of the marker in each frame of projection image, wherein the imaging geometric relationship corresponding to each frame of projection image is the imaging geometric relationship between the detected object and the imaging system when each frame of projection image is acquired;
and performing three-dimensional image reconstruction according to the multiple frames of projection images and the imaging geometric relationship corresponding to each frame of projection image, to obtain a CT image of the detected object.
2. The method of claim 1,
the imaging system comprises a ray source and a ray detector;
the imaging geometrical relationship corresponding to each frame of projection image comprises an internal parameter of the imaging system and an external parameter corresponding to each frame of projection image;
the internal parameters of the imaging system comprise a focal length and coordinates of a principal point, wherein the focal length is the distance from the ray source to the ray detector, and the principal point is a central pixel point of each frame of projection image.
3. The method according to claim 1 or 2,
when the number of the markers with known spatial coordinates in the multiple frames of projection images is greater than or equal to a first threshold, acquiring the imaging geometric relationship corresponding to each frame of projection image according to the pixel coordinates of the markers in each frame of projection image, including:
and determining the internal parameters of the imaging system and the external parameters corresponding to each frame of projection images according to the space coordinates and the pixel coordinates of the markers with known space coordinates in each frame of projection images.
4. The method according to claim 1 or 2,
when the number of markers with known spatial coordinates in part of the multiple frames of projection images is greater than or equal to a first threshold, acquiring an imaging geometric relationship corresponding to each frame of projection image according to the pixel coordinates of the markers in each frame of projection image, including:
determining internal parameters of the imaging system and external parameters corresponding to the partial projection images according to the space coordinates and pixel coordinates of the markers with known space coordinates in the partial projection images;
and performing pose estimation operations on first residual projection images multiple times according to the internal parameters of the imaging system and the external parameters of the partial projection images, to obtain the external parameters of the first residual projection images, wherein the first residual projection images are the projection images among the multiple frames of projection images other than the partial projection images.
5. The method according to claim 1 or 2,
the acquiring of the imaging geometric relationship corresponding to each frame of the projection image according to the pixel coordinates of the markers in each frame of the projection image includes:
selecting a first projection image and a second projection image from the multi-frame projection images, wherein the first projection image and the second projection image have k same markers and the k same markers are used as k first markers, and k is larger than or equal to a second threshold value;
determining internal parameters of the imaging system according to the pixel coordinates of the k first markers in the first projection image and the second projection image, respectively, and the constraints between the fundamental matrix and the epipoles;
determining the space coordinates of each first marker according to the pixel coordinates of each first marker in the first projection image and the second projection image respectively;
respectively obtaining an external parameter corresponding to the first projection image and an external parameter corresponding to the second projection image according to the spatial coordinates of the k first markers and the pixel coordinates of the k first markers in the first projection image and the second projection image;
and performing pose estimation operations on second residual projection images multiple times according to the internal parameters of the imaging system, the external parameters corresponding to the first projection image or the second projection image, and the pixel coordinates of the markers in each frame of projection image, to obtain the external parameters of the second residual projection images, wherein the second residual projection images are the projection images among the multiple frames of projection images other than the first projection image and the second projection image.
6. The method of claim 5, wherein determining spatial coordinates of each first marker from pixel coordinates of each first marker in the first projection image and the second projection image, respectively, comprises:
constructing an essential matrix between the first projection image and the second projection image according to the pixel coordinates of each first marker in the first projection image and the second projection image respectively and the internal parameters of the imaging system;
performing singular value decomposition on the essential matrix to obtain the relative rotation-translation relationship of the imaging system between the imaging of the first projection image and the imaging of the second projection image;
and determining the space coordinates of each first marker based on a triangular intersection algorithm and the relative rotation and translation relation.
7. The method of claim 6,
constructing an essential matrix between the first projection image and the second projection image according to the pixel coordinates of each first marker in the first projection image and the second projection image, respectively, and the internal parameters of the imaging system, including:
establishing a constraint equation corresponding to the k first markers according to the epipolar geometric relationship;
determining a fundamental matrix according to the pixel coordinates of the k first markers in the first projection image and the second projection image, respectively, and the constraint equations;
and determining the essential matrix according to the fundamental matrix and the intrinsic parameter matrix, wherein the intrinsic parameter matrix is a matrix formed by the internal parameters of the imaging system.
8. The method according to any one of claims 4 to 7,
the i-th pose estimation operation among the multiple pose estimation operations comprises the following steps:
selecting a projection image a from a projection image set h_{i-1} and a projection image b from a projection image set w_{i-1}, wherein the projection image a and the projection image b have p identical markers, the projection image set h_{i-1} and the projection image set w_{i-1} are obtained after the (i-1)-th pose estimation operation is performed, the projection image set h_{i-1} comprises the projection images with known external parameters among the multiple frames of projection images, and the projection image set w_{i-1} comprises the projection images with unknown external parameters among the multiple frames of projection images; and when i = 1, if the projection image set h_0 comprises the partial projection images, the projection image set w_0 comprises the first residual projection images, and if the projection image set h_0 comprises the first projection image and the second projection image, the projection image set w_0 comprises the second residual projection images;
taking the p identical markers as second markers;
when the number of second markers with known spatial coordinates among the p second markers is greater than or equal to a third threshold, determining, based on a 2D-3D pose estimation method, the external parameters corresponding to the projection image b according to the internal parameters of the imaging system, the spatial coordinates of the second markers with known spatial coordinates among the p second markers, and the pixel coordinates of the second markers with known spatial coordinates in the projection image b;
determining the spatial coordinates of the second markers with unknown spatial coordinates according to the internal parameters of the imaging system, the pixel coordinates of the second markers with unknown spatial coordinates in the projection image a and the projection image b, the external parameters corresponding to the projection image a, and the external parameters corresponding to the projection image b;
and moving the projection image b out of the projection image set w_{i-1} and into the projection image set h_{i-1}, to obtain a projection image set h_i and a projection image set w_i corresponding to the i-th pose estimation operation.
9. The method according to any one of claims 2-8, wherein after acquiring the imaging geometry corresponding to each frame of the projection image according to the pixel coordinates of the markers in each frame of the projection image, the method further comprises:
carrying out re-projection on the space coordinates corresponding to the markers in each frame of projection image according to the internal parameters of each frame of projection image and the external parameters of each frame of projection image to obtain re-projection points of the markers in each frame of projection image, wherein the internal parameters of each frame of projection image are the internal parameters of the imaging system;
constructing an optimization objective function according to the sum of the distances between the re-projection points of the markers and the pixel coordinates of the markers in each frame of projection image;
minimizing the optimization objective function to obtain optimized internal parameters and external parameters of each frame of projection image and optimized spatial coordinates of the markers;
and respectively taking the internal parameters and the external parameters of each optimized projection image as the internal parameters and the external parameters of each projection image during CT image reconstruction.
10. The method of any one of claims 1-9, wherein said acquiring a plurality of frames of projection images comprises:
controlling the detected object to rotate and/or translate for multiple times, and controlling the ray source to emit X rays to the detected object after rotating and/or translating the detected object every time to obtain the multi-frame projection images;
or,
controlling the imaging system to rotate and/or translate for multiple times, and controlling the ray source to emit X rays to the detected object after the imaging system rotates and/or translates each time to obtain the multi-frame projection images;
or,
and controlling the detected object and the imaging system to rotate and/or translate for multiple times at the same time, and controlling the ray source to emit X rays to the detected object after rotating and/or translating the detected object and the imaging system each time so as to obtain the multi-frame projection images.
11. A CT image reconstruction apparatus, comprising:
the system comprises an acquisition unit, a processing unit and a display unit, wherein the acquisition unit is used for acquiring multi-frame projection images, the multi-frame projection images are obtained by an imaging system emitting X rays to a detected object under a plurality of arbitrary imaging angles for imaging, and a plurality of markers are arranged on the detected object;
the processing unit is used for acquiring an imaging geometric relationship corresponding to each frame of projection image according to the pixel coordinates of the markers in each frame of projection image, wherein the imaging geometric relationship corresponding to each frame of projection image is the imaging geometric relationship between the detected object and the imaging system when each frame of projection image is acquired; and performing three-dimensional image reconstruction according to the multiple frames of projection images and the imaging geometric relationship corresponding to each frame of projection image, to obtain the CT image of the detected object.
12. An electronic device, comprising: a processor and a memory, wherein the processor is coupled to the memory, the memory is configured to store a computer program, and the processor is configured to execute the computer program stored in the memory to perform the method of any one of claims 1-10.
13. A computer-readable storage medium, characterized in that the computer-readable storage medium stores a computer program which is executed by a processor to implement the method according to any one of claims 1-10.
CN202111044738.4A 2021-09-07 2021-09-07 CT image reconstruction method, device, electronic equipment and storage medium Active CN113963056B (en)

Priority Applications (1)

Application Number Priority Date Filing Date Title
CN202111044738.4A CN113963056B (en) 2021-09-07 2021-09-07 CT image reconstruction method, device, electronic equipment and storage medium

Publications (2)

Publication Number Publication Date
CN113963056A true CN113963056A (en) 2022-01-21
CN113963056B CN113963056B (en) 2022-08-26

Family

ID=79461017



Legal Events

Date Code Title Description
PB01 Publication
SE01 Entry into force of request for substantive examination
GR01 Patent grant
TR01 Transfer of patent right

Effective date of registration: 20240628

Address after: 518000, K18, Tourism Creative (Bonded) Park, 1st Floor, Warehouse Area, He He Warehouse Building, No. 8 Shihua Road, Fubao Community, Fubao Street, Futian District, Shenzhen, Guangdong Province, China

Patentee after: Shenzhen Ruitiying Medical Equipment Technology Co.,Ltd.

Country or region after: China

Address before: 518054 18G, unit 2, building 2, Baining garden, 3099 Binhai Avenue, Nanshan District, Shenzhen, Guangdong

Patentee before: Yu Liuqing

Country or region before: China

Patentee before: Liu Xiaolin