CN117649484A - Three-dimensional optical reconstruction method based on CT image - Google Patents


Info

Publication number
CN117649484A
Authority
CN
China
Prior art keywords
dimensional
data
optical
projection
voxel data
Prior art date
Legal status (The legal status is an assumption and is not a legal conclusion. Google has not performed a legal analysis and makes no representation as to the accuracy of the status listed.)
Pending
Application number
CN202311304245.9A
Other languages
Chinese (zh)
Inventor
降雨强
黄璐
王瑜
李超
Current Assignee (The listed assignees may be inaccurate. Google has not performed a legal analysis and makes no representation or warranty as to the accuracy of the list.)
Institute of Genetics and Developmental Biology of CAS
Original Assignee
Institute of Genetics and Developmental Biology of CAS
Priority date (The priority date is an assumption and is not a legal conclusion. Google has not performed a legal analysis and makes no representation as to the accuracy of the date listed.)
Application filed by Institute of Genetics and Developmental Biology of CAS
Priority to CN202311304245.9A
Publication of CN117649484A
Priority to NL2038335A
Legal status: Pending


Classifications

    • G PHYSICS
    • G06 COMPUTING; CALCULATING OR COUNTING
    • G06T IMAGE DATA PROCESSING OR GENERATION, IN GENERAL
    • G06T17/00 Three dimensional [3D] modelling, e.g. data description of 3D objects
    • G PHYSICS
    • G06 COMPUTING; CALCULATING OR COUNTING
    • G06T IMAGE DATA PROCESSING OR GENERATION, IN GENERAL
    • G06T7/00 Image analysis
    • G06T7/30 Determination of transform parameters for the alignment of images, i.e. image registration
    • G06T7/33 Determination of transform parameters for the alignment of images, i.e. image registration using feature-based methods
    • G PHYSICS
    • G06 COMPUTING; CALCULATING OR COUNTING
    • G06T IMAGE DATA PROCESSING OR GENERATION, IN GENERAL
    • G06T2200/00 Indexing scheme for image data processing or generation, in general
    • G06T2200/08 Indexing scheme for image data processing or generation, in general involving all processing steps from image acquisition to 3D model generation
    • G PHYSICS
    • G06 COMPUTING; CALCULATING OR COUNTING
    • G06T IMAGE DATA PROCESSING OR GENERATION, IN GENERAL
    • G06T2207/00 Indexing scheme for image analysis or image enhancement
    • G06T2207/10 Image acquisition modality
    • G06T2207/10072 Tomographic images
    • G06T2207/10081 Computed x-ray tomography [CT]

Landscapes

  • Engineering & Computer Science (AREA)
  • Physics & Mathematics (AREA)
  • General Physics & Mathematics (AREA)
  • Theoretical Computer Science (AREA)
  • Computer Vision & Pattern Recognition (AREA)
  • Computer Graphics (AREA)
  • Geometry (AREA)
  • Software Systems (AREA)
  • Analysing Materials By The Use Of Radiation (AREA)

Abstract

The invention relates to a three-dimensional optical reconstruction method based on CT images, which comprises the following steps: acquiring CT projection signals and optical data of the object to be reconstructed at multiple angles by means of a coaxial scanning device; reconstructing the CT projection signals of all angles to obtain CT three-dimensional voxel data and extracting the surface voxel data of the CT three-dimensional voxel data; aligning and registering each pixel point in the optical data with the corresponding pixel point in the CT three-dimensional surface voxel data, and mapping the coordinates of the pixel points in the optical data into the CT image coordinate system of the CT three-dimensional surface voxel data to obtain a first optical data set with three-dimensional spatial coordinates in the CT image coordinate system at all angles; and splicing all the optical data in the data set to form the three-dimensional optically reconstructed information. By using CT scanning to provide accurate three-dimensional spatial information of the object for the three-dimensional optical reconstruction, the method can generate an accurate and comprehensive optical three-dimensional reconstruction of the object.

Description

Three-dimensional optical reconstruction method based on CT image
Technical Field
The invention belongs to the technical field of three-dimensional optical reconstruction, and particularly relates to a three-dimensional optical reconstruction method based on CT images.
Background
Three-dimensional reconstruction (3D reconstruction) converts two-dimensional images or projection data into the spatial information of a three-dimensional object. The reconstructed model is convenient for computer display and further processing, has wide application in many fields such as medicine, biology, engineering, and computer vision, and is of great significance for studying and analysing the structure and morphology of objects. With the continuous progress of imaging technology, high-resolution three-dimensional reconstruction has become an important research direction. Researchers have achieved more accurate and detailed three-dimensional reconstruction results by improving algorithms, using high-resolution sensors, and improving data-acquisition methods. In the related art, depth-camera-based three-dimensional reconstruction methods can provide the depth information of an object and can therefore model it directly, mainly by the structured-light-projection and time-of-flight (TOF) methods. The structured-light-projection method uses a light source to project an image coded according to a certain rule onto the measured object; the coded pattern is modulated by the surface shape of the object and deformed. A camera photographs the deformed structured light, and the depth information of the measured object is obtained from the positional relationship between the camera and the projection light source and the degree of deformation of the structured light. Although highly accurate, this approach is subject to interference from ambient light around the object, and the structured-light projector must be calibrated in advance. The TOF method calculates the depth distance of the surface of the measured object by recording the propagation time of the light beam.
The transmitting device of the system emits pulse signals, which are reflected by the measured object and received by the detector; the depth value can be calculated from the time between sending and receiving the optical signal and the speed of light. TOF technology may be limited by noise and precision at long distances or in low-light conditions, and its resolution is low, so it cannot achieve high-precision three-dimensional reconstruction or accurately restore the shape and geometry of an object.
Disclosure of Invention
First, the technical problem to be solved
In order to solve the above problems in the prior art, the present invention provides a three-dimensional optical reconstruction method based on CT images.
(II) technical scheme
In order to achieve the above purpose, the main technical scheme adopted by the invention comprises the following steps:
the invention provides a three-dimensional optical reconstruction method based on CT images, which comprises the following steps:
s10, acquiring CT projection signals and optical data of an object to be reconstructed at multiple angles by means of a coaxial scanning device; each of the multiple angles is a rotation angle of the rotating coaxial scanning device relative to the stationary object to be reconstructed; a CT imaging device and an optical imaging device with an installation included angle with the CT imaging device are fixed in the coaxial scanning device;
s20, reconstructing CT projection signals of all angles according to a reconstruction algorithm to obtain CT three-dimensional voxel data of an object to be reconstructed, and extracting surface voxel data of the CT three-dimensional voxel data of the object to be reconstructed to obtain CT three-dimensional surface voxel data of the object to be reconstructed;
s30, aligning and registering each pixel point in the optical data and a corresponding pixel point in the CT three-dimensional surface voxel data, and mapping coordinates of the pixel points in the aligned and registered optical data into a CT image coordinate system of the CT three-dimensional surface voxel data to obtain a first optical data set with three-dimensional space coordinates in the CT image coordinate system under all angles;
S40, splicing all the optical data in the first optical data set to form the three-dimensional optically reconstructed information.
Optionally, the optical imaging device is a hyperspectral imaging device, and the optical data are hyperspectral data;
or the optical imaging device is a fluorescence imaging device, and the optical data are fluorescence optical data;
or the optical imaging device is an RGB imaging device, and the optical data are RGB optical data;
the multi-angle CT projection signals and optical data include: n CT projection signals and N optical data;
wherein the rotating frame of the coaxial scanning device rotates through an angle Δφ relative to the object at each step, and the number of data sets generated when the coaxial scanning device rotates 360° relative to the object is N = 360°/Δφ.
Optionally, reconstructing the CT projection signals of all angles according to a reconstruction algorithm in S20 to obtain CT three-dimensional voxel data of the object to be reconstructed, including:
preprocessing the CT projection signals of each angle, and reconstructing the preprocessed CT projection signals of all angles by adopting a filtering back projection algorithm to obtain CT three-dimensional voxel data of an object to be reconstructed.
Optionally, a filtered back projection algorithm is adopted to reconstruct the preprocessed CT projection signals of all angles to obtain CT three-dimensional voxel data of the object to be reconstructed, including:
S21, performing one-dimensional Fourier transform on the N preprocessed CT projection signals respectively to obtain N first projection signals in a frequency domain;
s22, filtering the first projection signals in the N frequency domains to obtain N filtered second projection signals;
s23, carrying out one-dimensional inverse Fourier transform on the N second projection signals, and restoring the N second projection signals to a time domain to obtain N third projection signals after filtering in the time domain;
s24, carrying out back projection on each third projection signal, wherein the back projection is to uniformly distribute projection signals under each angle to each point passing through the object according to respective original projection paths, accumulate back projection signals of the same point on the object under all angles to obtain ray attenuation coefficients of each point of the object, and reconstruct CT three-dimensional voxel data of the object;
the CT three-dimensional voxel data comprises: the three-dimensional coordinates of each voxel in the reconstructed CT image of the object, and the HU value at each voxel position, which reflects the degree to which the object to be reconstructed absorbs X-rays.
Optionally, the extracting the surface voxel data of the CT three-dimensional voxel data of the object in S20 obtains the CT three-dimensional surface voxel data of the object to be reconstructed, and further includes:
S25, optimizing the CT three-dimensional voxel data to obtain the CT three-dimensional voxel data after optimization;
s26, extracting surface information of the CT three-dimensional voxel data after optimization processing;
s27, dividing the surface information of the CT three-dimensional voxel data into voxel data of an object and voxel data of a background based on a preset voxel data threshold;
s28, acquiring voxel data of an object boundary in a traversal mode based on the voxel data of the object; and setting HU value of voxel data of a non-object boundary in the voxel data of the object to be 0 to obtain surface voxel data of the object, namely CT three-dimensional surface voxel data.
Optionally, the S30 includes:
the coaxial scanning device rotates relative to the object; the angle between the imaging view at each imaging position and the object is φ, and the installation angle between the CT imaging device and the optical imaging device is θ;
the coordinate system of each imaging device of the coaxial scanning device is as follows: the coordinate origin is positioned on the central axis of the rotating frame and at the same height with the optical axes of the imaging devices, the coordinate Z axis points to the centers of the imaging devices from the coordinate origin, the X-ray imaging Z axis points to the X-ray center from the coordinate origin, and the XY plane is perpendicular to the Z axis;
S31, for the surface voxel data at each imaging view angle φ, selecting the XY plane as the projection plane, performing orthographic projection along the negative Z-axis direction, and projecting the three-dimensional data (x, y, z, HU) containing the object surface voxels into the XY plane at the corresponding angle, each pixel point in the plane being the projection position of the three-dimensional surface voxel data on the projection plane, so as to form the CT two-dimensional projection image (x, y, HU) at that angle and thereby obtain the CT two-dimensional projection images at all imaging view angles φ;
S32, performing feature detection on the CT two-dimensional projection image at imaging view angle φ and the optical data at the corresponding imaging view angle to obtain the salient feature points of each;
S33, obtaining feature descriptors of the respective salient feature points and matching the feature descriptors to obtain matched feature point pairs whose matching degree exceeds a preset threshold;
S34, obtaining the spatial coordinate transformation mapping between the CT two-dimensional projection image and the optical data at the corresponding imaging view angle according to the matched feature point pairs;
S35, registering the CT two-dimensional projection image and the optical data at the corresponding imaging view angle based on the spatial coordinate transformation mapping;
traversing all imaging view angles φ in the manner of S32 to S35 to complete the alignment and registration of the CT two-dimensional projection images and the optical data at all imaging view angles.
Optionally, the S34 includes:
s341, dividing coordinates of feature points of each imaging device by the focal length of the imaging device based on each matched feature point pair to obtain normalized feature point pair coordinates;
s342, constructing a linear equation based on the normalized characteristic point pair coordinates;
setting p (x, y) and p ' (x ', y ') as normalized feature point pair coordinates;
p (x, y) corresponds to the CT two-dimensional projection image, and p ' (x ', y ') corresponds to the optical data;
determining the fundamental matrix through the linear equation p'^T F p = 0, wherein F is the fundamental matrix;
the linear equations constructed from all the feature point pairs are collected, and the fundamental matrix is solved;
S343, based on the fundamental matrix, performing triangulation using the internal parameters of the CT imaging device and the optical imaging device, and mapping the normalized feature point pair coordinates to three-dimensional points in the world coordinate system;
S344, obtaining the spatial coordinate transformation mapping, comprising a translation vector and a rotation matrix, from the three-dimensional point coordinates to which the normalized feature point pairs are mapped in the world coordinate system.
Optionally, the S30 further includes:
Converting pixel positions in the optical data under each angle into coordinates in a registered CT two-dimensional projection image coordinate system;
and establishing a corresponding relation between pixels in the optical data and spatial positions in the CT three-dimensional voxel data, and corresponding pixel information in the optical data under each angle to the spatial positions of the CT three-dimensional surface voxel data to obtain a first optical data set with three-dimensional spatial coordinates under all angles.
Optionally, the S40 includes:
traversing all adjacent optical data in the manner of S30, registering the adjacent optical data and identifying the overlapping areas, and splicing all the optical data in the first optical data set based on the identified overlapping areas to form the three-dimensional optically reconstructed information;
and optimizing the three-dimensional optically reconstructed information to obtain the complete three-dimensional optical reconstruction information.
In a second aspect, the present invention also provides a computing device, comprising a memory and a processor; a computer program is stored in the memory, and the processor executes the computer program in the memory to perform the steps of the three-dimensional optical reconstruction method based on CT images according to any one of the first aspect.
(III) beneficial effects
The three-dimensional optical reconstruction method does not require calibration in advance, is little affected by ambient light, and improves the accuracy of three-dimensional optical data reconstruction. CT images have high spatial resolution and can provide the microstructure of an object and accurate three-dimensional spatial coordinates, so the three-dimensional optical reconstruction method based on CT images can achieve high-precision three-dimensional optical reconstruction and accurately restore the shape and geometric structure in the optical image of the object; it is particularly suitable for three-dimensional optical reconstruction of objects with severe surface variation, i.e. severe depth variation.
Drawings
FIG. 1 is a schematic flow chart of a three-dimensional optical reconstruction method based on CT images;
fig. 2 (a) and fig. 2 (b) are schematic diagrams of a coaxial scanning device provided by the present invention;
FIG. 3 (a) is a schematic diagram of X-ray imaging with the imaging apparatus rotated through angle φ relative to the object;
FIG. 3 (b) is a schematic view of CT surface voxel data after the reconstructed CT three-dimensional voxel data comprising an object and a background is subjected to segmentation of the object background and surface extraction;
FIG. 3 (c) is a schematic diagram of a CT two-dimensional projection image obtained by two-dimensional projection of CT surface voxel data;
FIG. 3 (d) is a schematic diagram of optical imaging with the imaging apparatus rotated through angle φ relative to the object;
FIG. 3 (e) is a schematic diagram of acquired optical image data;
FIG. 3 (f) is a schematic diagram of the registration of the CT two-dimensional projection image at angle φ with the optical data at the corresponding angle.
Detailed Description
In order that the above-described aspects may be better understood, exemplary embodiments of the present invention will be described in more detail below with reference to the accompanying drawings. While exemplary embodiments of the present invention are shown in the drawings, it should be understood that the present invention may be embodied in various forms and should not be limited to the embodiments set forth herein. Rather, these embodiments are provided so that this disclosure will be thorough and complete, and will fully convey the scope of the invention to those skilled in the art.
The three-dimensional optical reconstruction method of the embodiments of the present invention does not involve optical reconstruction of human-body images in the medical field; the object to be reconstructed in the embodiments may be a relatively large plant, such as a cereal crop.
Example 1
As shown in fig. 1 to 3, the present invention provides a three-dimensional optical reconstruction method based on CT images, the execution subject of the method is any computing device, which includes the following steps:
s10, acquiring CT projection signals and optical data of an object to be reconstructed at multiple angles by means of a coaxial scanning device; each of the multiple angles is a rotation angle of the rotating coaxial scanning device relative to the stationary object to be reconstructed; a CT imaging device and an optical imaging device having an installation angle with the CT imaging device are fixed in the coaxial scanning apparatus, and the computing device of this embodiment may be electrically connected to the CT imaging device and the optical imaging device. In other embodiments, the processing functionality of the computing device may be integrated in the CT imaging device or integrated in the optical imaging device.
The optical data in this step can be understood as an optical image; for consistency, this embodiment describes it as optical data.
For example, if the optical imaging device is a hyperspectral imaging device, the optical data are hyperspectral data;
if the optical imaging device is a fluorescence imaging device, the optical data are fluorescence optical data; and if the optical imaging device is an RGB imaging device, the optical data are RGB optical data.
In this embodiment, the multi-angle CT projection signal and optical data include: n CT projection signals and N optical data;
wherein the rotating frame of the coaxial scanning device rotates through an angle Δφ relative to the object at each step, and the number of data sets generated when the coaxial scanning device rotates 360° relative to the object is N = 360°/Δφ, where N is a natural number greater than or equal to 1. Preferably, N is a natural number of 10 or more. In this embodiment, N may be 360, i.e., the rotating frame rotates through angles of 0° to 360° relative to the object and the data at each angle are recorded; for example, with one data set acquired per 1° of rotation, 360 data sets are obtained. Different angular precision results in different CT imaging resolution.
S20, reconstructing CT projection signals of all angles according to a reconstruction algorithm to obtain CT three-dimensional voxel data of the object to be reconstructed, and extracting surface voxel data of the CT three-dimensional voxel data of the object to be reconstructed to obtain CT three-dimensional surface voxel data of the object to be reconstructed.
Generally, the CT projection signals of each angle are preprocessed, a filtered back projection algorithm is adopted to reconstruct the preprocessed CT projection signals of all angles, CT three-dimensional voxel data of an object to be reconstructed is obtained, and then the surface information of the CT three-dimensional voxel data is extracted, so that CT three-dimensional surface voxel data of the object to be reconstructed is obtained.
S30, aligning and registering each pixel point in the optical data and a corresponding pixel point in the CT three-dimensional surface voxel data, and mapping coordinates of the pixel points in the aligned and registered optical data into a CT image coordinate system of the CT three-dimensional surface voxel data to obtain a first optical data set with three-dimensional space coordinates in the CT image coordinate system under all angles.
S40, splicing all the optical data in the first optical data set to form the three-dimensional optically reconstructed information.
The method of this embodiment does not require calibration in advance, is little affected by ambient light, and improves the accuracy of three-dimensional optical data reconstruction. CT images have high spatial resolution and can provide the microstructure of an object and accurate three-dimensional spatial coordinates, so the three-dimensional optical reconstruction method based on CT images can achieve high-precision three-dimensional optical reconstruction and accurately restore the shape and geometric structure in the optical image of the object; it is particularly suitable for three-dimensional optical reconstruction of objects with severe surface variation, i.e. severe depth variation.
In order to better understand the above-described processes of step S20 and step S30, step S20 and step S30 are described step by step.
The process for step S20 comprises the sub-steps of:
s21, performing one-dimensional Fourier transform on the N preprocessed CT projection signals respectively to obtain N first projection signals in a frequency domain;
s22, filtering the first projection signals in the N frequency domains to obtain N filtered second projection signals;
s23, carrying out one-dimensional inverse Fourier transform on the N second projection signals, and restoring the N second projection signals to a time domain to obtain N third projection signals after filtering in the time domain;
and S24, carrying out back projection on each third projection signal, wherein the back projection is to uniformly distribute the projection signals under each angle to each point passing through the object according to the respective original projection path, accumulate the back projection signals of the same point on the object under all angles to obtain the ray attenuation coefficient of each point of the object, and reconstruct CT three-dimensional voxel data of the object.
That is, one CT three-dimensional voxel data is accumulated under all angles.
The CT three-dimensional voxel data comprises: the three-dimensional coordinates of each voxel in the reconstructed CT image of the object, and the HU value (Hounsfield unit value) at each voxel position, which reflects the degree to which the object to be reconstructed absorbs X-rays.
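Sub-steps S21 to S24 together form the classical filtered back projection algorithm. As a rough illustration only, the following Python/NumPy sketch reconstructs a single slice from a parallel-beam sinogram; the parallel-beam geometry, the Ram-Lak ramp filter, and the nearest-neighbour interpolation are assumptions, since the patent does not fix these details:

```python
import numpy as np

def filtered_back_projection(sinogram, angles_deg):
    """Reconstruct one 2-D slice from a parallel-beam sinogram.

    sinogram: (n_angles, n_detectors) array, one projection per row.
    Follows S21-S24: FFT -> ramp filter -> inverse FFT -> back projection.
    """
    n_angles, n_det = sinogram.shape
    # S21: 1-D Fourier transform of each projection
    proj_fft = np.fft.fft(sinogram, axis=1)
    # S22: ramp (Ram-Lak) filter in the frequency domain
    proj_fft *= np.abs(np.fft.fftfreq(n_det))
    # S23: 1-D inverse Fourier transform back to the spatial domain
    filtered = np.real(np.fft.ifft(proj_fft, axis=1))
    # S24: smear each filtered projection back along its original path
    mid = n_det // 2
    xs = np.arange(n_det) - mid
    X, Y = np.meshgrid(xs, xs)
    recon = np.zeros((n_det, n_det))
    for proj, ang in zip(filtered, np.deg2rad(angles_deg)):
        # detector coordinate of every image point at this view angle
        t = X * np.cos(ang) + Y * np.sin(ang) + mid
        t = np.clip(np.round(t).astype(int), 0, n_det - 1)
        recon += proj[t]  # accumulate contributions over all angles
    return recon * np.pi / (2 * n_angles)
```

For a point-like object the accumulated back projections peak at the object's position, which is how the ray attenuation coefficient of each point is recovered.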
In the art, data grid points in a three-dimensional image are referred to as voxels, and two-dimensional images are referred to as pixels. The following substep is to obtain a detailed description of the CT three-dimensional surface voxel data of the object to be reconstructed for the surface voxel data of the CT three-dimensional voxel data of the extracted object in S20.
S25, optimizing the CT three-dimensional voxel data to obtain the CT three-dimensional voxel data after optimization;
s26, extracting surface information of the CT three-dimensional voxel data from the CT three-dimensional voxel data subjected to the optimization processing.
For example, for the above CT voxel data, a suitable threshold that can distinguish object voxels from background voxels is selected, and this threshold is used to divide the voxel data into two regions: voxel data of the object and voxel data of the background, as described in sub-step S27.
S27, dividing the CT three-dimensional voxel data into voxel data of an object and voxel data of a background based on a preset voxel data threshold;
s28, acquiring voxel data of an object boundary in a traversal mode based on the voxel data of the object;
That is, all object voxel data are traversed, and for each voxel it is checked whether any neighbouring voxel belongs to the background voxel data; if a voxel has a background voxel among its neighbours, that voxel lies on the object boundary;
s29, setting HU value of voxel data of a non-object boundary in the object voxel data to 0 to obtain the surface voxel data of the object, namely CT three-dimensional surface voxel data.
For example, the HU value of voxel data not at the boundary is set to 0, and the boundary voxel data HU value remains unchanged, so that the three-dimensional voxel data formed contains only the object surface voxel data.
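Sub-steps S27 to S29 can be sketched in a few lines of Python/NumPy. The 6-connected face neighbourhood and the simple global threshold are assumptions for illustration; the patent does not specify the neighbourhood definition:

```python
import numpy as np

def extract_surface_voxels(hu_volume, threshold):
    """Keep only object-boundary voxels (sketch of S27-S29).

    A voxel is on the boundary if it is above `threshold` (object)
    and at least one of its six face neighbours is background.
    Non-boundary voxels get HU = 0, as described in S29.
    """
    obj = hu_volume > threshold                     # S27: object/background split
    padded = np.pad(obj, 1, constant_values=False)  # treat outside as background
    # S28: a voxel is interior only if all six face neighbours are object
    interior = obj.copy()
    for axis in range(3):
        for shift in (-1, 1):
            interior &= np.roll(padded, shift, axis=axis)[1:-1, 1:-1, 1:-1]
    boundary = obj & ~interior
    # S29: zero the HU value of non-boundary voxels, keep boundary HU values
    return np.where(boundary, hu_volume, 0)
```

Applied to a solid cube, this keeps the 26 face voxels and zeroes the single interior voxel, which matches the description above.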
By the above-described sub-steps S21 to S29, a procedure of acquiring CT three-dimensional voxel data of an object to be reconstructed in step S20 and acquiring CT three-dimensional surface voxel data is described, which may be implemented in other embodiments, but is not limited thereto.
In this embodiment, the coaxial scanning device rotates relative to the object, and the angle between the imaging view angle of each imaging position and the object isThe installation included angle between the CT imaging equipment and the optical imaging equipment is theta;
(the rotation angle φ takes N values, i.e., there are N imaging positions, and accordingly N sets of imaging data are produced);
in this embodiment, the coordinate system of each imaging device is defined as follows: the coordinate origin is located in the central axis of the rotating frame and at the same height with the optical axis of the imaging device, the coordinate Z axis points to the center of the imaging device from the coordinate origin, the X-ray imaging Z axis points to the X-ray center from the coordinate origin, and the XY plane is perpendicular to the Z axis.
Accordingly, the process of step S30 includes the sub-steps of:
s31, for each imaging view angleThe lower surface voxel data, selecting XY plane as projection plane, orthographically projecting along negative Z-axis direction, and projecting three-dimensional data (x, y, Z, HU) containing object surface voxelsIn an XY plane under a corresponding angle, each pixel point in the plane is a projection position of three-dimensional surface voxel data on a projection plane, and a two-dimensional projection image (x, y, HU) of CT under the angle is formed; to obtain all imaging viewing angles->The lower CT two-dimensional projection image.
S32, performing feature detection on the CT two-dimensional projection image at imaging view angle φ and the optical data at the corresponding imaging view angle to obtain the salient feature points of each;
s33, acquiring feature descriptors of the respective significant feature points and matching the feature descriptors to acquire matched feature point pairs exceeding a preset threshold;
S34, obtaining the spatial coordinate transformation mapping between the CT two-dimensional projection image and the optical data at the corresponding imaging view angle according to the matched feature point pairs;
this substep S34 further comprises the substeps of:
s341, dividing coordinates of feature points of each imaging device by the focal length of the imaging device based on each matched feature point pair to obtain normalized feature point pair coordinates;
s342, constructing a linear equation based on the normalized characteristic point pair coordinates;
setting p (x, y) and p ' (x ', y ') as normalized feature point pair coordinates;
p (x, y) corresponds to the CT two-dimensional projection image, and p ' (x ', y ') corresponds to the optical data;
determining the fundamental matrix through the linear equation p'^T F p = 0, wherein F is the fundamental matrix and T denotes the transpose;
the linear equations constructed from all the feature point pairs are collected, and the fundamental matrix is solved;
S343, based on the fundamental matrix, performing triangulation using the internal parameters of the imaging device to which the CT two-dimensional projection image belongs and of the imaging device to which the optical data belong, and mapping the normalized feature point pair coordinates to three-dimensional points in the world coordinate system;
s344, the space coordinate conversion mapping is obtained by utilizing the three-dimensional point coordinates mapped by the normalized characteristic point pairs in the world coordinate system, and the space coordinate conversion mapping comprises a translation vector and a rotation matrix.
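Collecting the linear equations p'^T F p = 0 over all matched pairs and solving for F, as in S342, is the classical eight-point algorithm. A minimal Python/NumPy sketch (the triangulation of S343 is omitted; the least-squares solve via SVD is a standard choice, not stated in the patent):

```python
import numpy as np

def estimate_fundamental_matrix(pts, pts_prime):
    """Sketch of S342: solve p'^T F p = 0 from matched, normalized points.

    pts, pts_prime: sequences of (x, y) coordinates; pts from the CT
    projection image, pts_prime from the optical image. Each pair gives
    one row of a homogeneous system A f = 0, solved by SVD.
    """
    A = np.array([
        [xp * x, xp * y, xp, yp * x, yp * y, yp, x, y, 1.0]
        for (x, y), (xp, yp) in zip(pts, pts_prime)
    ])
    _, _, vt = np.linalg.svd(A)
    F = vt[-1].reshape(3, 3)  # null vector with the smallest singular value
    # enforce rank 2, as required of a fundamental matrix
    u, s, v = np.linalg.svd(F)
    s[2] = 0.0
    return u @ np.diag(s) @ v
```

With eight or more well-distributed pairs the recovered F satisfies the epipolar constraint for every matched pair, up to an overall scale.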
S35, registering the CT two-dimensional projection image at imaging view angle φ with the optical data at imaging view angle φ+θ based on the spatial coordinate transformation mapping;
Traversing all imaging view angles φ in the manner of substeps S32 to S35 completes the alignment and registration between the CT two-dimensional projection images and the optical data at all imaging view angles.
Converting pixel positions in the optical data under each angle into coordinates in a registered CT two-dimensional projection image coordinate system;
A correspondence is then established between pixels in the optical data and spatial positions in the CT three-dimensional voxel data: the pixel information of the optical data at each angle is assigned to the spatial positions of the CT three-dimensional surface voxels, yielding a first optical data set with three-dimensional spatial coordinates at all angles.
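Substeps S341–S344 can be sketched with a minimal estimate of the fundamental matrix from the stacked linear equations (the classic eight-point scheme); this is an illustrative pure-NumPy sketch, not the patent's implementation, and the function name is hypothetical:

```python
import numpy as np

def eight_point_fundamental(p, q):
    """Estimate the fundamental matrix F from >= 8 normalized feature
    point pairs so that q_i^T F p_i = 0 (the linear equation of S342);
    p are points from the CT projection image, q from the optical data."""
    ones = np.ones(len(p))
    # Each matched pair contributes one row of the system A vec(F) = 0.
    A = np.column_stack([
        q[:, 0] * p[:, 0], q[:, 0] * p[:, 1], q[:, 0],
        q[:, 1] * p[:, 0], q[:, 1] * p[:, 1], q[:, 1],
        p[:, 0], p[:, 1], ones,
    ])
    # The solution is the right singular vector of the smallest
    # singular value (least-squares null vector of A).
    _, _, vt = np.linalg.svd(A)
    F = vt[-1].reshape(3, 3)
    # Enforce the rank-2 constraint of a valid fundamental matrix.
    u, s, vt = np.linalg.svd(F)
    s[2] = 0.0
    return u @ np.diag(s) @ vt
```

The translation vector and rotation matrix of S344 would then be decomposed from this matrix together with the triangulated points of S343.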
The above process applies not only to hyperspectral images but also to fluorescence optical data, RGB optical data, and the like; the embodiment is not limited in this respect, and the optical imaging device of the coaxial scanning apparatus is configured according to actual needs to obtain the corresponding optical data.
Example two
The CT-image-based three-dimensional optical reconstruction method of this embodiment is described in detail in conjunction with Fig. 2(a), Fig. 2(b), and Figs. 3(a) to 3(f); the optical data of this embodiment is a fluorescence image or another optical image. The method of this embodiment may include the following steps:
201. First, CT projection signals and fluorescence data of the object to be reconstructed are acquired at multiple angles by means of the coaxial scanning device; the object remains stationary throughout the acquisition of the three-dimensional data. One CT projection signal and one set of fluorescence data are acquired at each angle; the multiple angles may be the angles through which the imaging devices of the coaxial scanning device rotate around the object to be reconstructed.
The coaxial scanning device of the present embodiment includes: a rotating gantry, a CT imaging device (i.e., an X-ray source, an X-ray detector) and a fluoroscopic imaging device (i.e., a light source and a camera) fixed on the rotating gantry.
Fig. 2(a) and 2(b) are schematic diagrams of the coaxial scanning device, where XYZ is the object coordinate system and X'Y'Z' is the rotating-frame coordinate system. The CT imaging device (comprising an X-ray source facing a ray detector, with the object to be reconstructed between them) and the optical imaging device (for example a fluorescence camera; it is not limited to a fluorescence camera or to a single type, and several kinds of optical imaging devices may be mounted simultaneously) are fixed on the rotating frame.
During imaging the object is stationary; the rotating frame rotates, driving the imaging devices on it around the object (i.e., the object coordinate system stays fixed while the rotating-frame coordinate system rotates about the Z axis), and one imaging pass is performed each time the frame rotates by a certain step angle (for example 1°; the smaller the step angle, the higher the CT reconstruction accuracy). Imaging includes:
1) CT imaging at this angle, which mainly refers to recording the intensity of the X-rays at this angle after they have been attenuated by the object; this signal is called the CT projection signal at this angle;
2) Fluorescence imaging at this angle, which provides the optical image data used for reconstruction. Rotating the frame through one full revolution completes the acquisition of CT projection signals and optical image data at all angles; assuming one imaging pass per 1° of rotation, a 360° scan with this device yields 360 CT projection signals and 360 sets of fluorescence data.
202. And (5) preprocessing data.
The CT projection signals are preprocessed to reduce noise and enhance image quality.
The data preprocessing may be implemented in existing manners, such as preprocessing for artifact removal, gamma correction, filtering, and the like. The present embodiment is not limited thereto, and is selected according to actual needs.
203. And reconstructing CT projection signals of all angles by using a reconstruction algorithm to obtain CT three-dimensional voxel data of the object to be reconstructed.
The essence of CT reconstruction is to use the CT projection signals obtained in step 201 to solve for the distribution of the X-ray attenuation coefficient inside the object (i.e., the attenuation coefficients of its different parts; different substances attenuate X-rays differently, which is the basis on which CT imaging nondestructively detects the distribution of substances inside an object).
A filtered back projection algorithm (Filtered Back Projection, FBP) is used in this embodiment. The reconstruction steps of the FBP algorithm are as follows:
1) Respectively carrying out one-dimensional Fourier transform on 360 CT projection signals in a time domain to obtain 360 projection signals in a frequency domain;
2) Filtering the CT projection signals in 360 frequency domains to obtain 360 CT projection signals after filtering;
3) Performing one-dimensional inverse Fourier transform on the 360 CT projection signals after filtering, and restoring the CT projection signals to a time domain to obtain 360 CT projection signals after filtering in the time domain;
4) Back-projecting each filtered projection signal, accumulating the back-projected signals over the 360 angles, and computing the attenuation at each part of the object, i.e., reconstructing the three-dimensional voxel data of the object.
The back projection is to distribute the projection signal at each angle to each point passing through the object according to the original projection path, and the three-dimensional voxel data of the object comprises three-dimensional space coordinates and attenuation coefficients of the object in the voxel on X rays.
All three-dimensional voxels together form the three-dimensional spatial morphology of the object and provide exact three-dimensional spatial coordinates for the subsequent three-dimensional optical reconstruction of the object.
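As a hedged illustration of steps 1)–4), the sketch below implements a single 2D parallel-beam FBP slice with a Ram-Lak (ramp) filter in pure NumPy; the function names are hypothetical, and the real device geometry (fan or cone beam) would require the corresponding weighting on top of this principle:

```python
import numpy as np

def ramp_filter_projections(sino):
    """Steps 1)-3): 1D FFT of each projection, ramp (Ram-Lak)
    filtering in the frequency domain, then 1D inverse FFT."""
    freqs = np.fft.fftfreq(sino.shape[1])
    ramp = 2.0 * np.abs(freqs)
    return np.real(np.fft.ifft(np.fft.fft(sino, axis=1) * ramp, axis=1))

def backproject(filtered, angles, size):
    """Step 4): smear each filtered projection back along its original
    path and accumulate the contributions of all angles."""
    mid = size // 2
    xs = np.arange(size) - mid
    X, Y = np.meshgrid(xs, xs)
    recon = np.zeros((size, size))
    for proj, ang in zip(filtered, angles):
        # Nearest detector coordinate of every pixel for this view.
        s = np.round(X * np.cos(ang) + Y * np.sin(ang)).astype(int) + mid
        recon += proj[np.clip(s, 0, size - 1)]
    return recon * np.pi / (2.0 * len(angles))
```

On a noiseless sinogram of a centered disk, the reconstruction recovers a value close to the disk's attenuation inside and close to zero outside, which is the behaviour the text describes for the accumulated back projection.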
204. And carrying out subsequent processing on the three-dimensional voxel data of the object obtained by CT reconstruction, including denoising and contrast enhancement, so as to obtain the CT three-dimensional voxel data with improved quality.
Denoising is the reduction of noise and artifacts in an image to improve the quality of reconstructed data; enhancing contrast may improve the readability of the reconstructed data.
205. Surface information of the CT three-dimensional voxel data is extracted from the CT three-dimensional voxel data of the object obtained in step 204.
The CT three-dimensional voxel data is divided into object voxel data and background voxel data based on a preset voxel-value threshold; the object voxel data is traversed to obtain the voxels on the object boundary; the HU value of every non-boundary object voxel is set to 0 while the boundary voxels keep their original HU values, giving the surface voxel data of the object, i.e., the CT three-dimensional surface voxel data.
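A minimal NumPy sketch of this surface extraction (the function name is hypothetical; `np.roll` wraps at the volume edges, so the sketch assumes the object does not touch the border):

```python
import numpy as np

def extract_surface_voxels(hu, threshold):
    """Split voxels into object/background by an HU threshold, then
    zero the HU of every non-boundary object voxel so that only the
    object surface keeps its original HU value."""
    obj = hu > threshold
    interior = obj.copy()
    # An object voxel is interior only if all six face neighbours
    # are also object voxels.
    for axis in range(3):
        for shift in (1, -1):
            interior &= np.roll(obj, shift, axis=axis)
    surface = obj & ~interior
    return np.where(surface, hu, 0), surface
```

For a solid cube this keeps exactly the one-voxel-thick shell and clears the interior, matching the traversal described above.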
206. The CT three-dimensional surface voxel data of the object obtained in step 205 and the fluorescence data obtained in step 201 are registered.
In general, registration refers to finding, for two or more images acquired at different times or by different imaging devices, a spatial transformation that maps one image onto another so that points corresponding to the same spatial location in the two images are in one-to-one correspondence.
Since the CT imaging device and the fluorescence imaging device are both mounted on the rotating frame, with an included angle θ between them in the same three-dimensional coordinate system (the rotating-frame coordinate system), the CT image taken at rotation angle φ (Fig. 2(a)) and the fluorescence image taken at rotation angle φ+θ (Fig. 2(b)) capture the same part of the object.
The step of registering the CT three-dimensional surface voxel data reconstructed in step 205 with the fluorescence data is as follows:
1) The coordinate system of each imaging device of the coaxial scanning device is defined as follows: the coordinate origin lies on the central axis of the rotating frame at the same height as the optical axes of the imaging devices; the Z axis points from the origin toward the center of the imaging device, and the XY plane is perpendicular to the Z axis. For the surface voxel data at each imaging view angle φ, a projection plane is selected and an orthogonal projection is performed along the negative Z direction: the three-dimensional data (x, y, z, HU) of the object surface voxels are projected into the XY plane at the corresponding angle, each pixel in the plane being the projection position of a three-dimensional surface voxel on the projection plane, forming the CT two-dimensional projection image (x, y, HU) at that angle. In this way, the CT two-dimensional projection images at all imaging view angles φ are obtained.
2) Feature detection is performed, manually or automatically, on the CT two-dimensional projection image at angle φ and the optical image at the corresponding angle φ+θ, obtaining the salient feature points in the CT two-dimensional projection image and the optical image;
For example, feature detection may be performed manually or automatically to find salient feature points (edges, intersections, contours, shapes, structures, etc.) in an image. The manual method marks features of interest directly on the image and suits cases that require high-precision selection of specific features; automatic feature detection uses a computer vision library such as OpenCV, where suitable algorithms (Harris corner detection, SIFT, SURF, FAST, ORB, etc.) locate feature points quickly and effectively;
3) Feature descriptors (numeric vectors that characterize the region around a feature point) are computed for the feature points found in the CT two-dimensional projection image at angle φ and the optical image at the corresponding angle φ+θ, and the feature points of the two images are matched through their descriptors (for example, nearest-neighbour matching pairs each feature point in one image with the closest feature point in the other). After matching, the quality of each match is assessed by quantifying the similarity between the descriptors, e.g. the Euclidean distance between descriptor vectors: the smaller the distance, the more similar the descriptors and the better the feature point match.
Thereby, the salient features corresponding between the CT two-dimensional projection image and the corresponding fluorescence image are found. The best-matching descriptor pairs are selected by comparing the distance measures between feature descriptors, realizing feature point matching between the images;
4) The spatial coordinate transformation between the two images is computed from the matched feature point pairs, as follows: (1) for each feature point pair, the feature point coordinates are normalized by dividing them by the focal length of the respective imaging device; (2) a linear equation is constructed from the normalized coordinates of each feature point pair. Let p(x, y) and p'(x', y') be the normalized coordinates of any matched pair in the CT two-dimensional projection image and the optical image; the fundamental matrix satisfies the linear equation p'ᵀFp = 0, where F is the fundamental matrix. Stacking the linear equations constructed from all matched feature points of the CT two-dimensional projection image at angle φ and the optical image at the corresponding angle φ+θ and solving yields the fundamental matrix, which describes the geometric relationship between the two imaging devices (the CT imaging detector and the optical imaging camera). (3) Triangulation with the internal parameters of the CT imaging detector and the optical imaging camera maps the feature point pairs onto three-dimensional points in the world coordinate system. (4) From the three-dimensional point coordinates of the feature point pairs in the world coordinate system, the spatial coordinate transformation between the CT two-dimensional projection image and the optical image, comprising a translation vector and a rotation matrix, is computed.
That is, the spatial coordinate transformation (translation, rotation, scaling, etc.) between the CT two-dimensional projection image at angle φ and the optical image at the corresponding angle φ+θ is computed from the matched feature point pairs. Operations 2) to 4) are repeated for all imaging view angles, and the two images at each angle are finally registered through the computed spatial coordinate transformation, achieving pixel-level alignment and registration between the CT two-dimensional projection images and the optical data at all imaging angles.
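Step 1) above (orthogonal projection of the surface voxels along −Z into an (x, y, HU) image) can be sketched as below; keeping the voxel nearest the viewer when several project to the same pixel is an assumption of this sketch, as is the function name:

```python
import numpy as np

def project_surface_to_plane(voxels, size):
    """Orthogonally project surface voxels (x, y, z, HU) along the
    negative Z direction onto the XY plane: each voxel lands at pixel
    (x, y); if several voxels share a pixel, the one with the largest
    z (closest to the imaging device) is kept as the visible surface."""
    img = np.zeros((size, size))
    depth = np.full((size, size), -np.inf)
    for x, y, z, hu in voxels:
        xi, yi = int(x), int(y)
        if z > depth[yi, xi]:
            depth[yi, xi] = z
            img[yi, xi] = hu
    return img
```

The resulting (x, y, HU) image is the CT two-dimensional projection image on which the feature detection of step 2) operates.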
207. Mapping of coordinates.
First, the coordinates of the optical image pixels are mapped into the CT image coordinate system.
The pixel positions in the optical data at each angle are converted into coordinates in the registered CT two-dimensional projection image coordinate system; a correspondence between pixels in the optical data and spatial positions in the CT three-dimensional voxel data is established, and the image pixel information of the optical data at each angle is assigned to the spatial positions of the CT three-dimensional surface voxels, yielding an optical data set with three-dimensional spatial coordinates at all angles.
208. Since the optical images at adjacent angles overlap sufficiently, the optical data with three-dimensional spatial coordinates at two adjacent angles can be registered by the registration method used in step 206, thereby stitching and reconstructing a complete three-dimensional optical image.
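A brute-force sketch of this stitching, assuming the adjacent-angle point sets have already been registered into a common coordinate system (rows of x, y, z, value; the tolerance and names are illustrative, not from the patent):

```python
import numpy as np

def stitch_point_sets(pts_a, pts_b, tol=0.5):
    """Merge two adjacent-angle optical point sets: points of B lying
    within `tol` of any point of A are treated as the overlap region
    and dropped; the remaining points of B are appended to A."""
    # Pairwise distances between the 3D coordinates (first 3 columns).
    d = np.linalg.norm(pts_b[:, None, :3] - pts_a[None, :, :3], axis=2)
    keep = d.min(axis=1) > tol
    return np.vstack([pts_a, pts_b[keep]])
```

For large point sets a spatial index (e.g. a k-d tree) would replace the dense distance matrix, but the overlap-then-append logic is the same.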
209. Three-dimensional reconstruction post-processing. The generated three-dimensional optical image is further optimized; Gaussian filtering makes the stitched regions smoother and more natural, achieving a more realistic three-dimensional optical reconstruction.
In practical applications, the filter parameters are fine-tuned according to the characteristics and requirements of the image to obtain the best effect.
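The Gaussian filtering of step 209 can be sketched with a separable kernel in pure NumPy (the 3σ truncation radius and the zero-padded borders are implementation assumptions, as is the function name):

```python
import numpy as np

def gaussian_smooth(volume, sigma):
    """Smooth the stitched 3D optical volume with a separable Gaussian
    kernel, softening seams where adjacent-angle data were joined."""
    radius = int(3 * sigma)
    x = np.arange(-radius, radius + 1)
    kernel = np.exp(-x ** 2 / (2.0 * sigma ** 2))
    kernel /= kernel.sum()
    out = volume.astype(float)
    # Convolve once along each axis (zero padding at the borders).
    for axis in range(out.ndim):
        out = np.apply_along_axis(np.convolve, axis, out, kernel, mode='same')
    return out
```

Tuning `sigma` is exactly the parameter adjustment the text mentions: larger values smooth seams more aggressively at the cost of fine detail.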
The method of the embodiment can generate an accurate and comprehensive three-dimensional model by utilizing high resolution and multi-level information provided by CT scanning. This provides powerful tools and resources for visualization, analysis, and applications.
This embodiment has the advantage of high resolution: CT images have high spatial resolution, capture the fine details and structures of the object, and accurately express its shape and characteristics. Using CT images as input data provides the real coordinate information of the object in three-dimensional space, supplies precise three-dimensional spatial positioning for the three-dimensional optical reconstruction of the object, and facilitates the generation of a highly accurate three-dimensional optical model.
In addition, an embodiment of the present invention further provides a computing device, comprising a memory and a processor; the memory stores a computer program, and the processor executes the computer program in the memory to perform the steps of the CT-image-based three-dimensional optical reconstruction method of any of the embodiments above.
In the description of the present specification, the terms "one embodiment," "some embodiments," "examples," "particular examples," or "some examples," etc., refer to particular features, structures, materials, or characteristics described in connection with the embodiment or example as being included in at least one embodiment or example of the present invention. In this specification, schematic representations of the above terms are not necessarily directed to the same embodiment or example. Furthermore, the particular features, structures, materials, or characteristics described may be combined in any suitable manner in any one or more embodiments or examples. Furthermore, the different embodiments or examples described in this specification and the features of the different embodiments or examples may be combined and combined by those skilled in the art without contradiction.
While embodiments of the present invention have been shown and described above, it will be understood that the above embodiments are illustrative and not to be construed as limiting the invention, and that alterations, modifications, substitutions and variations may be made in the above embodiments by those skilled in the art within the scope of the invention.

Claims (10)

1. A three-dimensional optical reconstruction method based on CT images, comprising:
S10, acquiring CT projection signals and optical data of an object to be reconstructed at multiple angles by means of a coaxial scanning device; each of the multiple angles is a rotation angle of the rotating coaxial scanning device relative to the stationary object to be reconstructed; a CT imaging device and an optical imaging device with an installation included angle with the CT imaging device are fixed in the coaxial scanning device;
S20, reconstructing the CT projection signals of all angles according to a reconstruction algorithm to obtain CT three-dimensional voxel data of the object to be reconstructed, and extracting the surface voxel data of the CT three-dimensional voxel data of the object to be reconstructed to obtain its CT three-dimensional surface voxel data;
S30, aligning and registering each pixel point in the optical data with the corresponding pixel point in the CT three-dimensional surface voxel data, and mapping the coordinates of the pixel points in the aligned and registered optical data into the CT image coordinate system of the CT three-dimensional voxel data, to obtain a first optical data set with three-dimensional spatial coordinates in the CT image coordinate system at all angles;
and S40, stitching all the optical data in the first optical data set to form the three-dimensionally optically reconstructed information.
2. The method of claim 1, wherein the optical imaging device is a hyperspectral imaging device and the optical data is hyperspectral data;
or the optical imaging device is a fluorescence imaging device and the optical data is fluorescence optical data; or the optical imaging device is an RGB imaging device and the optical data is RGB optical data;
the multi-angle CT projection signals and optical data include: n CT projection signals and N optical data;
wherein the rotating frame of the coaxial scanning device rotates by an angle α relative to the object each time, and the number of data sets generated when the coaxial scanning device rotates 360° relative to the object is N = 360/α.
3. The method according to claim 1, wherein reconstructing the CT projection signals of all angles according to the reconstruction algorithm in S20, to obtain CT three-dimensional voxel data of the object to be reconstructed, comprises:
preprocessing the CT projection signals of each angle, and reconstructing the preprocessed CT projection signals of all angles by adopting a filtering back projection algorithm to obtain CT three-dimensional voxel data of an object to be reconstructed.
4. A method according to claim 3, wherein reconstructing the preprocessed CT projection signals at all angles using a filtered back projection algorithm, to obtain CT three-dimensional voxel data of the object to be reconstructed, comprises:
S21, performing a one-dimensional Fourier transform on each of the N preprocessed CT projection signals to obtain N first projection signals in the frequency domain;
S22, filtering the first projection signals in the N frequency domains to obtain N filtered second projection signals;
S23, performing a one-dimensional inverse Fourier transform on the N second projection signals to restore them to the time domain, obtaining N filtered third projection signals in the time domain;
S24, back-projecting each third projection signal, wherein back projection uniformly distributes the projection signal at each angle to each point it passed through in the object along its original projection path; the back-projected signals of the same object point at all angles are accumulated to obtain the ray attenuation coefficient at each point of the object, reconstructing the CT three-dimensional voxel data of the object;
the CT three-dimensional voxel data comprises: three-dimensional coordinates of each voxel in a CT image of the reconstructed object and HU values of the positions of each voxel, which HU values reflect the degree of absorption of X-rays by the object to be reconstructed.
5. The method according to claim 4, wherein extracting the surface voxel data of the CT three-dimensional voxel data of the object in S20, to obtain the CT three-dimensional surface voxel data of the object to be reconstructed, includes:
S25, optimizing the CT three-dimensional voxel data to obtain optimized CT three-dimensional voxel data;
S26, extracting surface information of the CT three-dimensional voxel data after optimization processing;
S27, dividing the surface information of the CT three-dimensional voxel data into object voxel data and background voxel data based on a preset voxel data threshold;
S28, traversing the object voxel data to obtain the voxel data of the object boundary, and setting the HU value of non-boundary object voxels to 0 to obtain the surface voxel data of the object, i.e., the CT three-dimensional surface voxel data.
6. The method according to claim 5, wherein S30 comprises:
the coaxial scanning device rotates relative to the object, the imaging view angle between each imaging position and the object being φ; the installation included angle between the CT imaging device and the optical imaging device is θ;
the coordinate system of each imaging device of the coaxial scanning device is as follows: the coordinate origin is located on the central axis of the rotating frame at the same height as the optical axes of the imaging devices; the Z axis points from the origin toward the center of each imaging device (for X-ray imaging, from the origin toward the center of the X-ray source), and the XY plane is perpendicular to the Z axis;
S31, for the surface voxel data at each imaging view angle φ, selecting a projection plane and performing an orthogonal projection along the negative Z direction, projecting the three-dimensional data (x, y, z, HU) of the object surface voxels into the XY plane at the corresponding angle, each pixel in the plane being the projection position of a three-dimensional surface voxel on the projection plane, forming the CT two-dimensional projection image (x, y, HU) at that angle, so as to obtain the CT two-dimensional projection images at all imaging view angles φ;
S32, performing feature detection on the CT two-dimensional projection image at imaging view angle φ and the optical data at imaging view angle φ+θ, to obtain the respective salient feature points in the CT two-dimensional projection image and the optical data;
S33, acquiring feature descriptors of the respective salient feature points and matching the feature descriptors to obtain matched feature point pairs exceeding a preset threshold;
S34, acquiring, from the matched feature point pairs, the spatial coordinate transformation mapping between the CT two-dimensional projection image at imaging view angle φ and the optical data at imaging view angle φ+θ;
S35, registering the CT two-dimensional projection image at imaging view angle φ with the optical data at imaging view angle φ+θ based on the spatial coordinate transformation mapping;
traversing all imaging view angles φ in the manner of S32 to S35 to complete the alignment and registration between the CT two-dimensional projection images and the optical data at all imaging view angles.
7. The method of claim 6, wherein S34 comprises:
S341, for each matched feature point pair, dividing the coordinates of each feature point by the focal length of the imaging device that captured it, to obtain normalized feature point pair coordinates;
s342, constructing a linear equation based on the normalized characteristic point pair coordinates;
Setting p (x, y) and p ' (x ', y ') as normalized feature point pair coordinates;
p (x, y) corresponds to the CT two-dimensional projection image, and p ' (x ', y ') corresponds to the optical data;
the fundamental matrix is determined by the linear equation p'ᵀFp = 0, where F is the fundamental matrix;
the linear equations constructed from all the feature point pairs are stacked, and the fundamental matrix is solved;
S343, based on the fundamental matrix, performing triangulation using the internal parameters of the CT imaging device and the optical imaging device, and mapping the normalized feature point pair coordinates to three-dimensional points in a world coordinate system;
S344, obtaining the spatial coordinate transformation mapping, which comprises a translation vector and a rotation matrix, from the three-dimensional point coordinates to which the normalized feature point pairs are mapped in the world coordinate system.
8. The method of claim 6, wherein S34 comprises:
converting pixel positions in the optical data under each angle into coordinates in a registered CT two-dimensional projection image coordinate system;
and establishing a corresponding relation between pixels in the optical data and the spatial positions in the CT three-dimensional surface voxel data, and corresponding pixel information in the optical data under each angle to the spatial positions of the CT three-dimensional surface voxel data to obtain a first optical data set with three-dimensional spatial coordinates under all angles.
9. The method of claim 8, wherein S40 comprises:
traversing all adjacent optical data in the manner of S30, registering the adjacent optical data and identifying the overlapping areas, and stitching all optical data in the first optical data set based on the identified overlapping areas to form the three-dimensionally optically reconstructed information;
and optimizing the information after the three-dimensional optical reconstruction to obtain the complete information of the three-dimensional optical reconstruction.
10. A computing device, comprising: a memory and a processor, the memory storing a computer program, the processor executing the computer program in the memory and performing the steps of a three-dimensional optical reconstruction method based on CT images as claimed in any one of the preceding claims 1 to 9.
CN202311304245.9A 2023-10-09 2023-10-09 Three-dimensional optical reconstruction method based on CT image Pending CN117649484A (en)

Priority Applications (2)

Application Number Priority Date Filing Date Title
CN202311304245.9A CN117649484A (en) 2023-10-09 2023-10-09 Three-dimensional optical reconstruction method based on CT image
NL2038335A NL2038335A (en) 2023-10-09 2024-07-25 A method based on CT images

Applications Claiming Priority (1)

Application Number Priority Date Filing Date Title
CN202311304245.9A CN117649484A (en) 2023-10-09 2023-10-09 Three-dimensional optical reconstruction method based on CT image

Publications (1)

Publication Number Publication Date
CN117649484A true CN117649484A (en) 2024-03-05

Family

ID=90043972

Family Applications (1)

Application Number Title Priority Date Filing Date
CN202311304245.9A Pending CN117649484A (en) 2023-10-09 2023-10-09 Three-dimensional optical reconstruction method based on CT image

Country Status (2)

Country Link
CN (1) CN117649484A (en)
NL (1) NL2038335A (en)

Also Published As

Publication number Publication date
NL2038335A (en) 2024-09-20


Legal Events

Date Code Title Description
PB01 Publication
SE01 Entry into force of request for substantive examination