CN116128838A - Binocular vision-based respiratory detection system, 4D-CT image reconstruction system and method


Info

Publication number
CN116128838A
CN116128838A (application CN202310059400.9A)
Authority
CN
China
Prior art keywords
respiratory, dimensional, patient, point, image
Prior art date
Legal status
Pending
Application number
CN202310059400.9A
Other languages
Chinese (zh)
Inventor
费旋珈
黄思盛
姚毅
Current Assignee
Suzhou Linatech Medical Science And Technology
Original Assignee
Suzhou Linatech Medical Science And Technology
Priority date
Filing date
Publication date
Application filed by Suzhou Linatech Medical Science And Technology filed Critical Suzhou Linatech Medical Science And Technology
Priority to CN202310059400.9A priority Critical patent/CN116128838A/en
Publication of CN116128838A publication Critical patent/CN116128838A/en
Pending legal-status Critical Current

Classifications

    • A61B 5/08: Detecting, measuring or recording devices for evaluating the respiratory organs
    • A61B 5/1135: Measuring movement of the entire body or parts thereof occurring during breathing, by monitoring thoracic expansion
    • A61B 6/032: Transmission computed tomography [CT]
    • A61B 6/5211: Devices using data or image processing specially adapted for radiation diagnosis, involving processing of medical diagnostic data
    • G06T 11/203: Drawing of straight lines or curves
    • G06T 17/00: Three-dimensional [3D] modelling, e.g. data description of 3D objects
    • G06T 7/0012: Biomedical image inspection
    • G06T 7/246: Analysis of motion using feature-based methods, e.g. the tracking of corners or segments
    • G06T 2207/10028: Range image; depth image; 3D point clouds
    • G06T 2207/10076: 4D tomography; time-sequential 3D tomography
    • G06T 2207/10081: Computed X-ray tomography [CT]
    • G06T 2207/30004: Biomedical image processing
    • G06T 2207/30196: Human being; person
    • G06T 2207/30241: Trajectory

Abstract

The invention provides a binocular vision-based respiratory detection system, a 4D-CT image reconstruction system and a reconstruction method. The respiratory detection system is mounted on the CT gantry and monitors the respiratory motion of the patient's chest and abdomen to acquire the patient's respiratory phase in real time. It comprises a camera, a projector and a data processing module, and acquires the patient's breathing curve through the following steps: S1: acquiring body surface images of the patient in real time through the camera; S2: reconstructing the three-dimensional contour of the body surface with a structured light reconstruction algorithm; S3: locating three-dimensional body surface motion feature points; S4: tracking the trajectories of the three-dimensional feature points in real time with a feature point tracking algorithm; S5: acquiring the patient's breathing curve from the motion trajectories of the feature points.

Description

Binocular vision-based respiratory detection system, 4D-CT image reconstruction system and method
Technical Field
The invention relates to the technical field of 4D-CT, and in particular to a binocular vision-based respiratory detection system, a 4D-CT image reconstruction system and a reconstruction method.
Background
Computed tomography (CT) provides great assistance for clinically precise radiotherapy. A clinician can delineate the target area according to the tumor information in the CT images and formulate a treatment plan, so that the tumor stays within the planned target area throughout treatment, which improves the efficacy of radiotherapy and reduces the probability of tumor recurrence. At present, 3D-CT is commonly used clinically for collecting tumor information and planning. This approach achieves a good radiotherapy effect in the limbs or head, but in the chest and abdomen the shape and position of a tumor change with the patient's respiratory motion, cardiac motion and other involuntary movements of the human body; the phenomenon is particularly obvious at the lung margins and heart margins. Respiratory motion has the greatest influence: according to related research, human respiration can displace the center of the liver by 10-26 mm in the superior-inferior direction and by 1-12 mm in the anterior-posterior direction, and tumors in the lower lung have an average displacement of 12 mm under the influence of respiration. Motion blur greatly changes the shape and size of the tumor in a 3D-CT image, and motion artifacts also severely degrade 3D-CT image quality. If a doctor makes a radiotherapy plan from a low-quality CT image containing motion artifacts, the tumor may move off target during radiotherapy due to respiratory motion, so that it does not receive a sufficient radiation dose and the treatment effect suffers. Therefore, when making a radiotherapy plan for the chest or abdomen, a doctor generally first delineates a target area on the 3D-CT diagnostic image and then expands it to obtain the planning target volume (PTV), so that the tumor remains inside the target area during treatment. The margin of this expansion is determined by physician experience and patient statistics and is not patient-specific. However, tumor displacement varies from patient to patient, and the expansion causes unnecessary irradiation of normal tissue surrounding a lung tumor, which in severe cases may cause radiation pneumonitis. Clinicians control lung tumor displacement by external measures: (1) breath-hold techniques and (2) respiratory gating techniques.
Although these external methods overcome the deficiencies of 3D-CT imaging to some extent, they cannot truly solve the problem of real-time target motion. In recent years, a time variable has been added to CT image reconstruction, and 4D-CT techniques have emerged. During acquisition, 4D-CT attaches to the acquired projection data the time information indicating where the current moment falls within the breathing cycle (i.e., the respiratory phase). The projection data are then classified by phase according to the time information carried by each projection data set, and a reconstruction algorithm finally reconstructs a three-dimensional image from the projection data of each phase, producing a three-dimensional image sequence that changes over time, i.e., a 4D-CT sequence image. This technique provides a set of three-dimensional CT images, one for each respiratory phase, containing motion information; it reproduces the shape of thoracic and abdominal organs more faithfully and effectively eliminates motion artifacts in the images. Combining the 4D-CT technique with radiotherapy allows a personalized radiotherapy plan to be designed according to the motion characteristics of the patient's target area, which reduces the expansion margin of the target area, increases the dose delivered to the target area, and reduces toxic side effects on normal tissue, thereby improving radiotherapy precision.
Compared with conventional CT, the 4D-CT technique adds a one-dimensional time variable and can provide a set of three-dimensional images at different phases of the respiratory cycle. Clinically common 4D-CT mainly comprises two stages: an image acquisition stage and an image phase-grouping stage. During the CT scan, the CT machine is connected to an external respiration monitoring system; CT images and respiratory signals are acquired synchronously and exchanged over a data link, which provides a time label for CT image reconstruction. All CT tomographic images are then classified according to these time labels, i.e., the phase information of the synchronously acquired respiratory signal, to obtain a set of three-dimensional CT images at different respiratory phases containing motion information, i.e., the 4D-CT images. Currently, 4D-CT systems mainly acquire the patient's respiratory signal by the following methods:
1. measuring the respiration volume by a spirometer;
2. fixing an infrared marker on the body surface of a patient, and measuring the fluctuation movement of the body surface marker along with respiration by using an infrared camera to obtain a respiration signal;
3. an abdominal pressure band is fixed on the abdomen of a patient, the pressure difference caused by respiratory motion is measured, and the respiratory motion is represented by a pressure signal.
All three methods require fixing an additional device to the patient's body, which causes the patient some discomfort and imposes an extra learning cost on doctors; this is time-consuming and labor-intensive.
Disclosure of Invention
In order to solve the above technical problems, the invention discloses a binocular vision-based respiratory detection system. The whole system is integrated on the CT machine, no additional device needs to be attached to the patient, the doctor's operating steps are reduced, and treatment efficiency is improved.
In order to achieve the above purpose, the technical scheme of the invention provides a binocular vision-based respiratory detection system. The respiratory detection system is mounted on the CT gantry and monitors the respiratory motion of the patient's chest and abdomen to acquire the patient's respiratory phase in real time. It comprises a camera, a projector and a data processing module, and acquires the patient's breathing curve through the following steps: S1: acquiring body surface images of the patient in real time through the camera;
S2: reconstructing the three-dimensional contour of the body surface with a structured light reconstruction algorithm; S3: locating three-dimensional body surface motion feature points; S4: tracking the trajectories of the three-dimensional feature points in real time with a feature point tracking algorithm; S5: acquiring the patient's breathing curve from the motion trajectories of the feature points.
Further, in step S1, three-dimensional point cloud data of the patient's body surface are obtained by a structured light three-dimensional reconstruction algorithm. The phase-shift fringes used are 3-step phase-shift fringes; by symmetry, the fringes are shifted by 120° each time, so the three-step phase-shift method uses a total of 3 fringe images, whose intensities are:
I1(x,y) = I'(x,y) + I''(x,y)·cos[φ(x,y) − 2π/3]
I2(x,y) = I'(x,y) + I''(x,y)·cos[φ(x,y)]
I3(x,y) = I'(x,y) + I''(x,y)·cos[φ(x,y) + 2π/3]
where I'(x,y) is the average intensity at the point, I''(x,y) is the modulation intensity at the point, I1(x,y), I2(x,y) and I3(x,y) are the images captured by the camera, and φ(x,y) is the phase to be solved, which can be calculated by the standard three-step phase-shift relation:

φ(x,y) = arctan[√3·(I1(x,y) − I3(x,y)) / (2·I2(x,y) − I1(x,y) − I3(x,y))]
Finally, the corresponding code word is calculated as φ(x,y)/(2π) × p, where p is the width of the projector.
Further, in step S2, the three-dimensional points are calculated from the projection relations

u_c ≃ p_c·Q,    u_p ≃ p_p·Q    (equality up to a homogeneous scale factor)

where p_c is the transformation matrix obtained from camera calibration, p_p is the transformation matrix obtained from projector calibration, u_c = [u_cx, u_cy] is the corresponding camera pixel coordinate, u_p = [u_px, u_py] is the corresponding projector coordinate, and Q is the three-dimensional coordinate point. For the calculation of Q, a tensor-determinant method is introduced: a set of determinants involving the identity matrix I(k) is first evaluated, and the three-dimensional point cloud coordinate Q_k is then obtained from them (the determinant expressions and the closed form for Q are given only as equation images in the original).
Further, in step S3, motion feature points are extracted by the SIFT3D algorithm.
Further, the SIFT3D algorithm extracts feature points as follows:
(1) Generating a scale space: the convolution of the original image I(x, y) with the Gaussian kernel G(x, y, σ) is defined as the scale space L(x, y, σ) of the image:

L(x, y, σ) = G(x, y, σ) * I(x, y)

G(x, y, σ) = (1/(2πσ²))·exp[−(x² + y²)/(2σ²)]

where G(x, y, σ) is a variable-scale Gaussian function;
(2) Constructing a difference-of-Gaussians (DoG) function:

D(x, y, σ) = [G(x, y, kσ) − G(x, y, σ)] * I(x, y) = L(x, y, kσ) − L(x, y, σ)

where k is the scale factor between two adjacent scales;
(3) Detecting extreme points of the scale space: each point of the scale space is compared with its 26 adjacent points, and if the point is the largest or smallest in this 26-point neighborhood it is a feature point;
(4) Extremum point screening: points that do not meet the requirements are eliminated by curve fitting of the DoG function, leaving stable extremum points as the feature points;
(5) Determining the feature point direction: the gradient of each point in the neighborhood of a feature point is calculated, a histogram is used to accumulate the gradient directions and magnitudes, and the principal direction of the feature point is defined as the direction corresponding to the maximum of the histogram.
Further, in step S4, after the SIFT feature points of the three-dimensional point clouds at different moments are extracted, the Euclidean distances between feature points at different moments are calculated and the feature points with the smallest Euclidean distance are taken as matching points; the RANSAC algorithm is used to remove wrong matching pairs, and once the matching pairs at different moments are obtained, the motion trajectories of the feature points are obtained.
Further, in step S5, the breathing curve is obtained by calculating the variation of the mean Z-axis coordinate of all feature points, where the Z-axis is the direction perpendicular to the treatment couch; a respiratory cycle is determined from the maxima and minima of the breathing curve, and the whole cycle is divided into 8-10 phases, which gives the patient's respiratory phase.
The technical scheme of the invention also provides a 4D-CT image reconstruction system comprising a CT machine and the binocular vision-based respiration detection system described above. The CT machine acquires two-dimensional CT images by scanning, and the respiration detection system, mounted on the gantry of the CT machine, acquires the respiratory phase of the scanned patient. After the respiratory phases have been acquired, all two-dimensional CT images are grouped by respiratory phase to obtain a 3D-CT image for each respiratory phase; the 3D-CT images of all respiratory phases are the 4D-CT sequence images.
The technical scheme of the invention also provides a 4D-CT image reconstruction method that uses the above 4D-CT image reconstruction system and comprises: acquiring CT two-dimensional image data in real time through the CT machine; acquiring the respiratory phase information of the scanned patient through the binocular vision-based respiration detection system; adding the respiratory phase information to the CT two-dimensional image data; grouping all CT images according to respiratory phase; and obtaining the 4D-CT sequence images.
Drawings
FIG. 1 is a schematic installation diagram of a binocular vision system of the present invention;
FIG. 2 is a schematic diagram of a 4D-CT reconstruction procedure in accordance with the present invention;
FIG. 3 is a schematic representation of the binocular vision system of the present invention extracting breathing curves;
FIG. 4 is a schematic diagram of the binocular vision acquisition and reconstruction results of the chest and abdomen of a human body according to the present invention;
FIG. 5 is a schematic illustration of a breathing curve of a patient;
FIG. 6 is a schematic representation of a complete acquisition of a 4D-CT of the present invention.
Detailed Description
The technical scheme of the present invention will be further described with reference to specific examples, but the present invention is not limited to these examples.
The invention provides a scheme for realizing 4D-CT based on binocular vision. A binocular vision system is mounted on the CT gantry, as shown in FIG. 1, and monitors the respiratory motion of the patient's chest and abdomen, so that the patient's respiratory phase can be obtained in real time (generally, one respiratory cycle is divided into 8-10 phases). After CT obtains a two-dimensional image, the binocular vision system provides the current respiratory phase for that CT image. Once the patient scan ends, all two-dimensional images are grouped according to respiratory phase to obtain the 3D-CT image of each respiratory phase; the 3D-CT images of all respiratory phases are the 4D-CT sequence images. The whole flow is shown in FIG. 2.
The binocular vision system consists of a camera, a projector and a data processing module. The camera may be a visible-light camera or an infrared camera, and the projector a visible-light or infrared projector matching the camera. Three-dimensional point cloud data of the patient's body surface can be acquired through a structured light three-dimensional reconstruction algorithm. The phase-shift fringes used are 3-step phase-shift fringes; by symmetry, the fringes are shifted by 120° each time, so the three-step phase-shift method uses a total of 3 fringe images, whose intensities are:
I1(x,y) = I'(x,y) + I''(x,y)·cos[φ(x,y) − 2π/3]
I2(x,y) = I'(x,y) + I''(x,y)·cos[φ(x,y)]
I3(x,y) = I'(x,y) + I''(x,y)·cos[φ(x,y) + 2π/3]
where I'(x,y) is the average intensity at the point, I''(x,y) is the modulation intensity at the point, I1(x,y), I2(x,y) and I3(x,y) are the images captured by the camera, and φ(x,y) is the phase to be solved. It can be calculated from the standard three-step phase-shift relation:

φ(x,y) = arctan[√3·(I1(x,y) − I3(x,y)) / (2·I2(x,y) − I1(x,y) − I3(x,y))]
Finally, the corresponding code word is calculated as φ(x,y)/(2π) × p, where p is the width of the projector.
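As a concrete illustration, a minimal Python sketch of this three-step phase-shift decoding is given below. It assumes the three fringe images are grayscale NumPy arrays; the function name and the floating-point handling are illustrative assumptions, not taken from the patent.

```python
import numpy as np

def decode_three_step(I1, I2, I3, projector_width):
    """Recover the wrapped phase phi(x, y) from three fringe images
    shifted by -120, 0 and +120 degrees, then map the phase to a
    projector column (the code word phi / (2*pi) * p)."""
    I1, I2, I3 = (np.asarray(I, dtype=float) for I in (I1, I2, I3))
    # standard three-step relation: tan(phi) = sqrt(3)(I1 - I3) / (2 I2 - I1 - I3)
    phi = np.arctan2(np.sqrt(3.0) * (I1 - I3), 2.0 * I2 - I1 - I3)
    phi = np.mod(phi, 2.0 * np.pi)                  # wrap into [0, 2*pi)
    code = phi / (2.0 * np.pi) * projector_width    # code word per pixel
    return phi, code
```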
The speckle pattern used determines the corresponding speckle position by means of a 4×4 speckle neighborhood; this coding approach is described, for example, in the document Apple Stem-end/Calyx Identification using a Speckle-array Encoding Pattern and in international patent EP 2207127 B1.
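The patent does not spell out the matching step, but a common way to use such a speckle code is to slide a small window along the corresponding projector row and pick the column with the highest normalized cross-correlation. The Python sketch below illustrates that idea under stated assumptions: the 4×4 window size comes from the text, while the search strategy and all names are hypothetical.

```python
import numpy as np

def match_speckle_column(cam_patch, proj_row_strip):
    """Find the projector column whose 4x4 speckle window best matches
    a 4x4 camera patch, by normalized cross-correlation along one row
    strip of the projected speckle pattern. Illustrative only."""
    c = (cam_patch - cam_patch.mean()) / (cam_patch.std() + 1e-9)
    best_col, best_score = -1, -np.inf
    for col in range(proj_row_strip.shape[1] - 4 + 1):
        w = proj_row_strip[:, col:col + 4]
        w = (w - w.mean()) / (w.std() + 1e-9)
        score = float(np.sum(c * w))                # correlation score
        if score > best_score:
            best_score, best_col = score, col
    return best_col
```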
The real-time three-dimensional reconstruction system in FIG. 3 adopts a multithreaded, synchronized design with four threads: a main thread, an acquisition thread, a decoding thread and a reconstruction thread. The main thread controls the switching of the other threads and the display of the point cloud and the real-time tracking curve; the acquisition thread synchronously captures the visible and infrared pictures; the decoding thread computes the corresponding code words from the captured phase-shift and speckle pictures; and the reconstruction thread calculates the three-dimensional coordinates using the triangulation principle.
The three-dimensional points in FIG. 3 are calculated from the projection relations

u_c ≃ p_c·Q,    u_p ≃ p_p·Q    (equality up to a homogeneous scale factor)

where p_c is the transformation matrix obtained from camera calibration, p_p is the transformation matrix obtained from projector calibration, u_c = [u_cx, u_cy] is the corresponding camera pixel coordinate, u_p = [u_px, u_py] is the corresponding projector coordinate, and Q is the three-dimensional coordinate point. For the calculation of Q, a tensor-determinant method is introduced: a set of determinants involving the identity matrix I(k) is first evaluated, and the three-dimensional point cloud coordinate Q_k is then obtained from them (the determinant expressions and the closed form for Q are given only as equation images in the original).
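For intuition, the same geometry can be solved by ordinary linear least squares instead of the determinant formulation. The Python sketch below assumes 3×4 calibration matrices p_c and p_p and uses only the projector column u_px as the structured-light constraint; it is a standard textbook triangulation equivalent, not the patent's exact derivation.

```python
import numpy as np

def triangulate_point(Pc, Pp, ucx, ucy, upx):
    """Triangulate the 3D point Q from a camera pixel (ucx, ucy) and a
    matched projector column upx, given 3x4 calibration matrices Pc
    (camera) and Pp (projector). Solves A q = 0 for homogeneous q by
    SVD, a standard equivalent of determinant-based solutions."""
    A = np.stack([
        ucx * Pc[2] - Pc[0],   # camera x constraint
        ucy * Pc[2] - Pc[1],   # camera y constraint
        upx * Pp[2] - Pp[0],   # projector column constraint
    ])
    _, _, vt = np.linalg.svd(A)
    q = vt[-1]
    return q[:3] / q[3]        # de-homogenize to (X, Y, Z)
```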
After the three-dimensional point cloud coordinates of the patient's body surface are obtained, feature points of the body surface need to be extracted. Three-dimensional point cloud features can be divided into single-point features, local features and global features. Single-point features are a basic representation of the geometry around a point; because they use only a few parameter values to approximate the geometry of a point's k-neighborhood, they cannot carry much information, and the respiratory information finally extracted from them is easily affected by noise and is not accurate enough. The invention therefore uses the SIFT3D algorithm, a local feature, to extract feature points of the patient's body surface; other common local features, such as PFH, FPFH, SHOT, C-SHOT, RSD and ESF descriptors, 3D shape descriptors and spectral features, could be used in place of the SIFT3D feature.
The SIFT3D algorithm extracts local features as follows:
1. Generating a scale space
The convolution of the original image I(x, y) with the Gaussian kernel G(x, y, σ) is defined as the scale space L(x, y, σ) of the image:

L(x, y, σ) = G(x, y, σ) * I(x, y)

G(x, y, σ) = (1/(2πσ²))·exp[−(x² + y²)/(2σ²)]

where G(x, y, σ) is a variable-scale Gaussian function.
2. Constructing the difference-of-Gaussians function (DoG, Difference of Gaussians)

D(x, y, σ) = [G(x, y, kσ) − G(x, y, σ)] * I(x, y) = L(x, y, kσ) − L(x, y, σ)

where k is the scale factor between two adjacent scales.
3. Detecting extreme points of the scale space
Each point of the scale space is compared with its 26 adjacent points (the 8 neighbors at the same scale and the 9×2 points at the adjacent scales above and below); if the point is the largest or smallest in this 26-point neighborhood, it is a feature point.
4. Extremum point screening
Points that do not meet the requirements are eliminated by curve fitting of the DoG function, leaving stable extremum points as the feature points.
5. Determining the feature point direction
The gradient of every point in the neighborhood of a feature point is calculated, a histogram is used to accumulate the gradient directions and magnitudes, and the principal direction of the feature point is defined as the direction corresponding to the maximum of the histogram.
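To make steps 1-3 concrete, here is a small Python sketch that builds a Gaussian scale space, forms the DoG stack, and flags 26-neighborhood extrema on a 2D image. The sigma schedule and scale count are conventional SIFT defaults and are assumptions, since the patent does not state them.

```python
import numpy as np
from scipy.ndimage import gaussian_filter

def dog_extrema(image, sigma0=1.6, k=2 ** 0.5, n_scales=5):
    """Scale space L(x, y, sigma) by Gaussian blurring, DoG stack
    D = L(k*sigma) - L(sigma), then keep points that are the max or
    min of their 26 neighbors (8 in-scale + 9 above + 9 below)."""
    L = np.stack([gaussian_filter(image.astype(float), sigma0 * k ** i)
                  for i in range(n_scales)])
    D = L[1:] - L[:-1]                     # difference-of-Gaussians stack
    keypoints = []
    for s in range(1, D.shape[0] - 1):
        for y in range(1, D.shape[1] - 1):
            for x in range(1, D.shape[2] - 1):
                cube = D[s - 1:s + 2, y - 1:y + 2, x - 1:x + 2]
                v = D[s, y, x]
                if v == cube.max() or v == cube.min():
                    keypoints.append((x, y, s))
    return keypoints
```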
When the patient breathes, three-dimensional point cloud data of the body surface are obtained in real time through the structured light reconstruction algorithm. SIFT feature points of the point clouds at different moments are extracted, the Euclidean distances between feature points at different moments are calculated, and the feature points with the smallest Euclidean distance are taken as matching points; the RANSAC algorithm is used to remove wrong matching pairs. Once the matching pairs at different moments are obtained, the motion trajectories of the feature points can be derived. Because the up-and-down motion of the chest and abdomen is most pronounced during breathing, the variation of the mean Z-axis coordinate (the axis perpendicular to the treatment couch) of all feature points is calculated to obtain the breathing curve (as shown in FIG. 5). A respiratory cycle is determined from the maxima and minima of the breathing curve, and the whole cycle is divided into 8-10 phases; the specific working procedure is shown in FIG. 3, and a tracking sketch is given below.
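A minimal Python sketch of this tracking step follows. The translation-only RANSAC model, the threshold and the iteration count are assumptions (the patent names RANSAC but not its motion model); the function returns one breathing-curve sample as the mean Z of the inlier matches.

```python
import numpy as np

def track_breathing(prev_pts, curr_pts, n_iter=100, tol=2.0):
    """Match 3D feature points between two frames by nearest Euclidean
    distance, reject outlier pairs with a translation-only RANSAC, and
    return the mean Z of the inlier points (Z is the axis perpendicular
    to the treatment couch) as one sample of the breathing curve."""
    # nearest-neighbour matching on 3D coordinates
    d = np.linalg.norm(prev_pts[:, None, :] - curr_pts[None, :, :], axis=2)
    nn = d.argmin(axis=1)
    pairs = np.stack([prev_pts, curr_pts[nn]], axis=1)   # (N, 2, 3)
    best_inliers = np.zeros(len(pairs), dtype=bool)
    rng = np.random.default_rng(0)
    for _ in range(n_iter):
        i = rng.integers(len(pairs))
        t = pairs[i, 1] - pairs[i, 0]                    # candidate shift
        resid = np.linalg.norm(pairs[:, 1] - (pairs[:, 0] + t), axis=1)
        inliers = resid < tol
        if inliers.sum() > best_inliers.sum():
            best_inliers = inliers
    return pairs[best_inliers, 1, 2].mean()              # mean Z this frame
```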
The structured light image data of the patient's body surface are acquired in real time by the camera-projector reconstruction system, as shown in FIG. 4, and processed as follows:
1. Obtaining body surface three-dimensional contour information by utilizing a structured light three-dimensional reconstruction algorithm;
2. positioning three-dimensional motion feature points of the body surface;
3. tracking the track of the three-dimensional characteristic points of the body surface in real time by utilizing a characteristic point tracking algorithm;
4. acquiring the patient's respiratory motion curve from the motion trajectories of the feature points, as shown in FIG. 5.
The breathing curve is similar to a sinusoidal signal, and common clinical practice is to divide one respiratory cycle into 8 to 10 phases. Using the binocular vision system, the patient's respiratory phase can be obtained in real time, so once CT acquires a two-dimensional image, the respiratory phase at which that image was acquired is known. A patient scan can span several respiratory cycles, and each cycle contains several two-dimensional images. Grouping the two-dimensional images by respiratory phase yields 8 to 10 groups of 3D-CT data, i.e., the 4D-CT sequence images; a complete acquisition schematic is shown in FIG. 6.
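As an illustration of this grouping, the Python sketch below assigns each CT slice a phase index from the breathing curve and collects slice indices per phase. The peak-detection rule and all names are hypothetical simplifications of the retrospective sorting described above.

```python
import numpy as np

def sort_into_phases(breathing, slice_times, n_phases=10):
    """Phase-sort CT slices: find breathing-curve peaks to delimit
    cycles, assign each slice acquisition time a phase index in
    [0, n_phases), and group slice indices by phase. `breathing` is
    a pair (t, amplitude) of equal-length sample arrays."""
    t, a = breathing
    peaks = [i for i in range(1, len(a) - 1)
             if a[i] >= a[i - 1] and a[i] > a[i + 1]]
    groups = {ph: [] for ph in range(n_phases)}
    for s, ts in enumerate(slice_times):
        # locate the cycle [t[p0], t[p1]) containing this slice
        for p0, p1 in zip(peaks[:-1], peaks[1:]):
            if t[p0] <= ts < t[p1]:
                frac = (ts - t[p0]) / (t[p1] - t[p0])
                groups[int(frac * n_phases)].append(s)
                break
    return groups   # one 3D-CT stack per phase -> the 4D-CT sequence
```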
Compared with the prior art, the invention has the following advantages:
1. the respiratory phase can be obtained directly from the patient's body surface without additional markers, so the operation is simpler;
2. 3D point cloud imaging is adopted and the respiratory phase is derived jointly from multiple feature points, so robustness and accuracy are higher.
In a first embodiment of the present invention, there is provided a binocular vision-based respiration detection system mounted on a CT gantry for monitoring respiratory motion of the patient's chest and abdomen to acquire the patient's respiratory phase in real time. The respiration detection system comprises a camera, a projector and a data processing module, and acquires the patient's breathing curve by the following steps: S1: acquiring body surface images of the patient in real time through the camera; S2: reconstructing the three-dimensional contour of the body surface with a structured light reconstruction algorithm; S3: locating three-dimensional body surface motion feature points; S4: tracking the trajectories of the three-dimensional feature points in real time with a feature point tracking algorithm; S5: acquiring the patient's breathing curve from the motion trajectories of the feature points.
Further, in step S1, three-dimensional point cloud data of the patient's body surface are obtained by a structured light three-dimensional reconstruction algorithm. The phase-shift fringes used are 3-step phase-shift fringes; by symmetry, the fringes are shifted by 120° each time, so the three-step phase-shift method uses a total of 3 fringe images, whose intensities are:
I1(x,y) = I'(x,y) + I''(x,y)·cos[φ(x,y) − 2π/3]
I2(x,y) = I'(x,y) + I''(x,y)·cos[φ(x,y)]
I3(x,y) = I'(x,y) + I''(x,y)·cos[φ(x,y) + 2π/3]
where I'(x,y) is the average intensity at the point, I''(x,y) is the modulation intensity at the point, I1(x,y), I2(x,y) and I3(x,y) are the images captured by the camera, and φ(x,y) is the phase to be solved, which can be calculated by the standard three-step phase-shift relation:

φ(x,y) = arctan[√3·(I1(x,y) − I3(x,y)) / (2·I2(x,y) − I1(x,y) − I3(x,y))]
Finally, the corresponding code word is calculated as φ(x,y)/(2π) × p, where p is the width of the projector.
Further, in step S2, the three-dimensional points are calculated from the projection relations

u_c ≃ p_c·Q,    u_p ≃ p_p·Q    (equality up to a homogeneous scale factor)

where p_c is the transformation matrix obtained from camera calibration, p_p is the transformation matrix obtained from projector calibration, u_c = [u_cx, u_cy] is the corresponding camera pixel coordinate, u_p = [u_px, u_py] is the corresponding projector coordinate, and Q is the three-dimensional coordinate point. For the calculation of Q, a tensor-determinant method is introduced: a set of determinants involving the identity matrix I(k) is first evaluated, and the three-dimensional point cloud coordinate Q_k is then obtained from them (the determinant expressions and the closed form for Q are given only as equation images in the original).
Further, in step S3, motion feature points are extracted by the SIFT3D algorithm.
Further, the SIFT3D algorithm extracts feature points as follows:
(1) Generating a scale space: the convolution of the original image I(x, y) with the Gaussian kernel G(x, y, σ) is defined as the scale space L(x, y, σ) of the image:

L(x, y, σ) = G(x, y, σ) * I(x, y)

G(x, y, σ) = (1/(2πσ²))·exp[−(x² + y²)/(2σ²)]

where G(x, y, σ) is a variable-scale Gaussian function;
(2) Constructing a difference-of-Gaussians (DoG) function:

D(x, y, σ) = [G(x, y, kσ) − G(x, y, σ)] * I(x, y) = L(x, y, kσ) − L(x, y, σ)

where k is the scale factor between two adjacent scales;
(3) Detecting extreme points of the scale space: each point of the scale space is compared with its 26 adjacent points, and if the point is the largest or smallest in this 26-point neighborhood it is a feature point;
(4) Extremum point screening: points that do not meet the requirements are eliminated by curve fitting of the DoG function, leaving stable extremum points as the feature points;
(5) Determining the feature point direction: the gradient of each point in the neighborhood of a feature point is calculated, a histogram is used to accumulate the gradient directions and magnitudes, and the principal direction of the feature point is defined as the direction corresponding to the maximum of the histogram.
Further, in step S4, after the SIFT feature points of the three-dimensional point clouds at different moments are extracted, the Euclidean distances between feature points at different moments are calculated and the feature points with the smallest Euclidean distance are taken as matching points; the RANSAC algorithm is used to remove wrong matching pairs, and once the matching pairs at different moments are obtained, the motion trajectories of the feature points are obtained.
Further, in step S5, the breathing curve is obtained by calculating the variation of the mean Z-axis coordinate of all feature points, where the Z-axis is the direction perpendicular to the treatment couch; a respiratory cycle is determined from the maxima and minima of the breathing curve, and the whole cycle is divided into 8-10 phases, which gives the patient's respiratory phase.
In another embodiment of the present invention, there is further provided a 4D-CT image reconstruction system including a CT machine and the binocular vision-based respiration detection system described above. The CT machine acquires two-dimensional CT images through scanning, and the respiration detection system, mounted on the gantry of the CT machine, acquires the respiratory phases of the scanned patient. After the respiratory phases of the scanned patient are acquired, all two-dimensional CT images are grouped according to respiratory phase to obtain a 3D-CT image of each respiratory phase; the 3D-CT images of all respiratory phases are the 4D-CT sequence images.
In other embodiments of the present invention, there is also provided a 4D-CT image reconstruction method using the 4D-CT image reconstruction system described above. The method comprises: acquiring CT two-dimensional image data in real time through the CT machine; acquiring the respiratory phase information of the scanned patient through the binocular vision-based respiration detection system; adding the respiratory phase information to the CT two-dimensional image data; grouping all CT images according to respiratory phase; and obtaining the 4D-CT sequence images.
The foregoing is merely a preferred embodiment of the present invention. It should be noted that those skilled in the art may make modifications and improvements without departing from the inventive concept, and such modifications and improvements fall within the scope of the present invention.

Claims (9)

1. A binocular vision-based respiratory detection system, characterized in that the respiratory detection system is mounted on a CT gantry for monitoring respiratory motion of a patient's chest and abdomen to acquire the patient's respiratory phase in real time, the respiratory detection system comprising a camera, a projector and a data processing module, wherein the respiratory detection system acquires the patient's breathing curve through the following steps:
s1: acquiring a body surface image of a patient in real time through a camera;
s2: reconstructing a body surface three-dimensional contour by using a structured light reconstruction algorithm;
s3: positioning three-dimensional body surface movement feature points;
s4: tracking the track of the three-dimensional characteristic points of the body surface in real time by utilizing a characteristic point tracking algorithm;
s5: acquiring the patient's breathing curve from the motion trajectories of the feature points.
2. The system of claim 1, wherein in step S1, three-dimensional point cloud data of the patient's body surface are obtained by a structured light three-dimensional reconstruction algorithm, the phase-shift fringes used are 3-step phase-shift fringes, by symmetry the fringes are shifted by 120° each time, and a total of 3 fringe images are used in the three-step phase-shift method, the intensities of the fringes being:
I1(x,y) = I'(x,y) + I''(x,y)·cos[φ(x,y) − 2π/3]
I2(x,y) = I'(x,y) + I''(x,y)·cos[φ(x,y)]
I3(x,y) = I'(x,y) + I''(x,y)·cos[φ(x,y) + 2π/3]
where I'(x,y) is the average intensity at the point, I''(x,y) is the modulation intensity at the point, I1(x,y), I2(x,y) and I3(x,y) are the images captured by the camera, and φ(x,y) is the phase to be solved, which can be calculated by the standard three-step phase-shift relation:

φ(x,y) = arctan[√3·(I1(x,y) − I3(x,y)) / (2·I2(x,y) − I1(x,y) − I3(x,y))]
and finally, the corresponding code word is calculated as φ(x,y)/(2π) × p, where p is the width of the projector.
3. The system according to claim 2, wherein in step S2 the three-dimensional points are calculated from the projection relations

u_c ≃ p_c·Q,    u_p ≃ p_p·Q    (equality up to a homogeneous scale factor)

where p_c is the transformation matrix obtained from camera calibration, p_p is the transformation matrix obtained from projector calibration, u_c = [u_cx, u_cy] is the corresponding camera pixel coordinate, u_p = [u_px, u_py] is the corresponding projector coordinate, and Q is the three-dimensional coordinate point; for the calculation of Q, a tensor-determinant method is introduced: a set of determinants involving the identity matrix I(k) is first evaluated, and the three-dimensional point cloud coordinate Q_k is then obtained from them (the determinant expressions and the closed form for Q are given only as equation images in the original).
4. The system according to claim 3, wherein in step S3 the motion feature points are extracted by the SIFT3D algorithm.
5. The system of claim 4, wherein the SIFT3D algorithm extracts feature points as follows:
(1) generating a scale space: the convolution of the original image I(x, y) with the Gaussian kernel G(x, y, σ) is defined as the scale space L(x, y, σ) of the image:

L(x, y, σ) = G(x, y, σ) * I(x, y)

G(x, y, σ) = (1/(2πσ²))·exp[−(x² + y²)/(2σ²)]

where G(x, y, σ) is a variable-scale Gaussian function;
(2) constructing a difference-of-Gaussians (DoG) function:

D(x, y, σ) = [G(x, y, kσ) − G(x, y, σ)] * I(x, y) = L(x, y, kσ) − L(x, y, σ)

where k is the scale factor between two adjacent scales;
(3) detecting extreme points of the scale space: each point of the scale space is compared with its 26 adjacent points, and if the point is the largest or smallest in this 26-point neighborhood it is a feature point;
(4) extremum point screening: points that do not meet the requirements are eliminated by curve fitting of the DoG function, leaving stable extremum points as the feature points;
(5) determining the feature point direction: the gradient of each point in the neighborhood of a feature point is calculated, a histogram is used to accumulate the gradient directions and magnitudes, and the principal direction of the feature point is defined as the direction corresponding to the maximum of the histogram.
6. The system according to claim 5, wherein in step S4, after the SIFT feature points of the three-dimensional point clouds at different moments are extracted, the Euclidean distances between the feature points at different moments are calculated and the feature points with the smallest Euclidean distance are taken as matching points; the RANSAC algorithm is used to remove wrong matching pairs, and the motion trajectories of the feature points are obtained once the matching pairs at different moments are obtained.
7. The system according to claim 6, wherein in step S5 the breathing curve is obtained by calculating the variation of the mean Z-axis coordinate of all feature points, the Z-axis being the direction perpendicular to the treatment couch; one respiratory cycle is determined from the maxima and minima of the breathing curve, and the whole respiratory cycle is divided into 8-10 phases to obtain the patient's respiratory phase.
8. A 4D-CT image reconstruction system comprising a CT machine for acquiring a two-dimensional CT image by scanning, and a binocular vision-based respiration detection system as claimed in any one of claims 1 to 7 for acquiring a respiratory phase of a scanned patient, the binocular vision-based respiration detection system being mounted on a gantry of the CT machine, wherein,
after the respiratory phase of the scanned patient is acquired through the binocular vision-based respiratory detection system, all the two-dimensional CT images are grouped according to the respiratory phase, so that a 3D-CT image of each respiratory phase is obtained, and the 3D-CT images of all the respiratory phases are 4D-CT sequence images.
9. A 4D-CT image reconstruction method, wherein the 4D-CT image reconstruction system of claim 8 is used, the method comprising:
acquiring CT two-dimensional image data in real time through a CT machine;
acquiring respiratory phase information of a scanned patient through a binocular vision-based respiratory detection system;
adding respiratory phase information to the CT two-dimensional image data;
grouping all CT images according to respiratory phases;
4D-CT sequence images are obtained.
CN202310059400.9A 2023-01-14 2023-01-14 Binocular vision-based respiratory detection system, 4D-CT image reconstruction system and method Pending CN116128838A (en)

Priority Applications (1)

Application Number Priority Date Filing Date Title
CN202310059400.9A CN116128838A (en) 2023-01-14 2023-01-14 Binocular vision-based respiratory detection system, 4D-CT image reconstruction system and method

Applications Claiming Priority (1)

Application Number Priority Date Filing Date Title
CN202310059400.9A CN116128838A (en) 2023-01-14 2023-01-14 Binocular vision-based respiratory detection system, 4D-CT image reconstruction system and method

Publications (1)

Publication Number Publication Date
CN116128838A 2023-05-16

Family

ID=86304309

Family Applications (1)

Application Number Title Priority Date Filing Date
CN202310059400.9A Pending CN116128838A (en) 2023-01-14 2023-01-14 Binocular vision-based respiratory detection system, 4D-CT image reconstruction system and method

Country Status (1)

Country Link
CN (1) CN116128838A (en)


Cited By (2)

* Cited by examiner, † Cited by third party
Publication number Priority date Publication date Assignee Title
CN117558428A (en) * 2024-01-12 2024-02-13 华中科技大学同济医学院附属同济医院 Imaging optimization method and system for liver MRI
CN117558428B (en) * 2024-01-12 2024-03-22 华中科技大学同济医学院附属同济医院 Imaging optimization method and system for liver MRI


Legal Events

Date Code Title Description
PB01 Publication
SE01 Entry into force of request for substantive examination