CN110680371B - Human body internal and external structure imaging method and device based on structured light and CT - Google Patents


Info

Publication number
CN110680371B
CN110680371B (application CN201910998038.5A)
Authority
CN
China
Prior art keywords
imaging
data
image
structured light
human body
Prior art date
Legal status (The legal status is an assumption and is not a legal conclusion. Google has not performed a legal analysis and makes no representation as to the accuracy of the status listed.)
Active
Application number
CN201910998038.5A
Other languages
Chinese (zh)
Other versions
CN110680371A (en)
Inventor
王国平
郭彦彬
刘迎宾
Current Assignee (The listed assignees may be inaccurate. Google has not performed a legal analysis and makes no representation or warranty as to the accuracy of the list.)
Huazhong University of Science and Technology
Original Assignee
Huazhong University of Science and Technology
Priority date (The priority date is an assumption and is not a legal conclusion. Google has not performed a legal analysis and makes no representation as to the accuracy of the date listed.)
Filing date
Publication date
Application filed by Huazhong University of Science and Technology filed Critical Huazhong University of Science and Technology
Priority to CN201910998038.5A priority Critical patent/CN110680371B/en
Publication of CN110680371A publication Critical patent/CN110680371A/en
Application granted granted Critical
Publication of CN110680371B publication Critical patent/CN110680371B/en
Active legal-status Critical Current
Anticipated expiration legal-status Critical

Classifications

    • A: HUMAN NECESSITIES
    • A61: MEDICAL OR VETERINARY SCIENCE; HYGIENE
    • A61B: DIAGNOSIS; SURGERY; IDENTIFICATION
    • A61B6/00: Apparatus or devices for radiation diagnosis; Apparatus or devices for radiation diagnosis combined with radiation therapy equipment
    • A61B6/02: Arrangements for diagnosis sequentially in different planes; Stereoscopic radiation diagnosis
    • A61B6/03: Computed tomography [CT]
    • A61B6/032: Transmission computed tomography [CT]
    • A: HUMAN NECESSITIES
    • A61: MEDICAL OR VETERINARY SCIENCE; HYGIENE
    • A61B: DIAGNOSIS; SURGERY; IDENTIFICATION
    • A61B5/00: Measuring for diagnostic purposes; Identification of persons
    • A61B5/0033: Features or image-related aspects of imaging apparatus classified in A61B5/00, e.g. for MRI, optical tomography or impedance tomography apparatus; arrangements of imaging apparatus in a room
    • A61B5/0035: Features or image-related aspects of imaging apparatus classified in A61B5/00, e.g. for MRI, optical tomography or impedance tomography apparatus; arrangements of imaging apparatus in a room adapted for acquisition of images from more than one imaging mode, e.g. combining MRI and optical tomography
    • A: HUMAN NECESSITIES
    • A61: MEDICAL OR VETERINARY SCIENCE; HYGIENE
    • A61B: DIAGNOSIS; SURGERY; IDENTIFICATION
    • A61B5/00: Measuring for diagnostic purposes; Identification of persons
    • A61B5/0059: Measuring for diagnostic purposes; Identification of persons using light, e.g. diagnosis by transillumination, diascopy, fluorescence
    • A61B5/0062: Arrangements for scanning
    • A61B5/0064: Body surface scanning
    • A: HUMAN NECESSITIES
    • A61: MEDICAL OR VETERINARY SCIENCE; HYGIENE
    • A61B: DIAGNOSIS; SURGERY; IDENTIFICATION
    • A61B5/00: Measuring for diagnostic purposes; Identification of persons
    • A61B5/0059: Measuring for diagnostic purposes; Identification of persons using light, e.g. diagnosis by transillumination, diascopy, fluorescence
    • A61B5/0077: Devices for viewing the surface of the body, e.g. camera, magnifying lens
    • A: HUMAN NECESSITIES
    • A61: MEDICAL OR VETERINARY SCIENCE; HYGIENE
    • A61B: DIAGNOSIS; SURGERY; IDENTIFICATION
    • A61B6/00: Apparatus or devices for radiation diagnosis; Apparatus or devices for radiation diagnosis combined with radiation therapy equipment
    • A61B6/44: Constructional features of apparatus for radiation diagnosis
    • A61B6/4417: Constructional features of apparatus for radiation diagnosis related to combined acquisition of different diagnostic modalities
    • A: HUMAN NECESSITIES
    • A61: MEDICAL OR VETERINARY SCIENCE; HYGIENE
    • A61B: DIAGNOSIS; SURGERY; IDENTIFICATION
    • A61B6/00: Apparatus or devices for radiation diagnosis; Apparatus or devices for radiation diagnosis combined with radiation therapy equipment
    • A61B6/52: Devices using data or image processing specially adapted for radiation diagnosis
    • A61B6/5211: Devices using data or image processing specially adapted for radiation diagnosis involving processing of medical diagnostic data
    • A61B6/5229: Devices using data or image processing specially adapted for radiation diagnosis involving processing of medical diagnostic data combining image data of a patient, e.g. combining a functional image with an anatomical image
    • A61B6/5247: Devices using data or image processing specially adapted for radiation diagnosis involving processing of medical diagnostic data combining image data of a patient, e.g. combining images from an ionising-radiation diagnostic technique and a non-ionising radiation diagnostic technique, e.g. X-ray and ultrasound

Landscapes

  • Health & Medical Sciences (AREA)
  • Life Sciences & Earth Sciences (AREA)
  • Engineering & Computer Science (AREA)
  • Medical Informatics (AREA)
  • Pathology (AREA)
  • General Health & Medical Sciences (AREA)
  • Veterinary Medicine (AREA)
  • Physics & Mathematics (AREA)
  • Nuclear Medicine, Radiotherapy & Molecular Imaging (AREA)
  • Public Health (AREA)
  • Biophysics (AREA)
  • Animal Behavior & Ethology (AREA)
  • Biomedical Technology (AREA)
  • Heart & Thoracic Surgery (AREA)
  • Molecular Biology (AREA)
  • Surgery (AREA)
  • Radiology & Medical Imaging (AREA)
  • Optics & Photonics (AREA)
  • High Energy & Nuclear Physics (AREA)
  • Computer Vision & Pattern Recognition (AREA)
  • Pulmonology (AREA)
  • Theoretical Computer Science (AREA)
  • Image Processing (AREA)

Abstract

The invention relates to the technical field of 3D imaging, and provides a method and device for imaging the internal and external structures of the human body based on structured light and CT. The method comprises: placing the object to be imaged on a rotating table and fixing it; starting the structured light scanning device and the CT scanning device, where the structured light device has two sets of acquisition assemblies whose acquisition points are located on the same side and the opposite side of the spiral CT, respectively; recording the body contour data acquired by the structured light device as first imaging data; imaging each slice of the body and recording second imaging data; and generating, from the first and second imaging data, complete information on the object to be imaged containing both external and internal structure information. The invention remedies the shortcomings of combined structured light and spiral CT imaging of the human body: processing is fast enough for real-time imaging, the later data-fusion matching step does not require many data features, and the imaging workflow is simplified.

Description

Human body internal and external structure imaging method and device based on structured light and CT
[Technical Field]
The invention relates to the technical field of 3D imaging, in particular to a human body internal and external structure imaging method and device based on structured light and CT.
[Background of the Invention]
CT differs from ordinary X-ray imaging: an X-ray beam scans a layer of the body to obtain information that a computer processes into a reconstructed image, so it is digital rather than analog imaging, and it opened the way to digital medical imaging. The density resolution of a CT tomographic image is clearly superior to that of a plain X-ray image, so anatomical structures and their pathological changes that a plain film cannot show can be visualized; this markedly expands the examination range of the human body and improves both the detection rate of lesions and diagnostic accuracy. As the first digital imaging modality, CT greatly promoted the development of medical imaging; after CT came new digital modalities such as MRI and ECT, transforming imaging technology.
In addition, multi-region registration based on structured light scanning has been studied experimentally for three-dimensional navigation in spinal surgery; although that work addresses spinal treatment, its image reconstruction is complex, many types of data must be measured, and imaging cannot be delivered promptly during an operation.
In the calibration of structured light three-dimensional measurement against CT scanning of maxillofacial defects, the technical defect lies in the poor placement of the structured light camera: its angle must be adjusted continually by hand, so automation is poor.
In view of the above, overcoming the drawbacks of the prior art is an urgent problem in the art.
[Summary of the Invention]
The invention aims to solve the problem of calibrating structured light three-dimensional measurement against CT scanning, whose technical defect is that the structured light camera is poorly positioned and its angle must be adjusted continually, giving poor automation.
The invention further addresses the problems that, when imaging the external structure of an object with structured light, image reconstruction is highly complex, many types of data must be measured, and imaging cannot be achieved in time during an operation.
The invention adopts the following technical scheme:
In a first aspect, the present invention provides a method for imaging the internal and external structures of a human body based on structured light and CT, using a structured light device, a CT scanning device, and a rotating platform located in the acquisition area of both devices. The method comprises:
placing an object to be imaged on a rotating table and fixing;
starting the structured light scanning equipment and the CT scanning equipment; the structured light equipment is provided with two sets of acquisition assemblies, and acquisition points of the two sets of acquisition assemblies are respectively positioned on the same side and the opposite surface side of the spiral CT;
recording first imaging data of the human body contour data acquired by the structured light equipment; imaging each slice of the human body, and recording second imaging data;
and generating complete information of the object to be imaged, which contains external structure information and internal structure information, according to the first imaging data and the second imaging data.
Preferably, for the structured light scanning process, the relevant parameters in the data model of the corresponding system platform are calibrated in advance to obtain calibration data; recording the body contour data collected by the structured light device as first imaging data specifically comprises:
acquiring, through the camera, a frame image of the laser line projected onto the object to be scanned, and converting the frame image into a grayscale image;
computing the gray centroid of the one or more corresponding laser scanning points in each frame image;
computing the three-dimensional coordinate of each gray centroid from its pixel distance in the image and the calibration data, and recording it as first imaging data; where the pixel distance is the distance between the laser's mapped position in the image, as generated by the data model, and the corresponding laser scanning point acquired in the image.
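The gray-centroid step described above can be sketched as follows. This is a minimal numpy illustration, not code from the patent; the function name and the intensity threshold are illustrative assumptions. For each image row it computes the intensity-weighted centroid of the laser stripe, giving a sub-pixel column position:

```python
import numpy as np

def gray_centroids_per_row(gray, threshold=30):
    """For each image row, compute the intensity-weighted centroid
    (sub-pixel column) of the laser stripe; rows with no pixel at or
    above the threshold are skipped. Returns (row, centroid_col) pairs."""
    centroids = []
    cols = np.arange(gray.shape[1], dtype=float)
    for r, row in enumerate(np.asarray(gray, dtype=float)):
        weights = np.where(row >= threshold, row, 0.0)  # suppress background
        total = weights.sum()
        if total > 0:
            centroids.append((r, float((weights * cols).sum() / total)))
    return centroids
```

The sub-pixel accuracy of the centroid is what lets a single bright stripe per frame yield precise 3D points after triangulation.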
Preferably, calibrating the relevant parameters in the data model corresponding to the system platform to obtain calibration data, specifically comprising:
placing a calibration object on the platform, and measuring a series of actual distance values q_i from the calibration object to the line connecting the laser and the camera, together with the calibration image collected by the camera at each distance value;
obtaining the pixel distance px_i of the preset calibration point positions in each calibration image, and substituting into the formula derived from similar triangles

q_i = (f · s) / (PixelSize · px_i − offset)

to calculate the calibration data for each relevant parameter;
wherein f is the distance from the lens to the image sensor in the camera, s is the distance between the camera and the laser, PixelSize is the physical size of an image pixel, and offset is the offset distance of the image origin relative to the image edge in the triangular distance measurement.
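The calibration fit above can be sketched with numpy. This is an illustrative sketch, not the patent's implementation: writing k = f·s, the similar-triangles model linearises to 1/q = (PixelSize/k)·px − offset/k, so ordinary least squares on (px_i, 1/q_i) recovers two lumped coefficients a and b, which is all that is needed to predict depth afterward:

```python
import numpy as np

def calibrate_triangulation(px, q):
    """Fit q = f*s / (PixelSize*px - offset) in its linearised form
    1/q = a*px + b, where a and b lump the physical parameters."""
    px = np.asarray(px, dtype=float)
    q = np.asarray(q, dtype=float)
    A = np.vstack([px, np.ones_like(px)]).T
    a, b = np.linalg.lstsq(A, 1.0 / q, rcond=None)[0]
    return a, b

def depth_from_pixel(px, a, b):
    """Predict the actual distance from a measured pixel distance."""
    return 1.0 / (a * px + b)
```

Fitting the lumped coefficients sidesteps separating f, s, PixelSize, and offset individually, which the depth prediction does not require.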
Preferably, the acquisition device of the method includes a first laser and a first camera for acquiring a first laser scanning signal, and a second laser and a second camera for acquiring a second laser scanning signal, and the method further includes:
synchronously analyzing the image data of the video frames acquired by the first camera and the second camera;
storing one or more groups of parsed index maps formed from feature point positions, and comparing each index map with historical scan data of the object to be scanned to obtain one or more difference distances as the characterization content of that index map;
for newly acquired scan data, matching the characterization content before point-by-point analysis; if the matching degree exceeds a first preset threshold, further matching against the index map, and when that matching result exceeds a second preset threshold, presenting the scan content corresponding to the index map in place of the scan result for the corresponding area.
Preferably, the method further comprises:
for an area presented by substitution, after 3D modeling of the whole object to be scanned is finished, proofreading the content of the substituted area;
if the proofreading result is the same, storing the 3D model;
if the proofreading results differ, further analyzing, according to the type and texture resolution of the object to be scanned, what index map size and/or characterization-content granularity would satisfy the proofreading.
Preferably, the method further comprises calibrating the camera's internal parameters, specifically:
calibrating the camera several times to obtain an internal parameter matrix and a distortion vector, which are used for distortion correction of the frame images captured by the camera; the distortion-corrected frame images are then used for the grayscale conversion described above.
Preferably, after imaging with the two scanning devices, second imaging data of the internal structure and first imaging data of the external shape of the body are obtained; generating the internal body information from the second imaging data specifically comprises:
performing a one-dimensional Fourier transform on the projection data obtained from the linear array detector, and convolving it with a filter function to obtain convolution-filtered projection data in each direction; the projection data obtained from the linear array detector is the second imaging data;
back-projecting the filtered projection data along all directions, i.e. distributing each projection value evenly along its original ray path to the matrix cells it traverses and summing to obtain the CT value of each matrix cell; after suitable post-processing, a tomographic image of the scanned object is obtained.
Preferably, performing the one-dimensional Fourier transform on the projection data obtained from the linear array detector and convolving with the filter function to obtain the convolution-filtered projection data in each direction comprises:
performing a one-dimensional Fourier transform on the projection data obtained from the linear array detector;
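The filter-then-backproject procedure described above can be sketched with numpy. This is a minimal parallel-beam illustration of the general technique, not the patent's implementation: the convolution with the filter function is done in the frequency domain with a ramp (|frequency|) filter, and the back-projection uses nearest-neighbour detector lookup:

```python
import numpy as np

def ramp_filter(sinogram):
    """Apply the ramp (|frequency|) filter to every 1-D projection via the
    FFT -- the frequency-domain equivalent of the convolution step."""
    freqs = np.fft.fftfreq(sinogram.shape[1])
    spectra = np.fft.fft(sinogram, axis=1) * np.abs(freqs)
    return np.real(np.fft.ifft(spectra, axis=1))

def backproject(filtered, angles, size):
    """Smear each filtered projection back across the image along its
    acquisition direction and accumulate over all viewing angles."""
    recon = np.zeros((size, size))
    center = (size - 1) / 2.0
    ys, xs = np.meshgrid(np.arange(size) - center,
                         np.arange(size) - center, indexing="ij")
    for proj, theta in zip(filtered, angles):
        # detector coordinate hit by every pixel for this viewing angle
        t = xs * np.cos(theta) + ys * np.sin(theta) + center
        idx = np.clip(np.round(t).astype(int), 0, len(proj) - 1)
        recon += proj[idx]
    return recon * np.pi / len(angles)   # Riemann sum over [0, pi)
```

Real CT software adds interpolation, beam-geometry corrections, and apodized filters, but the two steps mirror the claim: frequency-domain filtering of each projection, then back-projection along the original ray paths.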
preferably, the generating, according to the first imaging data and the second imaging data, complete information of an object to be imaged, which includes external structure information and internal structure information, specifically includes:
generating external structure information of the object to be imaged according to the first imaging data;
dividing the interior of the object to be imaged, according to its external structure information and along the X-ray paths, into a number of unit bodies of fixed size; the stacked unit bodies together have the same shape as the external structure information;
computing over these fixed-size unit bodies with the acquired second imaging data to generate the internal structure information.
In a second aspect, the present invention further provides a device for imaging the internal and external structures of a human body based on structured light and CT, implementing the method of the first aspect. The device comprises:
at least one processor; and a memory communicatively coupled to the at least one processor; wherein the memory stores instructions executable by the at least one processor, programmed to perform the method for imaging the internal and external structures of the human body based on structured light and CT according to the first aspect.
In a third aspect, the present invention further provides a non-volatile computer storage medium, where computer-executable instructions are stored in the computer storage medium and executed by one or more processors, so as to implement the method for imaging structures in and out of a human body based on structured light and CT in the first aspect.
The invention remedies the shortcomings of combined structured light and spiral CT imaging of the human body: processing is fast enough for real-time imaging, the later data-fusion matching step does not require many data features, and the imaging workflow is simplified.
Furthermore, the method provides clinicians with a better operating environment and a detailed understanding of the body structure during surgery.
Furthermore, the invention gives biological researchers a better viewpoint for studying the overall physiological structure of the human body, enabling further research.
[Description of the Drawings]
In order to more clearly illustrate the technical solutions of the embodiments of the present invention, the drawings required to be used in the embodiments of the present invention will be briefly described below. It is obvious that the drawings described below are only some embodiments of the invention, and that for a person skilled in the art, other drawings can be derived from them without inventive effort.
FIG. 1 is a schematic flow chart of a method for imaging structures inside and outside a human body based on structured light and CT according to an embodiment of the present invention;
FIG. 2 is a schematic flow chart of structured light imaging in a method for imaging structures inside and outside a human body based on structured light and CT according to an embodiment of the present invention;
FIG. 3 is a schematic diagram illustrating the principle of structured light imaging in a method for imaging structures inside and outside a human body based on structured light and CT according to an embodiment of the present invention;
FIG. 4 is a graph showing a relationship between a pixel distance and an actual distance in structured light imaging in a method for imaging structures inside and outside a human body based on structured light and CT according to an embodiment of the present invention;
FIG. 5 is a flowchart of a method for obtaining a grayscale centroid in structured light imaging according to an embodiment of the present invention;
FIG. 6 is a flowchart of a method for controlling a stepping motor in structured light imaging according to an embodiment of the present invention;
FIG. 7 is a schematic diagram illustrating a structured light imaging system according to an embodiment of the present invention;
FIG. 8 is a schematic diagram of another embodiment of the present invention in structured light imaging;
FIG. 9 is a flow chart of an improved structured light imaging method according to an embodiment of the present invention;
FIG. 10 is a schematic flow chart of another improved structured light imaging method provided by embodiments of the present invention;
FIG. 11 is a schematic flow chart diagram of a CT imaging method according to an embodiment of the present invention;
FIG. 12 is a schematic flow chart of projective transformation in a CT imaging method according to an embodiment of the present invention;
FIG. 13 is a schematic diagram illustrating a mapping transformation in a CT imaging method according to an embodiment of the present invention;
fig. 14 is a schematic structural diagram of an imaging device for internal and external structures of a human body based on structured light and CT provided by an embodiment of the present invention.
[Detailed Description of the Embodiments]
In order to make the objects, technical solutions and advantages of the present invention more apparent, the present invention is described in further detail below with reference to the accompanying drawings and embodiments. It should be understood that the specific embodiments described herein are merely illustrative of the invention and are not intended to limit the invention.
In the description of the present invention, the terms "inner", "outer", "longitudinal", "lateral", "upper", "lower", "top", "bottom", and the like indicate orientations or positional relationships based on those shown in the drawings, and are for convenience only to describe the present invention without requiring the present invention to be necessarily constructed and operated in a specific orientation, and thus should not be construed as limiting the present invention.
In addition, the technical features involved in the embodiments of the present invention described below may be combined with each other as long as they do not conflict with each other.
Line structured light uses a line laser of a specific wavelength as the light source; the emitted light is projected onto the object, and the distortion of the returned laser line is processed by an algorithm to recover the object's position and depth. The setup is simple and the precision is high: because laser light has good monochromaticity, it is barely affected by the object's surface texture or color, and the cost is far lower than that of multi-line lidar. A single frame yields the position of only one line, so a scanning motion is needed to build up three-dimensional structure information.
At present, single-point laser range finders on the market compute the depth of an object from the laser by measuring the phase difference between the emitted and reflected beams. This method is accurate, but it is hard to extend in dimension and its hardware cost is very high.
The present system instead obtains depth by triangulation, which extends in dimension at low cost, with considerable room to optimize precision and imaging speed.
In CT, an X-ray beam scans a layer of a given thickness of the body part; a detector, rather than film, receives the X-rays transmitted through that layer and converts them to visible light, which a photoelectric converter turns into an electrical signal and an A/D converter digitizes for computer processing. Image processing divides the selected slice into many cubes of equal volume called voxels. The scan data are used to compute the X-ray attenuation (absorption) coefficient of each voxel, and these are arranged into a digital matrix. A digital-to-analog converter turns each number in the matrix into a small block with a gray level ranging from black to white, called a pixel; arranged in the original matrix order, these form the CT image. A CT image is therefore a gray-scale digital image composed of a certain number of pixels, i.e. a reconstructed tomographic image. The absorption coefficient of each voxel can be computed by various mathematical methods.
When X-rays penetrate an organ or tissue, the absorption coefficient differs from point to point, because tissue is composed of many materials with different densities. The object along the X-ray beam is divided into many small unit volumes (voxels) of equal thickness l. Assuming l is small enough that each voxel is uniform, its absorption coefficient is constant; then, knowing the incident intensity I0, the transmitted intensity I, and the voxel thickness l, the sum of the absorption coefficients along the ray path, μ1 + μ2 + … + μn, can be calculated. To create a CT image, the individual coefficients μ1, μ2, …, μn must first be determined, which requires establishing at least n independent equations of this form. The CT apparatus therefore scans repeatedly from different directions to gather enough data to solve for the absorption coefficients. The absorption coefficient is a physical quantity representing the average linear attenuation of X-rays by the material corresponding to each pixel of the CT image. In practice the attenuation coefficient of water is taken as the reference, so the CT value is defined as the relative difference between the absorption coefficient μi of the measured tissue and the absorption coefficient μw of water, scaled by 1000: CT = 1000 × (μi − μw) / μw.
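The two relations above can be made concrete with a short sketch. This is an illustrative example, not from the patent: the attenuation sum follows from the exponential attenuation law I = I0·exp(−(μ1 + … + μn)·l), and the CT number uses the water-referenced (Hounsfield) convention:

```python
import math

def path_attenuation_sum(i_incident, i_transmitted, voxel_thickness):
    """From I = I0 * exp(-(mu1 + ... + mun) * l), the sum of absorption
    coefficients along the ray is ln(I0 / I) / l."""
    return math.log(i_incident / i_transmitted) / voxel_thickness

def ct_value(mu_tissue, mu_water):
    """CT number of a tissue relative to water, scaled by 1000
    (the Hounsfield convention): water -> 0, air (mu = 0) -> -1000."""
    return 1000.0 * (mu_tissue - mu_water) / mu_water
```

One such measurement gives only the sum of coefficients along a single ray, which is exactly why the scanner must take many rays from many directions to solve for the individual μ values.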
Each pixel's CT value on the image plane is then converted into a gray level, giving the gray-level distribution of the image plane, i.e. the CT image.
The essence of a CT image is imaging of the attenuation coefficient μ. The projection values are processed by the computer with a suitable algorithm to solve for the attenuation coefficient of each voxel, yielding its two-dimensional distribution (the attenuation coefficient matrix). By the definition of the CT value, each voxel's attenuation coefficient is converted into the CT value of the corresponding pixel, yielding the CT value matrix; converting each pixel's CT value into a gray level then gives the gray-level distribution of the image plane, which is the CT image.
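The CT-value-to-gray conversion is conventionally done through a display window. The sketch below is illustrative (the patent does not specify a windowing scheme): values below the window map to black, values above it to white, and values inside it linearly to the 8-bit range:

```python
import numpy as np

def ct_to_gray(ct_values, level, width):
    """Map CT values to 8-bit gray levels through a display window:
    values at or below level - width/2 map to 0 (black), values at or
    above level + width/2 map to 255 (white), linear in between."""
    lo = level - width / 2.0
    scaled = (np.asarray(ct_values, dtype=float) - lo) / width * 255.0
    return np.clip(scaled, 0, 255).astype(np.uint8)
```

Choosing the window level and width is how the same CT value matrix is rendered for different tissues, e.g. a wide window for lung and a narrow one for soft tissue.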
Next, the implementation of the present invention is illustrated by specific examples.
Example 1:
Embodiment 1 of the invention provides a method for imaging the internal and external structures of a human body based on structured light and CT, using a structured light device, a CT scanning device, and a rotating platform located in the acquisition area of both devices. As shown in Fig. 1, the method comprises the following steps:
in step 201, an object to be imaged is placed on a rotating table and fixed.
In step 202, the structured light scanning apparatus and the CT scanning apparatus are turned on, wherein the structured light apparatus has two sets of acquisition assemblies, and the acquisition points of the two sets of acquisition assemblies are respectively located on the same side and the opposite side of the spiral CT.
In step 203, recording first imaging data of the human body contour data acquired by the structured light equipment; each slice of the human body is imaged and second imaging data is recorded.
In step 204, complete information of the object to be imaged, which includes external structure information and internal structure information, is generated according to the first imaging data and the second imaging data.
The embodiment remedies the shortcomings of combined structured light and spiral CT imaging of the human body: processing is fast enough for real-time imaging, the later data-fusion matching step does not require many data features, and the imaging workflow is simplified.
Furthermore, the method of this embodiment provides clinicians with a better operating environment and a detailed understanding of the body structure during surgery.
Furthermore, this embodiment gives biological researchers a better viewpoint for studying the overall physiological structure of the human body, enabling further research.
Example 2:
Embodiment 1 of the invention provides a method for imaging the internal and external structures of a human body based on structured light and CT, improving the efficiency of integrating structured light and CT data by arranging a specific structured light scanning device and CT scanning device. The present embodiment makes a key improvement on the structured light scanning side alone, providing technical support for the improvement scheme given in Embodiment 3 of the invention.
In this embodiment, for the structured light scanning process, the relevant parameters in the data model of the corresponding system platform are calibrated in advance to obtain calibration data; recording the body contour data acquired by the structured light device as first imaging data, as shown in Fig. 2, specifically comprises:
in step 301, a frame image projected onto an object to be scanned by a laser is captured by a camera and converted into a grayscale image.
This embodiment exploits precisely the laser's good monochromaticity and near-immunity to the object's surface texture or color; collecting each frame through grayscale processing further simplifies the whole computation.
In step 302, the grayscale centroid corresponding to one or more laser scanning points in each frame image is calculated.
In the embodiment of the present invention, each laser scanning point appears as a gray region in the grayscale-processed image; the gray centroid is calculated for each gray region and serves as the effective acquisition target within the frame image.
In step 303, calculating a three-dimensional coordinate of the grayscale centroid according to the pixel distance of the grayscale centroid in the image and the calibration data, and recording the three-dimensional coordinate as first imaging data; wherein the pixel distance is a distance between a mapping position of the laser in the image generated by the data model and a corresponding laser scanning point acquired in the image.
A point-source database corresponds to each object to be scanned and is used to generate the 3D scanning result; in the embodiment of the present invention, the point-source database need not contain only discrete gray centroids carrying three-dimensional coordinate information. In the subsequent extension scheme of the embodiment, implementations that extend the stored content of the point-source database are described further.
The invention provides a 3D scanning method based on line structured light, which utilizes a calibrated data model and combines a calculation means of gray centroid to generate a high-precision three-dimensional model; because the processing of each frame of image is simplified to the scanning area of the line structured light, the processing efficiency of the collected image in the whole process is improved.
In an embodiment of the present invention, a method for calibrating a relevant parameter in a data model corresponding to the system platform to obtain calibration data is specifically provided, and a corresponding calibration system is shown in fig. 3, which specifically includes:
placing a calibration object on a platform (such as the rotary platform shown in FIG. 3, usually driven by a stepping motor), and measuring a series of actual distance values q_i from the calibration object to the line connecting the laser and the camera, together with the calibration image collected by the camera at each distance value;
obtaining the distance px_i of the preset calibration point positions in the calibration image, and substituting it into the formula derived from similar triangles

q_i = f·s / (px_i·PixelSize + offset)    (1)
Calculating the calibration data of each relevant parameter. Preferably, the preset calibration points (taking two points, a first calibration point and a second calibration point, as an example) are chosen such that the line from the first calibration point to the laser and the line from the second calibration point to the camera lens are parallel. This preferred placement of the calibration points greatly speeds up the establishment of the equations, so that the calculation of the corresponding calibration data can be finished quickly.
Wherein, f is the distance from the lens to the image sensor in the camera, s is the distance between the camera and the laser, PixelSize is the physical size of the image pixel, and offset is the offset distance of the image origin relative to the image edge in the triangular distance measurement.
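As an illustration of the triangulation relation these parameters enter, the following is a minimal sketch; the exact arrangement of terms in the patent's formula image is not reproduced, so the form q_i = f·s / (px_i·PixelSize + offset) and the default values for s and offset are assumptions:

```python
def pixel_to_distance(px, f=0.0043, s=0.06, pixel_size=3e-6, offset=0.0):
    """Similar-triangles laser triangulation (assumed form of formula (1)).

    px         -- pixel distance of the gray centroid in the image
    f          -- lens-to-sensor distance in meters (4.3 mm per the text)
    s          -- camera-laser baseline in meters (hypothetical value)
    pixel_size -- physical pixel size in meters (3 um per the text)
    offset     -- image-origin offset term (hypothetical value)
    """
    return f * s / (px * pixel_size + offset)
```

Larger pixel displacements map to shorter distances, which is the qualitative behavior expected of camera-laser triangulation.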
In a mode of actually combining with a computer to realize the acquisition of the calibration data, an optional realization idea is further provided, which is specifically realized as follows:
a group of distances q_i from the first calibration point to the line connecting the laser and the camera is set manually (e.g. 20 values, plotted as the y-axis coordinates in fig. 4); the pixel distance px_i represented in the image (the x-axis coordinates in fig. 4) is recorded for each distance q_i, and a corresponding relationship curve is fitted; the calibration data are then solved from the relationship curve and formula (1). The calibration data include f, s, PixelSize, offset, etc. in formula (1), so that in the subsequent actual 3D scanning process the actual distance value q_i can be computed from the pixel position of each gray centroid using formula (1).
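The curve-fitting step can be sketched as follows, assuming the similar-triangles relation can be lumped as q = C/(px + D) with C = f·s/PixelSize and D = offset/PixelSize (a hypothetical grouping of the parameters named in the text); then 1/q is linear in px, and an ordinary least-squares line fit recovers the lumped parameters:

```python
import numpy as np

def fit_calibration(px, q):
    """Fit lumped triangulation parameters from measured (px_i, q_i) pairs.

    Assumes q = C / (px + D), so 1/q = px/C + D/C is linear in px and a
    least-squares line fit suffices to recover C and D.
    """
    slope, intercept = np.polyfit(px, 1.0 / np.asarray(q, dtype=float), 1)
    C = 1.0 / slope
    D = intercept * C
    return C, D
```

With C and D known, the distance of any new gray centroid follows directly from its pixel position, matching the use of formula (1) during actual scanning.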
In the embodiment of the present invention, the calculating to obtain the grayscale centroid corresponding to one or more laser scanning points in each frame of image specifically includes, as shown in fig. 5:
in step 401, each pixel point in the image is screened one by one according to a preset gray threshold, so as to determine one or more gray areas corresponding to the one or more laser scanning points.
In step 402, the gray centroid is calculated by the formulas

x_c = Σ_ij (x_i · f_ij) / Σ_ij f_ij

and

y_c = Σ_ij (y_j · f_ij) / Σ_ij f_ij

where x_i and y_j are the pixel coordinates and f_ij is the gray value of the corresponding coordinate point.
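A minimal NumPy sketch of steps 401-402, thresholding the grayscale frame and taking the gray-weighted centroid of the surviving pixels (the default threshold value is an assumption):

```python
import numpy as np

def gray_centroid(img, threshold=128):
    """Gray-weighted centroid (x_c, y_c) of pixels at or above the threshold.

    img       -- 2-D grayscale frame
    threshold -- preset gray threshold of step 401 (value is an assumption)
    """
    mask = img >= threshold          # step 401: screen pixels by gray threshold
    ys, xs = np.nonzero(mask)
    w = img[mask].astype(float)
    # step 402: weight each surviving pixel coordinate by its gray value
    return (xs * w).sum() / w.sum(), (ys * w).sum() / w.sum()
```

For a real scan each connected gray region would be processed separately, giving one centroid per laser scanning point.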
In combination with the embodiment of the present invention, there is also a preferred implementation scheme. Considering that pictures taken by a camera may be distorted due to differences in camera performance, before the grayscale processing of each image the method preferably further includes calibration of the camera's internal parameters, specifically:
calibrating the camera multiple times to obtain an internal parameter matrix and a distortion vector; the internal parameter matrix and the distortion vector are used to correct distortion in the frame images shot by the camera; and the distortion-corrected frame images are used for the grayscale conversion processing.
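As a sketch of how a distortion vector can be applied, the following inverts a one-coefficient radial distortion model by fixed-point iteration on normalized image coordinates. A real implementation would use the full internal parameter matrix and distortion vector (several radial and tangential coefficients), so this single-coefficient model is an assumed simplification:

```python
import numpy as np

def undistort_points(pts, k1, iters=10):
    """Invert the radial model distorted = undistorted * (1 + k1 * r**2).

    pts   -- (N, 2) array of distorted normalized image coordinates
    k1    -- single radial distortion coefficient (assumed model)
    iters -- fixed-point iterations; converges quickly for mild distortion
    """
    pts = np.asarray(pts, dtype=float)
    und = pts.copy()
    for _ in range(iters):
        r2 = (und ** 2).sum(axis=1, keepdims=True)   # squared radius per point
        und = pts / (1.0 + k1 * r2)                  # refine the undistorted guess
    return und
```

The same idea, applied per pixel, yields the distortion-corrected frame used for the grayscale conversion.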
The embodiment of the invention also provides a specific implementation for driving the stepping motor to scan, which can be combined with the line-structured-light scanning scheme provided above to achieve more effective scanning results. Collecting, by the camera, the laser scanning points projected by the laser onto the object to be scanned, as shown in fig. 6, specifically includes:
in step 501, working parameters of the stepping motor are set through a serial port; wherein the operating parameters of the stepper motor include: one or more of acceleration, deceleration, number of circumferential pulses, and angular velocity of the motor motion.
The working parameters of the stepping motor are usually selected within the allowed working range of the purchased motor and set correspondingly through the serial port; specifically, data input is completed through a constructor, exemplified as follows:
[constructor code figure: calls to setAcc, setDec, setSubdivision, setSpeed and setAngle]
wherein setAcc sets the acceleration time, with 200 being the number of pulses; setDec sets the number of pulses for deceleration; setSubdivision sets the number of pulses required for one rotation; setSpeed sets the speed, the parameter meaning 5 degrees per second; setAngle sets the working angle range, specifically 360°.
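A hypothetical Python stand-in for that constructor (the original code appears only as an image in the patent); the defaults for deceleration pulses and pulses-per-revolution are placeholders, since the text does not give their values:

```python
class StepperConfig:
    """Hypothetical stand-in for the serial-port constructor in the text.

    acc_pulses, speed and angle follow the quoted values (200 pulses,
    5 degrees per second, 360 degrees); dec_pulses and pulses_per_rev
    are placeholders not given in the text.
    """
    def __init__(self, acc_pulses=200, dec_pulses=200,
                 pulses_per_rev=3200, speed_deg_per_s=5, angle_deg=360):
        self.acc_pulses = acc_pulses
        self.dec_pulses = dec_pulses
        self.pulses_per_rev = pulses_per_rev
        self.speed_deg_per_s = speed_deg_per_s
        self.angle_deg = angle_deg

    def serial_commands(self):
        """Render the settings as the command sequence sent over the serial port."""
        return [
            f"setAcc {self.acc_pulses}",
            f"setDec {self.dec_pulses}",
            f"setSubdivision {self.pulses_per_rev}",
            f"setSpeed {self.speed_deg_per_s}",
            f"setAngle {self.angle_deg}",
        ]
```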
In step 502, according to the working parameters of the stepping motor and the texture detail resolution of the object to be scanned, the working mode of the stepping motor is set, so that the stepping motor drives the turntable to drive the object to be scanned and/or the laser-camera assembly, and the process of collecting the laser scanning point projected by the laser onto the object to be scanned by the camera is completed.
Since the embodiment of the invention generates the scanning result from gray centroids, the theoretical precision is very high, but the balance between effect and efficiency must also be considered in practice. Step 502 therefore introduces the concept of the texture-detail resolution of the object to be scanned: combined with the rotation speed and acceleration performance of the stepping motor, an optimal scanning rotation speed matched to the texture-detail resolution of the current object is set. The acceleration and deceleration of the motor are used for differential control when the system reaches a position needing supplementary scanning; for example, areas not needing supplementary scanning are rotated through at accelerated speed, while deceleration is applied in areas that do.
In the embodiment of the invention, the camera can be a common USB RGB camera with a maximum frame rate of 30 fps, a resolution of 640x480, a physical focal length of 4.3 mm, and a pixel size of 3 µm. The laser can be 100 mW, with a wavelength of 650 nm and an adjustable minimum linewidth of 0.4 mm.
Example 3:
in the embodiment of the present invention, with respect to the calibration method and the calibration data described in embodiment 2, a specific implementation example is given for calculating the three-dimensional coordinates of the grayscale centroid from its pixel distance in the image and the calibration data, as involved in step 303. The data model comprises a plane model (shown in fig. 7) and a vertical model (shown in fig. 8). The plane model is used to calculate, according to the projection angle, the separation (specifically, PA in fig. 8) between a laser scanning point A on the object to be scanned and the point P (marked in fig. 8) where the laser line extended from A meets the rotating shaft, and to convert it into the X and Y coordinate values of the three-dimensional coordinates according to the deflection angle θ. The vertical model provides the plane model with the skew angle θ of the optical path of the corresponding laser scanning point relative to the horizontal optical path, so that the plane model can calculate the distance (i.e., the length of line segment BP) between the corresponding laser scanning point and the emission point when extended to the rotating shaft (point P in figs. 7 and 8) and obtain the Z-axis coordinate. Calculating the three-dimensional coordinates of the grayscale centroid from its pixel distance in the image and the calibration data then specifically includes:
according to the formula

q_i = f·s / (px_i·PixelSize + offset)

the distance between the corresponding laser scanning point and the emission point when the laser reaches the object to be scanned is calculated.
According to the formula AP' = (d - q_i·cos θ), the vertical distance AP' from the laser scanning point A on the object to be scanned to the rotating shaft is calculated; wherein d is the vertical distance between the straight line on which the camera and the laser lie and the rotating shaft, and θ is the deflection angle of the projected corresponding laser scanning point relative to the horizontal plane;
combining the rotation angle γ of the target under test, the three-dimensional coordinates of the laser scanning point are calculated as [(d - q_i·cos θ)·sin γ, (d - q_i·cos θ)·cos γ, (d - q_i·cos θ)·tan θ]. Stated another way, the three-dimensional coordinates can also be expressed as (AP'·sin γ, AP'·cos γ, AP'·tan θ), where the rotation angle starts from the initial value 0 when scanning begins and equals the angle through which the stepping motor has rotated the target to be scanned. In these three-dimensional coordinates, the origin is the intersection of the rotating shaft and the plane perpendicular to the camera and the laser.
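The coordinate conversion of this embodiment can be sketched directly from the formulas above, with d, θ and γ as defined in the text:

```python
import math

def scan_point_to_xyz(q, d, theta_deg, gamma_deg):
    """Convert a triangulated distance q into 3-D coordinates.

    AP' = d - q*cos(theta) is the vertical distance from scanning point A
    to the rotating shaft; the coordinates are then
    (AP'*sin(gamma), AP'*cos(gamma), AP'*tan(theta)).
    """
    theta = math.radians(theta_deg)
    gamma = math.radians(gamma_deg)
    ap = d - q * math.cos(theta)
    return ap * math.sin(gamma), ap * math.cos(gamma), ap * math.tan(theta)
```

The origin is on the rotating shaft, so points scanned at different turntable angles γ land on a common model without further registration.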
Example 4:
following the implementation scheme described in embodiment 2, a preferred extension scheme is further elaborated in this embodiment. The extension is particularly suitable when the method of embodiment 2 is applied in a network environment, where scanning takes place at one end of the network and the three-dimensional model is requested and displayed at the other end. To this end, the embodiment of the present invention proposes a preferred scheme in which the acquisition device of the method includes a first laser and a first camera for acquiring a first laser scanning signal, and a second laser and a second camera for acquiring a second laser scanning signal, as shown in fig. 9, and the method further includes:
in step 601, image data of video frames acquired by the first camera and the second camera are analyzed synchronously.
The advantages of the method provided by the embodiment of the invention are more obvious when the angle, relative to the rotating shaft, between a first acquisition area formed by the first camera and the first laser and a second acquisition area formed by the second camera and the second laser equals a preset angle; the preset angle is 60-180 degrees.
In step 602, an index map composed of feature points extracted from one or more analyzed groups is stored, and the index map is analyzed against historical scan data obtained by scanning the object to be scanned to obtain one or more difference distances as the characterization content of the index map.
The feature points are preferably corner points, inflection points or boundary points of the three-dimensional model, and the index map contains the local three-dimensional information in the area bounded by the feature points; for the index map, this local three-dimensional information is complete (complete relative to the region, i.e. the scanning points inside the region are complete).
The intent of step 602 is to determine the index map's unique characterization content by analyzing it against the acquired historical scan data. In the embodiment of the present invention, the characterization content may be the three-dimensional coordinates of a group of feature points, the length of a straight line formed by one or more consecutive feature points, a vector graph formed by a plurality of feature points, and so on; no limitation is imposed here.
In step 603, for newly acquired scan data, before performing point-by-point analysis, the characterization content is matched; and if the matching degree exceeds a first preset threshold value, further matching the index map, and when the matching result exceeds a second preset threshold value, replacing the scanning result of the corresponding area with the scanning content corresponding to the index map for presentation.
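The two-stage matching of step 603 can be sketched as follows; the similarity metric and the threshold values t1 and t2 are assumptions, since the embodiment leaves both unspecified:

```python
def similarity(a, b):
    """Similarity of two characterization vectors in (0, 1] (hypothetical metric)."""
    diff = sum(abs(x - y) for x, y in zip(a, b)) / max(len(a), 1)
    return 1.0 / (1.0 + diff)

def try_replace_with_index_map(new_repr, map_repr, new_patch, map_patch,
                               t1=0.8, t2=0.9):
    """Two-stage match of step 603 (threshold values t1, t2 are assumptions).

    Returns the stored index-map content to present in place of the scan,
    or None when the new data must be analyzed point by point.
    """
    if similarity(new_repr, map_repr) <= t1:
        return None          # characterization content does not match
    if similarity(new_patch, map_patch) <= t2:
        return None          # full index map does not match either
    return map_patch         # replace the region with the index-map content
```

The cheap characterization check filters most regions before the more expensive full index-map comparison, which is what makes the preliminary model fast to assemble.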
In the above preferred embodiment of the present invention, for objects to be scanned having multiple repeated complex structures, the concept of an index map is provided: a high-probability repeat region can be quickly located from the characterization content of the corresponding index map, and a preliminary three-dimensional model is quickly established by directly substituting the index map for the corresponding region; while the user browses the three-dimensional model, the checking of the substituted region is completed in the background. This offers a more efficient response mode for remotely presenting the three-dimensional model of the object to be scanned: the remote requesting user is guaranteed to see the three-dimensional model at the first moment, and can then observe its details once proofreading is finished, meeting the user's needs at different stages, namely a macroscopic view of the model at the initial stage and a detailed reading at a later stage.
In connection with the embodiment of the present invention, there is also a preferred embodiment, as shown in fig. 10, the method further includes:
in step 701, for the areas completed in the alternative presentation mode, after the 3D modeling of the whole object to be scanned is finished, the content of the alternatively presented areas is proofread.
In step 702, if the proofreading results agree, the 3D model is saved.
In step 703, if the proofreading results do not agree, the size of the index map and/or the granularity of the corresponding characterization content that would satisfy the proofreading is further analyzed according to the type and texture resolution of the object to be scanned, the index map, and the corresponding characterization content.
In combination with the embodiment of the present invention, preferably, the not-yet-proofread three-dimensional model content can be displayed in a color different from the normal one, reminding the user that the corresponding area is still being proofread.
Example 5:
in embodiment 1, after imaging by two types of scanning devices, second imaging data of an internal structure of a human body and first imaging data of an external shape are obtained, wherein internal information of the human body is generated according to the second imaging data, and an implementation in the embodiment of the present invention is specifically as shown in fig. 11, including:
in step 801, performing one-dimensional fourier transform on projection data obtained from a linear array detector, and performing convolution operation on the projection data and a filter function to obtain convolution-filtered projection data in each direction; and the projection data obtained on the linear array detector is the second imaging data.
In step 802, performing back projection on the projection data along various directions, including equally distributing the projection data to each matrix unit according to the original path, and overlapping to obtain the CT value of each matrix unit; and obtaining a tomographic image of the scanned object after proper processing.
In the embodiment of the invention, the projection under each acquisition projection angle is convoluted before back projection, so that the shape artifact caused by a point spread function is improved, and the reconstructed image has better quality.
In the embodiment of the present invention, a one-dimensional fourier transform is performed on the projection data obtained from the linear array detector, and then a convolution operation is performed on the projection data and a filter function to obtain convolution-filtered projection data in each direction, and a specific detailed implementation step is also provided, as shown in fig. 12, including:
in step 901, a one-dimensional fourier transform is performed on the projection data obtained by the linear array detector.
In step 902, a preset filter is obtained, and the acquired original projection p(x_r, φ_i) is convolution-filtered at the angle φ_i to obtain a filtered projection; the filtered projection is back-projected to obtain the density of the original image in the direction satisfying x_r = r·cos(θ - φ_i).
In step 903, all back-projections are superimposed to obtain a reconstructed projection.
A preset filter is acquired, and the acquired original projection p(x_r, φ_i) is convolution-filtered at the angle of a certain φ_i to obtain a filtered projection; the filtered projection is back-projected to obtain the density of the original image in the direction satisfying x_r = r·cos(θ - φ_i). The corresponding coordinate relationship is shown in fig. 13, where m-n is another coordinate system corresponding to x-y. This process is a filtered back-projection calculation of the map for each angle over 360 degrees, so that p(n, θ) (n standing for the abscissa) is a time-domain function, which simplifies the Fourier-transform process.
All the back projections over the 360 degrees are then superposed to obtain the reconstructed projections.
The basic condition for this function x_r = r·cos(θ - φ_i) is that the frequency of the two-dimensional image function is bounded. The filter function in the frequency domain is expressed as:

H(ρ) = |ρ|, for |ρ| ≤ ρ_0;  H(ρ) = 0, for |ρ| > ρ_0

wherein ρ represents frequency and ρ_0 is a preset reference threshold.
Since the CT image is combined frame by frame, the result of the R-L convolution function for discrete images is:

h(nT) = 1/(4T²) for n = 0;  h(nT) = 0 for n even;  h(nT) = -1/(n²·π²·T²) for n odd

wherein T is the sampling interval, whose value is determined as 1/(2ρ_0) by the sampling theorem, and n denotes the sampling-point index. After convolution processing, the image shows an obvious filtering effect, with some high-frequency noise signals filtered out.
Example 6:
fig. 14 is a schematic structural diagram of an in-and-out-of-human-body structure imaging device based on structured light and CT according to an embodiment of the present invention. The structured light and CT based in-and-out-of-the-body structure imaging apparatus of the present embodiment includes one or more processors 21 and a memory 22. In fig. 14, one processor 21 is taken as an example.
The processor 21 and the memory 22 may be connected by a bus or other means, and fig. 14 illustrates the connection by a bus as an example.
The memory 22, as a non-volatile computer-readable storage medium for the structured-light-and-CT-based method and device for imaging structures inside and outside the human body, may be used to store non-volatile software programs and non-volatile computer-executable programs, such as the imaging method in embodiment 1. The processor 21 executes the structured-light-and-CT-based imaging method by running the non-volatile software programs and instructions stored in the memory 22.
The memory 22 may include high speed random access memory and may also include non-volatile memory, such as at least one magnetic disk storage device, flash memory device, or other non-volatile solid state storage device. In some embodiments, the memory 22 may optionally include memory located remotely from the processor 21, and these remote memories may be connected to the processor 21 via a network. Examples of such networks include, but are not limited to, the internet, intranets, local area networks, mobile communication networks, and combinations thereof.
The program instructions/modules are stored in the memory 22 and, when executed by the one or more processors 21, perform the method for structured light and CT based intra-and intra-anatomical imaging in embodiment 1 described above, e.g., perform the various steps shown in fig. 1, 2, 5, 6, 9-12 described above.
It should be noted that, for the information interaction, execution process and other contents between the modules and units in the apparatus and system, the specific contents may refer to the description in the embodiment of the method of the present invention because the same concept is used as the embodiment of the processing method of the present invention, and are not described herein again.
Those of ordinary skill in the art will appreciate that all or part of the steps of the various methods of the embodiments may be implemented by associated hardware as instructed by a program, which may be stored on a computer-readable storage medium, which may include: read Only Memory (ROM), Random Access Memory (RAM), magnetic or optical disks, and the like.
The above description is only for the purpose of illustrating the preferred embodiments of the present invention and is not to be construed as limiting the invention, and any modifications, equivalents and improvements made within the spirit and principle of the present invention are intended to be included within the scope of the present invention.

Claims (9)

1. A method for imaging internal and external structures of a human body based on structured light and CT, characterized in that it uses a structured-light device, a CT scanning device and a rotating table located in the acquisition areas of the two devices, and comprises the following steps:
placing an object to be imaged on a rotating table and fixing;
starting the structured light scanning equipment and the CT scanning equipment; the structured light equipment is provided with two sets of acquisition assemblies, and acquisition points of the two sets of acquisition assemblies are respectively positioned on the same side and the opposite surface side of the CT scanning equipment;
recording the human body contour data acquired by the structured light equipment into first imaging data; imaging each slice of the human body and recording the imaged slice into second imaging data;
generating complete information of an object to be imaged, which contains external structure information and internal structure information, according to the first imaging data and the second imaging data;
for the structured light scanning process, calibrating relevant parameters in a data model of a corresponding system platform in advance to obtain calibration data; the human body contour data collected by the structured light device is used for recording first imaging data, and the method specifically comprises the following steps:
acquiring a frame image projected onto an object to be scanned by a laser through a camera, and converting the frame image into a gray image;
calculating to obtain the gray centroid of one or more corresponding laser scanning points in each frame of image;
calculating the three-dimensional coordinate of the gray centroid according to the pixel distance of the gray centroid in the image and the calibration data, and recording the three-dimensional coordinate as first imaging data; wherein the pixel distance is a distance between a mapping position of the laser in the image generated by the data model and a corresponding laser scanning point acquired in the image.
2. The method for imaging structures inside and outside the human body based on structured light and CT as claimed in claim 1, wherein calibrating the relevant parameters in the data model corresponding to the system platform to obtain calibration data specifically comprises:
placing a calibration object on the platform, and measuring a series of actual distance values q_i from the calibration object to the line connecting the laser and the camera, together with the calibration image collected by the camera at each distance value;
obtaining the distance px_i of the preset calibration point positions in the calibration image, and substituting it into the formula derived from similar triangles

q_i = f·s / (px_i·PixelSize + offset)
Calculating to obtain calibration data of each relevant parameter;
wherein, f is the distance from the lens to the image sensor in the camera, s is the distance between the camera and the laser, PixelSize is the physical size of the image pixel, and offset is the offset distance of the image origin relative to the image edge in the triangular distance measurement.
3. The method of claim 1, wherein the acquisition device of the method comprises a first laser and a first camera for acquiring a first laser scanning signal, and a second laser and a second camera for acquiring a second laser scanning signal, the method further comprising:
synchronously analyzing the image data of the video frames acquired by the first camera and the second camera;
storing an index map composed of feature points extracted from one or more analyzed groups, and analyzing the index map against historical scanning data obtained by scanning the object to be scanned to obtain one or more difference distances as the characterization content of the index map;
for newly acquired scanning data, before point-by-point analysis, the characterization content is matched; and if the matching degree exceeds a first preset threshold value, further matching the index map, and when the matching result exceeds a second preset threshold value, replacing the scanning result of the corresponding area with the scanning content corresponding to the index map for presentation.
4. The method for structured light and CT based in-and-out-of-body structural imaging according to claim 3, further comprising:
for the area finished in the alternative presentation mode, after the 3D modeling of the whole object to be scanned is finished, the content of the alternative presentation area is corrected;
if the proofreading result is the same, storing the 3D modeling;
if the checking results differ, further analyzing, according to the type and texture resolution of the object to be scanned, the index map and the corresponding characterization content, the size of the index map and/or the granularity of the corresponding characterization content that can satisfy the check.
5. The method for imaging structures inside and outside the human body based on structured light and CT as claimed in any of claims 1-4, further comprising calibration of camera internal parameters, in particular:
calibrating the camera for multiple times to obtain an internal parameter matrix and a distortion vector; the internal parameter matrix and the distortion vector are used for carrying out distortion correction on a frame image shot by a camera; and the frame image after distortion correction is used for the conversion into the gray image.
6. The method for imaging structures inside and outside the human body based on structured light and CT according to claim 1, wherein after imaging with two scanning devices, second imaging data of the structure inside the human body and first imaging data of the external shape are obtained, wherein generating internal information of the human body according to the second imaging data specifically comprises:
performing one-dimensional Fourier transform according to projection data obtained on the linear array detector, and performing convolution operation on the projection data and a filter function to obtain projection data subjected to convolution filtering in each direction; the projection data obtained on the linear array detector is the second imaging data;
carrying out back projection on the projection data along all directions, including evenly distributing the projection data to each matrix unit according to the original path of the projection data, and overlapping to obtain the CT value of each matrix unit; and obtaining a tomographic image of the scanned object after proper processing.
7. The method for imaging structures inside and outside human body based on structured light and CT as claimed in claim 6, wherein the performing a one-dimensional fourier transform on the projection data obtained from the linear array detector, and performing a convolution operation with the filter function to obtain the convolution-filtered projection data in each direction specifically comprises:
performing one-dimensional Fourier transform on projection data obtained on the linear array detector;
acquiring a preset filter, and performing convolution filtering on the acquired original projection p(x_r, φ_i) at the angle φ_i to obtain a filtered projection; and back-projecting the filtered projection to obtain the density of the original image in the direction satisfying x_r = r·cos(θ - φ_i), wherein:
φ_i: the included angle between the coordinate system and the imaging direction during filtering; x_r: the projection of the image density to be solved in the X-axis direction; p: the projection of all x_r values in the angular region of φ_i; r: the total density of the image; θ: the scanning angle;
and superposing all back projections to obtain the reconstructed projections.
8. The method according to claim 1, wherein generating complete information of an object to be imaged including external structure information and internal structure information according to the first imaging data and the second imaging data specifically comprises:
generating external structure information of the object to be imaged according to the first imaging data;
dividing the interior of the object to be imaged, based on its external structure information, into a plurality of unit bodies of fixed size along the X-ray paths; wherein the plurality of unit bodies of fixed size, when stacked, have the same shape as the external structure;
and computing the values of the plurality of fixed-size unit bodies from the acquired second imaging data so as to generate the internal structure information.
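Claim 8 does not fix how each unit body is "calculated" from the second imaging data; one plausible realization, sketched below, enumerates the unit bodies inside the structured-light surface and refines them with an algebraic-reconstruction (ART-style) update per X-ray path. The function names and the relaxation factor `lam` are illustrative assumptions, not the patent's method.

```python
import numpy as np

def divide_into_unit_bodies(exterior_mask):
    """List the (row, col) indices of the fixed-size unit bodies (voxels)
    that fall inside the surface recovered from the structured-light scan."""
    return list(zip(*np.nonzero(exterior_mask)))

def art_update(image, ray_voxels, measured, lam=0.5):
    """One algebraic-reconstruction step: compare the simulated line
    integral along an X-ray path with the measured projection value and
    spread the residual evenly over the unit bodies on that path."""
    rows, cols = zip(*ray_voxels)
    simulated = image[rows, cols].sum()
    image[rows, cols] += lam * (measured - simulated) / len(ray_voxels)
    return image
```

Restricting the update to unit bodies inside the externally measured surface is what ties the structured-light (first) data to the CT (second) data: voxels outside the surface are never touched.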
9. A human body internal and external structure imaging device based on structured light and CT, the device comprising:
at least one processor; and a memory communicatively coupled to the at least one processor; wherein the memory stores instructions executable by the at least one processor, the instructions being programmed to perform the method for human body internal and external structure imaging based on structured light and CT according to any one of claims 1 to 8.
CN201910998038.5A 2019-10-21 2019-10-21 Human body internal and external structure imaging method and device based on structured light and CT Active CN110680371B (en)

Priority Applications (1)

Application Number Priority Date Filing Date Title
CN201910998038.5A CN110680371B (en) 2019-10-21 2019-10-21 Human body internal and external structure imaging method and device based on structured light and CT


Publications (2)

Publication Number Publication Date
CN110680371A CN110680371A (en) 2020-01-14
CN110680371B true CN110680371B (en) 2021-03-19

Family

ID=69113963

Family Applications (1)

Application Number Title Priority Date Filing Date
CN201910998038.5A Active CN110680371B (en) 2019-10-21 2019-10-21 Human body internal and external structure imaging method and device based on structured light and CT

Country Status (1)

Country Link
CN (1) CN110680371B (en)

Family Cites Families (6)

* Cited by examiner, † Cited by third party
Publication number Priority date Publication date Assignee Title
US5652658A (en) * 1993-10-19 1997-07-29 View Engineering, Inc. Grid array inspection system and method
CN206603827U (en) * 2016-12-23 2017-11-03 大连三生科技发展有限公司 A kind of dentistry 3D printing system based on cloud
CN106984811A (en) * 2017-03-18 2017-07-28 珠海新茂义齿科技有限公司 A kind of artificial tooth metal laser 3D printing method
CN107221025B (en) * 2017-05-31 2020-01-03 天津大学 System and method for synchronously acquiring three-dimensional color point cloud model of object surface
CN108182665A (en) * 2017-12-26 2018-06-19 南京邮电大学 A kind of CT system image rebuilding method based on filtered back projection-iterative algorithm
CN109727277B (en) * 2018-12-28 2022-10-28 江苏瑞尔医疗科技有限公司 Body surface positioning tracking method for multi-eye stereo vision



Legal Events

Date Code Title Description
PB01 Publication
SE01 Entry into force of request for substantive examination
GR01 Patent grant