CN115019589B - Intelligent CT teaching simulation system based on optics - Google Patents


Info

Publication number: CN115019589B (application CN202210845658.7A)
Authority: CN (China)
Prior art keywords: imaging, light source, light, camera, module
Language: Chinese (zh)
Other versions: CN115019589A
Inventors: 王国鹤, 张雪君, 张雁琦, 庞学明, 孙少凯
Current assignee: Jiamaohong Beijing Medical Technology Development Co. Ltd.
Original assignee: Tianjin Medical University
Application filed by Tianjin Medical University; priority to CN202210845658.7A; application CN115019589A published; application granted; patent CN115019589B published. Legal status: Active.


Classifications

    • G - PHYSICS
    • G09 - EDUCATION; CRYPTOGRAPHY; DISPLAY; ADVERTISING; SEALS
    • G09B - EDUCATIONAL OR DEMONSTRATION APPLIANCES; APPLIANCES FOR TEACHING, OR COMMUNICATING WITH, THE BLIND, DEAF OR MUTE; MODELS; PLANETARIA; GLOBES; MAPS; DIAGRAMS
    • G09B9/00 - Simulators for teaching or training purposes
    • G09B23/00 - Models for scientific, medical, or mathematical purposes, e.g. full-sized devices for demonstration purposes
    • G09B23/28 - Models for scientific, medical, or mathematical purposes, for medicine
    • G09B23/286 - Models for medicine, for scanning or photography techniques, e.g. X-rays, ultrasonics
    • G06 - COMPUTING; CALCULATING OR COUNTING
    • G06N - COMPUTING ARRANGEMENTS BASED ON SPECIFIC COMPUTATIONAL MODELS
    • G06N3/02 - Neural networks
    • G06N3/08 - Learning methods
    • G06T - IMAGE DATA PROCESSING OR GENERATION, IN GENERAL
    • G06T11/00 - 2D [Two Dimensional] image generation
    • G06T11/003 - Reconstruction from projections, e.g. tomography


Abstract

The invention relates to the field of medical teaching equipment and provides an optics-based intelligent CT teaching simulation system to assist teaching and help students deepen their understanding of CT imaging principles and new CT technologies. The system comprises a light source module, a scanning module, an acquisition module, an intelligent control and reconstruction module, and a display module. The light source module and the scanning module simulate CT scanning; the signals obtained are captured by the acquisition module and transmitted to the intelligent control and reconstruction module, which controls the other modules and sends the results to the display module for display. The invention is mainly applied to the design and manufacture of medical teaching equipment.

Description

Intelligent CT teaching simulation system based on optics
Technical Field
The invention relates to the field of medical teaching equipment, in particular to an intelligent CT teaching simulation system based on optics.
Background
CT (Computed Tomography) currently plays a very important role in disease diagnosis and examination. In the teaching and training of medical imaging students, experimental teaching lets students understand and master CT operation, imaging principles, and imaging quality control far better than theoretical explanation alone, which is significant for improving teaching outcomes and cultivating medical imaging professionals.
However, clinical CT equipment is expensive, carries radiation risk, and imposes high demands on shielding and operating environment, so it is difficult for universities to meet the cost and facility requirements. Existing CT simulators provide scanning and display modules, but their projection images and CT images are pre-stored data rather than genuinely acquired projections and reconstructed tomograms, so students can only become familiar with the CT operating workflow. The CT imaging principle is complex, and no system yet exists that, beyond simulated operation, intuitively demonstrates the CT projection and reconstruction process to aid students' understanding. Furthermore, with the development of artificial intelligence and CT technology, intelligent positioning and intelligent dose control are key directions of CT development. Developing an intelligent CT teaching simulation system therefore has significant value for students to understand, learn, and research these new technologies.
Disclosure of Invention
To overcome the defects of the prior art, the invention aims to provide an optics-based intelligent CT teaching simulation system to assist teaching and help students improve their understanding of CT imaging principles and new CT technologies. The system comprises a light source module, a scanning module, an acquisition module, an intelligent control and reconstruction module, and a display module;
the light source module comprises a light source, a light beam homogenizing device and an adjustable slit, wherein the adjustable slit comprises an upper flat plate, a lower flat plate, a left flat plate and a right flat plate and is used for adjusting the size of a light-transmitting window so as to adjust the irradiation view; the light source is a switchable wavelength light source and is used for simulating dual-energy CT or simulating different tube voltage regulation by using different wavelengths, and the light source module can adopt two structures: one is that the light source continues to propagate through the adjustable slit after passing through the diffuse scattering sheet; one is that the light source continues to propagate through the adjustable slit after being reflected by the diffuse reflection sheet;
the scanning module comprises a vertical translation table, a rotary table, an imaging imitation body and a water tank, wherein the imaging imitation body is arranged on the rotary table, the rotary table is arranged on the vertical translation table and used for rotating the imaging imitation body to perform multi-angle projection acquisition, the vertical translation table moves up and down with the rotary table, when the vertical translation table is motionless, the rotary table rotates to simulate the translation stepping scanning of the CT, and when the rotary translation table rotates, the vertical translation table moves vertically downwards to simulate the spiral scanning of the CT, and the imaging imitation body is an imaging mould body with certain light absorption distribution and allowing light to pass through;
the two surfaces of the light transmitted through the water tank are of a flat plate structure and are provided with light-transmitting windows;
the acquisition module comprises an iris diaphragm, a lens and a camera, wherein light emitted by the light source module passes through the imaging imitation body and then passes through the light-transmitting window of the water tank, sequentially passes through the iris diaphragm and the lens, and is finally acquired by the camera;
the intelligent control and reconstruction module comprises a computer, and the rotating platform, the vertical translation platform and the camera are connected with the computer through electrical connection wires. On one hand, the computer utilizes an internal intelligent control module to control the rotation of the scanning module rotating platform and the movement of the vertical translation platform and control the image acquisition of the camera, and on the other hand, the projection image acquired by the camera is reconstructed into a tomographic image by using a corresponding reconstruction algorithm through an internal reconstruction module after the acquisition is completed;
in addition, the intelligent control module of the computer is also used for intelligent positioning and intelligent dosage control;
the display module is used for displaying projection images, sinograms and reconstructed image displays of each angle to simulate the projection images, sinograms and reconstructed image displays of the CT images.
The intelligent positioning function has two implementation flows. First flow: 1) the user pre-selects the examination part, i.e. the region of interest; 2) the camera acquires an image; 3) the region of interest is located in the acquired image by image recognition and segmentation; 4) the upper limit, lower limit and centre coordinate of the region of interest in the vertical direction are calculated; 5) if the region of interest is larger than the system imaging field of view, the vertical translation stage moves the imaging object so that the lower-limit coordinate of the region of interest sits slightly above the lower-limit coordinate of the field of view, as the scan start position; if the region of interest is smaller than or equal to the field of view, the stage moves the imaging object so that the vertical centre coordinate of the region of interest coincides with the centre coordinate of the field of view, as the imaging position. This realizes the intelligent positioning function.
Second flow: 1) the camera acquires an image; 2) the user manually outlines the region of interest on the acquired image; 3) the upper limit, lower limit and centre coordinate of the region of interest in the vertical direction are calculated; 4) positioning then proceeds exactly as in step 5) of the first flow. This likewise realizes the intelligent positioning function.
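The positioning decision in the final step of both flows reduces to a simple rule. A minimal sketch in Python; the function name, coordinate arguments and margin are hypothetical illustrations, not the patent's actual control interface:

```python
def plan_stage_move(roi_lo, roi_hi, fov_lo, fov_hi, margin=1.0):
    """Return the vertical displacement to apply to the imaging object.

    roi_lo/roi_hi: lower/upper vertical limits of the region of interest;
    fov_lo/fov_hi: limits of the system imaging field of view;
    margin: how far the ROI lower edge sits above the FOV lower edge
    when the ROI is taller than the FOV (all values in the same units).
    """
    if (roi_hi - roi_lo) > (fov_hi - fov_lo):
        # ROI taller than the field of view: start scanning with the
        # ROI lower limit slightly above the FOV lower limit
        return (fov_lo + margin) - roi_lo
    # ROI fits: align the ROI centre with the FOV centre
    return (fov_lo + fov_hi) / 2.0 - (roi_lo + roi_hi) / 2.0
```

For example, an ROI spanning 0-10 (in a 0-20 field of view) would be shifted by 5 units to centre it, while an ROI spanning 0-30 would be placed with its lower edge just above the field-of-view floor.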
The intelligent dose control function estimates the dose required for imaging from the region to be imaged, and from it automatically selects a suitable light-source intensity or camera exposure time. The steps are: 1) train, by deep learning, a neural network that maps a single projection image to a three-dimensional tomographic image, fixing the model after training and validation; the training and validation inputs are single projection images and the outputs are the corresponding true three-dimensional tomographic images; 2) acquire a single projection image of the region to be imaged with the camera and predict the corresponding three-dimensional tomographic image with the pre-trained model; 3) estimate the dose required for imaging from the predicted tomographic image; 4) the computer automatically sets the light-source intensity or the camera exposure time according to the estimated dose.
In more detail, the intelligent dose control function works as follows:
1) The computer trains, on a large data set, a neural network mapping a single projection image to a three-dimensional tomographic image; the model is fixed after training and validation, with single projection images as inputs and the corresponding true three-dimensional tomographic images as outputs.
2) The camera acquires a single projection image of the region to be imaged, and the computer predicts the corresponding three-dimensional tomographic image with the pre-trained model.
3) The computer estimates the dose required for imaging from the predicted tomographic image, i.e. the light-source intensity at a fixed exposure time, or the exposure time at a fixed light-source intensity. The light intensity I1 incident on the camera and the camera response value I3 are related by I3 = I2 * D * E, where E is the exposure time and D is the coefficient converting I2 to I3 per unit exposure time. The light-source intensity I1 and the intensity transmitted through the phantom, i.e. the intensity I2 incident on the camera, are related by I2 = I1 * (M * Ci), where M is the attenuation of the source intensity before it enters the phantom and Ci (i = 1, 2, ..., n, with n projection angles) is the projection attenuation at each angle computed from the predicted three-dimensional tomographic image. Hence I1 = I2 / (M * Ci) = I3 / (D * E) / (M * Ci) = I3 / (D * E * M * Ci).
First case, when the aim is good imaging quality: at a fixed exposure time E, the maximum light-source intensity I1 should be less than I3max / (D * E * M * Cmax), where I3max is the maximum of I3 and Cmax is the maximum of Ci, i.e. the ray through the phantom with the least intensity attenuation; alternatively, at a fixed source intensity I1, the exposure time E should be less than I3max / (D * I1 * M * Cmax). Second case, when the aim is a lower dose: at exposure time E, the light-source intensity I1 should be less than I3max / (D * E * M * Cmax) and greater than I3min / (D * E * M * Cmin), where I3min is the camera noise value at exposure time E with no light input and Cmin is the minimum of Ci, i.e. the ray with the greatest attenuation through the phantom; alternatively, at a fixed source intensity I1, the exposure time E should be less than I3max / (D * I1 * M * Cmax) and greater than I3min / (D * I1 * M * Cmin).
4) The computer automatically sets the light-source intensity or the camera exposure time according to the estimated dose.
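The two bounds above can be evaluated directly once D, E, M and the per-angle attenuations Ci are known. A minimal sketch in Python; the numeric values (12-bit camera range, 10 ms exposure, attenuation factors) are hypothetical illustrations, not values from the patent:

```python
def source_intensity_bounds(I3max, I3min, D, E, M, C):
    """Bounds on light-source intensity I1 at a fixed exposure time E.

    From I3 = I2 * D * E and I2 = I1 * M * Ci it follows that
    I1 = I3 / (D * E * M * Ci).  To avoid saturating the camera on the
    least-attenuated ray (Cmax = max Ci), I1 must stay below the upper
    bound; to keep the most-attenuated ray (Cmin = min Ci) above the
    camera noise floor, I1 must stay above the lower bound (low-dose case).
    """
    Cmax, Cmin = max(C), min(C)
    upper = I3max / (D * E * M * Cmax)
    lower = I3min / (D * E * M * Cmin)
    return lower, upper

# illustrative numbers: 12-bit camera, 10 ms exposure
lo, hi = source_intensity_bounds(
    I3max=4095, I3min=20, D=1.0, E=0.01, M=0.9,
    C=[0.2, 0.5, 0.8],   # per-angle projection attenuation factors
)
```

Swapping the roles of I1 and E gives the corresponding bounds on exposure time at a fixed source intensity.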
The neural network used is a convolutional neural network.
The imaging process is as follows:
1) The water tank in the scanning module is filled with water and the imaging phantom is mounted on the rotary stage. An operator, or the computer via the intelligent positioning function of the intelligent control module, moves the vertical translation stage to bring the phantom to a suitable imaging position;
2) The light source of the light source module is turned on. The emitted light forms a relatively uniform beam after passing through the beam homogenizing device, passes through the adjustable slit and the light-transmitting window on the side of the water tank, and irradiates the imaging phantom; after passing through the phantom it exits through the window on the other side of the tank and enters the camera through the iris diaphragm and the lens in sequence. The light-source intensity or camera exposure time is set manually, or by the computer via the intelligent dose control function of the intelligent control module, for the subsequent projection acquisition;
3) Translate-step scanning has two modes. In the first, the computer holds the vertical translation stage stationary while the rotary stage turns the imaging phantom; at each angular increment the camera captures the projection of the light transmitted through the phantom, recorded as one angle, and this repeats until projections at many angles are collected; the rotary stage then returns to its starting angle and stops, the vertical translation stage moves down a set distance, and the acquisition repeats to cover the remaining vertical extent of the phantom. In the second, the vertical translation stage is again stationary while the rotary stage turns continuously, and the camera captures one projection at fixed time intervals until all angles are collected; the rotary stage then returns to its starting angle and stops, the translation stage steps down, and the acquisition repeats as before. For helical scanning, the computer moves the vertical translation stage downward while the rotary stage turns the phantom, and the camera collects projections at many angles. During acquisition, the projection images are shown in the display module so that the state of the captured images can be observed and monitored in real time;
4) The computer reconstructs tomographic images from the multi-angle projections acquired in step 3) using the reconstruction module within the intelligent control and reconstruction module, and shows the result on the display module.
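The first translate-step mode in step 3) can be sketched as a simple control loop. A minimal sketch with stub hardware objects; the class and function names are hypothetical stand-ins, not the patent's actual control API:

```python
import numpy as np

class StubStage:
    """Stand-in for the rotary stage and vertical translation stage."""
    def __init__(self):
        self.angle = 0.0   # degrees
        self.z = 0.0       # vertical position
    def rotate_to(self, angle):
        self.angle = angle
    def move_down(self, dz):
        self.z -= dz

def translate_step_scan(stage, grab, n_angles=180, n_slices=3, dz=1.0):
    """First translate-step mode: rotate through n_angles per slice,
    capture one projection per angle, return to the start angle,
    step the stage down by dz, and repeat for the next slice."""
    sinograms = []
    for _ in range(n_slices):
        projections = []
        for k in range(n_angles):
            stage.rotate_to(k * 360.0 / n_angles)
            projections.append(grab())        # camera exposure
        stage.rotate_to(0.0)                  # back to the start angle
        sinograms.append(np.stack(projections))
        stage.move_down(dz)
    return sinograms
```

Each returned array is one slice's sinogram (angles by detector pixels), ready for reconstruction in step 4).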
The invention has the characteristics and beneficial effects that:
1) CT imaging passes X-rays through the human body along straight lines and acquires X-ray projections in many directions for reconstruction into sectional images. The invention instead passes light from a light source (e.g. visible light) along straight lines through a phantom made of transparent material and reconstructs from the projections; the physical and reconstruction processes are the same as in CT imaging. The invention can therefore intuitively simulate the CT projection and reconstruction process and help students understand the CT imaging principle in depth. Because it uses light (e.g. visible light) for projection and reconstruction, it completely avoids the radiation hazard of X-rays and reduces shielding requirements, making it well suited to teaching demonstrations of the CT principle. Its manufacturing cost, maintenance, and environmental requirements are also far lower than those of a real CT scanner.
2) The irradiation range of the light source can be controlled by adjusting the size of the adjustable slit, simulating adjustment of the irradiation range of the CT tube; linked control of the vertical translation stage and the rotary stage simulates translate-step and helical CT scanning; the switchable-wavelength light source simulates dual-energy CT or different tube-voltage settings with different wavelengths.
3) The intelligent control and reconstruction module provides intelligent positioning and intelligent dose control. Intelligent positioning moves the region of interest into the imaging region automatically via the computer-controlled translation stage, reducing manual operation and improving imaging efficiency. Intelligent dose control adjusts the light-source intensity and camera exposure time automatically through a neural network model, obtaining the best imaging quality, or guaranteeing imaging quality at a lower irradiation dose, and so addresses the inaccuracy and tedium of dose adjustment by manual experience.
4) The display module simulates the CT displays of the projection images, the sinogram, and the reconstructed images. The invention thus integrates scanning operation, the CT imaging principle, and CT image display into a single teaching system.
Description of the drawings:
Figure 1 is a general schematic of the invention: (a) module diagram of the optics-based intelligent CT teaching simulation system; (b) structural schematic of the system.
Fig. 2 shows the two configurations of the light source module: (a) the light continues to propagate after passing through the diffuse scattering sheet; (b) the light continues to propagate after being reflected by the diffuse reflection sheet. Arrows indicate the direction of light propagation.
Fig. 3 is a schematic diagram of an adjustable slit.
Fig. 4 is a schematic diagram of a scan module. The arrow is the direction of light propagation.
Fig. 5 is a flow chart of two intelligent positioning functions.
Fig. 6 is a flow chart of intelligent dosage control.
Detailed Description
To address the defects of the prior art, the invention provides an optics-based intelligent CT teaching simulation system to assist teaching and help students improve their understanding of CT imaging principles and new CT technologies. CT imaging passes X-rays through the human body along straight lines and acquires X-ray projections in many directions for reconstruction into sectional images. The invention instead passes light (e.g. visible light) along straight lines through a phantom made of transparent material and reconstructs from the projections; the physical and reconstruction processes are the same as in CT imaging, so the system can intuitively simulate the CT projection and reconstruction process and help students understand the CT imaging principle in depth.
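The stated equivalence rests on Beer-Lambert attenuation: the negative logarithm of the transmitted intensity recovers the line integral of absorption along the ray, which is exactly the projection value CT reconstruction consumes, whether the ray is X-ray or visible light. A minimal numerical illustration (the absorption values are hypothetical):

```python
import numpy as np

# Beer-Lambert along one ray: I = I0 * exp(-sum(mu_j) * dx), so
# -log(I / I0) recovers the line integral of absorption, which is
# exactly the projection value CT (or this optical analogue) needs.
mu = np.array([0.0, 0.1, 0.3, 0.3, 0.1, 0.0])  # absorption along the ray
dx = 1.0                                        # sample spacing
I0 = 1000.0                                     # incident intensity
I = I0 * np.exp(-mu.sum() * dx)                 # transmitted intensity
projection = -np.log(I / I0)                    # line integral = 0.8
```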
An intelligent CT teaching simulation system based on optics comprises a light source module, a scanning module, an acquisition module, an intelligent control and reconstruction module and a display module.
The light source module comprises a light source 1, a beam homogenizing device 2 (e.g. a diffuse scattering or reflection sheet) and an adjustable slit 3. The adjustable slit 3 consists of upper, lower, left and right flat plates and adjusts the size of the light-transmitting window, and hence the irradiation field of view. The light source 1 is a switchable-wavelength source used to simulate dual-energy CT or different tube-voltage settings with different wavelengths. The light source module can adopt either of two structures: in the first, light from source 1 continues through the adjustable slit 3 after passing through the diffuse scattering sheet 2; in the second, it continues through the adjustable slit 3 after being reflected by the diffuse reflection sheet 2.
The scanning module comprises a vertical translation stage 4, a rotary stage 5, an imaging phantom 6 and a water tank 7. The imaging phantom 6 is mounted on the rotary stage 5, which is mounted on the vertical translation stage 4. The rotary stage 5 rotates the phantom 6 for multi-angle projection acquisition, and the vertical translation stage 4 moves the rotary stage 5 up and down. With the vertical translation stage 4 stationary and the rotary stage 5 rotating, the system simulates translate-step CT scanning; with the vertical translation stage 4 moving downward while the rotary stage 5 rotates, it simulates helical CT scanning. The imaging phantom 6 has a defined light-absorption distribution and allows light to pass through.
The two faces of the water tank 7 through which the light propagates are flat plates provided with light-transmitting windows.
The acquisition module comprises an iris diaphragm 8, a lens 9 and a camera 10. Light emitted by the light source module passes through the imaging phantom 6, exits through the light-transmitting window of the water tank 7, passes in turn through the iris diaphragm 8 and the lens 9, and is finally captured by the camera 10.
The intelligent control and reconstruction module comprises a computer 11, to which the rotary stage 5, the vertical translation stage 4 and the camera 10 are connected by electrical cables. The computer 11 uses its internal intelligent control module to drive the rotation of the rotary stage 5 and the movement of the vertical translation stage 4 and to trigger image acquisition by the camera 10, and uses its internal reconstruction module to reconstruct the projection images captured by the camera 10 into tomographic images with the appropriate algorithm: for translate-step scanning, filtered back projection; for helical scanning, the projection images of the layers adjacent to the slice to be reconstructed are first linearly interpolated to that slice position to obtain its projection data, which are then reconstructed by filtered back projection.
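The filtered back projection step for translate-step scanning can be sketched end to end on a toy phantom. A minimal parallel-beam sketch in Python/NumPy using a nearest-neighbour rotation; the grid size, Ram-Lak filter and disc phantom are illustrative choices, not the patent's actual implementation:

```python
import numpy as np

def rotate_nn(img, angle_deg):
    """Rotate a square image about its centre (nearest-neighbour)."""
    n = img.shape[0]
    c = (n - 1) / 2.0
    ys, xs = np.mgrid[0:n, 0:n]
    a = np.deg2rad(angle_deg)
    x = (xs - c) * np.cos(a) + (ys - c) * np.sin(a) + c
    y = -(xs - c) * np.sin(a) + (ys - c) * np.cos(a) + c
    xi = np.clip(np.rint(x).astype(int), 0, n - 1)
    yi = np.clip(np.rint(y).astype(int), 0, n - 1)
    out = img[yi, xi]
    out[(x < -0.5) | (x > n - 0.5) | (y < -0.5) | (y > n - 0.5)] = 0.0
    return out

def radon(img, angles_deg):
    """Parallel-beam projections: rotate, then sum along columns."""
    return np.stack([rotate_nn(img, a).sum(axis=0) for a in angles_deg])

def fbp(sinogram, angles_deg):
    """Filtered back projection with a Ram-Lak (ramp) filter."""
    n = sinogram.shape[1]
    ramp = 2.0 * np.abs(np.fft.fftfreq(n))
    filtered = np.real(np.fft.ifft(np.fft.fft(sinogram, axis=1) * ramp, axis=1))
    recon = np.zeros((n, n))
    for proj, a in zip(filtered, angles_deg):
        # smear the filtered projection back across the image at angle a
        recon += rotate_nn(np.tile(proj, (n, 1)), -a)
    return recon * np.pi / (2.0 * len(angles_deg))

# toy phantom: a bright disc, like a simple light-absorbing insert
n = 64
yy, xx = np.mgrid[0:n, 0:n]
phantom = (((xx - 32) ** 2 + (yy - 40) ** 2) < 64).astype(float)
angles = np.linspace(0.0, 180.0, 60, endpoint=False)
recon = fbp(radon(phantom, angles), angles)
```

The reconstructed disc reappears at its original off-centre position, which is the behaviour students can verify against the projections shown on the display module.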
The intelligent control module of the computer 11 also provides intelligent positioning and intelligent dose control functions.
The display module displays the projection image at each angle, the sinogram, and the reconstructed images (e.g. cross-section display, maximum intensity projection display, and three-dimensional display), simulating the projection, sinogram, and reconstructed-image displays of a CT scanner. The projection images acquired by the camera are reconstructed into tomographic images with the appropriate algorithm: filtered back projection for translate-step scanning, and, for helical scanning, linear interpolation of the projections of the layers adjacent to the slice to be reconstructed to that slice position, followed by filtered back projection.
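The helical interpolation mentioned above (projections of adjacent layers interpolated to the target slice position before filtering and back projection) amounts to a per-angle weighted average. A minimal sketch; the function name and array sizes are illustrative:

```python
import numpy as np

def interpolate_to_slice(proj_below, z_below, proj_above, z_above, z_slice):
    """Linearly interpolate two same-angle projections, taken at the
    helical turns just below and just above the target slice, to the
    slice position z_slice.  The result is the projection data that
    the filtered back projection step then reconstructs."""
    w = (z_slice - z_below) / (z_above - z_below)
    return (1.0 - w) * proj_below + w * proj_above
```

Repeating this for every projection angle yields a complete sinogram for the slice, after which reconstruction proceeds exactly as in the translate-step case.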
The intelligent positioning function has two implementation flows. First flow: 1) the user pre-selects the examination part, i.e. the region of interest; 2) the camera 10 acquires an image; 3) the computer 11 locates the region of interest in the acquired image by image recognition and segmentation; 4) the computer 11 calculates the upper limit, lower limit and centre coordinate of the region of interest in the vertical direction; 5) if the region of interest is larger than the system imaging field of view, the computer 11 controls the vertical translation stage 4 to move the imaging object so that the lower-limit coordinate of the region of interest sits slightly above the lower-limit coordinate of the field of view, as the scan start position; if the region of interest is smaller than or equal to the field of view, the computer 11 controls the vertical translation stage 4 to move the imaging object so that the vertical centre coordinate of the region of interest coincides with the centre coordinate of the field of view, as the imaging position. This realizes the intelligent positioning function.
Second flow: 1) the camera 10 acquires an image; 2) the user manually outlines the region of interest on the acquired image; 3) the computer 11 calculates the upper limit, lower limit and centre coordinate of the region of interest in the vertical direction; 4) positioning then proceeds exactly as in step 5) of the first flow. This likewise realizes the intelligent positioning function.
The dose intelligent control function can estimate the dose required for imaging according to the region to be imaged, so as to automatically control and select the proper intensity of the light source 1 or the exposure time of the camera 10. The method comprises the following specific steps: 1) The computer 11 learns a large amount of data by a deep learning method to train a neural network mapped from a single projection map to a three-dimensional tomographic image, and determines a neural network model after training and verification. The training data and the verification data used in the deep learning are input into a single projection image, and are output into corresponding real three-dimensional tomographic images. The network used is a convolutional neural network, such as a U-net network, etc. 2) The computer 11 predicts the three-dimensional tomographic image corresponding to the region to be imaged by using the neural network model learned before by acquiring a single projection view of the region to be imaged by the camera 10. 3) The computer 11 predicts the dose required for imaging using the predicted three-dimensional tomographic image. 4) The computer 11 automatically controls the intensity of the light source 1 or the exposure time of the camera 10 based on the calculated pre-estimated dose.
The imaging process is as follows:
1) The water tank 7 in the scanning module is filled with water and the imaging modality 6 is mounted to the rotating platform 5. The manual or computer 11 controls the vertical translation stage 4 to move the imaging subject 6 to the proper imaging position through the intelligent positioning function of the intelligent control module.
2) The light source 1 of the light source module is turned on. The light emitted by the light source 1 forms a relatively uniform light beam after passing through the light beam homogenizing device 2, passes through a light passing window on the side surface of the water tank 7 after the light passing size is regulated by the adjustable slit 3, and irradiates the imaging imitation body 6. The light passes through the imitation body 6 and then passes through the light-transmitting window on the other side of the water tank 7, and sequentially passes through the iris diaphragm 8, and the lens 9 enters the camera 10. The human or computer 11 controls the intensity of the light source 1 or the exposure time of the camera 10 for subsequent projection acquisitions by means of a dose intelligent control function of the intelligent control module.
3) For translational step scanning, in the two scanning modes, the first mode is that the computer 11 controls the vertical translation stage 4 to be motionless, the rotary platform 5 drives the imaging imitation body 6 to rotate, the projection of light emitted by the light source 1 through the imaging imitation body 6 is collected once through the camera 10 and recorded as one-angle projection when the rotation platform is stopped, the projections of a plurality of angles are repeatedly collected in this way, the rotary platform 5 returns to the initial angle when the scanning starts to stop, then the vertical translation stage 4 moves downwards for a certain distance, and the collection is continuously repeated to complete the projection of other parts of the imitation body 6 in the vertical direction; the second is that the computer 11 controls the vertical translation stage 4 to be motionless, the rotating platform 5 drives the imaging imitation body 6 to rotate all the time, and the camera 10 collects projections of one angle at regular intervals until projections of a plurality of angles are collected. The rotation platform 5 is stopped at an initial angle when the scanning starts, then the vertical translation platform 4 moves downwards for a certain distance, and the acquisition is continuously repeated, so that the projection of other parts of the imitation body 6 in the vertical direction is completed. For helical scanning, the computer 11 controls the vertical translation stage 4 to move downwards while the rotation platform 5 drives the imaging imitation body 6 to rotate, projections of a plurality of angles are collected through the camera 10, and projection images are displayed in a display module in the process of collecting the projections so as to observe and monitor the state of the collected images in real time.
4) The computer 11 utilizes the plurality of angle projections acquired in the 3) to reconstruct through a reconstruction module in the intelligent control and reconstruction module, and then the reconstruction result is displayed through a display module.
An intelligent CT teaching simulation system based on optics comprises a light source module, a scanning module, an acquisition module, an intelligent control and reconstruction module and a display module.
The light source module comprises a light source 1, a beam homogenizing device 2 and an adjustable slit 3. The adjustable slit 3 comprises upper, lower, left and right plates and is used to adjust the size of the light-transmitting window, thereby setting the irradiation field. The light source 1 may be a set of LEDs of several wavelengths, a multi-wavelength LED array, or a halogen lamp combined with filters to obtain different emission wavelengths. The beam homogenizing device 2 may be a diffuse scattering sheet or a diffuse reflection sheet. The light source module may adopt either of two structures: in one, light from the light source 1 passes through the diffuse scattering sheet 2 and then continues through the adjustable slit 3; in the other, light from the light source 1 is reflected by the diffuse reflection sheet 2 and then continues through the adjustable slit 3.
The scanning module comprises a vertical translation stage 4, a rotary platform 5, an imaging phantom 6 and a water tank 7. The imaging phantom 6 is mounted on the rotary platform 5, and the rotary platform 5 is mounted on the vertical translation stage 4. The imaging phantom 6 is an imaging phantom having a certain light absorption distribution that allows light to pass through, such as a transparent colloid locally containing dye.
The two faces of the water tank 7 (the two faces through which light propagates) are of flat plate structure and are provided with light-transmitting windows.
The acquisition module comprises an iris diaphragm 8, a lens 9 and a camera 10. Light emitted by the light source module passes through the imaging phantom 6 and then through the light-transmitting window of the water tank 7, passes sequentially through the iris diaphragm 8 and the lens 9, and is finally collected by the camera 10.
The intelligent control and reconstruction module comprises a computer 11. The rotary platform 5, the vertical translation stage 4 and the camera 10 are connected to the computer 11 by electrical connection lines. Through an internal intelligent control module, the computer 11 controls the rotation of the rotary platform 5 and the movement of the vertical translation stage 4 in the scanning module, and controls image acquisition by the camera 10. Through an internal reconstruction module, it reconstructs the projection images acquired by the camera 10 into tomographic images with the corresponding reconstruction algorithm: translational step scanning uses a filtered back projection algorithm; helical scanning first linearly interpolates the projection images of the layers adjacent to the slice to be reconstructed, obtaining the projection image of that slice, and then reconstructs it with the filtered back projection algorithm.
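The reconstruction steps named above can be sketched in Python. The following is an illustrative implementation, not the patent's software: a ramp-filtered parallel-beam back projection for translational step scanning, and the linear interpolation of adjacent-layer projections used before reconstruction in helical scanning. All function names are assumptions for illustration.

```python
import numpy as np

def filtered_back_projection(sinogram, angles_deg):
    """Reconstruct a 2D slice from a parallel-beam sinogram.

    sinogram: (n_angles, n_detectors) array of projections.
    angles_deg: projection angles in degrees.
    Illustrative ramp-filtered back projection, as used for
    translational step scanning.
    """
    n_angles, n_det = sinogram.shape
    # Ramp (Ram-Lak) filter applied in the frequency domain.
    ramp = np.abs(np.fft.fftfreq(n_det))
    filtered = np.real(np.fft.ifft(np.fft.fft(sinogram, axis=1) * ramp, axis=1))

    # Back-project each filtered projection over the image grid.
    recon = np.zeros((n_det, n_det))
    center = n_det // 2
    xs = np.arange(n_det) - center
    X, Y = np.meshgrid(xs, xs)
    for proj, theta in zip(filtered, np.deg2rad(angles_deg)):
        # Detector coordinate hit by each pixel at this angle.
        t = X * np.cos(theta) + Y * np.sin(theta) + center
        recon += np.interp(t.ravel(), np.arange(n_det), proj).reshape(n_det, n_det)
    return recon * np.pi / (2 * n_angles)

def helical_slice_projection(proj_below, proj_above, frac):
    """Linearly interpolate the projections of the two layers adjacent
    to the slice to be reconstructed (frac in [0, 1]), per the helical
    scheme; the result is then fed to filtered_back_projection."""
    return (1.0 - frac) * proj_below + frac * proj_above
```

A reconstruction of real data would use many angles over 180°; the sketch only shows the structure of the two algorithms.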
The intelligent control module of the computer 11 also provides intelligent positioning and intelligent dose control functions.
The intelligent positioning function has two implementation flows. In the first flow: 1) the user sets the examination part, i.e. the region of interest, in advance; 2) the camera 10 acquires an image; 3) the computer 11 locates the region of interest in the acquired image by image recognition and segmentation; 4) the computer 11 calculates the upper limit, lower limit and center coordinate of the region of interest in the vertical direction; 5) if the region of interest is larger than the system imaging field of view, the computer 11 controls the vertical translation stage 4 to move the imaged object so that the lower-limit coordinate of the region of interest lies slightly above the lower-limit coordinate of the imaging field of view, which serves as the imaging scan start position; if the region of interest is smaller than or equal to the imaging field of view, the computer 11 controls the vertical translation stage 4 to move the imaged object so that the vertical center coordinate of the region of interest coincides with the center coordinate of the imaging field of view, which serves as the imaging scan position. This realizes the intelligent positioning function. The second flow differs only in how the region of interest is obtained: 1) the camera 10 acquires an image; 2) the user manually outlines the region of interest on the acquired image; steps 3)–5) then proceed as in the first flow, again realizing the intelligent positioning function.
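The positioning decision shared by both flows can be sketched as follows. This is a hypothetical Python illustration with assumed conventions (vertical coordinates increase upward, shift returned in the same units), not code from the system itself.

```python
def scan_start_position(roi_low, roi_high, fov_low, fov_high, margin=1.0):
    """Decide how far the vertical translation stage should move the
    imaged object.

    roi_low/roi_high: lower/upper vertical coordinates of the region
    of interest; fov_low/fov_high: the system imaging field of view;
    margin: how far above the field-of-view lower limit the ROI lower
    limit should start (an assumed parameter).
    Returns the required vertical shift of the imaged object.
    """
    roi_size = roi_high - roi_low
    fov_size = fov_high - fov_low
    if roi_size > fov_size:
        # ROI larger than the field of view: place the ROI lower limit
        # slightly above the field-of-view lower limit as the scan start.
        return (fov_low + margin) - roi_low
    # ROI fits: make its vertical center coincide with the
    # field-of-view center as the imaging scan position.
    roi_center = 0.5 * (roi_low + roi_high)
    fov_center = 0.5 * (fov_low + fov_high)
    return fov_center - roi_center
```

For example, an ROI spanning 0–10 in a 0–20 field of view is centered (shift +5), while an ROI spanning 0–30 is shifted so its lower limit sits just above the field-of-view lower limit.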
The intelligent dose control function estimates the dose required for imaging from the region to be imaged, and on that basis automatically selects a suitable intensity for the light source 1 or exposure time for the camera 10. The specific steps are as follows: 1) The computer 11 trains, by deep learning on a large body of data, a neural network that maps a single projection image to a three-dimensional tomographic image; the model is fixed after training and validation. Each training and validation sample has a single projection image as input and the corresponding real three-dimensional tomographic image as output. The network is a convolutional neural network, such as a U-Net. 2) The camera 10 acquires a single projection image of the region to be imaged, and the computer 11 predicts the corresponding three-dimensional tomographic image with the trained model. 3) The computer 11 uses the predicted tomographic image to estimate the dose required for imaging, i.e. the light-source intensity at a fixed exposure time, or the camera exposure time at a fixed light-source intensity. The calculation is as follows. Let the intensity I2 incident on the camera 10 and the camera response value I3 be related by I3 = I2 * D * E, where E is the exposure time and D is the conversion coefficient between I2 and I3 per unit exposure time, a constant that can be measured experimentally. The light-source intensity I1 and the intensity transmitted through the phantom 6, i.e. the intensity I2 incident on the camera 10, are related by I2 = I1 * (M * Ci), where M is the attenuation of the light before it reaches the phantom 6 (a constant obtained by measurement) and Ci is the projection attenuation factor at each angle after the light passes through the phantom 6, calculated from the estimated three-dimensional tomographic image (i = 1, 2, …, n, with n the number of projection angles). Hence the light-source intensity I1 and the camera response I3 are related by I1 = I2 / (M * Ci) = I3 / (D * E) / (M * Ci) = I3 / (D * E * M * Ci). If better imaging quality is desired, then at a fixed exposure time E the light-source intensity I1 should stay below I3max / (D * E * M * Cmax), where I3max is the maximum value of I3 and Cmax is the maximum of the Ci, i.e. the path along which the incident light is attenuated least by the phantom 6; equivalently, at a fixed light-source intensity I1, the exposure time E should stay below I3max / (D * I1 * M * Cmax). If a lower dose is desired, then at a fixed exposure time E the intensity I1 should stay below I3max / (D * E * M * Cmax) but slightly above I3min / (D * E * M * Cmin), where I3min is the image noise value of the camera at exposure time E with no light input and Cmin is the minimum of the Ci, i.e. the path along which the incident light is attenuated most by the phantom 6; equivalently, at a fixed intensity I1, the exposure time E should stay below I3max / (D * I1 * M * Cmax) but slightly above I3min / (D * I1 * M * Cmin). 4) The computer 11 automatically sets the intensity of the light source 1 or the exposure time of the camera 10 according to the estimated dose.
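The dose bounds derived above follow directly from I1 = I3 / (D * E * M * Ci). The sketch below is an illustrative Python rendering of those formulas; the function names and argument conventions are assumptions, not part of the system.

```python
def light_source_bounds(I3max, I3min, D, E, M, C, low_dose=False):
    """Bounds on light-source intensity I1 at a fixed exposure time E.

    I3max: maximum camera response; I3min: camera noise value with no
    illumination at exposure time E; D: camera conversion coefficient;
    M: attenuation before the phantom; C: projection attenuation
    factors Ci over all angles.
    Returns (lower, upper): I1 must stay below `upper` to avoid
    saturating the camera along the least-attenuating path; in
    low-dose mode it should also stay slightly above `lower` so the
    weakest signal clears the noise floor.
    """
    Cmax = max(C)  # least-attenuating path: sets the saturation limit
    Cmin = min(C)  # most-attenuating path: sets the noise-floor limit
    upper = I3max / (D * E * M * Cmax)
    lower = I3min / (D * E * M * Cmin) if low_dose else 0.0
    return lower, upper

def exposure_time_bounds(I3max, I3min, D, I1, M, C, low_dose=False):
    """The same bounds solved for exposure time E at fixed intensity I1."""
    Cmax, Cmin = max(C), min(C)
    upper = I3max / (D * I1 * M * Cmax)
    lower = I3min / (D * I1 * M * Cmin) if low_dose else 0.0
    return lower, upper
```

With, say, I3max = 100, I3min = 1, D = 2, E = 1, M = 0.5 and Ci ranging over [0.1, 0.5], the low-dose window for I1 is (10, 200).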
The display module displays the projection image of each angle, the sinogram and the reconstructed images (for example cross-section display, maximum intensity projection display and three-dimensional display), thereby simulating the projection, sinogram and reconstructed-image displays of CT.
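The two simplest display modes listed above can be illustrated with numpy. This sketch of maximum intensity projection and cross-section extraction is an assumption about how such displays are computed, not the module's actual code.

```python
import numpy as np

def max_intensity_projection(volume, axis=0):
    """Collapse a reconstructed 3D volume along one axis, keeping the
    brightest voxel along each ray -- the maximum intensity projection
    display mode."""
    return volume.max(axis=axis)

def cross_section(volume, index, axis=0):
    """Extract a single tomographic slice from the reconstructed
    volume -- the cross-section display mode."""
    return np.take(volume, index, axis=axis)
```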
The imaging process is as follows:
1) The water tank 7 in the scanning module is filled with water and the imaging phantom 6 is mounted on the rotary platform 5. An operator, or the computer 11 through the intelligent positioning function of the intelligent control module, controls the vertical translation stage 4 to move the imaging phantom 6 to a suitable imaging position.
2) The light source 1 of the light source module is turned on. Light emitted by the light source 1 forms a relatively uniform beam after passing through the beam homogenizing device 2, has its aperture set by the adjustable slit 3, passes through the light-transmitting window on one side of the water tank 7, and irradiates the imaging phantom 6. After passing through the phantom 6, the light exits through the light-transmitting window on the other side of the water tank 7, passes sequentially through the iris diaphragm 8 and the lens 9, and enters the camera 10. An operator, or the computer 11 through the intelligent dose control function of the intelligent control module, sets the intensity of the light source 1 or the exposure time of the camera 10 for the subsequent projection acquisitions.
3) Translational step scanning has two modes. In the first, the computer 11 keeps the vertical translation stage 4 stationary while the rotary platform 5 rotates the imaging phantom 6; each time the platform stops at an angle, the camera 10 acquires one projection of the light from the light source 1 transmitted through the phantom 6, recorded as the projection for that angle, and this is repeated until projections at all required angles have been collected. The rotary platform 5 then returns to its initial angle and stops, the vertical translation stage 4 moves down a set distance, and the acquisition is repeated to cover the remaining vertical sections of the phantom 6. In the second mode, the computer 11 keeps the vertical translation stage 4 stationary while the rotary platform 5 rotates continuously, and the camera 10 acquires one projection at fixed time intervals until projections at all required angles have been collected; the rotary platform 5 then stops at its initial angle, the vertical translation stage 4 moves down a set distance, and the acquisition is repeated to cover the remaining vertical sections of the phantom 6. For helical scanning, the computer 11 moves the vertical translation stage 4 downwards while the rotary platform 5 rotates the imaging phantom 6, and projections at multiple angles are acquired by the camera 10. Throughout the acquisition, the projection images are shown in the display module so that the state of the acquired images can be observed and monitored in real time.
4) The computer 11 reconstructs from the multi-angle projections acquired in step 3) using the reconstruction module within the intelligent control and reconstruction module, and the reconstruction result is shown by the display module.
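The first translational step-scan mode above can be sketched as a control loop. The hardware handles (`stage`, `rotator`, `camera`) and their methods are hypothetical stand-ins for the vertical translation stage 4, rotary platform 5 and camera 10; this is an illustration of the acquisition sequence, not the system's control software.

```python
def step_scan(stage, rotator, camera, n_angles, n_layers, dz):
    """First translational step-scan mode: rotate, stop, expose,
    repeat over all angles; then return the rotator to its initial
    angle, step the vertical stage down by dz, and redo for the
    next layer. Returns projections indexed as [layer][angle]."""
    projections = []
    for layer in range(n_layers):
        layer_projs = []
        for k in range(n_angles):
            # Stop at each angle before exposing, as in the first mode.
            rotator.move_to(360.0 * k / n_angles)
            layer_projs.append(camera.acquire())
        rotator.move_to(0.0)  # return to the initial angle and stop
        stage.move_down(dz)   # next vertical section of the phantom
        projections.append(layer_projs)
    return projections
```

The second mode would replace the stop-and-expose inner loop with continuous rotation and timer-driven exposures; helical scanning would move the stage and rotator simultaneously.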
The foregoing is merely illustrative of the present invention, and the present invention is not limited thereto, and any changes or substitutions easily contemplated by those skilled in the art within the scope of the present invention should be included in the scope of the present invention.

Claims (4)

1. An intelligent CT teaching simulation system based on optics is characterized by comprising a light source module, a scanning module, an acquisition module, an intelligent control and reconstruction module and a display module;
the light source module comprises a light source, a light beam homogenizing device and an adjustable slit, wherein the adjustable slit comprises an upper flat plate, a lower flat plate, a left flat plate and a right flat plate and is used for adjusting the size of a light-transmitting window so as to adjust the irradiation view; the light source is a switchable wavelength light source and is used for simulating dual-energy CT or simulating different tube voltage regulation by using different wavelengths, and the light source module adopts one of the following two structures: one is that the light source continues to propagate through the adjustable slit after passing through the diffuse scattering sheet; one is that the light source continues to propagate through the adjustable slit after being reflected by the diffuse reflection sheet;
the scanning module comprises a vertical translation stage, a rotary platform, an imaging imitation body and a water tank, wherein the imaging imitation body is mounted on the rotary platform, and the rotary platform is mounted on the vertical translation stage and is used for rotating the imaging imitation body for multi-angle projection acquisition; the vertical translation stage moves the rotary platform up and down; when the vertical translation stage is stationary and the rotary platform rotates, translational step scanning of CT is simulated, and when the rotary platform rotates while the vertical translation stage moves vertically downwards, helical scanning of CT is simulated; the imaging imitation body is an imaging phantom having a certain light absorption distribution and allowing light to pass through;
the two surfaces of the light transmitted through the water tank are of a flat plate structure and are provided with light-transmitting windows;
the acquisition module comprises an iris diaphragm, a lens and a camera, wherein light emitted by the light source module passes through the imaging imitation body and then passes through the light-transmitting window of the water tank, sequentially passes through the iris diaphragm and the lens, and is finally acquired by the camera;
the intelligent control and reconstruction module comprises a computer, wherein the rotating platform, the vertical translation platform and the camera are connected with the computer through an electrical connection line, the computer controls the rotation of the rotating platform and the movement of the vertical translation platform of the scanning module by utilizing the internal intelligent control module on one hand, and controls the image acquisition of the camera, and on the other hand, the projection image acquired by the camera is reconstructed into a tomographic image by utilizing a corresponding reconstruction algorithm through the internal reconstruction module after the acquisition is completed;
the intelligent control module of the computer is also used for intelligent positioning and intelligent dosage control;
the display module is used for displaying projection images, sinograms and reconstruction images of each angle so as to simulate the projection images, sinograms and reconstruction images of the CT images;
the intelligent dosage control function predicts the dosage required by imaging according to the region to be imaged, thereby automatically controlling and selecting the proper intensity of the light source or the exposure time of the camera, and the detailed steps are as follows:
1) The computer learns a large amount of data through a deep learning method to train a neural network mapped from a single projection image to a three-dimensional tomographic image, a neural network model is determined after training and verification, the training data and the verification data used in the deep learning are input into the single projection image, and the training data and the verification data are output into corresponding real three-dimensional tomographic images;
2) Acquiring a single projection image of the region to be imaged by a camera, and predicting a three-dimensional tomographic image corresponding to the region to be imaged by a computer by using the neural network model learned before;
3) The computer predicts the dose required for imaging using the predicted three-dimensional tomographic image, namely the light-source intensity at a fixed exposure time or the camera exposure time at a fixed light-source intensity; the intensity I2 incident on the camera and the camera response value I3 are related by I3 = I2 * D * E, where E is the exposure time and D is the conversion coefficient between I2 and I3 per unit exposure time; the light-source intensity I1 and the intensity transmitted through the imitation body, i.e. the intensity I2 incident on the camera, are related by I2 = I1 * (M * Ci), where M is the attenuation of the light-source intensity before incidence on the imitation body and Ci is the projection attenuation of the light intensity at each angle after passing through the imitation body, calculated from the estimated three-dimensional tomographic image, i = 1, 2, …, n, n being the number of projection angles; the light-source intensity I1 and the camera response value I3 are therefore related by I1 = I2 / (M * Ci) = I3 / (D * E) / (M * Ci) = I3 / (D * E * M * Ci); in the first case, when the aim is good imaging quality, at a fixed exposure time E the maximum of the light-source intensity I1 should be less than I3max / (D * E * M * Cmax), where I3max is the maximum value of I3 and Cmax is the maximum of the Ci, i.e. the Ci for which the attenuation of the incident light through the imitation body is minimal, or, at a fixed light-source intensity I1, the exposure time E should be less than I3max / (D * I1 * M * Cmax); in the second case, when the aim is a lower dose, at exposure time E the maximum of the light-source intensity I1 should be less than I3max / (D * E * M * Cmax) and greater than I3min / (D * E * M * Cmin), where I3min is the image noise value when the exposure time is E and the camera has no illumination input and Cmin is the Ci corresponding to the maximum attenuation of the incident light through the imitation body; or, at a fixed light-source intensity I1, the exposure time E should be less than I3max / (D * I1 * M * Cmax) and greater than I3min / (D * I1 * M * Cmin);
4) The intensity of the light source or the exposure time of the camera is automatically controlled by the computer according to the calculated estimated dose.
2. The intelligent optical-based CT teaching simulation system according to claim 1, wherein the intelligent positioning function has two implementation flows: in the first flow, 1) the user sets the examination part, namely the region of interest, in advance; 2) the camera acquires an image; 3) the region of interest is located in the acquired image by image recognition and segmentation; 4) the upper limit, lower limit and center coordinate of the region of interest in the vertical direction are calculated; 5) if the region of interest is larger than the system imaging field of view, the vertical translation stage is controlled to move the imaged object so that the lower-limit coordinate of the region of interest lies slightly above the lower-limit coordinate of the imaging field of view, serving as the imaging scan start position; if the region of interest is smaller than or equal to the system imaging field of view, the vertical translation stage is controlled to move the imaged object so that the vertical center coordinate of the region of interest coincides with the center coordinate of the imaging field of view, serving as the imaging scan position, whereby the intelligent positioning function is realized; in the second flow, 1) the camera acquires an image; 2) the user manually outlines the region of interest on the acquired image; 3) the upper limit, lower limit and center coordinate of the region of interest in the vertical direction are calculated; 4) if the region of interest is larger than the system imaging field of view, the vertical translation stage is controlled to move the imaged object so that the lower-limit coordinate of the region of interest lies slightly above the lower-limit coordinate of the imaging field of view, serving as the imaging scan start position; if the region of interest is smaller than or equal to the system imaging field of view, the vertical translation stage is controlled to move the imaged object so that the vertical center coordinate of the region of interest coincides with the center coordinate of the imaging field of view, serving as the imaging scan position, whereby the intelligent positioning function is realized.
3. The intelligent optical-based CT teaching simulation system according to claim 1, wherein the neural network used is a convolutional neural network.
4. The intelligent optical-based CT teaching simulation system according to claim 1, wherein the imaging process is as follows:
1) The water tank in the scanning module is filled with water, the imaging imitation body is mounted on the rotating platform, and a human operator or a computer controls the vertical translation platform to move the imaging imitation body to a proper imaging position through the intelligent positioning function of the intelligent control module;
2) The light source of the light source module is turned on, light emitted by the light source forms a relatively uniform light beam after passing through the light beam homogenizing device, the light beam passes through a light passing window on the side surface of the water tank after passing through the adjustable slit, irradiates the imaging imitation body, the light passes through the light passing window on the other side of the water tank after passing through the imitation body, and enters the camera through the iris diaphragm and the lens in sequence, and the intensity of the light source or the exposure time of the camera is controlled by a manual or computer through the dose intelligent control function of the intelligent control module so as to be used for subsequent projection acquisition;
3) For translational step scanning there are two scanning modes: in the first, the computer keeps the vertical translation stage stationary while the rotary platform rotates the imaging imitation body; each time the imaging imitation body has rotated by a certain angle, the camera acquires one projection of the light emitted by the light source through the imaging imitation body, recorded as the projection for that angle, and this is repeated until projections at multiple angles have been collected; the rotary platform then returns to the initial angle at which scanning began and stops, the vertical translation stage moves down a certain distance, and the acquisition is repeated to complete the projections of the other parts of the imitation body in the vertical direction; in the second, the computer keeps the vertical translation stage stationary while the rotary platform rotates the imaging imitation body continuously, and the camera acquires a projection of one angle at fixed time intervals until projections at multiple angles have been collected; the rotary platform then returns to the initial angle at which scanning began and stops, the vertical translation stage moves down a certain distance, and the acquisition is repeated to complete the projections of the other parts of the imitation body in the vertical direction; for helical scanning, the computer moves the vertical translation stage downwards while the rotary platform rotates the imaging imitation body, projections at multiple angles are acquired by the camera, and during the acquisition the projection images are shown in the display module so that the state of the acquired images can be observed and monitored in real time;
4) The computer utilizes the plurality of angle projections acquired in the 3) to reconstruct through a reconstruction module in the intelligent control and reconstruction module, and then the reconstruction result is displayed through a display module.
CN202210845658.7A 2022-07-19 2022-07-19 Intelligent CT teaching simulation system based on optics Active CN115019589B (en)

Priority Applications (1)

Application Number Priority Date Filing Date Title
CN202210845658.7A CN115019589B (en) 2022-07-19 2022-07-19 Intelligent CT teaching simulation system based on optics


Publications (2)

Publication Number Publication Date
CN115019589A CN115019589A (en) 2022-09-06
CN115019589B true CN115019589B (en) 2023-11-28

Family

ID=83080164

Family Applications (1)

Application Number Title Priority Date Filing Date
CN202210845658.7A Active CN115019589B (en) 2022-07-19 2022-07-19 Intelligent CT teaching simulation system based on optics

Country Status (1)

Country Link
CN (1) CN115019589B (en)

Citations (12)

* Cited by examiner, † Cited by third party
Publication number Priority date Publication date Assignee Title
CN101035466A (en) * 2004-10-05 2007-09-12 皇家飞利浦电子股份有限公司 Method and system for the planning of imaging parameters
CN102551671A (en) * 2011-12-23 2012-07-11 天津大学 Photon counting-type dynamic diffusion fluorescence tomography method and device
CN102599887A (en) * 2011-12-22 2012-07-25 中国科学院自动化研究所 Optical projection tomography method based on helical scanning track
CN102661919A (en) * 2012-05-22 2012-09-12 江西科技师范大学 Microscopical hyperspectral chromatography three-dimensional imaging device
CN102743159A (en) * 2012-07-26 2012-10-24 中国科学院自动化研究所 Optical projection tomographic imaging system
CN103512905A (en) * 2013-04-16 2014-01-15 西北工业大学 Method used for rapid determination of exposure parameters of digital radiography (DR)/computed tomography (CT) imaging system
CN103622673A (en) * 2013-11-11 2014-03-12 西安电子科技大学 Autofluorescent fault molecular imaging equipment compatible with magnetic resonance
CN104224127A (en) * 2014-09-17 2014-12-24 西安电子科技大学 Optical projection tomography device and method based on camera array
CN105030266A (en) * 2014-04-21 2015-11-11 株式会社东芝 X-ray computer tomographic apparatus and scan plan setting supporting apparatus
CN107095689A (en) * 2010-12-08 2017-08-29 拜耳医药保健有限责任公司 Estimate the method and system of patient radiation dose in medical image scan
CN112450955A (en) * 2020-11-27 2021-03-09 上海优医基医疗影像设备有限公司 CT imaging automatic dose adjusting method, CT imaging method and system
CN112738391A (en) * 2020-12-23 2021-04-30 上海奕瑞光电子科技股份有限公司 Automatic exposure control method and system

Family Cites Families (1)

* Cited by examiner, † Cited by third party
Publication number Priority date Publication date Assignee Title
US20220035961A1 (en) * 2020-08-03 2022-02-03 Ut-Battelle, Llc System and method for artifact reduction of computed tomography reconstruction leveraging artificial intelligence and a priori known model for the object of interest

Also Published As

Publication number Publication date
CN115019589A (en) 2022-09-06

Similar Documents

Publication Publication Date Title
Rosenthal et al. Fast semi-analytical model-based acoustic inversion for quantitative optoacoustic tomography
JP2022110067A (en) Methods and systems for patient scan setup
US9008397B2 (en) Tomography system based on Cerenkov luminescence
CN108601572B (en) X-ray imaging system and method for constructing two-dimensional X-ray image
CN104055489B (en) A kind of blood vessel imaging device
CN102697514A (en) Selection of optimal viewing angle to optimize anatomy visibility and patient skin dose
CN106345072A (en) Real-time detecting method and system for multi-leaf collimator blade position of linear accelerator
CN106205268B (en) X-ray analog camera system and method
JP2003532873A (en) Optical computed tomography in opaque media
KR20240013724A (en) Artificial Intelligence Training Using a Multipulse X-ray Source Moving Tomosynthesis Imaging System
CN105342597B (en) A kind of quantitative laser blood flow detection method
CN109924949A (en) A kind of near infrared spectrum tomography rebuilding method based on convolutional neural networks
CN115019589B (en) Intelligent CT teaching simulation system based on optics
CN114145761A (en) Fluorine bone disease medical imaging detection system and use method thereof
CN112435554A (en) CT teaching simulation system and control method thereof
Chacko et al. Three-dimensional reconstruction of transillumination tomographic images of human breast phantoms by red and infrared lasers
CN114241074B (en) CBCT image reconstruction method for deep learning and electronic noise simulation
US20230260172A1 (en) Deep learning for sliding window phase retrieval
CN109567838A (en) A kind of X ray absorption spectrometry lesion detector
JP2023001051A (en) System and method for computed tomography image reconstruction
JP2009500136A (en) X-ray or infrared imaging method and imaging apparatus
CN206726528U (en) A kind of X ray simulates camera system
CN202569211U (en) Stereotactic radiotherapy device
US20210110597A1 (en) Systems and methods for visualizing anatomical structures
CN112666194A (en) Virtual digital DR image generation method and DR virtual simulation instrument

Legal Events

Date Code Title Description
PB01 Publication
SE01 Entry into force of request for substantive examination
GR01 Patent grant
TR01 Transfer of patent right

Effective date of registration: 20240228

Address after: 533, No.7 Liulin Road East, Sujiatuo Town, Haidian District, Beijing, 100194

Patentee after: Jiamaohong (Beijing) Medical Technology Development Co.,Ltd.

Country or region after: China

Address before: 300070 No. 22 Meteorological Observatory Road, Heping District, Tianjin

Patentee before: Tianjin Medical University

Country or region before: China