WO2023283866A1 - Construction method of a respiratory motion model and markerless respiratory motion prediction method - Google Patents
- Publication number
- WO2023283866A1 (PCT/CN2021/106415)
- Authority
- WO
- WIPO (PCT)
Classifications
- G06T17/00 — Three dimensional [3D] modelling, e.g. data description of 3D objects
- G06T3/4007 — Scaling of whole images or parts thereof based on interpolation, e.g. bilinear interpolation
- G06T7/38 — Registration of image sequences
- G06T7/55 — Depth or shape recovery from multiple images
- G06T7/90 — Determination of colour characteristics
- G06T2207/10016 — Video; image sequence
- G06T2207/10024 — Color image
Definitions
- the invention belongs to the field of medical technology, and in particular relates to a method for constructing a respiratory motion model, a markerless respiratory motion prediction method, and a respiratory motion simulation device.
- existing abdominal surface tracking technology mainly relies on markers placed on the surface of the human chest and abdomen: a number of markers are fixed on the patient's abdominal surface, and the displacement of these markers is then tracked in real time by an optical tracker.
- for example, the CyberKnife system uses three markers to record the movement of the skin surface, and its tracker (the Synchrony synchronous breathing tracking system) derives the breathing rhythm, i.e. the abdominal surface information, from them.
- the technical problem solved by the invention is: how to construct a breathing motion model that can characterize body surface motion information to a greater extent.
- a construction method of a respiratory motion model, comprising:
- images corresponding to the head position frame and the end position frame of at least one motion time zone are generated, wherein each motion time zone does not exceed one half of the respiratory motion cycle, and the motion state corresponding to each motion time zone is either an exhalation state or an inhalation state;
- a respiratory motion model is constructed according to the position offset and relative displacement corresponding to each motion time zone.
- the number of motion time zones is at least two, and the motion time zones together constitute a complete respiratory motion cycle.
- the method for acquiring body surface motion sequence images of the object to be predicted while it breathes over a period of time includes:
- An overall respiratory motion frequency curve is generated according to the motion data corresponding to each marker point, and the overall respiratory motion frequency curve and the continuous frame RGB image set together constitute body surface motion sequence image data.
- the period of time includes several breathing motion cycles
- the method for generating images corresponding to the head position frame and the end position frame of at least one motion time zone according to the body surface motion sequence image data includes:
- a B-spline-based free deformation method is used to perform elastic registration processing on the images respectively corresponding to the head position frame and the end position frame of each of the moving time zones to obtain the position offset.
- the interpolation calculation method is any one of linear interpolation, cubic spline interpolation and cubic polynomial interpolation.
- the present application also discloses a markerless respiratory motion prediction method, the prediction method comprising:
- the initial position image is input into the respiratory motion model obtained according to the above construction method, and the respiratory motion model outputs body surface position images at different moments in real time.
- the prediction method also includes a self-correction step:
- the respiratory motion model outputs the body surface corrected position corresponding to each moment according to the adjusted position frame index.
- the application also discloses a breathing motion simulation device, which includes:
- the driving assembly is located on one side of the elastic membrane and is used to drive the elastic membrane to perform elastic reciprocating motion.
- the invention discloses a construction method of a respiratory motion model and a markerless respiratory motion prediction method, which have the following technical effects compared with existing methods:
- the respiratory motion model, constructed from historical motion sequence image data of the object to be predicted, can reflect the overall motion trend; the object need not be marked during actual prediction, so the whole body surface can be predicted and the surface motion information is represented to the greatest extent.
- Fig. 1 is a flowchart of the construction method of the respiratory motion model of Embodiment 1 of the present invention;
- Fig. 2 is a waveform diagram of the depth values of 11 marker points in Embodiment 1 of the present invention;
- Fig. 3 is a schematic diagram of the division of motion time zones within a respiratory motion cycle according to Embodiment 1 of the present invention;
- Fig. 4 is a schematic diagram comparing the effect before and after the self-correction step of Embodiment 2 of the present invention;
- Fig. 5 is a schematic structural diagram of a respiratory motion simulation device according to Embodiment 3 of the present invention;
- Fig. 6 is a schematic diagram of image and motion data acquisition by the respiratory motion simulation device according to Embodiment 3 of the present invention;
- Fig. 7 is a visualized schematic diagram of the respiratory motion simulation device of Embodiment 3 of the present invention under different breathing states;
- Fig. 8 is a schematic diagram of the 11 marker points set on the respiratory motion simulation device of Embodiment 3 of the present invention;
- Fig. 9 is a schematic diagram of the motion errors obtained under three different interpolation methods according to an embodiment of the present invention;
- Fig. 10 is a schematic diagram of the trajectory prediction of each marker point according to an embodiment of the present invention;
- Fig. 11 is a partially enlarged view of the trajectory prediction of one of the marker points in Fig. 10;
- Fig. 12 is a schematic diagram of computer equipment according to Embodiment 5 of the present invention.
- in existing respiratory motion prediction methods based on marker tracking technology, the limited number of marker points makes the representation of the respiratory signal insufficient.
- the provided method first monitors the motion image data of the object to be predicted over a period of time, then statistically derives from that data image data that can represent the overall motion trend in each motion time zone; image registration and interpolation are then used to calculate the relative displacement corresponding to each position frame in the motion time zone, a respiratory motion model is constructed from these relative displacements, and the model is finally used to predict subsequent respiratory motion.
- the respiratory motion model constructed by this statistics-based method can reflect the overall motion trend; the object need not be marked during actual prediction, so the whole body surface can be predicted and the surface motion information represented to the greatest extent.
- the first embodiment discloses a construction method of a respiratory motion model. As shown in Figure 1, the construction method includes the following steps:
- Step S10: obtain body surface motion sequence image data of the object to be predicted while it breathes over a period of time;
- Step S20: generate images corresponding to the head position frame and the end position frame of at least one motion time zone according to the body surface motion sequence image data, wherein each motion time zone does not exceed one half of the respiratory motion cycle, and the motion state corresponding to each motion time zone is either an exhalation state or an inhalation state;
- Step S30: perform elastic registration on the images corresponding to the head position frame and the end position frame of each motion time zone to obtain the position offset, and use interpolation on the position offset to obtain the relative displacement corresponding to the other position frames in the motion time zone;
- Step S40: construct a respiratory motion model according to the position offset and relative displacement corresponding to each motion time zone.
- in step S10, in order to statistically obtain the respiratory motion law of the object to be predicted, several marker points need to be set on the object so that their depth data can be tracked. Specifically, step S10 includes the following steps:
- Step S101: set several marker points on the body surface of the object to be predicted, the marker points moving along with the body surface as the object performs respiratory motion.
- in this embodiment, 11 yellow marker points are used.
- the yellow marker points are placed at different positions on the abdomen of the object to be predicted for subsequent establishment and testing of the model.
- a respiratory motion simulation device can be used as the object to be predicted; for a detailed description, see Embodiment 3 below.
- Step S102: use a depth camera to capture a continuous-frame RGB image set and the motion data corresponding to each marker point while the object to be predicted breathes over a period of time.
- the depth camera is aimed at the object to be predicted, and the depth data corresponding to each marker point over several respiratory motion cycles, together with the continuous-frame RGB images, are captured by the depth camera.
- Step S103: generate an overall respiratory motion frequency curve according to the motion data corresponding to each marker point; the overall respiratory motion frequency curve and the continuous-frame RGB image set together constitute the body surface motion sequence image data.
- in the depth value waveform diagram, the abscissa is the position frame, representing the time series, and the ordinate is the depth value of the abdomen, representing the degree of abdominal undulation.
- 11 depth value waveforms can be obtained from the motion data of the 11 marker points; the average of the 11 depth values corresponding to the same position frame is taken as the statistical depth value of that position frame, and traversing all position frames yields a statistical depth value waveform diagram, which represents the overall movement trend of the abdomen during respiratory motion and serves as the overall respiratory motion frequency curve.
- the number of marker points and the number of respiratory motion cycles can be set as desired and are not limited here.
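As an illustrative sketch (not part of the patent), the averaging in step S103 can be written in a few lines of Python; the marker count, frame count, and synthetic waveform shape below are assumptions:

```python
import numpy as np

def overall_frequency_curve(depth_traces: np.ndarray) -> np.ndarray:
    """depth_traces: shape (n_markers, n_frames), one depth waveform per
    marker point. The per-frame mean is the statistical depth value of
    each position frame, i.e. the overall respiratory motion frequency curve."""
    return depth_traces.mean(axis=0)

# Example: 11 markers tracked over 300 position frames, each with a small
# per-marker offset plus a shared sinusoidal breathing component.
frames = np.arange(300)
traces = np.stack([
    5.0 + 0.1 * k + np.sin(2 * np.pi * frames / 100)
    for k in range(11)
])
curve = overall_frequency_curve(traces)
assert curve.shape == (300,)
```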
- step S20 includes the following steps:
- Step S201: count the number of times the motion state corresponding to the motion time zone appears in the overall respiratory motion frequency curve;
- Step S202: extract, from the continuous-frame RGB image set, the RGB image corresponding to the initial position frame and the RGB image corresponding to the end position frame in each occurrence of the motion state;
- Step S203: convert the RGB images corresponding to the initial position frames and the end position frames into grayscale images;
- Step S204: calculate the average grayscale image of the initial position frames and use it as the image corresponding to the head position frame; likewise, calculate the average grayscale image of the end position frames and use it as the image corresponding to the end position frame.
- in this embodiment, there are two motion time zones, AB and BC, each spanning one half of the respiratory motion cycle.
- the state corresponding to motion time zone AB is the complete exhalation state, during which the abdominal depth value changes from its minimum to its maximum, i.e. from trough to peak; the state corresponding to motion time zone BC is the complete inhalation state.
- the two motion time zones together form a complete respiratory motion cycle.
- for motion time zone AB, the corresponding motion state on the overall respiratory motion frequency curve is the rising stage, and the number of times the rising stage appears in the entire curve is counted.
- the position frame corresponding to the trough of each rising stage is the initial position frame, and the position frame corresponding to the peak of each rising stage is the end position frame.
- for each rising stage, the RGB images corresponding to the initial position frame and the end position frame are extracted from the continuous-frame RGB image set.
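The trough-to-peak extraction and grayscale averaging of steps S201 to S204 can be sketched as below; the peak-finding logic and array shapes are illustrative assumptions, not the patent's implementation:

```python
import numpy as np

def rising_stages(curve: np.ndarray):
    """Return (trough_frame, peak_frame) index pairs, one per rising stage
    of the overall frequency curve."""
    d = np.sign(np.diff(curve))
    stages, start = [], None
    for k in range(1, len(d)):
        if d[k - 1] <= 0 and d[k] > 0:            # local trough: stage begins
            start = k
        elif d[k - 1] > 0 and d[k] <= 0 and start is not None:
            stages.append((start, k))              # local peak: stage ends
            start = None
    return stages

def mean_gray(frames: np.ndarray, indices) -> np.ndarray:
    """Average grayscale image over the selected position frames
    (frames: shape (n_frames, H, W))."""
    return frames[list(indices)].mean(axis=0)

# One full sine cycle starting at zero has exactly one complete rising
# stage, from the trough at frame 75 to the peak at frame 125.
curve = np.sin(2 * np.pi * np.arange(200) / 100)
assert rising_stages(curve) == [(75, 125)]
```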
- in step S30, since the abdomen expands and contracts elastically during respiratory motion, each point on its outer surface is displaced not only in the depth direction (z axis) but also in the plane directions (x and y axes), so elastic registration is required.
- the free-form deformation method based on B-splines is used to register the images corresponding to the head position frame and the end position frame of each motion time zone to obtain the position offset. The basic principle of the B-spline free-form deformation method is as follows:
- the coordinate position (relative to the pixel grid) after cubic B-spline elastic deformation, i.e. the position offset, can be expressed as:
- T(x, y) = Σ_{l=0}^{3} Σ_{m=0}^{3} B_l(u) B_m(v) φ_{i+l, j+m}
- where φ_{i+l, j+m} denotes the coordinate positions of the 4×4 nearest-neighbour control points, i and j are the unit index numbers of those control points, with i = ⌊x/n_x⌋ − 1 and j = ⌊y/n_y⌋ − 1, and u = x/n_x − ⌊x/n_x⌋ and v = y/n_y − ⌊y/n_y⌋ indicate the relative position within the unit control grid in x and y;
- n_x and n_y indicate the spacing of the unit control grid in the x and y directions, and B_l denotes the l-th cubic B-spline basis function.
- the control points serve as the parameters of the B-spline free-form deformation method, and the degree of non-rigid deformation that can be modeled essentially depends on the resolution of the control-point grid; Φ denotes the set of control points φ_{i,j} (0 ≤ i < n_x, 0 ≤ j < n_y), with spacing δ between adjacent control points.
- the optimal solution of Φ is obtained by searching for the spatial transformation that minimizes the similarity measure E_ssd, which can be written as:
- E_ssd = (1/N) Σ_{x,y} [I_1(x, y) − I_2(T(x, y))]²
- where N is the total number of pixels in the image registration region, and I_1 and I_2 denote the grayscale functions of the reference image and the registered image in 2D space.
- the metric E ssd is smallest when the two images are best matched.
- I 1 is the image of the frame at the head position
- I 2 is the image of the frame at the end position.
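As a rough illustration of the formulas above (not the patent's implementation), the cubic B-spline displacement and the E_ssd metric can be sketched in Python; the grid layout, spacing values, and variable names are assumptions, and a real registration would optimise the control grid against E_ssd:

```python
import numpy as np

def bspline_basis(u: float) -> np.ndarray:
    """The four cubic B-spline basis functions B_0..B_3 evaluated at u in [0, 1)."""
    return np.array([
        (1 - u) ** 3 / 6,
        (3 * u ** 3 - 6 * u ** 2 + 4) / 6,
        (-3 * u ** 3 + 3 * u ** 2 + 3 * u + 1) / 6,
        u ** 3 / 6,
    ])

def ffd_displacement(x, y, phi, nx, ny):
    """sum_l sum_m B_l(u) B_m(v) phi[i+l, j+m] over the 4x4 nearest
    control points (phi indexed [i, j, 2]). Valid for interior pixels
    (x >= nx, y >= ny); border handling is omitted for brevity."""
    i = int(x // nx) - 1
    j = int(y // ny) - 1
    u = x / nx - x // nx
    v = y / ny - y // ny
    Bu, Bv = bspline_basis(u), bspline_basis(v)
    out = np.zeros(2)
    for l in range(4):
        for m in range(4):
            out += Bu[l] * Bv[m] * phi[i + l, j + m]
    return out

def e_ssd(i1, i2):
    """E_ssd = (1/N) * sum((I1 - I2)^2) over the registration region."""
    return float(np.mean((i1.astype(float) - i2.astype(float)) ** 2))

# The basis functions sum to 1 (partition of unity), so a constant
# control grid yields that same constant displacement everywhere.
phi = np.full((10, 10, 2), 1.5)
d = ffd_displacement(3.2, 4.7, phi, 2.0, 2.0)
assert np.allclose(d, [1.5, 1.5])
```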
- an interpolation calculation method is used to obtain the relative displacement corresponding to the other position frames in the motion time zone.
- the interpolation calculation method includes any of linear interpolation, cubic spline interpolation, and cubic polynomial interpolation.
- the calculation process of these interpolation methods is well known to those skilled in the art and will not be repeated here.
- the number of other position frames in the motion time zone can be chosen freely; if a more accurate representation of the motion process is required, the relative displacements of more position frames can be obtained through interpolation.
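A minimal sketch of the linear-interpolation variant, assigning each intermediate position frame a fraction of the head-to-end position offset (array shapes and frame counts are illustrative; cubic spline or cubic polynomial interpolation could be substituted):

```python
import numpy as np

def interpolate_offsets(offset: np.ndarray, n_frames: int) -> np.ndarray:
    """offset: per-pixel displacement field of shape (H, W, 2) between the
    head and end frames of a motion time zone. Returns n_frames fields,
    linearly spaced from zero displacement (head frame) to the full
    offset (end frame)."""
    fractions = np.linspace(0.0, 1.0, n_frames)
    return fractions[:, None, None, None] * offset[None]

offset = np.ones((4, 4, 2))          # toy 4x4 displacement field
fields = interpolate_offsets(offset, 5)
assert fields.shape == (5, 4, 4, 2)
assert np.allclose(fields[2], 0.5)   # middle frame gets half the offset
```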
- in step S40, a respiratory motion model is constructed from the obtained position offset and relative displacements. The respiratory motion model characterizes the respiratory motion law within the corresponding motion time zone or respiratory motion cycle, so that the position state of the next position frame can be predicted from that of the previous position frame.
- abnormal data in the monitored body surface motion sequence image data can also be extracted as new historical motion data: a first depth threshold and a second depth threshold can be set, and data from periods whose peaks and troughs fall outside these thresholds can be identified as abnormal data; a new respiratory motion model is then constructed from the abnormal data.
- the specific construction process can refer to the description above and will not be repeated here. In this way, the best prediction model can be selected according to the actual breathing motion during prediction.
- the second embodiment provides a prediction method for markerless respiratory motion. The prediction method is: acquire one body surface image of the object to be predicted while breathing as the initial position image; input the initial position image into the respiratory motion model obtained by the construction method of the first embodiment; the respiratory motion model then outputs the position of the body surface at different moments in real time.
- the above respiratory motion model is an overall trend model based on statistical ideas and predicts the overall motion. Because respiratory motion is not ideally periodic, the actual respiratory frequency curve does not match the predicted one perfectly. One cause is position frame shift: at a certain moment during actual monitoring, the actual position frame at which the abdominal depth reaches its peak differs from the position frame at which the model predicts the peak. Two situations arise: in the first, the predicted respiratory frequency curve is offset as a whole relative to the actual curve; in the second, the actual respiratory motion period increases or decreases relative to the predicted period. To improve prediction accuracy, the prediction method of this embodiment therefore adds a self-correction step.
- the self-correction step is: use the depth camera to monitor the body surface depth data of the object to be predicted in real time while breathing; from the body surface depth data, obtain the actual position frame at which a selected position reaches a predetermined depth in the current respiratory motion cycle, and obtain from the respiratory motion model the preset position frame at which the selected position reaches that depth; use the difference between the actual position frame and the preset position frame as the frame offset value; use the frame offset value to adjust the position frame index corresponding to each moment in the next respiratory motion cycle; the respiratory motion model then outputs the corrected body surface position for each moment according to the adjusted position frame index.
- the depth value waveform output by the depth camera tracks the same position in the picture, i.e. the depth change of one fixed position.
- the selected position is the position targeted by the depth camera when it outputs the depth value waveform.
- ideally, the actual position frame and the preset position frame at which the selected position reaches the predetermined depth coincide, but because respiratory motion is not ideally periodic the two differ, so correction is required, and their difference is used as the frame offset value.
- the predetermined depth may be a peak depth or a trough depth.
- for each moment, the sum of the preset position frame and the frame offset value is used as the position frame index, and the respiratory motion model outputs the corrected body surface position according to this index, so that the corrected position is closer to the actual position at that moment, improving prediction accuracy.
- the prediction curves before and after correction are shown in Fig. 4, where F1-f1 represents the frame offset value.
- alternatively, the self-correction step is: use the depth camera to monitor the body surface depth data of the object to be predicted in real time while breathing; calculate the actual period of the current respiratory motion from the body surface depth data; use the difference between the actual period and the preset period in the respiratory motion model as the frame offset value; use the frame offset value to adjust the position frame corresponding to each moment in the next respiratory motion cycle; the respiratory motion model then outputs the corrected body surface position for each moment according to the adjusted position frame.
- the depth camera outputs the depth value waveform in real time, from which the actual period of the current respiratory motion can be calculated, the unit of the period being the number of frames. Subtracting the preset period in the respiratory motion model from the actual period yields the frame offset value, which may be positive or negative. For the next respiratory motion cycle, the sum of the preset position frame and the frame offset value at each moment is used as the position frame index, and the respiratory motion model outputs the corrected body surface position image for each moment according to this index, so that the corrected position is closer to the actual position at that moment, improving prediction accuracy.
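Both self-correction variants reduce to simple frame arithmetic; a hypothetical sketch, with all frame numbers chosen purely for illustration:

```python
# Variant 1: offset = actual peak frame minus the model's preset peak frame.
def frame_offset_from_peak(actual_peak_frame: int, preset_peak_frame: int) -> int:
    return actual_peak_frame - preset_peak_frame

# Variant 2: offset = actual cycle length minus the preset cycle length,
# both measured in frames; the result may be positive or negative.
def frame_offset_from_period(actual_period: int, preset_period: int) -> int:
    return actual_period - preset_period

def corrected_index(preset_frame: int, offset: int) -> int:
    """Position frame index used by the model in the next cycle."""
    return preset_frame + offset

# The peak arrived 2 frames late, so every index in the next cycle shifts by +2.
assert corrected_index(40, frame_offset_from_peak(52, 50)) == 42
# The cycle shortened by 2 frames, so every index shifts by -2.
assert corrected_index(40, frame_offset_from_period(98, 100)) == 38
```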
- Embodiment 3 also discloses a breathing motion simulation device.
- the respiratory motion simulation device includes a support member 100, an elastic membrane 200, and a driving assembly.
- the support member 100 includes two oppositely arranged support arms, which may be made of materials such as hard boxes; the elastic membrane 200 may be a latex film or the like, and its two ends are fixedly connected to the two support arms so that the elastic membrane 200 is flat and elastically taut.
- the driving assembly is located on one side of the elastic membrane 200 and drives the membrane in elastic reciprocating motion. For example, when the elastic membrane 200 is placed horizontally, the entire driving assembly is located below it.
- the driving assembly comprises a motor 300, a lifting platform 400, and a sponge 500; the output shaft of the motor 300 is connected with the lifting platform 400, the sponge 500 is arranged on the lifting platform 400, and the sponge 500 is used to press against the elastic membrane 200.
- the breathing simulation method is as follows: motion control is carried out on a programmable single-chip microcomputer platform; the rotation direction and instantaneous speed of the motor are controlled according to imported waveform data, and the driven motor reciprocates up and down to reconstruct the human breathing phase, i.e. to simulate, from a physiological point of view, the rising and falling trend of the abdomen during breathing.
- when the driving device moves, it exerts force on the sponge and latex film above it, deforming them and thus visually simulating the shape change of the human abdomen during breathing.
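A waveform of this kind could, for instance, be generated as follows before being imported into the motor controller; the sin² shape, period, and sample rate are assumptions for illustration, and real recorded human breathing traces could be substituted:

```python
import numpy as np

def breathing_waveform(period_s: float, rate_hz: float, cycles: int) -> np.ndarray:
    """A breathing-like displacement waveform in [0, 1]: one smooth rise
    and fall per cycle, sampled at rate_hz."""
    n = int(round(cycles * period_s * rate_hz))   # total number of samples
    t = np.arange(n) / rate_hz
    return np.sin(np.pi * t / period_s) ** 2

# Three 4-second cycles sampled at 30 Hz -> 360 samples.
wave = breathing_waveform(period_s=4.0, rate_hz=30.0, cycles=3)
assert wave.min() >= 0.0 and wave.max() <= 1.0
```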
- a commercial RGB-D camera 600 is used to acquire abdominal surface images, and the camera is placed perpendicular to the abdominal surface.
- the respiratory motion simulation device simulates three breathing states of the human body (inhalation, exhalation, and the transition between them), shown from top to bottom in Fig. 7. It can be observed intuitively from Fig. 7 that the changes in the simulated abdominal volume and area in the three states conform to biological principles.
- Fig. 2 shows that the change in the depth values of the marker points conforms to the law of the human respiratory motion cycle.
- the upper left figure shows the visualization of a real person in the inhalation state;
- the upper right figure shows the visualization of the respiratory motion simulation device in the inhalation state;
- the lower left figure shows the visualization of a real person in the exhalation state;
- the lower right figure shows the visualization of the respiratory motion simulation device in the exhalation state.
- this embodiment visualizes the tracking curves of all marker points.
- the dotted line represents the predicted marker trajectory, and the solid line represents the real marker trajectory.
- the dotted line predicts the movement trend of the solid line well, which illustrates the feasibility and accuracy of the model.
- the fourth embodiment also discloses a computer-readable storage medium that stores a construction program of the respiratory motion model; when the construction program of the respiratory motion model is executed by a processor, the construction method of the respiratory motion model of the first embodiment is realized.
- Embodiment 5 also discloses a computer device.
- the terminal includes a processor 12 , an internal bus 13 , a network interface 14 , and a computer-readable storage medium 11 .
- the processor 12 reads the corresponding computer program from the computer-readable storage medium and executes it, forming a request processing device on a logical level.
- one or more embodiments of this specification do not exclude other implementations, such as logic devices or a combination of software and hardware; that is to say, the execution subject of the following processing flow is not limited to logic units and may also be a hardware or logic device.
- the computer-readable storage medium 11 stores a program for constructing a respiratory motion model, and when the program for constructing a respiratory motion model is executed by a processor, the method for constructing a respiratory motion model in Embodiment 1 is realized.
- Computer-readable storage media include permanent and non-permanent, removable and non-removable media, and can implement information storage by any method or technology.
- Information may be computer readable instructions, data structures, modules of a program, or other data.
- Examples of computer-readable storage media include, but are not limited to, phase-change memory (PRAM), static random-access memory (SRAM), dynamic random-access memory (DRAM), other types of random-access memory (RAM), read-only memory (ROM), electrically erasable programmable read-only memory (EEPROM), flash memory or other memory technology, compact-disc read-only memory (CD-ROM), digital versatile disc (DVD) or other optical storage, magnetic cassettes, disk storage, quantum memory, graphene-based storage media, or other magnetic storage devices, or any other non-transmission media that can be used to store information accessible by computing devices.
Abstract
A method for constructing a respiratory motion model and a markerless respiratory motion prediction method. The construction method comprises: acquiring body-surface motion sequence image data of a subject to be predicted breathing over a period of time (S10); generating, from the body-surface motion sequence image data, images respectively corresponding to the head position frame and the tail position frame of at least one motion time zone, wherein each motion time zone spans no more than half a respiratory cycle; performing elastic registration on the images respectively corresponding to the head and tail position frames of each motion time zone to obtain a position offset, and performing interpolation with the position offset to obtain the relative displacement corresponding to every other position frame within each motion time zone (S30); and constructing the respiratory motion model from the position offsets and relative displacements. The respiratory motion model built by this method reflects the overall motion trend, requires no markers on the subject at prediction time, and characterises the surface motion information to the greatest extent.
Description
The present invention belongs to the field of medical technology, and specifically relates to a method for constructing a respiratory motion model, a markerless respiratory motion prediction method, and a respiratory motion simulation device.

As its precision has steadily improved, radiotherapy has come into wide use in cancer treatment. Many interfering factors nevertheless remain during radiotherapy and can lead to unexpected treatment outcomes. Among them, breathing-induced anatomical motion and deformation is a major source of error in radiotherapy planning and delivery, especially for the chest and abdomen. Because respiration is an involuntary physiological motion, its influence persists throughout the entire course of treatment. Under respiration, tumours in the abdomen and chest can move by up to 35 mm; if the respiratory motion is computed incorrectly, the radiation beam cannot be steered according to the tumour motion to guarantee high-precision treatment, and errors in target delineation, errors in dose calculation, and unnecessary secondary damage to normal tissue may occur. The current mainstream approach is to track the tumour in real time by modelling the relationship between internal tumour displacement and skin-surface displacement. How to characterise and track skin-surface displacement is therefore an important and necessary problem.

Existing abdominal-surface tracking techniques mainly rely on markers placed on the chest surface of a human: several markers are attached to the patient's abdominal surface, and an optical tracker follows the displacement of these markers in real time. For example, among respiratory tracking systems, the CyberKnife system records skin-surface motion with three markers: the patient lies supine in a phantom wearing a vest with red-light-emitting diodes attached over the chest or abdomen, and a red-light tracker (the Synchrony respiratory tracking system) acquires the breathing rhythm, i.e. the abdominal-surface information.

Because the number of infrared markers is limited, marker-based tracking under-represents the respiratory signal, which causes accumulated abdominal targeting errors.
Summary of the Invention

(1) Technical problem to be solved by the present invention

The technical problem solved by the present invention is: how to construct a respiratory motion model that characterises body-surface motion information to a greater extent.

(2) Technical solution adopted by the present invention

A method for constructing a respiratory motion model, the construction method comprising:

acquiring body-surface motion sequence image data of a subject to be predicted breathing over a period of time;

generating, from the body-surface motion sequence image data, images respectively corresponding to the head position frame and the tail position frame of at least one motion time zone, wherein each motion time zone spans no more than half a respiratory cycle and the motion state of each motion time zone is either an exhalation state or an inhalation state;

performing elastic registration on the images respectively corresponding to the head and tail position frames of each motion time zone to obtain a position offset, and performing interpolation with the position offset to obtain the relative displacement corresponding to every other position frame within each motion time zone;

constructing the respiratory motion model from the position offsets and relative displacements of the motion time zones.

The number of motion time zones is at least two, and the motion time zones together constitute one complete respiratory cycle.
Optionally, acquiring the body-surface motion sequence images of the subject breathing over a period of time comprises:

placing a number of marker points on the body surface of the subject, the marker points moving with the body surface as the subject breathes;

acquiring, with a depth camera, a set of consecutive RGB frames and the motion data of each marker point while the subject breathes over a period of time;

generating an overall respiratory frequency curve from the motion data of the marker points, the overall respiratory frequency curve and the consecutive RGB frame set together constituting the body-surface motion sequence image data.

Optionally, the period of time comprises several respiratory cycles, and generating the images respectively corresponding to the head and tail position frames of at least one motion time zone from the body-surface motion sequence image data comprises:

counting the occurrences of the motion state corresponding to the motion time zone in the overall respiratory frequency curve;

extracting, from the consecutive RGB frame set, the RGB image corresponding to the initial position frame and the RGB image corresponding to the end position frame of each occurrence of the motion state;

converting the RGB images corresponding to the initial position frames and to the end position frames of the occurrences into grayscale images;

computing the mean grayscale image of the initial position frames from the grayscale images corresponding to the initial position frames of the occurrences, as the image corresponding to the head position frame, and computing the mean grayscale image of the end position frames from the grayscale images corresponding to the end position frames of the occurrences, as the image corresponding to the tail position frame.

Optionally, a B-spline-based free-form deformation method is used to elastically register the images respectively corresponding to the head and tail position frames of each motion time zone to obtain the position offset.

Optionally, the interpolation method is any one of linear interpolation, cubic spline interpolation and cubic polynomial interpolation.
The present application further discloses a markerless respiratory motion prediction method, the prediction method comprising:

acquiring one body-surface image of the breathing subject as an initial position image;

inputting the initial position image into the respiratory motion model obtained by the above construction method, the respiratory motion model outputting body-surface position images at different moments in real time.
Optionally, the prediction method further comprises a self-correction step:

monitoring the body-surface depth data of the breathing subject in real time with a depth camera;

obtaining, from the body-surface depth data, the actual position frame at which a selected position reaches a predetermined depth in the current respiratory cycle, and obtaining from the respiratory motion model the preset position frame at which the selected position reaches the predetermined depth;

taking the difference between the actual position frame and the preset position frame of the selected position as a frame offset value;

adjusting, with the frame offset value, the position frame index corresponding to each moment of the next respiratory cycle;

the respiratory motion model outputting the corrected body-surface position for each moment according to the adjusted position frame index.

Alternatively, the prediction method further comprises a self-correction step:

monitoring the body-surface depth data of the breathing subject in real time with a depth camera;

computing the actual period of the current respiratory motion from the body-surface depth data;

taking the difference between the actual period of the current respiratory motion and the preset respiratory period of the respiratory motion model as a frame offset value;

adjusting, with the frame offset value, the position frame index corresponding to each moment of the next respiratory cycle;

the respiratory motion model outputting the corrected body-surface position for each moment according to the adjusted position frame index.
The present application further discloses a respiratory motion simulation device, comprising:

a support;

an elastic membrane mounted on the support; and

a drive assembly located on one side of the elastic membrane and configured to drive the elastic membrane in elastic reciprocating motion.

(3) Beneficial effects

The present invention discloses a method for constructing a respiratory motion model and a markerless respiratory motion prediction method which, compared with existing methods, have the following technical effects:

the respiratory motion model is built from the historical motion sequence image data of the subject; it reflects the overall motion trend, requires no markers on the subject at prediction time, allows the entire body surface to be predicted, and characterises the surface motion information to the greatest extent.
Fig. 1 is a flowchart of the respiratory motion model construction method of Embodiment 1 of the present invention;

Fig. 2 shows the depth-value waveforms of the 11 marker points of Embodiment 1 of the present invention;

Fig. 3 illustrates the division of motion time zones within one respiratory cycle in Embodiment 1 of the present invention;

Fig. 4 compares the effect before and after the self-correction step of Embodiment 2 of the present invention;

Fig. 5 is a structural diagram of the respiratory motion simulation device of Embodiment 3 of the present invention;

Fig. 6 illustrates image and motion data acquisition with the respiratory motion simulation device of Embodiment 3 of the present invention;

Fig. 7 visualises the respiratory motion simulation device of Embodiment 3 of the present invention in different breathing states;

Fig. 8 shows the 11 marker points placed on the respiratory motion simulation device of Embodiment 3 of the present invention;

Fig. 9 shows the motion errors obtained with three different interpolation methods in embodiments of the present invention;

Fig. 10 shows the predicted trajectories of the marker points in embodiments of the present invention;

Fig. 11 is an enlarged view of the trajectory prediction of one of the marker points of Fig. 10;

Fig. 12 is a diagram of the computer device of Embodiment 5 of the present invention.
To make the objects, technical solutions and advantages of the present invention clearer, the invention is described in further detail below with reference to the drawings and embodiments. It should be understood that the specific embodiments described here only explain the invention and are not intended to limit it.

Before describing the embodiments of the present application in detail, its inventive concept is briefly stated. Existing marker-based respiratory motion prediction methods under-represent the respiratory signal because the number of marker points is limited. The method provided by the present application first monitors the motion image data of the subject over a period of time; it then derives statistically, from the motion image data, image data characterising the overall motion trend within each motion time zone; it then computes, by image registration and interpolation, the relative displacement corresponding to each position frame within the motion time zone and builds the respiratory motion model from these relative displacements; finally, it predicts subsequent respiratory motion with the respiratory motion model. Built on this statistical idea, the respiratory motion model reflects the overall motion trend, requires no markers on the subject at prediction time, allows the entire body surface to be predicted, and characterises the surface motion information to the greatest extent.
Specifically, Embodiment 1 discloses a method for constructing a respiratory motion model. As shown in Fig. 1, the construction method comprises the following steps:

Step S10: acquire body-surface motion sequence image data of the subject to be predicted breathing over a period of time;

Step S20: generate, from the body-surface motion sequence image data, the images respectively corresponding to the head and tail position frames of at least one motion time zone, wherein each motion time zone spans no more than half a respiratory cycle and the motion state of each motion time zone is either an exhalation state or an inhalation state;

Step S30: elastically register the images respectively corresponding to the head and tail position frames of each motion time zone to obtain a position offset, and interpolate with the position offset to obtain the relative displacement corresponding to every other position frame within each motion time zone;

Step S40: construct the respiratory motion model from the position offsets and relative displacements of the motion time zones.
In step S10, in order to obtain the respiratory motion pattern of the subject statistically, a number of marker points are placed on the subject so that their depth data can be tracked. Specifically, step S10 comprises the following steps:

Step S101: place a number of marker points on the body surface of the subject, the marker points moving with the body surface as the subject breathes. Illustratively, 11 yellow marker points are used; with the subject lying flat, they are placed at different positions on the abdomen for subsequent model building and testing. Illustratively, to ease data collection and repeated testing, the subject may be the respiratory motion simulation device described in Embodiment 3 below.

Step S102: acquire, with a depth camera, a set of consecutive RGB frames and the motion data of each marker point while the subject breathes over a period of time. Illustratively, the depth camera faces the subject and captures the depth data of each marker point and the consecutive RGB frames over several respiratory cycles.

Step S103: generate an overall respiratory frequency curve from the motion data of the marker points; the overall respiratory frequency curve and the consecutive RGB frame set together constitute the body-surface motion sequence image data.

From the motion data of each marker point, a depth-value waveform over several cycles is obtained; its abscissa is the position frame, representing the time series, and its ordinate is the abdominal depth value, representing the degree of abdominal rise and fall. Illustratively, as shown in Fig. 2, the motion data of the 11 marker points yield 11 depth-value waveforms. The mean of the 11 depth values at the same position frame is taken as the statistical depth value of that frame; traversing all position frames yields the statistical depth values and hence a statistical depth-value waveform that characterises the overall abdominal motion trend during respiration and serves as the overall respiratory frequency curve. It should be noted that the numbers of marker points and of respiratory cycles can be chosen freely and are not limited here. The 11 marker points yield 11 sets of motion data, while there is one set of consecutive RGB frames, i.e. every RGB frame contains all 11 marker points.
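The frame-by-frame averaging that produces the overall respiratory frequency curve can be sketched as follows; this is a minimal illustration, and the array shapes and the `overall_frequency_curve` name are assumptions, not part of the patent:

```python
import numpy as np

def overall_frequency_curve(marker_depths):
    """Average the per-marker depth waveforms frame by frame.

    marker_depths: array of shape (n_markers, n_frames), one depth-value
    waveform per marker (e.g. 11 markers in the embodiment).
    Returns the statistical depth value of each position frame, i.e. the
    overall respiratory frequency curve described above.
    """
    marker_depths = np.asarray(marker_depths, dtype=float)
    return marker_depths.mean(axis=0)

# Hypothetical data: 3 markers observed over 4 frames.
depths = np.array([
    [10.0, 11.0, 12.0, 11.0],
    [20.0, 21.0, 22.0, 21.0],
    [30.0, 31.0, 32.0, 31.0],
])
curve = overall_frequency_curve(depths)  # one value per position frame
```

With these hypothetical waveforms the curve rises and falls with the common breathing rhythm while averaging out per-marker offsets.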
Further, step S20 comprises the following steps:

Step S201: count the occurrences of the motion state corresponding to the motion time zone in the overall respiratory frequency curve;

Step S202: extract, from the consecutive RGB frame set, the RGB image corresponding to the initial position frame and the RGB image corresponding to the end position frame of each occurrence of the motion state;

Step S203: convert the RGB images corresponding to the initial position frames and to the end position frames of the occurrences into grayscale images;

Step S204: compute the mean grayscale image of the initial position frames from the grayscale images corresponding to the initial position frames of the occurrences, as the image corresponding to the head position frame, and compute the mean grayscale image of the end position frames from the grayscale images corresponding to the end position frames of the occurrences, as the image corresponding to the tail position frame.

Illustratively, as shown in Fig. 3, for ease of exposition the number of motion time zones is two, AB and BC, each spanning half a respiratory cycle. Motion time zone AB corresponds to a complete exhalation state, during which the abdominal depth value changes from its minimum to its maximum, i.e. from trough to peak; motion time zone BC corresponds to a complete inhalation state, during which the abdominal depth value changes from its maximum to its minimum, i.e. from peak to trough. The two motion time zones thus compose one complete respiratory cycle.

Taking motion time zone AB in the exhalation state as an example: on the overall respiratory frequency curve it corresponds to the rising phase. The number of rising phases appearing in the entire overall respiratory frequency curve is counted; the position frame at the trough of each rising phase is an initial position frame, and the position frame at its peak is an end position frame. For each rising phase, the RGB images corresponding to its initial and end position frames are extracted from the consecutive RGB frame set and converted to grayscale. The grayscale images corresponding to the initial position frames of the rising phases are averaged to obtain the mean grayscale image of the initial position frames, used as the head position frame image of this motion time zone; the grayscale images corresponding to the end position frames are averaged to obtain the mean grayscale image of the end position frames, used as its tail position frame image. This yields the head and tail position frame images of one motion time zone; the computation for the other motion time zone, i.e. the inhalation state, is analogous and is not repeated here.
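The statistics of step S20 can be sketched as below, with a naive trough/peak detector standing in for whatever phase detection an implementation actually uses; the function names and array shapes are illustrative assumptions:

```python
import numpy as np

def rising_phases(curve):
    """Indices of (trough, next peak) pairs on the frequency curve,
    one pair per rising (exhalation) phase."""
    n = len(curve)
    peaks = [i for i in range(1, n - 1) if curve[i - 1] < curve[i] >= curve[i + 1]]
    troughs = [i for i in range(1, n - 1) if curve[i - 1] > curve[i] <= curve[i + 1]]
    pairs = []
    for t in troughs:
        later = [p for p in peaks if p > t]
        if later:                       # trough with no following peak is dropped
            pairs.append((t, later[0]))
    return pairs

def head_tail_images(curve, gray_frames):
    """Mean grayscale image of all rising-phase start frames (head
    position frame image) and of all rising-phase end frames (tail)."""
    pairs = rising_phases(curve)
    starts = np.array([gray_frames[t] for t, _ in pairs])
    ends = np.array([gray_frames[p] for _, p in pairs])
    return starts.mean(axis=0), ends.mean(axis=0)

# Synthetic check: frames whose pixel values equal the curve value.
curve = np.sin(np.linspace(0.0, 4.0 * np.pi, 41))
frames = np.array([np.full((2, 2), v) for v in curve])
head, tail = head_tail_images(curve, frames)
```

The head image then averages trough-valued frames and the tail image peak-valued frames, matching the averaging described for Fig. 3.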
Further, in step S30, since abdominal contraction during breathing is an elastic motion, i.e. every point on the outer abdominal surface not only moves in the depth direction (z axis) but also undergoes displacement in the plane (x and y axes), elastic registration is required. A B-spline-based free-form deformation method is used to register the images respectively corresponding to the head and tail position frames of each motion time zone and obtain the position offset. The basic principle of the B-spline free-form deformation method is as follows:

The method computes the coordinates of every pixel of the moved image, decomposing the pixel motion into the X and Y directions and locating the X and Y coordinates separately.

For an arbitrary pixel (x, y), its coordinate position after cubic B-spline elastic deformation (relative to the pixel grid), i.e. the position offset, can be expressed as

$$T(x,y)=\sum_{l=0}^{3}\sum_{m=0}^{3}B_{l}(u)\,B_{m}(v)\,\phi_{i+l,\,j+m}\qquad(1)$$

where the $\phi_{i+l,j+m}$ are the coordinate positions of the nearest $4\times 4$ control points; $i$ and $j$ are the cell indices of those control points, with $i=\lfloor x/n_{x}\rfloor-1$, $j=\lfloor y/n_{y}\rfloor-1$, $u=x/n_{x}-\lfloor x/n_{x}\rfloor$ and $v=y/n_{y}-\lfloor y/n_{y}\rfloor$, $\lfloor\cdot\rfloor$ denoting rounding down; $u$ and $v$ are the positions of $x$ and $y$ relative to the control cell; $n_{x}$ and $n_{y}$ are the spacings of the unit control grid in the x and y directions; and $B_{l}$ is the $l$-th B-spline basis function.

The control points serve as the parameters of the B-spline free-form deformation method, and the degree of non-rigid deformation that can be modelled depends essentially on the resolution of the control-point grid. $\phi$ denotes the grid formed by the $n_{x}\times n_{y}$ control points $\phi_{i,j}$ ($0\le i<n_{x}$, $0\le j<n_{y}$), with spacing $\delta$ between control points.

The optimal $\phi$ for Eq. (1) is obtained by searching for the spatial transform position that minimises the similarity metric $E_{ssd}$, expressed as

$$E_{ssd}=\frac{1}{N}\sum_{(x,y)}\bigl[I_{1}(x,y)-I_{2}(T(x,y))\bigr]^{2}$$

where $N$ is the total number of pixels in the image registration region, and $I_{1}$ and $I_{2}$ are the grayscale functions of the reference image and of the image registered in two-dimensional space. $E_{ssd}$ is minimal when the two images match best. In this Embodiment 1, $I_{1}$ is the image of the head position frame and $I_{2}$ is the image of the tail position frame; the detailed computation of the B-spline free-form deformation method is prior art and is not repeated here.
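A minimal sketch of evaluating Eq. (1) at one pixel, together with the $E_{ssd}$ metric, may help make the notation concrete. The control grid here is a plain NumPy array, the function names are our own, and a real registration would of course optimise $\phi$ iteratively rather than evaluate it once:

```python
import numpy as np

def bspline_basis(l, t):
    """The four uniform cubic B-spline basis functions B_0..B_3."""
    if l == 0:
        return (1 - t) ** 3 / 6.0
    if l == 1:
        return (3 * t**3 - 6 * t**2 + 4) / 6.0
    if l == 2:
        return (-3 * t**3 + 3 * t**2 + 3 * t + 1) / 6.0
    return t**3 / 6.0

def ffd_offset(x, y, phi, nx, ny):
    """Position offset at pixel (x, y) per Eq. (1).

    phi: control grid of shape (gx, gy, 2). The caller must choose
    (x, y) so the 4x4 neighbourhood i..i+3, j..j+3 lies inside the
    grid; real implementations pad the grid with border points."""
    i, j = int(x / nx) - 1, int(y / ny) - 1
    u, v = x / nx - int(x / nx), y / ny - int(y / ny)
    d = np.zeros(2)
    for l in range(4):
        for m in range(4):
            d += bspline_basis(l, u) * bspline_basis(m, v) * phi[i + l, j + m]
    return d

def e_ssd(i1, i2):
    """Sum-of-squared-differences similarity metric, normalised by N."""
    diff = np.asarray(i1, float) - np.asarray(i2, float)
    return float((diff ** 2).sum() / diff.size)
```

Because the cubic basis functions form a partition of unity, a uniform control grid (every control point displaced by the same vector) reproduces exactly that displacement at every pixel, which is a convenient sanity check.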
Further, after the position offset between the head and end position frames of a motion time zone has been computed, an interpolation method is used to obtain the relative displacements corresponding to the other position frames within the motion time zone; the interpolation method is any one of linear interpolation, cubic spline interpolation and cubic polynomial interpolation. The computational processes of these interpolation methods are well known to those skilled in the art and are not repeated here. The number of other position frames in a motion time zone can be chosen freely; if the motion process needs to be represented more precisely, the relative displacements of a larger number of position frames can be obtained by interpolation.
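The interpolation of intermediate frames can be sketched as a weighting of the head-to-tail offset. Linear weighting and a cubic (smoothstep-style) weighting are shown as illustrations; the exact polynomial is a design choice the patent leaves open, and the names and shapes below are assumptions:

```python
import numpy as np

def intermediate_offsets(total_offset, n_frames, kind="linear"):
    """Relative displacement of each position frame in a motion time
    zone, interpolated from the registered head-to-tail offset.

    total_offset: (H, W, 2) per-pixel offsets from elastic registration.
    n_frames: number of frames in the zone, including head and tail.
    Returns an array of shape (n_frames, H, W, 2)."""
    t = np.linspace(0.0, 1.0, n_frames)
    if kind == "cubic":
        w = 3 * t**2 - 2 * t**3   # cubic weighting, zero slope at both ends
    else:
        w = t                     # linear weighting
    return w[:, None, None, None] * np.asarray(total_offset, float)[None]
```

Both weightings start at zero displacement on the head frame and reach the full registered offset on the tail frame; the cubic variant additionally gives zero velocity at the turning points, which is closer to how the abdomen actually decelerates at peak and trough.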
Further, in step S40, the respiratory motion model is constructed from the obtained position offset and relative displacements. The model characterises the respiratory motion pattern of the corresponding motion time zone or respiratory cycle, so that the position state of the next position frame can be predicted from the position state of the previous one.

Since the motion of the abdomen or chest during human breathing is not an ideal periodic motion, irregular motion, i.e. abnormal cases, may occur, chiefly abnormally low peaks and abnormally high troughs. To improve the adaptability of the model, in another implementation the abnormal data in the monitored body-surface motion sequence image data are extracted and used as new historical motion data. A first depth threshold and a second depth threshold can be set; in the two cases where a peak is below the first threshold or a trough is above the second threshold, the data of the corresponding cycle are regarded as abnormal. A new respiratory motion model is built from these abnormal data; the specific construction process follows the description above and is not repeated here. At prediction time, the best prediction model can then be selected by switching according to the actual respiratory motion.
Embodiment 2 provides a markerless respiratory motion prediction method: acquire one body-surface image of the breathing subject as the initial position image; input the initial position image into the respiratory motion model obtained by the construction method of Embodiment 1; the respiratory motion model outputs the body-surface positions at different moments in real time.

The above respiratory motion model is an overall-trend model built on a statistical idea and can predict the overall motion. Because respiratory motion is not ideally periodic, the actual respiratory frequency curve does not match the predicted curve perfectly. One cause is a shift of the position frames: during actual monitoring, at some moment the actual position frame at which the abdominal depth value reaches its peak differs from the predicted position frame at which the abdominal depth value reaches its peak in the respiratory motion prediction model. Two cases arise. In the first case, the predicted respiratory frequency curve is shifted as a whole relative to the actual curve; in the second case, the actual respiratory period has grown or shrunk relative to the predicted respiratory period. To improve prediction accuracy, the prediction method of this embodiment therefore adds a self-correction step.

For the first case, the self-correction step is: monitor the body-surface depth data of the breathing subject in real time with a depth camera; from the body-surface depth data, obtain the actual position frame at which a selected position reaches a predetermined depth in the current respiratory cycle, and obtain from the respiratory motion model the preset position frame at which the selected position reaches the predetermined depth; take the difference between the actual and preset position frames of the selected position as a frame offset value; adjust, with the frame offset value, the position frame index corresponding to each moment of the next respiratory cycle; the respiratory motion model outputs the corrected body-surface position for each moment according to the adjusted position frame index.

Specifically, the depth-value waveform output by the depth camera refers to one and the same position in the image, i.e. the depth variation at that position. Illustratively, the selected position is the position the camera's depth-value waveform refers to. Ideally, when the selected position reaches the predetermined depth, the actual position frame detected by the depth camera in real time equals the preset position frame of the respiratory motion model at which that selected position reaches the predetermined depth; because respiratory motion is not ideally periodic the two are not identical, so correction is needed, and their difference is taken as the frame offset value. Illustratively, the predetermined depth may be the peak depth or the trough depth. Then, for the next respiratory cycle, the sum of each moment's preset position frame and the frame offset value is used as the position frame index of that moment, and the respiratory motion model outputs the corrected body-surface position for each moment according to the adjusted position frame index. The corrected body-surface position is closer to the actual position at that moment, which improves prediction accuracy. The prediction curves before and after correction are shown in Fig. 4, where F1-f1 denotes the frame offset value.
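The first self-correction case amounts to shifting the next cycle's frame indices by the measured peak-frame difference; a sketch in which all the numbers are hypothetical:

```python
def corrected_index(preset_frame, frame_offset, period):
    """Position-frame index after applying the frame offset value,
    wrapped to the model's cycle length (measured in frames)."""
    return (preset_frame + frame_offset) % period

actual_peak_frame = 37    # frame where the depth camera saw the peak
preset_peak_frame = 33    # frame where the model expected the peak
offset = actual_peak_frame - preset_peak_frame   # the F1-f1 of Fig. 4
idx = corrected_index(10, offset, period=60)
```

Wrapping with the modulo keeps the index inside the model's cycle when the shift pushes it past the last frame.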
For the second case, the self-correction step is: monitor the body-surface depth data of the breathing subject in real time with a depth camera; compute the actual period of the current respiratory motion from the body-surface depth data; take the difference between the actual period of the current respiratory motion and the preset respiratory period of the respiratory motion model as a frame offset value; adjust, with the frame offset value, the position frame corresponding to each moment of the next respiratory cycle; the respiratory motion model outputs the corrected body-surface position for each moment according to the adjusted position frame.

Specifically, the depth camera outputs the depth-value waveform in real time, from which the actual period of the current respiratory motion can be computed, the period being measured in frames. Subtracting the preset respiratory period of the respiratory motion model from the actual period of the current respiratory motion gives the frame offset value, which may be positive or negative. Then, for the next respiratory cycle, the sum of each moment's preset position frame and the frame offset value is used as the position frame index of that moment, and the respiratory motion model outputs the corrected body-surface position image for each moment according to the adjusted position frame index. The corrected body-surface position is closer to the actual position at that moment, which improves prediction accuracy.
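For this second case, the actual period in frames can be estimated from the monitored depth waveform, for example as the mean peak-to-peak spacing; the naive peak detector below is an illustrative assumption, not the patent's prescribed method:

```python
import numpy as np

def actual_period_frames(depth_curve):
    """Actual respiratory period, in frames, estimated as the mean
    spacing between successive peaks of the depth-value waveform."""
    c = np.asarray(depth_curve, float)
    peaks = [i for i in range(1, len(c) - 1)
             if c[i - 1] < c[i] >= c[i + 1]]
    if len(peaks) < 2:
        raise ValueError("need at least two peaks to estimate a period")
    return float(np.mean(np.diff(peaks)))

# Synthetic waveform: three cycles of 20 frames each.
n = np.arange(60)
curve = np.sin(2.0 * np.pi * n / 20.0)
period = actual_period_frames(curve)
frame_offset = period - 20.0   # actual period minus the model's preset period
```

On a real depth stream the waveform would be smoothed first; here the clean sinusoid makes the peak spacing exact.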
Embodiment 3 further discloses a respiratory motion simulation device. As shown in Fig. 5, the respiratory motion simulation device comprises a support 100, an elastic membrane 200 and a drive assembly. The support 100 comprises two opposed support arms, which may be made of rigid boxes or similar materials; the elastic membrane 200 may be a latex film or the like, its two ends fixedly connected to the two support arms so that the elastic membrane 200 lies flat and elastically taut. The drive assembly is located on one side of the elastic membrane 200 to drive it in elastic reciprocating motion. Illustratively, when the elastic membrane 200 is placed horizontally, the drive assembly as a whole sits below it. The drive assembly comprises a motor 300, a lifting platform 400 and a sponge 500; the output shaft of the motor 300 is connected to the lifting platform 400, the sponge 500 is mounted on the lifting platform 400, and the sponge 500 is used to press against the elastic membrane 200.

The breathing simulation method is as follows: motion control is performed on a programmable microcontroller platform, which controls the rotation direction and instantaneous speed of the motor according to imported waveform data; the driven motor reconstructs the human respiratory phase and reciprocates up and down, simulating from a physiological standpoint the rise-and-fall trend of the abdomen during human breathing. When the driving device moves, it exerts force on the sponge and latex film above it, deforming them and thus visually simulating the shape change of the human abdomen in the breathing state. As shown in Fig. 6, a commercial RGB-D camera 600 meanwhile acquires abdominal-surface images, the camera being placed perpendicular to the abdominal surface.

To verify the plausibility of the respiratory motion simulation device, the simulated abdominal surface is first inspected visually. As shown in Fig. 7, the device simulates three states of the human body, from top to bottom: inhalation-exhalation, respiration-inhalation, and exhalation. It can be observed intuitively from Fig. 7 that the changes of the simulated abdominal volume and area in the three states conform to biological principles. Next, the depth values of 11 pixels over a period of time are extracted from the simulated abdominal surface; Fig. 2 shows that the variation of the marker-point depth values follows the cycle of human respiratory motion. In Fig. 8, a real person and the respiratory motion simulation device are compared visually in the same states: the upper left image shows the visualization of a real person inhaling, the upper right the device inhaling, the lower left a real person exhaling, and the lower right the device exhaling; the shape changes in the two cases are observed to be very close. These tests demonstrate the plausibility and feasibility of the abdominal breathing simulator of Embodiment 3.
To further verify the predictive performance of the respiratory motion model, 11 evenly distributed point markers were first placed on the abdominal surface (the elastic membrane of Embodiment 3) for quantitative comparison. As shown in Fig. 8, global binary thresholding and contour detection are used to precisely determine the centre coordinates of each point marker in every frame, yielding the relative displacement data of the 11 markers on the abdominal surface during respiratory motion. Finally, the tracking ability of the model was evaluated against the data measured by this image segmentation method, with quantitative analysis performed by computing the mean absolute error of the motion data.

The effects of three different interpolation methods, namely linear interpolation, cubic spline interpolation and cubic polynomial interpolation, were further compared. The motion errors computed by the three interpolation methods are shown in Fig. 9. It can be seen from Fig. 9 that, except for markers 7 and 9, whose mean absolute error exceeds one pixel, the errors of the other marker points are below one pixel. At marker points 4-10, the cubic polynomial interpolation method gives smaller errors than the other two methods; at marker points 0-3, linear interpolation gives smaller errors and works better.

To see the tracking effect more intuitively, the tracking curves of all marker points are visualised here. As shown in Figs. 10 and 11, the dashed line represents the predicted marker trajectory and the solid line the real marker trajectory; the dashed line predicts the motion trend of the solid line well, demonstrating the feasibility and accuracy of the model.
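The quantitative evaluation reduces to a per-marker mean absolute error between the predicted and measured centre trajectories; a minimal sketch, with the array shapes assumed:

```python
import numpy as np

def per_marker_mae(pred, true):
    """Mean absolute tracking error for each marker, in pixels.

    pred, true: arrays of shape (n_markers, n_frames) holding the
    predicted and measured marker positions along one axis."""
    pred = np.asarray(pred, float)
    true = np.asarray(true, float)
    return np.abs(pred - true).mean(axis=1)

# Hypothetical trajectories for two markers over three frames.
pred = np.array([[1.0, 2.0, 3.0], [0.0, 0.0, 0.0]])
true = np.array([[1.0, 2.0, 5.0], [1.0, 1.0, 1.0]])
mae = per_marker_mae(pred, true)
```

The same computation run per marker produces the per-marker error bars of the kind plotted in Fig. 9.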
Embodiment 4 further discloses a computer-readable storage medium storing a construction program of the respiratory motion model; when the construction program of the respiratory motion model is executed by a processor, the respiratory motion model construction method of Embodiment 1 is realized.

Embodiment 5 further discloses a computer device. At the hardware level, as shown in Fig. 12, the terminal comprises a processor 12, an internal bus 13, a network interface 14 and a computer-readable storage medium 11. The processor 12 reads the corresponding computer program from the computer-readable storage medium and runs it, forming a request processing device at the logical level. Of course, besides software implementations, one or more embodiments of this specification do not exclude other implementations, such as logic devices or combinations of software and hardware; that is, the execution subject of the following processing flow is not limited to logic units and may also be hardware or logic devices. The computer-readable storage medium 11 stores a construction program of the respiratory motion model; when the construction program of the respiratory motion model is executed by the processor, the respiratory motion model construction method of Embodiment 1 is realized.

Computer-readable storage media include permanent and non-permanent, removable and non-removable media, and information storage can be implemented by any method or technology. The information may be computer-readable instructions, data structures, program modules or other data. Examples of computer-readable storage media include, but are not limited to, phase-change memory (PRAM), static random-access memory (SRAM), dynamic random-access memory (DRAM), other types of random-access memory (RAM), read-only memory (ROM), electrically erasable programmable read-only memory (EEPROM), flash memory or other memory technology, compact-disc read-only memory (CD-ROM), digital versatile disc (DVD) or other optical storage, magnetic cassettes, magnetic disk storage, quantum memory, graphene-based storage media or other magnetic storage devices, or any other non-transmission medium that can be used to store information accessible to a computing device.

The specific embodiments of the present invention have been described in detail above. Although some embodiments have been shown and described, those skilled in the art should understand that these embodiments may be modified and refined without departing from the principle and spirit of the present invention, whose scope is defined by the claims and their equivalents; such modifications and refinements also fall within the protection scope of the present invention.
Claims (15)
- A method for constructing a respiratory motion model, wherein the construction method comprises: acquiring body-surface motion sequence image data of a subject to be predicted breathing over a period of time; generating, from the body-surface motion sequence image data, images respectively corresponding to the head position frame and the tail position frame of at least one motion time zone, wherein each motion time zone spans no more than half a respiratory cycle and the motion state of each motion time zone is either an exhalation state or an inhalation state; performing elastic registration on the images respectively corresponding to the head and tail position frames of each motion time zone to obtain a position offset, and performing interpolation with the position offset to obtain the relative displacement corresponding to every other position frame within each motion time zone; and constructing the respiratory motion model from the position offsets and relative displacements of the motion time zones.
- The method for constructing a respiratory motion model according to claim 1, wherein the number of motion time zones is at least two and the motion time zones together constitute one complete respiratory cycle.
- The method for constructing a respiratory motion model according to claim 1, wherein acquiring the body-surface motion sequence images of the subject breathing over a period of time comprises: placing a number of marker points on the body surface of the subject, the marker points moving with the body surface as the subject breathes; acquiring, with a depth camera, a set of consecutive RGB frames and the motion data of each marker point while the subject breathes over a period of time; and generating an overall respiratory frequency curve from the motion data of the marker points, the overall respiratory frequency curve and the consecutive RGB frame set together constituting the body-surface motion sequence image data.
- The method for constructing a respiratory motion model according to claim 3, wherein the period of time comprises several respiratory cycles, and generating the images respectively corresponding to the head and tail position frames of at least one motion time zone from the body-surface motion sequence image data comprises: counting the occurrences of the motion state corresponding to the motion time zone in the overall respiratory frequency curve; extracting, from the consecutive RGB frame set, the RGB image corresponding to the initial position frame and the RGB image corresponding to the end position frame of each occurrence of the motion state; converting the RGB images corresponding to the initial position frames and to the end position frames of the occurrences into grayscale images; and computing the mean grayscale image of the initial position frames from the grayscale images corresponding to the initial position frames of the occurrences, as the image corresponding to the head position frame, and computing the mean grayscale image of the end position frames from the grayscale images corresponding to the end position frames of the occurrences, as the image corresponding to the tail position frame.
- The method for constructing a respiratory motion model according to claim 1, wherein a B-spline-based free-form deformation method is used to elastically register the images respectively corresponding to the head and tail position frames of each motion time zone to obtain the position offset.
- The method for constructing a respiratory motion model according to claim 1, wherein the interpolation method is any one of linear interpolation, cubic spline interpolation and cubic polynomial interpolation.
- A markerless respiratory motion prediction method, wherein the prediction method comprises: acquiring one body-surface image of the breathing subject as an initial position image; and inputting the initial position image into the respiratory motion model obtained by the construction method according to claim 1, the respiratory motion model outputting body-surface positions at different moments in real time.
- The markerless respiratory motion prediction method according to claim 7, wherein the prediction method further comprises a self-correction step: monitoring the body-surface depth data of the breathing subject in real time with a depth camera; obtaining, from the body-surface depth data, the actual position frame at which a selected position reaches a predetermined depth in the current respiratory cycle, and obtaining from the respiratory motion model the preset position frame at which the selected position reaches the predetermined depth; taking the difference between the actual and preset position frames of the selected position as a frame offset value; adjusting, with the frame offset value, the position frame index corresponding to each moment of the next respiratory cycle; and the respiratory motion model outputting the corrected body-surface position for each moment according to the adjusted position frame index.
- The markerless respiratory motion prediction method according to claim 7, wherein the prediction method further comprises a self-correction step: monitoring the body-surface depth data of the breathing subject in real time with a depth camera; computing the actual period of the current respiratory motion from the body-surface depth data; taking the difference between the actual period of the current respiratory motion and the preset respiratory period of the respiratory motion model as a frame offset value; adjusting, with the frame offset value, the position frame index corresponding to each moment of the next respiratory cycle; and the respiratory motion model outputting the corrected body-surface position for each moment according to the adjusted position frame index.
- The markerless respiratory motion prediction method according to claim 7, wherein the number of motion time zones is at least two and the motion time zones together constitute one complete respiratory cycle.
- The markerless respiratory motion prediction method according to claim 7, wherein acquiring the body-surface motion sequence images of the subject breathing over a period of time comprises: placing a number of marker points on the body surface of the subject, the marker points moving with the body surface as the subject breathes; acquiring, with a depth camera, a set of consecutive RGB frames and the motion data of each marker point while the subject breathes over a period of time; and generating an overall respiratory frequency curve from the motion data of the marker points, the overall respiratory frequency curve and the consecutive RGB frame set together constituting the body-surface motion sequence image data.
- The markerless respiratory motion prediction method according to claim 11, wherein the period of time comprises several respiratory cycles, and generating the images respectively corresponding to the head and tail position frames of at least one motion time zone from the body-surface motion sequence image data comprises: counting the occurrences of the motion state corresponding to the motion time zone in the overall respiratory frequency curve; extracting, from the consecutive RGB frame set, the RGB image corresponding to the initial position frame and the RGB image corresponding to the end position frame of each occurrence of the motion state; converting the RGB images corresponding to the initial position frames and to the end position frames of the occurrences into grayscale images; and computing the mean grayscale image of the initial position frames, as the image corresponding to the head position frame, and the mean grayscale image of the end position frames, as the image corresponding to the tail position frame.
- The markerless respiratory motion prediction method according to claim 7, wherein a B-spline-based free-form deformation method is used to elastically register the images respectively corresponding to the head and tail position frames of each motion time zone to obtain the position offset.
- The markerless respiratory motion prediction method according to claim 7, wherein the interpolation method is any one of linear interpolation, cubic spline interpolation and cubic polynomial interpolation.
- A respiratory motion simulation device, wherein the respiratory motion simulation device comprises: a support; an elastic membrane mounted on the support; and a drive assembly located on one side of the elastic membrane and configured to drive the elastic membrane in elastic reciprocating motion.
Applications Claiming Priority (2)
Application Number | Priority Date | Filing Date | Title |
---|---|---|---|
CN202110784447.2 | 2021-07-12 | ||
CN202110784447.2A CN113674393B (zh) | 2021-07-12 | 2021-07-12 | 呼吸运动模型的构建方法和无标记呼吸运动预测方法 |
Publications (1)
Publication Number | Publication Date |
---|---|
WO2023283866A1 true WO2023283866A1 (zh) | 2023-01-19 |
Family
ID=78538895
Family Applications (1)
Application Number | Title | Priority Date | Filing Date |
---|---|---|---|
PCT/CN2021/106415 WO2023283866A1 (zh) | 2021-07-12 | 2021-07-15 | 呼吸运动模型的构建方法和无标记呼吸运动预测方法 |
Country Status (2)
Country | Link |
---|---|
CN (1) | CN113674393B (zh) |
WO (1) | WO2023283866A1 (zh) |
Families Citing this family (2)
Publication number | Priority date | Publication date | Assignee | Title |
---|---|---|---|---|
CN114247061B (zh) * | 2021-12-07 | 2024-08-06 | 苏州雷泰医疗科技有限公司 | 肿瘤动态跟踪控制方法、装置及放射治疗设备 |
CN117281540B (zh) * | 2023-10-25 | 2024-07-16 | 山东新华医疗器械股份有限公司 | 一种呼吸信号获取方法、装置、设备及存储介质 |
Citations (8)
Publication number | Priority date | Publication date | Assignee | Title |
---|---|---|---|---|
CN104268895A (zh) * | 2014-10-24 | 2015-01-07 | 山东师范大学 | 一种联合空域和时域信息的4d-ct形变配准方法 |
CN106056589A (zh) * | 2016-05-24 | 2016-10-26 | 西安交通大学 | 一种呼吸运动补偿的超声造影灌注参量成像方法 |
CN106446572A (zh) * | 2016-09-27 | 2017-02-22 | 上海精劢医疗科技有限公司 | 基于边界元模型和局部区域修正的肺部呼吸运动获取方法 |
CN109727672A (zh) * | 2018-12-28 | 2019-05-07 | 江苏瑞尔医疗科技有限公司 | 患者胸腹部肿瘤呼吸运动预测跟踪方法 |
CN209785384U (zh) * | 2019-01-10 | 2019-12-13 | 四川捷祥医疗器械有限公司 | 新型仿真肺部膜拟器 |
US20200069971A1 (en) * | 2018-09-04 | 2020-03-05 | Hitachi, Ltd. | Position measurement device, treatment system including the same, and position measurement method |
CN111161333A (zh) * | 2019-12-12 | 2020-05-15 | 中国科学院深圳先进技术研究院 | 一种肝脏呼吸运动模型的预测方法、装置及存储介质 |
CN111179409A (zh) * | 2019-04-23 | 2020-05-19 | 艾瑞迈迪科技石家庄有限公司 | 一种呼吸运动建模方法、装置和系统 |
Family Cites Families (9)
Publication number | Priority date | Publication date | Assignee | Title |
---|---|---|---|---|
CN101623198A (zh) * | 2008-07-08 | 2010-01-13 | 深圳市海博科技有限公司 | 动态肿瘤实时跟踪方法 |
US9143799B2 (en) * | 2011-05-27 | 2015-09-22 | Cisco Technology, Inc. | Method, apparatus and computer program product for image motion prediction |
FR3002732A1 (fr) * | 2013-03-01 | 2014-09-05 | Inst Rech Sur Les Cancers De L App Digestif Ircad | Procede automatique de determination predictive de la position de la peau |
CN104574329B (zh) * | 2013-10-09 | 2018-03-09 | 深圳迈瑞生物医疗电子股份有限公司 | 超声融合成像方法、超声融合成像导航系统 |
US10342464B2 (en) * | 2015-08-27 | 2019-07-09 | Intel Corporation | 3D camera system for infant monitoring |
GB201706449D0 (en) * | 2017-04-24 | 2017-06-07 | Oxehealth Ltd | Improvements in or realting to in vehicle monitoring |
CN108159576B (zh) * | 2017-12-17 | 2020-01-07 | 哈尔滨理工大学 | 一种放疗中人体胸腹表面区域呼吸运动预测方法 |
CN110269624B (zh) * | 2019-07-16 | 2024-02-06 | 浙江伽奈维医疗科技有限公司 | 一种基于rgbd相机的呼吸监测装置及其呼吸监测方法 |
CN112604186A (zh) * | 2020-12-30 | 2021-04-06 | 佛山科学技术学院 | 一种呼吸运动预测方法 |
-
2021
- 2021-07-12 CN CN202110784447.2A patent/CN113674393B/zh active Active
- 2021-07-15 WO PCT/CN2021/106415 patent/WO2023283866A1/zh active Application Filing
Also Published As
Publication number | Publication date |
---|---|
CN113674393A (zh) | 2021-11-19 |
CN113674393B (zh) | 2023-09-26 |
Similar Documents
Publication | Publication Date | Title |
---|---|---|
CN107808377B (zh) | 一种肺叶中病灶的定位装置 | |
US11559221B2 (en) | Multi-task progressive networks for patient modeling for medical scans | |
WO2023283866A1 (zh) | 呼吸运动模型的构建方法和无标记呼吸运动预测方法 | |
JP4914921B2 (ja) | 患者モニタ | |
US9153022B2 (en) | Method and system for analyzing craniofacial complex images | |
US20100198101A1 (en) | Non-invasive location and tracking of tumors and other tissues for radiation therapy | |
US20160236009A1 (en) | Estimating position of an organ with a biomechanical model | |
CN108159576B (zh) | 一种放疗中人体胸腹表面区域呼吸运动预测方法 | |
CN103229210A (zh) | 图像配准装置 | |
CN115187608B (zh) | 一种基于体表显著性分析的呼吸特征提取方法 | |
CN118037793B (zh) | 一种术中x线和ct图像的配准方法和装置 | |
CN115005985A (zh) | 呼吸运动补偿数据处理方法、医学图像生成方法及装置 | |
JP2012517310A (ja) | 患者モニタおよび方法 | |
CN117788617A (zh) | 一种基于多头注意力的运动流形分解模型的pet呼吸运动图像伪影配准校正方法 | |
KR101460908B1 (ko) | 4차원 컴퓨터 단층촬영 영상의 폐종양 위치 추적 시스템 및 그 방법 | |
JP2004167109A (ja) | 3次元計測方法、3次元計測システム、画像処理装置、及びコンピュータプログラム | |
CN110443749A (zh) | 一种动态配准方法及装置 | |
JP5747878B2 (ja) | 画像処理装置及びプログラム | |
US11837352B2 (en) | Body representations | |
Wang et al. | A heat kernel based cortical thickness estimation algorithm | |
CN108154532A (zh) | 一种辅助评估spect图像甲状腺体积的方法 | |
JP2024506509A (ja) | トランスデューサアレイ配置を最適化する医用画像強調のための方法、システムおよび装置 | |
Zhang et al. | Temporal consistent 2D-3D registration of lateral cephalograms and cone-beam computed tomography images | |
Calow et al. | Photogrammetric measurement of patients in radiotherapy | |
Peng et al. | Unmarked external breathing motion tracking based on b-spline elastic registration |
Legal Events
Date | Code | Title | Description |
---|---|---|---|
121 | Ep: the epo has been informed by wipo that ep was designated in this application |
Ref document number: 21949654 Country of ref document: EP Kind code of ref document: A1 |
NENP | Non-entry into the national phase |
Ref country code: DE |
122 | Ep: pct application non-entry in european phase |
Ref document number: 21949654 Country of ref document: EP Kind code of ref document: A1 |