CN114430670A - Patient position detection method and device, radiation medical equipment and readable storage medium - Google Patents


Publication number
CN114430670A
CN114430670A
Authority
CN
China
Prior art keywords
image
marker
patient
markers
shooting
Prior art date
Legal status (The legal status is an assumption and is not a legal conclusion. Google has not performed a legal analysis and makes no representation as to the accuracy of the status listed.)
Pending
Application number
CN201980100805.4A
Other languages
Chinese (zh)
Inventor
李大梁
Current Assignee (The listed assignees may be inaccurate. Google has not performed a legal analysis and makes no representation or warranty as to the accuracy of the list.)
Our United Corp
Original Assignee
Our United Corp
Priority date (The priority date is an assumption and is not a legal conclusion. Google has not performed a legal analysis and makes no representation as to the accuracy of the date listed.)
Filing date
Publication date
Application filed by Our United Corp
Publication of CN114430670A

Classifications

    • A: HUMAN NECESSITIES
    • A61: MEDICAL OR VETERINARY SCIENCE; HYGIENE
    • A61B: DIAGNOSIS; SURGERY; IDENTIFICATION
    • A61B6/00: Apparatus or devices for radiation diagnosis; Apparatus or devices for radiation diagnosis combined with radiation therapy equipment
    • A61B6/04: Positioning of patients; Tiltable beds or the like
    • G: PHYSICS
    • G05: CONTROLLING; REGULATING
    • G05B: CONTROL OR REGULATING SYSTEMS IN GENERAL; FUNCTIONAL ELEMENTS OF SUCH SYSTEMS; MONITORING OR TESTING ARRANGEMENTS FOR SUCH SYSTEMS OR ELEMENTS
    • G05B19/00: Programme-control systems
    • G05B19/02: Programme-control systems electric
    • G05B19/18: Numerical control [NC], i.e. automatically operating machines, in particular machine tools, e.g. in a manufacturing environment, so as to execute positioning, movement or co-ordinated operations by means of programme data in numerical form

Landscapes

  • Engineering & Computer Science (AREA)
  • Health & Medical Sciences (AREA)
  • Life Sciences & Earth Sciences (AREA)
  • Medical Informatics (AREA)
  • Physics & Mathematics (AREA)
  • Optics & Photonics (AREA)
  • Heart & Thoracic Surgery (AREA)
  • General Physics & Mathematics (AREA)
  • Biophysics (AREA)
  • High Energy & Nuclear Physics (AREA)
  • Manufacturing & Machinery (AREA)
  • Nuclear Medicine, Radiotherapy & Molecular Imaging (AREA)
  • Human Computer Interaction (AREA)
  • Pathology (AREA)
  • Radiology & Medical Imaging (AREA)
  • Biomedical Technology (AREA)
  • Automation & Control Theory (AREA)
  • Molecular Biology (AREA)
  • Surgery (AREA)
  • Animal Behavior & Ethology (AREA)
  • General Health & Medical Sciences (AREA)
  • Public Health (AREA)
  • Veterinary Medicine (AREA)
  • Radiation-Therapy Devices (AREA)
  • Image Analysis (AREA)
  • Apparatus For Radiation Diagnosis (AREA)

Abstract

A patient position detection method and apparatus, a radiation medical device, and a computer-readable storage medium. The radiation medical device comprises a patient fixing mechanism (11), a first imaging mechanism (12) and a second imaging mechanism (13). The method comprises: while the patient is fixed on the patient fixing mechanism (11), capturing at least one marker (20) with the first imaging mechanism (12) to obtain a first image, and capturing the at least one marker (20) with the second imaging mechanism (13) to obtain a second image, wherein the marker (20) is a position indicator attached to a body surface of the patient (101); deriving the position of the at least one marker (20) in space based on its positions in the first image and the second image (102); and obtaining posture position data of the patient based on the position of the at least one marker (20) in space (103). Movement of the patient can thus be monitored during treatment.

Description

Patient position detection method and device, radiation medical equipment and readable storage medium
Technical Field
The present disclosure relates to the medical field, and in particular, to a patient position detection method and apparatus, a radiation medical device, and a computer-readable storage medium.
Background
Patient position sensing has important applications in many medical activities. For example, in radiotherapy (a physical therapy for tumors that eliminates a lesion with radiation), the target region of the patient's tumor must be aligned with the isocenter of the radiotherapy equipment so that the radiation is accurately delivered to the target region, killing the cells there with a large dose while sparing the surrounding normal tissue cells. However, a patient often cannot hold one posture for a long time during treatment, and any body movement reduces treatment accuracy. If the patient moves during radiotherapy, the target region deviates from the isocenter of the radiotherapy device, so the radiation can no longer be delivered accurately: the therapeutic effect on the tumor target region suffers, and the damage to surrounding normal tissue cells increases.
Disclosure of Invention
The present disclosure provides a patient position detection method and apparatus, a radiation medical device, and a computer-readable storage medium, which can detect the movement of a patient during a treatment process.
In a first aspect, the present disclosure provides a patient position detection method applied to a radiation medical device including a patient fixing mechanism, a first imaging mechanism and a second imaging mechanism, the method including:
when a patient is fixed on the patient fixing mechanism, shooting at least one marker attached to the body surface of the patient through the first shooting mechanism to obtain a first image, and shooting at least one marker through the second shooting mechanism to obtain a second image; wherein the marker is a position indicator which is attached to the body surface of the patient and can emit or reflect visible light, and the first image and the second image are both visible light images;
obtaining a position of the at least one marker in space based on the position of the at least one marker in the first image and the second image;
obtaining posture position data of the patient based on the position of the at least one marker in space.
In a possible implementation manner, each of the markers is a light emitter, and before the obtaining of the position of each of the markers in the space based on the position of the at least one marker in the first image and the second image, the method further includes:
identifying the at least one marker in the first image and the second image based on a difference in brightness between the at least one marker and a background in the first image and the second image, respectively.
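The brightness-based identification above can be sketched in code. This is a minimal illustration, not the patent's implementation: the image is modeled as a 2D grid of grayscale values, and the threshold value, the connected-component labeling, and all names are illustrative assumptions.

```python
def find_bright_markers(image, threshold=200):
    """Return the (row, col) centroid of each connected region brighter than threshold."""
    rows, cols = len(image), len(image[0])
    seen = [[False] * cols for _ in range(rows)]
    centroids = []
    for r in range(rows):
        for c in range(cols):
            if image[r][c] >= threshold and not seen[r][c]:
                # Flood-fill one connected blob of above-threshold pixels.
                stack, blob = [(r, c)], []
                seen[r][c] = True
                while stack:
                    pr, pc = stack.pop()
                    blob.append((pr, pc))
                    for dr, dc in ((1, 0), (-1, 0), (0, 1), (0, -1)):
                        nr, nc = pr + dr, pc + dc
                        if (0 <= nr < rows and 0 <= nc < cols
                                and not seen[nr][nc]
                                and image[nr][nc] >= threshold):
                            seen[nr][nc] = True
                            stack.append((nr, nc))
                # Centroid of the blob approximates the marker's image position.
                centroids.append((sum(p[0] for p in blob) / len(blob),
                                  sum(p[1] for p in blob) / len(blob)))
    return centroids
```

In practice the same routine would be run on both the first image and the second image, yielding one planar position per marker in each view.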
In one possible implementation, a reflective material is disposed on a surface of each marker, and the step of obtaining a first image by capturing at least one marker with the first imaging mechanism and obtaining a second image by capturing the at least one marker with the second imaging mechanism while the patient is fixed on the patient fixing mechanism includes:
while the patient is fixed on the patient fixing mechanism, capturing the first image with the first imaging mechanism while it provides illumination to the at least one marker, and capturing the second image with the second imaging mechanism while it provides illumination to the at least one marker;
correspondingly, before the deriving the position of each marker in space based on the position of the at least one marker in the first image and the second image, the method further comprises:
identifying the at least one marker in the first image and the second image based on a difference in brightness between the at least one marker and a background in the first image and the second image, respectively.
In one possible implementation, at least two markers of different colors are attached to the body surface of the patient, and before the obtaining of the position of each marker in space based on the position of the at least one marker in the first image and the second image, the method further includes:
identifying each of the markers in the first image and the second image, respectively, based on a difference in color between different ones of the markers in the first image and the second image.
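Color-based identification could be sketched as a nearest-color classification of each detected marker's mean RGB value. The reference palette and the squared-distance metric are illustrative assumptions, not taken from the patent text.

```python
# Hypothetical reference colors for the markers attached to the patient.
REFERENCE_COLORS = {"red": (255, 0, 0), "green": (0, 255, 0), "blue": (0, 0, 255)}

def classify_marker_color(rgb, palette=REFERENCE_COLORS):
    """Return the name of the palette color closest (squared distance) to rgb."""
    def sq_dist(a, b):
        return sum((x - y) ** 2 for x, y in zip(a, b))
    return min(palette, key=lambda name: sq_dist(rgb, palette[name]))
```

Classifying the markers by color in both images makes it possible to match each marker's projection in the first image with its projection in the second image.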
In one possible implementation, deriving the position of the at least one marker in space based on the position of the at least one marker in the first image and the second image comprises:
obtaining a spatial coordinate value of any one of the markers in the second direction based on a planar coordinate value of the marker in the second direction in the first image, obtaining a spatial coordinate value of the marker in the first direction based on a planar coordinate value of the marker in the first direction in the second image, and obtaining a spatial coordinate value of the marker in the third direction based on at least one of a planar coordinate value of the marker in the third direction in the first image and a planar coordinate value of the marker in the third direction in the second image;
the first direction is the shooting direction of the first camera shooting mechanism, the second direction is the shooting direction of the second camera shooting mechanism, the first direction is perpendicular to the second direction, and the third direction is perpendicular to the first direction and the second direction respectively.
In a second aspect, the present disclosure also provides a patient position detecting apparatus applied to a radiation medical device including a patient fixing mechanism, a first imaging mechanism and a second imaging mechanism, the apparatus including:
the shooting module is used for shooting at least one marker attached to the body surface of the patient through the first shooting mechanism to obtain a first image and shooting at least one marker through the second shooting mechanism to obtain a second image when the patient is fixed on the patient fixing mechanism; wherein the marker is a position indicator which is attached to the body surface of the patient and can emit or reflect visible light, and the first image and the second image are both visible light images;
a first processing module for deriving a position of the at least one marker in space based on the position of the at least one marker in the first image and the second image;
a second processing module for obtaining posture position data of the patient based on the position of the at least one marker in space.
In a possible implementation manner, each marker is a light emitter, and the apparatus further includes a first identification module,
the first identification module is configured to identify the at least one marker in the first image and the second image based on a brightness difference between the at least one marker and a background in the first image and the second image, respectively, before the position of each marker in space is obtained based on the position of the at least one marker in the first image and the second image.
In a possible implementation manner, a reflective material is disposed on a surface of each marker, and the shooting module is further configured to, when the patient is fixed on the patient fixing mechanism, obtain the first image by capturing the at least one marker with the first imaging mechanism while it provides illumination to the at least one marker, and obtain the second image by capturing the at least one marker with the second imaging mechanism while it provides illumination to the at least one marker; correspondingly, the apparatus further includes a second identification module,
the second identification module is configured to identify the at least one marker in the first image and the second image based on a brightness difference between the at least one marker and a background in the first image and the second image, respectively, before the position of each marker in space is obtained based on the position of the at least one marker in the first image and the second image.
In one possible implementation, at least two markers of different colors are attached to the body surface of the patient, the device further comprises a third identification module,
the third identification module is configured to identify each of the markers in the first image and the second image based on a color difference between different ones of the markers in the first image and the second image, respectively, before the position of each of the markers in space is derived based on the position of the at least one marker in the first image and the second image.
In one possible implementation, the first processing module is further configured to:
obtaining a spatial coordinate value of any one of the markers in the second direction based on a planar coordinate value of the marker in the second direction in the first image, obtaining a spatial coordinate value of the marker in the first direction based on a planar coordinate value of the marker in the first direction in the second image, and obtaining a spatial coordinate value of the marker in the third direction based on at least one of a planar coordinate value of the marker in the third direction in the first image and a planar coordinate value of the marker in the third direction in the second image;
the first direction is the shooting direction of the first camera shooting mechanism, the second direction is the shooting direction of the second camera shooting mechanism, the first direction is perpendicular to the second direction, and the third direction is perpendicular to the first direction and the second direction respectively.
In a third aspect, the present disclosure also provides a radiation medical device comprising a processor and a memory, the memory having program instructions stored therein, the processor being configured to invoke the program instructions in the memory to perform any of the patient position detection methods described above.
In a fourth aspect, the present disclosure also provides a computer-readable storage medium storing a computer program comprising program instructions configured to, when executed by a processor, cause the processor to perform any of the patient position detection methods described above.
According to the above technical solutions, two visible light images containing the markers can be obtained in the radiation medical device based on visible light imaging, the positions of the markers in space can be derived from the two visible light images, and the posture position data of the patient can be obtained by using the attachment relationship between the markers and the body surface of the patient, so that the movement of the patient can be detected during treatment. Because the equipment needed to implement the disclosed technical solutions is readily available (for example, the patient fixing mechanism may be a bed or a support, the imaging mechanism may be a camera, and the marker may be tape or a simple geometric body), the present disclosure helps reduce the equipment requirements of patient position detection and can serve as a supplemental or alternative means of patient position detection when other patient position monitoring means are unavailable or ineffective.
Drawings
Fig. 1 is a schematic view of an application scenario of a patient position detection method provided in an embodiment of the present disclosure;
FIG. 2 is a schematic flow chart diagram of a patient position detection method provided by an embodiment of the present disclosure;
fig. 3 is a schematic diagram illustrating the principle of obtaining the position of a marker in space based on the position of the marker in an image in the patient position detection method provided by an embodiment of the present disclosure;
FIG. 4 is a schematic flow chart diagram of a patient position detection method provided by yet another embodiment of the present disclosure;
FIG. 5 is a block diagram of a patient position detection device according to an embodiment of the present disclosure;
fig. 6 is a block diagram of a radiotherapy apparatus provided in an embodiment of the present disclosure.
Detailed Description
To make the principles and advantages of the present disclosure clearer, embodiments of the present disclosure will be described in further detail below with reference to the accompanying drawings. It is to be understood that the described embodiments are only a few embodiments of the present disclosure, and not all embodiments. Unless otherwise defined, technical or scientific terms used herein shall have the ordinary meaning as understood by one of ordinary skill in the art to which this disclosure belongs. The use of "first," "second," and similar terms in this disclosure is not intended to indicate any order, quantity, or importance, but rather is used to distinguish one element from another.
Fig. 1 is a schematic view of an application scenario of a patient position detection method according to an embodiment of the present disclosure. Referring to fig. 1, the patient position detection method is applied to a radiation medical device including a patient fixing mechanism 11 (including a bed 111 and a table 112 on the bed 111), a first imaging mechanism 12, and a second imaging mechanism 13. In the scenario shown in fig. 1, the patient lies on the bed 111 of the patient fixing mechanism 11 with the right hand laid on the table 112, and a marker 20 (a position indicator which is attached to the body surface of the patient and can reflect or emit visible light; in fig. 1 it is exemplified by a support rod whose bottom is fixed on the body surface of the patient and a sphere fixed at the top of the support rod) is attached to the body surface of the patient's head, left shoulder, right shoulder and right hand. The first imaging mechanism 12 is arranged above the bed 111 and shoots from the front of the patient, and the second imaging mechanism 13 is arranged on one side of the bed 111 in the horizontal direction and shoots from the left side of the patient. It is to be understood that, in order to ensure position detection accuracy, the shooting position and shooting direction of the first imaging mechanism 12 and those of the second imaging mechanism 13 may be fixed by a mechanical structure such as a bracket, a fastener and/or a rail on the basis of the structure shown in fig. 1, and the mechanical structure may be a part of the patient fixing mechanism 11.
When the patient's body is substantially fixed by the patient fixing mechanism 11 and the desired marker 20 is attached thereto, the position of the patient can be detected according to the method provided by the embodiment of the present disclosure, and the obtained result can be used to monitor whether the patient deviates from the treatment position, or provide the posture position data of the patient required for the treatment activity, etc.
It should be noted that the application scenario and the radiation medical device shown in fig. 1 are only an example and can be adapted to different usage requirements; for example, the patient fixing mechanism 11 may also be a stand for fixing the patient in an upright posture, the first imaging mechanism 12 and the second imaging mechanism 13 may be located at the front and the back of the patient respectively, all of the markers 20 may be arranged on the head or on the body of the patient, and so on, which are not listed exhaustively herein.
Fig. 2 is a schematic flow chart of a patient position detection method according to an embodiment of the present disclosure. It should be noted that the method of the embodiment of the present disclosure can be applied to any medical activity or radiation medical device that needs to detect the position of a patient, such as Image-Guided Radiation Therapy (IGRT), intracranial tumor resection or other surgical operations involving patient position detection. It is to be understood that the patient refers to the subject of such medical activities, such as a person in need of radiation therapy or surgery. In one example, the patient position detection method may be installed in the form of software on a radiation medical device (e.g., a radiotherapy device, an imaging device, an operating table, etc.), thereby implementing patient position detection in medical activities. As an example, the execution subject of the method may be the radiation medical device, a control device connected to it, a processor of the device, or a server connected to it. Referring to fig. 2, the patient position detection method may include the following steps.
In step 101, a first image is taken of at least one marker attached to a body surface of a patient by a first imaging mechanism and a second image is taken of the at least one marker by a second imaging mechanism while the patient is fixed to a patient fixing mechanism.
In one example, as shown in fig. 1, four markers 20 are attached to four different positions on the body surface of the patient, and the four markers 20 are captured by the first imaging mechanism 12 and the second imaging mechanism 13 in different shooting directions. During shooting, the shooting position and shooting direction of the first imaging mechanism 12 are fixed, and so are those of the second imaging mechanism 13. After shooting is completed, the first imaging mechanism 12 and the second imaging mechanism 13 may actively provide the first image and the second image through a wired or wireless connection, or the first image and the second image may be acquired from them respectively through a wired or wireless connection; the transmission may take place partially or completely inside the radiation medical device and may include storing image data in a memory. In this way, the first image and the second image are obtained separately, and each of them contains images of all four markers 20.
It should be noted that, in order to enable the first imaging mechanism and the second imaging mechanism to capture all the markers from different orientations, their shooting positions and shooting directions need to be adapted to the number and positions of the markers, so at least one of these two aspects may need to be adjusted before detection starts. Alternatively, the shooting positions and shooting directions of the two imaging mechanisms can be configured so that their fields of view cover all possible marker positions as far as possible, which avoids having to adapt the subsequent computation of marker positions in space every time the shooting positions and shooting directions change.
In step 102, a position of the at least one marker in space is obtained based on the position of the at least one marker in the first image and the second image.
In one example, as shown in fig. 1, the shooting position and shooting direction of the first imaging mechanism 12 and those of the second imaging mechanism 13 are fixed by the mechanical structure, so the positions of the four markers 20 in space can be obtained from their positions in the first image and the second image based on this known information. For example, the surface of the bed 111 is a horizontal plane, and the first imaging mechanism 12 shoots the patient lying on the bed 111 from above in the vertical direction, so the position coordinates of the four markers 20 in the first image relative to the boundaries of the bed 111 can serve as their projection coordinates in the horizontal plane; furthermore, the second imaging mechanism 13 shoots from the left side of the patient in the horizontal direction at the same height as the surface of the bed 111, so the distance between each marker 20 in the second image and the surface of the bed 111 can serve as that marker's projection coordinate in the vertical direction. Taking the horizontal plane as the X-Y plane and the vertical direction as the Z axis, the coordinate values of the four markers 20 on the X axis and the Y axis are obtained from their positions in the first image, and their coordinate values on the Z axis are obtained from their positions in the second image. In this way, the positions of all the markers 20 in space are obtained.
Fig. 3 is a schematic diagram illustrating the principle of obtaining the position of a marker in space based on its position in the images in the patient position detection method according to an embodiment of the present disclosure. Referring to fig. 3, taking three markers 20 at different positions as an example, the first imaging mechanism 12 captures a first image P1 on the X-Y plane, shooting opposite to the Z-axis direction, and the second imaging mechanism 13 captures a second image P2 on the Y-Z plane, shooting opposite to the X-axis direction, with the origin of the X-Y-Z spatial coordinate system at the lower left corner of the first image P1 and the lower right corner of the second image P2. Therefore, when image distortion is ignored, the projection coordinates of a marker 20 on the first image P1 give the X-axis and Y-axis components of its spatial coordinates, and its projection coordinates on the second image P2 give the Z-axis and Y-axis components. As shown in fig. 3, among the three markers: the highest marker 20 projects to (x1, y1) on the first image P1 and to (z1, y1) on the second image P2, giving the spatial coordinates (x1, y1, z1); the marker 20 of medium height projects to (x2, y2) on P1 and (z2, y2) on P2, giving (x2, y2, z2); the lowest marker 20 projects to (x3, y3) on P1 and (z3, y3) on P2, giving (x3, y3, z3).
It can be seen that, when the shooting direction of the first imaging mechanism is perpendicular to that of the second imaging mechanism, the position of the at least one marker in space can be obtained as follows (the shooting direction of the first imaging mechanism being the first direction, the shooting direction of the second imaging mechanism being the second direction, and the third direction being perpendicular to both): the spatial coordinate value of any marker in the second direction is obtained from its planar coordinate value in the second direction in the first image (e.g., the spatial coordinate x1 along the X axis is obtained from the planar coordinate x1 along the X axis in the first image P1); the spatial coordinate value of the marker in the first direction is obtained from its planar coordinate value in the first direction in the second image (e.g., the spatial coordinate z1 along the Z axis is obtained from the planar coordinate z1 along the Z axis in the second image P2); and the spatial coordinate value of the marker in the third direction is obtained from its planar coordinate value in the third direction in the first image and/or in the second image (e.g., the spatial coordinate y1 along the Y axis is obtained from the planar coordinate y1 along the Y axis in the first image P1 and/or in the second image P2; when the two planar coordinate values differ, the spatial coordinate value may be, for example, their average). It should be understood that, depending on the choice of the origin of the spatial coordinate system, the above process may involve a coordinate transformation or a related process.
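The coordinate composition just described can be sketched as follows. The averaging of the shared Y value when the two images disagree follows the example in the text; the function and parameter names are illustrative.

```python
def marker_space_position(first_image_xy, second_image_zy):
    """Combine (x, y) from the first image with (z, y) from the second image
    into a spatial coordinate (x, y, z), averaging the shared y component."""
    x, y1 = first_image_xy
    z, y2 = second_image_zy
    return (x, (y1 + y2) / 2, z)
```

For the highest marker in fig. 3, `marker_space_position((x1, y1), (z1, y1))` would return `(x1, y1, z1)`.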
It should be understood that, in addition to obtaining the position of a marker in space from the relative positional relationship between the marker and other image elements (such as image boundaries or object boundaries), the position of a marker in space may also be obtained from its pixel coordinates in the image (the transformation from pixel coordinates to spatial position may be derived from the shooting positions and shooting directions of the two imaging mechanisms, or obtained by experimental calibration, and its data may be stored in a memory, for example in the form of a table).
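A pixel-coordinate route could, in the simplest case, use a precomputed linear calibration (a scale per axis plus an origin offset). The calibration values here are illustrative assumptions; in practice they would come from the known camera geometry or from experimental calibration, as the text notes.

```python
def pixel_to_space(pixel, mm_per_pixel, origin_mm):
    """Map a (u, v) pixel coordinate to physical coordinates (mm) on the image plane,
    using a per-axis scale and origin offset obtained from calibration."""
    return tuple(o + p * s for p, s, o in zip(pixel, mm_per_pixel, origin_mm))
```

With, say, 0.5 mm per pixel on both axes and the origin at pixel (0, 0), pixel (100, 50) maps to (50.0, 25.0) mm.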
In yet another example, step 102 may include obtaining the transformation data required to derive the position of the at least one marker in space from its positions in the first image and the second image (for example, detecting the shooting positions and shooting directions of the two imaging mechanisms and correcting the original transformation data accordingly, to ensure the accuracy of the derived marker positions in space). The transformation data may be, for example, data indicating that the plane of the first image is the X-Y plane and the plane of the second image is the Y-Z plane, or a transformation from pixel coordinates to spatial positions derived from the shooting positions and shooting directions of the two imaging mechanisms. In this way, the process of pre-configuring the transformation data can be omitted, and changes to the shooting positions and shooting directions of the two imaging mechanisms can be corrected in time, improving the accuracy of patient position detection.
It will also be appreciated that, where the shooting direction of the first imaging mechanism and that of the second imaging mechanism are not mutually perpendicular (i.e., form an angle other than 90°), the spatial coordinates of each marker may still be derived from its planar coordinates in the first and second images, using the value of that angle. For example, referring to fig. 3, the plane of the first image P1 may be fixed as the X-Y plane while the plane of the second image P2 is the Y-Z plane rotated about the Y axis; in this case, the spatial coordinates of a marker along the X and Y axes are obtained from its projection on the first image P1 as described above, and its spatial coordinate along the Z axis is obtained from its projection on the second image P2 by trigonometric operations using the included angle between the two shooting directions. Of course, depending on how the spatial coordinate system is established, the first image and the second image may lie on none of the X-Y, Y-Z or X-Z planes, in which case the position of each marker in space is computed from its projections on the two images using the angles between the image planes and the coordinate axes, following the general methods of spatial geometry. The relevant spatial geometry and coordinate transformation operations are well known to those skilled in the art and are not described in detail herein.
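One way to sketch the non-perpendicular case: assume the second image plane is the Y-Z plane rotated about the Y axis by an angle phi (phi = 0 recovers the perpendicular case), so the horizontal image coordinate u2 measures x*sin(phi) + z*cos(phi). Since x is already known from the first image, z can be recovered by trigonometry. This geometry model is an illustrative assumption, not a formula quoted from the patent.

```python
import math

def z_from_rotated_view(u2, x, phi_radians):
    """Recover the Z coordinate from a second view whose image plane is the
    Y-Z plane rotated about the Y axis by phi_radians, given x from the first view."""
    return (u2 - x * math.sin(phi_radians)) / math.cos(phi_radians)
```

When phi is zero, the function simply returns u2, matching the perpendicular-camera case described earlier.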
In step 103, posture position data of the patient is obtained based on the position of the at least one marker in space.
It should be noted that the posture and position data of the patient is data representing the patient's posture and the positions of the body parts relevant to the medical activity or the radiation medical equipment. It may be the spatial coordinate values of at least one selected position point on the patient's body, or the offset of at least one selected position point relative to its initial position, and may take the form of a picture or a table.
In one example, the posture and position data of the patient is represented as the offsets of the spatial positions of the three markers 20 located on the patient's head, left shoulder, and right shoulder in fig. 1 relative to their initial positions, and is used to characterize the movement of the patient's breast tumor target area relative to the start of treatment. Treatment is stopped immediately when any offset is detected to exceed a specified value, preventing the radiation from irradiating and damaging surrounding normal tissue. In this example, step 103 may include calculating the distance between the spatial position of each marker 20 and its initial spatial position, and issuing or executing a control instruction to stop the radiation output when any of the obtained distances exceeds a predetermined threshold.
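The threshold check described in this example can be sketched as follows; the marker names and the Euclidean-distance criterion are illustrative assumptions:

```python
import math

def check_marker_offsets(current, initial, threshold):
    """Return True if the radiation output should be stopped, i.e. if
    any marker has drifted farther than `threshold` from its initial
    spatial position.

    `current` and `initial` map marker names (illustrative, e.g.
    "head", "left_shoulder") to (x, y, z) coordinates.
    """
    for name, pos in current.items():
        dx, dy, dz = (pos[i] - initial[name][i] for i in range(3))
        if math.sqrt(dx * dx + dy * dy + dz * dz) > threshold:
            return True  # offset exceeds the specified value
    return False
```

In a monitoring loop this check would run once per captured image pair, with the control instruction to stop radiation issued on the first True result.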
It should be understood that, since the number of markers 20 and their attachment positions on the patient's body surface can be predetermined, the transformation relationship from the positions of the markers in space to the posture and position data of the patient can be obtained in advance by theoretical calculation or experimental measurement, so that the posture and position data can be obtained as needed from the marker positions in space obtained in step 102. Furthermore, the pixel-coordinate-to-spatial-position transformation of step 102 may be combined with the transformation from marker positions in space to the patient's posture and position data. That is, a transformation relationship directly from the positions of the at least one marker in the first and second images to the posture and position data of the patient may be obtained by theoretical calculation or experimental measurement, and the posture and position data may then be obtained directly from the marker positions in the first and second images by means of this combined transformation (in this case, the step of obtaining the positions of the at least one marker in space is implicit in the transformation relationship).
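The combination of the two transformations described above can be sketched as a simple function composition. This is a minimal illustration; the disclosure leaves the concrete form of each transformation to calibration or experiment:

```python
def compose(f, g):
    """Collapse the pixel-to-space mapping f and the space-to-posture
    mapping g into one direct mapping h(p) = g(f(p)), so that posture
    data is obtained directly from image positions."""
    return lambda image_positions: g(f(image_positions))
```

For example, composing a hypothetical pixel-to-space scaling with a hypothetical posture summary yields a single callable that hides the intermediate spatial coordinates, exactly as the combined transformation in the text hides them.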
It should also be understood that, during a patient position detection process, or a patient position monitoring process based on it (i.e., continuously repeating the detection process), the number of markers attached to the patient's body surface and their attachment positions should be kept constant, and the shooting positions and shooting directions of the two imaging mechanisms may be fixed to reduce the overhead of obtaining the position of the at least one marker in space each time. Moreover, the number and attachment positions of the markers vary with the posture and position data to be obtained. For example, a single marker attached to the tip of the patient's nose may suffice for detecting the position of the patient's head, whereas markers attached to the patient's limbs may be unnecessary for that purpose.
It can be seen that, in the embodiments of the present disclosure, the positions of the markers in space can be obtained by photographing them with the first and second imaging mechanisms, and the posture and position data of the patient can be obtained from the attachment relationship between the markers and the patient's body surface, so that the patient's movement can be detected during treatment. Because the equipment needed to implement the disclosed solution is readily available (e.g., the patient fixing mechanism may be a bed, a chair, or a support; the imaging mechanism may be a camera or an image sensor; and the markers may be tape, simple geometric shapes on a belt, or simple geometric shapes on a support stick), the present disclosure can help reduce the hardware requirements of patient position detection.
Fig. 4 is a schematic flow chart of a patient position detection method according to another embodiment of the present disclosure. To the extent possible, the methods provided by embodiments of the present disclosure may be combined with any of the implementations of the patient position detection methods described above. Referring to fig. 4, the method may include the following steps.
In step 301, a first image is captured by a first imaging mechanism that provides illumination to at least one marker and a second image is captured by a second imaging mechanism that provides illumination to at least one marker while the patient is secured to the patient securing mechanism.
That is, the first imaging mechanism and the second imaging mechanism may each be provided with an illumination component that illuminates the subject during shooting. In one example, as shown in fig. 1, the first imaging mechanism 12 may provide visible light illumination downward from its shooting position via an illumination lamp disposed beside its lens, and the second imaging mechanism 13 may likewise provide visible light illumination forward from its shooting position. This can help improve the imaging quality of the first image and the second image, and is beneficial to the accuracy of subsequent position detection.
Correspondingly, the surface of the marker may be provided with a reflective material (e.g., reflective paint, reflective film, a blazed grating, etc.), whose band of high reflectivity may be matched to the wavelength band of the illumination provided by the imaging mechanism. In this way, the marker can have relatively high brightness in the first and second images and be more easily distinguished from other objects and from the background.
In step 302, at least one marker is identified in the first image and the second image, respectively, based on a difference in brightness between the at least one marker and the background in the first image and the second image.
It will be appreciated that, when the marker has relatively high brightness in the first and second images, it can be identified more easily from those images by image processing techniques, and/or background and object details of relatively low brightness other than the marker can be filtered out of the first and second images to enhance identification.
In one example, the brightness and contrast of the first image may be adjusted to make the brightness difference across the marker edges more visible, and the contour line of each marker may then be identified by edge extraction; during this process, other contour or boundary lines may not be identified as edges because the brightness difference across them is not large enough. Each marker can thus be easily identified by its shape in the processed first image, and its position obtained. Because image details that might interfere with recognition are largely filtered out, this process can be relatively fast and accurate. The same or similar processing may likewise be applied to the second image.
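A minimal sketch of brightness-based marker identification, assuming the markers appear as bright blobs after thresholding. The fixed threshold and the 4-connected flood fill are illustrative simplifications of the brightness/contrast adjustment and edge extraction described above:

```python
import numpy as np

def bright_marker_centroids(gray, thresh=200):
    """Locate reflective markers as bright blobs in a grayscale image.

    Pixels at or above `thresh` (hypothetical value) are treated as
    marker pixels; connected blobs are grouped with a simple flood
    fill, and each blob's centroid (x, y) is returned as the marker's
    position in the image.
    """
    mask = gray >= thresh
    visited = np.zeros_like(mask, dtype=bool)
    centroids = []
    h, w = mask.shape
    for sy in range(h):
        for sx in range(w):
            if mask[sy, sx] and not visited[sy, sx]:
                stack, pix = [(sy, sx)], []
                visited[sy, sx] = True
                while stack:  # flood fill one connected blob
                    y, x = stack.pop()
                    pix.append((y, x))
                    for ny, nx in ((y - 1, x), (y + 1, x), (y, x - 1), (y, x + 1)):
                        if 0 <= ny < h and 0 <= nx < w and mask[ny, nx] and not visited[ny, nx]:
                            visited[ny, nx] = True
                            stack.append((ny, nx))
                ys, xs = zip(*pix)
                centroids.append((sum(xs) / len(xs), sum(ys) / len(ys)))
    return centroids
```

The same routine would be applied to both the first image and the second image; a production implementation would more likely use an optimized connected-component or contour routine from an image processing library.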
It should be understood that, in addition to achieving the above effects by having the imaging mechanism provide illumination and the marker surface reflect light, the marker may be given relatively high brightness in the first and second images by using a light emitter as the marker (e.g., an object coated with luminescent paint, a lamp, or a thin-film light-emitting device). In that case the imaging mechanism need not provide illumination, which can also help avoid the problem of a marker going unrecognized because the illuminating light is blocked.
In step 303, each marker is identified in the first image and the second image, respectively, based on color differences between the different markers in the first image and the second image.
It should be understood that, in the above process, the at least one marker is treated as a whole: when there are multiple markers, they are not distinguished from one another. This saves image recognition overhead, but it may fail to handle, for example, a change in the number of recognized markers (such as a spherical object entering the shooting range in the scene of fig. 1) or an exchange of the positions of two markers.
In this regard, when there are multiple markers, they may be distinguished by color, for example by using three markers that emit red, green, and blue light, or markers that appear red, green, and blue under the illumination of the imaging mechanism. In this way, after the positions of all markers in the first and second images are identified, the different markers can be distinguished by the colors of the image regions at the corresponding positions. Then, when the number of recognized markers unexpectedly changes, corresponding processing can be performed according to the marker colors (for example, a spherical object entering the shooting range can be excluded because it lacks any of the preset colors), thereby ensuring recognition accuracy.
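Distinguishing markers by color might be sketched as a nearest-color classification; the red/green/blue palette and the rejection distance below are hypothetical values:

```python
def classify_marker_color(rgb, palette=None):
    """Assign a detected marker to its preset color by nearest RGB
    distance; return None when the sample is too far from every preset
    color, so an unexpected object entering the shooting range (e.g.
    one without any preset color) can be excluded.
    """
    if palette is None:  # hypothetical preset marker colors
        palette = {"red": (255, 0, 0), "green": (0, 255, 0), "blue": (0, 0, 255)}
    best, best_d = None, float("inf")
    for name, ref in palette.items():
        d = sum((a - b) ** 2 for a, b in zip(rgb, ref))
        if d < best_d:
            best, best_d = name, d
    return best if best_d < 100 ** 2 else None  # illustrative rejection radius
```

In practice the RGB sample would be averaged over the image region found at each identified marker position, and a color space less sensitive to illumination (e.g. HSV hue) may be preferable.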
In step 304, a position of the at least one marker in space is derived based on the position of the at least one marker in the first image and the second image.
In step 305, posture position data of the patient is obtained based on the position of the at least one marker in space.
It should be understood that the above processes of steps 301 to 305 can be implemented by referring to the above description of the patient position detection method as shown in fig. 1, and are not repeated herein.
In yet another example, the visible-light-imaging-based patient position detection methods described in the embodiments of the present disclosure may be used in conjunction with non-visible-light imaging to better enable various processes in radiation therapy. For example, since the positions at which the markers are attached to the patient's body surface can be regarded as fixed, when performing image registration between X-ray transmission images or computed tomography images, the attachment position points can be obtained by identifying the markers, and these points can then be used to reduce the amount of computation for image registration or to verify its accuracy. On this basis, more accurate image registration can be achieved at smaller irradiation doses.
In yet another example, the apparatus of the embodiments of the present disclosure may be used to facilitate rapid positioning of a patient. For example, the allowable spatial region of each marker at the completion of positioning may be configured in advance; when positioning begins, whether each marker is within its allowable region can be determined in real time from the images acquired by the two imaging mechanisms, and positioning can be guided according to the resulting deviations until every marker is within its allowable region. Since the processing part can quickly recognize the markers based on their brightness difference from the surroundings, and since markers that emit or reflect visible light and visible-light cameras are all readily available components, a simple and fast-responding positioning guidance scheme can be realized. In addition, whether each marker remains within its allowable region can be monitored in real time during treatment, and the treatment can be interrupted when any marker moves out of its region, so as to respond to a patient suddenly moving during treatment; such an incident can be detected more conveniently and promptly than with non-visible-light imaging devices, which are more complex and need time to form an image.
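The allowable-region check used for positioning guidance can be sketched as follows; modelling each region as an axis-aligned box is an assumption for illustration, as is the per-axis deviation used to guide the adjustment:

```python
def in_allowed_region(pos, region):
    """Check whether a marker's spatial position lies inside its
    pre-configured allowable region, modelled here as an axis-aligned
    box ((xmin, xmax), (ymin, ymax), (zmin, zmax))."""
    return all(lo <= c <= hi for c, (lo, hi) in zip(pos, region))

def positioning_deviation(pos, region):
    """Per-axis deviation used to guide positioning: 0 when the
    coordinate is inside its allowed interval, otherwise the signed
    overshoot past the nearest bound."""
    dev = []
    for c, (lo, hi) in zip(pos, region):
        dev.append(c - lo if c < lo else (c - hi if c > hi else 0.0))
    return dev
```

During positioning, the deviations would drive the guidance display; during treatment, a single `in_allowed_region` failure for any marker would trigger the interruption described above.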
Fig. 5 is a block diagram of a patient position detection apparatus according to an embodiment of the present disclosure. The apparatus of the disclosed embodiments may be applied to any medical activity or radiation medical device requiring patient position detection, such as Image-Guided Radiation Therapy (IGRT), intracranial tumor resection, or other surgical procedures involving patient position detection. It is to be understood that the patient refers to the subject of such medical activities, such as a person in need of radiation therapy or surgery. In one example, the patient position detection apparatus may be installed in software form on a radiation medical device (e.g., a radiotherapy device, an imaging device, an operating table, etc.) so as to implement patient position detection in medical activities. Referring to fig. 5, the patient position detection apparatus includes:
a photographing module 41, configured to, when a patient is fixed to the patient fixing mechanism, photograph at least one marker attached to a body surface of the patient by the first photographing mechanism to obtain a first image, and photograph the at least one marker by the second photographing mechanism to obtain a second image; wherein the marker is a position indicator attached to a body surface of the patient;
a first processing module 42 for deriving a position of the at least one marker in space based on the position of the at least one marker in the first image and the second image;
a second processing module 43 for obtaining posture position data of the patient based on the position of the at least one marker in space.
In a possible implementation, each marker is a light emitter, and the device further comprises a first identification module,
the first identification module is configured to identify the at least one marker in the first image and the second image based on a brightness difference between the at least one marker and a background in the first image and the second image, respectively, before the position of each marker in space is obtained based on the position of the at least one marker in the first image and the second image.
In a possible implementation, a reflective material is disposed on the surface of each marker, and the photographing module is further configured to, when the patient is fixed on the patient fixing mechanism, obtain the first image through the first photographing mechanism while it provides illumination to the at least one marker, and obtain the second image through the second photographing mechanism while it provides illumination to the at least one marker; correspondingly, the device further comprises a second identification module,
the second identification module is configured to identify the at least one marker in the first image and the second image based on a brightness difference between the at least one marker and a background in the first image and the second image, respectively, before the position of each marker in space is obtained based on the position of the at least one marker in the first image and the second image.
In one possible implementation, at least two markers of different colors are attached to the body surface of the patient, the device further comprises a third identification module,
the third identification module is configured to identify each of the markers in the first image and the second image based on a color difference between different ones of the markers in the first image and the second image, respectively, before the position of each of the markers in space is derived based on the position of the at least one marker in the first image and the second image.
In one possible implementation, the first processing module 42 is further configured to:
obtaining a spatial coordinate value of any one of the markers in the second direction based on a planar coordinate value of the marker in the second direction in the first image, obtaining a spatial coordinate value of the marker in the first direction based on a planar coordinate value of the marker in the first direction in the second image, and obtaining a spatial coordinate value of the marker in the third direction based on at least one of a planar coordinate value of the marker in the third direction in the first image and a planar coordinate value of the marker in the third direction in the second image;
the first direction is the shooting direction of the first camera shooting mechanism, the second direction is the shooting direction of the second camera shooting mechanism, the first direction is perpendicular to the second direction, and the third direction is perpendicular to the first direction and the second direction respectively.
It should be understood that, in accordance with the alternative implementations of the patient position detection method described above, the patient position detection apparatus can implement any of the above patient position detection methods through corresponding configuration and arrangement; the details are not repeated here.
In the example corresponding to fig. 5, the patient position detection apparatus is presented in the form of functional units/functional modules. As used herein, a "unit/module" may refer to an Application-Specific Integrated Circuit (ASIC), a processor and memory executing one or more software or firmware programs, an integrated logic circuit, and/or other devices that provide the described functionality. At least part of the functionality of at least one of the units and modules mentioned may be implemented, for example, by a processor executing program code stored in a memory.
Fig. 6 is a block diagram of a radiation medical device provided in an embodiment of the present disclosure. Referring to fig. 6, the radiation medical device comprises a processor 41 and a memory 42, the memory 42 having program instructions stored therein, and the processor 41 being configured to invoke the program instructions in the memory 42 to perform any of the patient position detection methods described above.
Processor 41 may include a central processing unit (CPU, single-core or multi-core), a graphics processing unit (GPU), a microprocessor, an application-specific integrated circuit (ASIC), a digital signal processor (DSP), a digital signal processing device (DSPD), a programmable logic device (PLD), a field-programmable gate array (FPGA), a controller, a microcontroller, or multiple integrated circuits for controlling program execution.
Memory 42 may include, but is not limited to, a read-only memory (ROM) or other type of static storage device that can store static information and instructions, a random access memory (RAM) or other type of dynamic storage device that can store information and instructions, an electrically erasable programmable read-only memory (EEPROM), a compact disc read-only memory (CD-ROM) or other optical disc storage (including compact discs, laser discs, digital versatile discs, Blu-ray discs, etc.), a magnetic disk storage medium or other magnetic storage device, or any other medium that can be used to carry or store desired program code in the form of instructions or data structures and that can be accessed by a computer. The memory may be separate from or integrated with the processor.
In a specific implementation, processor 41 may include one or more CPUs. In a specific implementation, the above-described radiation medical device may include multiple processors. Each of these processors may be a single-core (single-CPU) processor or a multi-core (multi-CPU) processor. A processor here may refer to one or more devices, circuits, and/or processing cores for processing data (e.g., computer program instructions).
The above-described radiation medical device may comprise a general-purpose or special-purpose computer device. In a specific implementation, the radiation medical device may include any one or more of a radiation source, optical components (such as a slit, a beam expander, a collimator, a lens, etc.), a computed tomography device, an X-ray imaging device, and an operating table, and the computer device may be a desktop computer, a laptop computer, a web server, a Personal Digital Assistant (PDA), a mobile phone, a tablet computer, a wireless terminal device, a communication device, an embedded device, or a device with a similar structure.
Embodiments of the present disclosure also provide a computer storage medium storing a computer program for use in any of the above-described patient position detection methods, the computer program comprising program instructions. Any of the patient position detection methods provided by the present disclosure may be implemented by executing the stored program.
As will be appreciated by one skilled in the art, embodiments of the present disclosure may be provided as a method, an apparatus (device), or a computer program product. Accordingly, the present application may take the form of an entirely hardware embodiment, an entirely software embodiment, or an embodiment combining software and hardware aspects. Furthermore, the present application may take the form of a computer program product embodied on one or more computer-usable storage media (including, but not limited to, disk storage, CD-ROM, optical storage, and the like) containing computer-usable program code. A computer program stored on or distributed via a suitable medium, supplied together with or as part of other hardware, may also take other distributed forms, such as via the Internet or other wired or wireless telecommunication systems.
The present application is described with reference to flowchart illustrations and/or block diagrams of methods, apparatus (devices) and computer program products according to embodiments of the disclosure. It will be understood that each flow and/or block of the flow diagrams and/or block diagrams, and combinations of flows and/or blocks in the flow diagrams and/or block diagrams, can be implemented by computer program instructions. These computer program instructions may be provided to a processor of a general purpose computer, special purpose computer, embedded processor, or other programmable data processing apparatus to produce a machine, such that the instructions, which execute via the processor of the computer or other programmable data processing apparatus, create means for implementing the functions specified in the flowchart flow or flows and/or block diagram block or blocks.
These computer program instructions may also be stored in a computer-readable memory that can direct a computer or other programmable data processing apparatus to function in a particular manner, such that the instructions stored in the computer-readable memory produce an article of manufacture including instruction means which implement the function specified in the flowchart flow or flows and/or block diagram block or blocks.
These computer program instructions may also be loaded onto a computer or other programmable data processing apparatus to cause a series of operational steps to be performed on the computer or other programmable apparatus to produce a computer implemented process such that the instructions which execute on the computer or other programmable apparatus provide steps for implementing the functions specified in the flowchart flow or flows and/or block diagram block or blocks.
The above-mentioned embodiments are merely exemplary embodiments of the present disclosure, which is not intended to limit the present disclosure, and any modifications, equivalents, improvements, etc. made within the spirit and principle of the present disclosure should be included in the scope of the claims of the present disclosure.

Claims (12)

  1. A patient position detection method is applied to a medical device, the medical device comprises a patient fixing mechanism, a first camera mechanism and a second camera mechanism, and the method comprises the following steps:
    when a patient is fixed on the patient fixing mechanism, shooting at least one marker attached to the body surface of the patient through the first shooting mechanism to obtain a first image, and shooting at least one marker through the second shooting mechanism to obtain a second image; wherein the marker is a position indicator which is attached to the body surface of the patient and can emit or reflect visible light, and the first image and the second image are both visible light images;
    obtaining a position of the at least one marker in space based on the position of the at least one marker in the first image and the second image;
    obtaining posture position data of the patient based on the position of the at least one marker in space.
  2. The method of claim 1, wherein at least two markers of different colors are attached to a body surface of the patient, the method further comprising, prior to said deriving a position in space of each of the markers based on the position of the at least one marker in the first and second images:
    identifying each of the markers in the first image and the second image, respectively, based on a difference in color between different ones of the markers in the first image and the second image.
  3. The method of claim 1, wherein each of the markers has a light reflective material disposed on a surface thereof, and wherein capturing a first image of at least one of the markers with the first camera mechanism and capturing a second image of the at least one of the markers with the second camera mechanism while the patient is secured to the patient securing mechanism comprises:
    while the patient is secured to the patient securing mechanism, capturing the first image with the first imaging mechanism providing illumination to the at least one marker and capturing the second image with the second imaging mechanism providing illumination to the at least one marker;
    correspondingly, before the deriving the position of each marker in space based on the position of the at least one marker in the first image and the second image, the method further comprises:
    identifying the at least one marker in the first image and the second image based on a difference in brightness between the at least one marker and a background in the first image and the second image, respectively.
  4. The method of claim 1, wherein each of the markers is a luminophore, the method further comprising, prior to said deriving the position in space of each of the markers based on the position of the at least one marker in the first and second images:
    identifying the at least one marker in the first image and the second image based on a difference in brightness between the at least one marker and a background in the first image and the second image, respectively.
  5. The method of any one of claims 1 to 4, wherein deriving the position of the at least one marker in space based on the position of the at least one marker in the first and second images comprises:
    obtaining a spatial coordinate value of any one of the markers in the second direction based on a planar coordinate value of the marker in the second direction in the first image, obtaining a spatial coordinate value of the marker in the first direction based on a planar coordinate value of the marker in the first direction in the second image, and obtaining a spatial coordinate value of the marker in the third direction based on at least one of a planar coordinate value of the marker in the third direction in the first image and a planar coordinate value of the marker in the third direction in the second image;
    the first direction is the shooting direction of the first camera shooting mechanism, the second direction is the shooting direction of the second camera shooting mechanism, the first direction is perpendicular to the second direction, and the third direction is perpendicular to the first direction and the second direction respectively.
  6. A patient position detecting apparatus applied to a radiological medical device including a patient fixing mechanism, a first imaging mechanism, and a second imaging mechanism, the apparatus comprising:
    the shooting module is used for shooting at least one marker attached to the body surface of the patient through the first shooting mechanism to obtain a first image and shooting at least one marker through the second shooting mechanism to obtain a second image when the patient is fixed on the patient fixing mechanism; wherein the marker is a position indicator which is attached to the body surface of the patient and can emit or reflect visible light, and the first image and the second image are both visible light images;
    a first processing module for deriving a position of the at least one marker in space based on the position of the at least one marker in the first image and the second image;
    a second processing module for obtaining posture position data of the patient based on the position of the at least one marker in space.
  7. The device of claim 6, wherein each of the markers is a light emitter, the device further comprising a first identification module,
    the first identification module is configured to identify the at least one marker in the first image and the second image based on a brightness difference between the at least one marker and a background in the first image and the second image, respectively, before the position of each marker in space is obtained based on the position of the at least one marker in the first image and the second image.
  8. The apparatus of claim 6, wherein each of the markers has a light reflective material disposed on a surface thereof, the camera module further configured to capture the first image by the first camera mechanism providing illumination to the at least one marker and the second image by the second camera mechanism providing illumination to the at least one marker while the patient is secured to the patient securing mechanism; correspondingly, the device also comprises a second identification module,
    the second identification module is configured to identify the at least one marker in the first image and the second image based on a brightness difference between the at least one marker and a background in the first image and the second image, respectively, before the position of each marker in space is obtained based on the position of the at least one marker in the first image and the second image.
  9. The apparatus of claim 6, wherein at least two markers of different colors are attached to a body surface of the patient, the apparatus further comprising a third identification module,
    the third identification module is configured to identify each of the markers in the first image and the second image based on a color difference between different ones of the markers in the first image and the second image, respectively, before the position of each of the markers in space is derived based on the position of the at least one marker in the first image and the second image.
  10. The apparatus of any of claims 6-9, wherein the first processing module is further configured to:
    obtaining a spatial coordinate value of any one of the markers in the second direction based on a planar coordinate value of the marker in the second direction in the first image, obtaining a spatial coordinate value of the marker in the first direction based on a planar coordinate value of the marker in the first direction in the second image, and obtaining a spatial coordinate value of the marker in the third direction based on at least one of a planar coordinate value of the marker in the third direction in the first image and a planar coordinate value of the marker in the third direction in the second image;
    wherein the first direction is the shooting direction of the first camera mechanism, the second direction is the shooting direction of the second camera mechanism, the first direction is perpendicular to the second direction, and the third direction is perpendicular to both the first direction and the second direction.
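Because each camera shoots along one axis of a mutually perpendicular pair, each image supplies the two coordinates the other camera cannot resolve, and the shared third axis appears in both. A minimal sketch of the coordinate combination described in claim 10; the function name, the averaging of the redundant third-direction coordinate, and the assumption that image coordinates are already calibrated into a shared spatial frame are illustrative, not specified by the patent:

```python
def marker_spatial_position(first_image_pt, second_image_pt):
    """Combine planar coordinates from two mutually perpendicular cameras
    into one spatial coordinate, following the scheme of claim 10.

    first_image_pt:  (second_dir, third_dir) plane coordinates from the
                     first camera, which shoots along the first direction.
    second_image_pt: (first_dir, third_dir) plane coordinates from the
                     second camera, which shoots along the second direction.
    Returns (first_dir, second_dir, third_dir) spatial coordinates.
    """
    y, z1 = first_image_pt   # first camera sees no depth along first dir
    x, z2 = second_image_pt  # second camera sees no depth along second dir
    z = (z1 + z2) / 2.0      # third dir is visible to both; average them
    return (x, y, z)

print(marker_spatial_position((12.0, 30.0), (7.5, 30.5)))  # → (7.5, 12.0, 30.25)
```

Claim 10 only requires "at least one of" the two third-direction coordinates, so taking either value alone, rather than their mean, would also satisfy the claim language.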
  11. A radiotherapy apparatus comprising a processor and a memory having stored therein program instructions, the processor being configured to invoke the program instructions in the memory to perform the method of any of claims 1 to 5.
  12. A computer-readable storage medium, characterized in that the computer-readable storage medium stores a computer program comprising program instructions which, when executed by a processor, cause the processor to perform the method of any of claims 1 to 5.
CN201980100805.4A 2019-09-27 2019-09-27 Patient position detection method and device, radiation medical equipment and readable storage medium Pending CN114430670A (en)

Applications Claiming Priority (1)

Application Number Priority Date Filing Date Title
PCT/CN2019/108670 WO2021056452A1 (en) 2019-09-27 2019-09-27 Method and apparatus for detecting position of patient, radiotherapy device and readable storage medium

Publications (1)

Publication Number Publication Date
CN114430670A (en) 2022-05-03

Family

ID=75166311

Family Applications (1)

Application Number Title Priority Date Filing Date
CN201980100805.4A Pending CN114430670A (en) 2019-09-27 2019-09-27 Patient position detection method and device, radiation medical equipment and readable storage medium

Country Status (2)

Country Link
CN (1) CN114430670A (en)
WO (1) WO2021056452A1 (en)

Families Citing this family (1)

* Cited by examiner, † Cited by third party
Publication number Priority date Publication date Assignee Title
CN113521425A (en) * 2021-07-26 2021-10-22 克拉玛依市中心医院 Multifunctional visual gastric lavage machine for emergency treatment

Family Cites Families (6)

Publication number Priority date Publication date Assignee Title
DE4207632C2 (en) * 1992-03-11 1995-07-20 Bodenseewerk Geraetetech Device and method for positioning a body part for treatment purposes
EP2948056B1 (en) * 2013-01-24 2020-05-13 Kineticor, Inc. System and method for tracking and compensating for patient motion during a medical imaging scan
US20140275707A1 (en) * 2013-03-15 2014-09-18 Elekta AB Publ. Intra-fraction motion management system and method
EP3188660A4 (en) * 2014-07-23 2018-05-16 Kineticor, Inc. Systems, devices, and methods for tracking and compensating for patient motion during a medical imaging scan
CN108635681B (en) * 2018-03-21 2020-11-10 西安大医集团股份有限公司 Positioning method and device, upper computer and radiotherapy system
CN110227214B (en) * 2019-07-12 2021-11-30 江苏瑞尔医疗科技有限公司 Radiotherapy positioning method based on positioning target

Also Published As

Publication number Publication date
WO2021056452A1 (en) 2021-04-01

Similar Documents

Publication Publication Date Title
US20230293908A1 (en) Patient monitor
JP6340488B1 (en) Radiation dose monitoring system
US20190001155A1 (en) Radiotherapy system and treatment support apparatus
US20200242339A1 (en) Registration of frames of reference
US9492685B2 (en) Method and apparatus for controlling and monitoring position of radiation treatment system
US7428296B2 (en) Medical imaging system with a part which can be moved about a patient and a collision protection method
JP6473501B2 (en) Method for calibrating a patient monitoring system for use with a radiotherapy device
US9943271B2 (en) Method and control system for controlling a medical device
US10080542B2 (en) Information providing method and apparatus for aligning X-ray tube and detector of mobile X-ray apparatus, and wireless detector
US20170035374A1 (en) Interventional x-ray system with automatic iso-centering
CA2836201C (en) Method and system for forming a virtual model of a human subject
KR20170024561A (en) X-ray image apparatus nad control method for the same
CN111132730A (en) Calibration method for a patient monitoring system for use with a radiation therapy device
CN108992796B (en) Patient monitoring system
US20090296893A1 (en) Calibration of a multi-plane x-ray unit
JP2018515207A (en) Monitoring system
EP3525662A1 (en) An intelligent model based patient positioning system for magnetic resonance imaging
JP2023554160A (en) navigation support
US11389122B2 (en) Method for registering an X-ray image data set with a navigation system, computer program product, and system
CN114430670A (en) Patient position detection method and device, radiation medical equipment and readable storage medium
CN116529756A (en) Monitoring method, device and computer storage medium
US20210162235A1 (en) Tumor positioning method and apparatus
US20230210478A1 (en) Moiré marker for x-ray imaging
CN115485017A (en) Image display control method, image display control device, electronic device, and computer storage medium
KR101621773B1 (en) Method and apparatus for controlling position of radiation treatment system

Legal Events

Date Code Title Description
PB01 Publication
SE01 Entry into force of request for substantive examination