CN107596578B - Alignment mark recognition method, alignment mark position determination method, image forming apparatus, and storage medium - Google Patents
- Publication number
- CN107596578B (application CN201710859487.2A)
- Authority
- CN
- China
- Prior art keywords
- image
- alignment mark
- initial alignment
- sub
- sequence
- Prior art date
- Legal status (The legal status is an assumption and is not a legal conclusion. Google has not performed a legal analysis and makes no representation as to the accuracy of the status listed.)
- Active
Landscapes
- Image Analysis (AREA)
- Apparatus For Radiation Diagnosis (AREA)
Abstract
The invention relates to a method for determining the position of an alignment mark, comprising the following steps: acquiring a sequence of scan images of a subject in an initial alignment mark state; determining the isocenter in the initial alignment mark state from the scan image sequence, which includes identifying the initial alignment marks in each layer of the scan image sequence and determining the isocenter from those marks; calculating the center point of the target object from the scan image sequence using a neural network model; and determining the position of the alignment mark from the spatial offset between the isocenter and the center point of the target object. With this method, a worker no longer needs to page through the scan image sequence manually to identify and mark the initial alignment marks, so no manual error is introduced and marking accuracy is improved. The invention also relates to an imaging device, a storage medium, and a method for identifying an alignment mark.
Description
Technical Field
The present invention relates to the field of medical equipment technology, and in particular, to an alignment mark recognition and position determination method, an imaging device, and a storage medium.
Background
Before radiotherapy, a positioning operation is required so that the center of the target object lies at the isocenter of the radiotherapy apparatus. The isocenter is the intersection of the centers of the X-ray beams from different directions; the closer the tissue is to the isocenter, the higher the intensity of X-ray radiation it receives. A lead dot is therefore affixed to the subject as a reference point for alignment. Conventionally, to determine the position of the lead dot, a doctor must page through the CT image sequence to identify and mark the alignment marks. This process is cumbersome and easily introduces human error, reducing marking accuracy.
Disclosure of Invention
Based on this, it is necessary to provide an alignment mark identification method, a position determination method, an imaging apparatus, and a storage medium that can improve marking accuracy.
A method of determining the position of an alignment mark, comprising:
acquiring a sequence of scan images of a subject in an initial alignment mark state;
determining an isocenter in an initial alignment mark state from the scan image sequence, comprising:
identifying initial alignment marks in each layer of scanned images in the sequence of scanned images; and
determining the isocenter from the initial alignment mark;
calculating the central point of the target object according to the scanning image sequence by using a neural network model; and
determining the position of the alignment mark according to the spatial offset between the isocenter and the center point of the target object.
According to the alignment mark position determining method, each initial alignment mark can be identified in the acquired scan image sequence, and the isocenter is determined from those marks; the method also uses the neural network model to calculate the center point of the target object from the scan image sequence, and then determines the position of the alignment mark from the spatial offset between the isocenter and the center point. With this method, a worker does not need to traverse the scan image sequence manually to identify and mark the initial alignment marks, so no manual error is introduced and marking accuracy is improved.
In one embodiment, the step of identifying the initial alignment mark in each layer of the scanned image sequence includes the following steps performed on each layer of the scanned image sequence:
acquiring a contour of a subject in a scan image, the contour being composed of contour points;
dividing an image area where the contour is located into a plurality of sub-image areas along the contour, wherein each sub-image area at least comprises a partial contour; and
sequentially identifying each sub-image area and determining the initial alignment mark.
In one embodiment, in the step of dividing the image area where the contour is located into a plurality of sub-image areas along the contour, the contour point is used as a center point of the sub-image areas for division, and a distance between the center points of the adjacent sub-image areas is greater than the size of the initial alignment mark.
In one embodiment, the step of sequentially identifying the sub-image regions and determining the initial alignment mark includes:
sequentially identifying each sub-image area, marking the sub-image areas when the alignment marks are identified, and adding one to the number of the alignment marks;
judging whether the number of the alignment marks is larger than or equal to a target value;
ending the scan identification of the sub-image areas in the scanned image when the number of alignment marks is greater than or equal to the target value.
In one embodiment, when each sub-image area is identified, it is judged whether the gray level of each pixel in the sub-image area exceeds a gray-level threshold, and a pixel area whose gray level exceeds the threshold is identified as an alignment mark.
In one embodiment, after the step of sequentially identifying each sub-image region and determining the initial alignment mark and before the step of determining the isocenter according to the initial alignment mark, the method further includes: taking the scanned images of which the number of the alignment marks is greater than or equal to a target value as target scanned images;
the step of determining the isocenter from the initial alignment marks is determining the isocenter from initial alignment marks in the target scan image.
In one embodiment, the step of determining the isocenter from initial alignment marks in the target scan image comprises:
acquiring the position of each initial alignment mark in the target scanning image in a corresponding sub-image area;
calculating the position of each initial alignment mark in the target scanning image based on the position relation between the sub-image area and the target scanning image; and
calculating the spatial position of the isocenter according to the position of each initial alignment mark in the target scan image.
In one embodiment, the step of sequentially identifying the sub-image regions and determining the initial alignment mark comprises:
sequentially identifying each sub-image area and marking each sub-image area when the alignment mark is identified;
calculating the position of the mark area in the corresponding scanning image;
judging whether two mark areas with the distance smaller than the size of the initial alignment mark exist in the scanned image or not;
if two mark areas whose distance is smaller than the size of the initial alignment mark exist, merging the two into one mark area whose position is the average of the two positions, and then returning to the step of judging whether two mark areas whose distance is smaller than the size of the initial alignment mark exist in the scanned image; and
if there are no two mark regions having a distance less than the size of the initial alignment mark, the mark regions are identified as the initial alignment marks.
In one embodiment, the method further comprises the step of acquiring the spatial position of the scanning bed where the subject is located in the initial alignment mark state;
in the step of determining the isocenter in the initial alignment mark state according to the scan image sequence, the scan image for determining the isocenter corresponding to the initial alignment mark is determined according to a spatial position relationship between a spatial position corresponding to each scan image in the scan image sequence and the scan bed.
In one embodiment, the step of calculating the central point of the target object from the scan image sequence by using a neural network model comprises:
inputting the scan image sequence into the neural network model, automatically identifying the contour information of the target object region in the corresponding scan image through the neural network model, and calculating the center point of the target object from the contour information.
In one embodiment, the method further comprises the step of training the neural network model by using the image with the labeling information; the labeling information comprises at least one characteristic information; the feature information includes segmented contour information.
An image forming apparatus comprising:
a scanning device for acquiring a sequence of scan images of a subject in an initial alignment marker state; and
a processor connected to the scanning device for acquiring a sequence of scan images of a subject in an initial alignment marker state;
the processor is further configured to determine an isocenter in an initial alignment mark state from the sequence of scanned images, including: identifying initial alignment marks in each layer of scanned images in the sequence of scanned images; and determining the isocenter from the initial alignment mark;
the processor is further configured to calculate a center point of a target object according to the sequence of scanned images by using a neural network model, and determine a position of the alignment mark according to a spatial offset between the isocenter and the center point of the target object.
A storage medium having stored thereon a computer program which, when being executed by a processor, is operative to perform the steps of the method as in any of the previous embodiments.
A method for identifying an alignment mark includes:
acquiring a scan image of a subject in an alignment mark state; and
automatically identifying the alignment mark in the scanned image, comprising: judging whether the gray level of each pixel in the scanned image is greater than a gray-level threshold, and identifying a pixel area whose gray level is greater than the threshold as the alignment mark.
Drawings
FIG. 1 is a flow diagram of a method for determining a position of an alignment mark in one embodiment;
FIG. 2 is a diagram illustrating an initial alignment mark state in one embodiment;
FIG. 3 is a flowchart of step S122 in FIG. 1 in one embodiment;
FIG. 4 is a flowchart of step S330 in FIG. 3 in one embodiment;
FIG. 5 is a flow diagram of acquiring a position of an alignment mark in one embodiment;
FIG. 6 is a flowchart of step S330 in FIG. 3 in another embodiment;
FIG. 7 is a diagram of a reference map sequence with annotation information in an embodiment;
FIG. 8 is a diagram illustrating a process for modeling a neural network, according to one embodiment;
FIG. 9 is a flow diagram of a method for identifying alignment marks in one embodiment.
Detailed Description
In order to make the objects, technical solutions and advantages of the present invention more apparent, the present invention is described in further detail below with reference to the accompanying drawings and embodiments. It should be understood that the specific embodiments described herein are merely illustrative of the invention and are not intended to limit the invention.
The method for determining the position of an alignment mark in one embodiment can be used in an imaging device. The imaging device may be, but is not limited to, a CT scanning device; the detailed description takes a CT scanning device as an example without limiting the invention. The imaging device may be used in a radiation therapy planning system or a radiotherapy device. In one embodiment, the radiation therapy planning system and the radiotherapy device are separate apparatuses; in other embodiments, the planning system may be integrated into the radiotherapy device. The alignment marks are affixed to the examinee's body and serve as positioning reference points for the imaging device and the radiotherapy device. During positioning, the center point of the target object in the examinee's body can be brought into alignment with the isocenter of the imaging device and the radiotherapy device by means of these reference points. Fig. 1 is a flowchart of a method for determining the position of an alignment mark in an embodiment, which includes the following steps:
in step S110, a scan image sequence of the subject in the initial alignment mark state is acquired.
The scan image sequence of the subject in the initial alignment mark state can be captured by the scanning device of the imaging equipment; that is, the acquired scan image sequence contains the initial alignment marks. In one embodiment, the sequence may be acquired by helical scanning. To place the initial alignment marks, a worker first estimates the position of the target object in the examinee's body from experience, then uses an external positioning system, such as an external laser lamp system, to aim the laser at the estimated target position; each beam of the laser lamp projects a focal point onto the examinee's body surface, and an alignment mark is affixed at each focal point. The alignment marks thus indicate the isocenter of the scanning device in the imaging equipment, and the imaging scan is performed in this state to obtain the scan image sequence. Typically, the number of alignment marks equals the number of laser beams of the external laser lamp system. In this embodiment, the external laser lamp system has three laser sources, forming three corresponding focal points on the subject's body surface. The three sources project three cross-shaped laser beams from above, from the left, and from the right of the scanning bed, and the center point of each cross projected on the body surface is the focal point of that beam. The three focal points usually lie on one CT slice and roughly form an equilateral triangle whose base is the side joining the left and right focal points; the midpoint of the base is the isocenter in the initial alignment mark state, as shown in Fig. 2. In Fig. 2, 200 denotes an alignment mark, 20 denotes a positioning laser beam, and O denotes the isocenter. The isocenter O is generally required to coincide with the isocenter of the radiotherapy apparatus so that the radiation dose delivered to the target object is maximized for the purpose of destroying the target tissue.
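A minimal Python sketch of this geometric step, assuming the three focal points are known as 2-D coordinates in the slice plane (all names and values below are illustrative):

```python
import numpy as np

def isocenter_from_foci(left, right):
    """The isocenter in the initial alignment mark state is the midpoint
    of the base formed by the left and right laser foci (Fig. 2)."""
    return (np.asarray(left, dtype=float) + np.asarray(right, dtype=float)) / 2.0

# Illustrative foci of the three cross-laser beams on one slice; the top
# focus is not needed for the midpoint but is listed for completeness.
top, left, right = (0.0, 120.0), (-100.0, 0.0), (100.0, 0.0)
print(isocenter_from_foci(left, right))  # -> [0. 0.]
```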
Step S120, determining an isocenter in an initial alignment mark state according to the scan image sequence.
The alignment mark has a high CT value relative to the subject. For example, it may be made of lead, forming a lead-dot structure. Its CT value, typically 2000-4000, is far higher than that of the subject's normal body tissue, so the mark appears in the scan image sequence (i.e., on the CT images) as a bright spot with a high gray value. Moreover, because the alignment mark is usually affixed to the body surface close to the skin, the surrounding air or tissue has low density; the mark's gray value therefore contrasts sharply with surrounding pixels, and identification based on gray value is easy to realize.
The method comprises the steps of automatically identifying initial alignment marks in each scanned image in a scanned image sequence, determining a scanned image containing the initial alignment marks required for determining the isocenter according to an identification result, and determining the corresponding isocenter according to the position of each initial alignment mark in the scanned image. Step S120 includes steps S122 to S124.
In step S122, the initial alignment marks in the scanned images of the layers in the sequence of scanned images are identified.
Each scan image in the sequence corresponds to a different slice of the subject, so the state of the initial alignment marks contained in each image differs, and the initial alignment marks must be identified in every layer of the sequence. Specifically, it is judged whether the scanned image contains an image area whose gray level exceeds a gray-level threshold; if so, the image can be determined to contain an alignment mark, and the area with gray level above the threshold is automatically identified as the mark. The gray-level threshold is set according to the gray value of the alignment mark in the scan image, i.e., it can be set from the CT value of the mark.
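A minimal sketch of this per-slice thresholding, assuming each slice is a NumPy array of CT values in HU; the 2000 HU threshold follows the lead-marker range given above, and all names are illustrative:

```python
import numpy as np
from scipy import ndimage

def find_alignment_marks(slice_hu, threshold=2000):
    """Identify pixel regions whose CT value exceeds the gray-level
    threshold; each connected region is a candidate alignment mark."""
    mask = slice_hu > threshold
    labels, count = ndimage.label(mask)
    centers = ndimage.center_of_mass(mask, labels, range(1, count + 1))
    return count, centers  # number of marks and their (row, col) centers
```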
In step S124, an isocenter is determined from each initial alignment mark.
The scanning image used for determining the isocenter can be determined according to the condition of the initial alignment marks identified in the scanning images of all layers in the scanning image sequence, and the isocenter is finally determined according to the initial alignment marks in the scanning image. As described above, since the relative positional relationship of the laser beams in an external positioning system such as an external laser lamp system is clear, after determining the focal point formed on the subject, that is, the position of the initial alignment mark, the position of the isocenter can be determined from the position of the initial alignment mark, as shown in fig. 2. Specifically, the position of the isocenter in the scanned image is calculated, and the spatial position of the isocenter is determined according to the position of the isocenter in the scanned image and the spatial position corresponding to the scanned image. In another embodiment, the coordinate system of the scan image, the gantry coordinate system and the scan bed coordinate system are the same coordinate system, so that the calculated coordinate position of the isocenter in the scan image can be directly used as its spatial position without further position transformation.
Through the steps, the initial alignment marks can be automatically identified, and after the target scanning image is determined, the isocenter in the initial alignment mark state is calculated according to the positions of the initial alignment marks in the scanning image. In the process, the worker does not need to manually traverse the scanned image sequence to complete the identification of the initial alignment mark, so that manual errors are not introduced, and the marking accuracy is improved.
Step S130, calculating the central point of the target object according to the scanning image sequence by using the neural network model.
The neural network model is first used to determine the contour information of the target object region in the corresponding scan images of the sequence, and the center point of the target object is then calculated from that contour information. The neural network model can be built in advance, so the acquired scan image sequence can be input directly into the model, the contour information of the target object region is identified automatically, and the center point of the target object is computed from the contour identification result.
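One way this center-point computation could look, sketched under the assumption that the network outputs a binary target mask per slice (the function and variable names are illustrative):

```python
import numpy as np

def target_center_from_masks(masks, z_positions):
    """Center point of the target object: mean coordinate of all target
    pixels over the per-slice masks produced by the network."""
    points = []
    for mask, z in zip(masks, z_positions):
        rows, cols = np.nonzero(mask)
        points.extend((c, r, z) for r, c in zip(rows, cols))
    return np.mean(np.asarray(points, dtype=float), axis=0)  # (x, y, z)
```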
Similarly, when calculating the center point of the target object, its position in the corresponding scan image may be determined first, and its spatial position then derived from that position and the spatial position of the scan image. When the scan image coordinate system, the gantry coordinate system, and the scanning bed coordinate system coincide, the coordinate position in the image can be used directly as the spatial position of the target object's center point.
Step S140, determining the position of the alignment mark according to the spatial offset of the isocenter and the center point of the target object.
The spatial offset between the isocenter and the center point of the target object is determined from their spatial positions; once the offset is known, the position of the alignment mark can be determined from it. Specifically, after the offset is obtained, the scanning bed is moved by that offset so that the center point of the target object coincides with the isocenter of the imaging device; the focal points then formed on the subject's body by the laser beams of an external positioning system, such as an external laser lamp system, are the correct positions of the alignment marks. In an embodiment, the spatial position of each initial alignment mark may be determined from its position in the target scan image and the spatial position of that image, and the alignment mark position to be determined is obtained by shifting that spatial position by the spatial offset.
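A sketch of this final step; the sign convention for the offset (the marks shift by the same offset as the couch) is an assumption for illustration:

```python
import numpy as np

def alignment_mark_positions(isocenter, target_center, initial_marks):
    """Spatial offset between isocenter and target center, and the
    alignment-mark positions after shifting by that offset. The sign
    convention here is illustrative and depends on couch conventions."""
    offset = np.asarray(isocenter, dtype=float) - np.asarray(target_center, dtype=float)
    shifted = [np.asarray(m, dtype=float) + offset for m in initial_marks]
    return offset, shifted
```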
According to the alignment mark position determining method, each initial alignment mark can be identified in the acquired scan image sequence and the isocenter determined from those marks; the method also uses the neural network model to calculate the center point of the target object from the scan image sequence, and then determines the position of the alignment mark from the spatial offset between the isocenter and the center point. A worker therefore does not need to traverse the scan image sequence manually to identify and mark the initial alignment marks, no manual error is introduced, and marking accuracy is improved.
In one embodiment, in step S122, the steps shown in fig. 3 are performed for the scanned images of the layers in the image scanning sequence:
in step S310, the contour of the subject in the scan image is acquired.
The acquired contour is composed of contour points, that is, a set of contour points is automatically acquired when the contour of the subject in the scan image is acquired. Specifically, a skin segmentation algorithm is called for a scanned image sequence obtained by scanning, and a closed contour line (a two-dimensional coordinate point set) on each layer of scanned image is obtained. Since the skin of the subject is mostly curved, the closed contour line is a closed contour curve.
Step S320, dividing the image area where the contour is located into a plurality of sub-image areas along the contour, where each sub-image area at least includes a part of the contour.
During segmentation, only the image area where the contour line lies is divided; other areas are not, so the initial alignment marks are identified along the contour as a path, which greatly reduces the search range and improves processing efficiency. In an embodiment, the total area of the sub-image regions is smaller than the area of the scan image containing them, so the whole image need not be examined when identifying the initial alignment marks. In one embodiment, the contour points are used as the center points of the sub-image areas during division, and the distance between the center points of adjacent sub-image areas is made greater than the size of the initial alignment mark, so that the overlap between adjacent sub-image areas is minimized and processing is further accelerated. The size of the sub-image area may also be chosen according to the size of the initial alignment mark in the scan image; in one embodiment, each sub-image area is 20×20 pixels.
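A sketch of this division step, assuming the contour is an ordered list of (row, col) pixel coordinates; the 20×20 window follows the embodiment above, and the helper name is illustrative:

```python
import numpy as np

def subwindows_along_contour(contour, mark_size_px, win=20):
    """Pick contour points spaced more than the mark size apart and use
    each as the center of a win x win sub-image window (20x20 here)."""
    centers, last = [], None
    for p in contour:  # ordered (row, col) points of the closed contour
        if last is None or np.hypot(p[0] - last[0], p[1] - last[1]) > mark_size_px:
            centers.append(p)
            last = p
    half = win // 2
    return [(r - half, c - half, win, win) for r, c in centers]  # (row0, col0, h, w)
```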
Step S330, sequentially identifying each sub-image region and determining an initial alignment mark.
After the image area where the outline is located is divided, the divided sub-image areas are sequentially scanned to identify whether the sub-image areas contain the alignment marks. In the scanning process, the sub-image regions where a certain contour point in the contour line is located can be taken as a starting point, and the scanning can be sequentially performed on the sub-image regions clockwise or counterclockwise along the contour line. In other embodiments, each sub-image area may be scanned line by line. And judging whether the gray scale of each pixel of the sub-image area is greater than a gray scale threshold value or not, so as to determine that the sub-image area contains the alignment mark when the gray scale is greater than the gray scale threshold value.
In other embodiments, the scanned image may be divided in steps other than those shown in fig. 3, or the scanned image may not be divided.
In an embodiment, step S330 may be implemented by a process as shown in fig. 4, which includes the following sub-steps:
and S410, sequentially identifying each sub-image area, marking the sub-image areas when the alignment marks are identified, and adding one to the number of the alignment marks.
When a sub-image area is identified as containing an alignment mark, the area is marked. In an embodiment, the sub-image area may be binarized: for example, a pixel whose gray level is below the gray-level threshold is set to 0, and otherwise to 1. In other embodiments, other marking methods may be used to record whether a sub-image area contains an alignment mark. Each time a sub-image area is found to contain a mark, the mark count for the corresponding image layer is incremented by one; when no mark is found, scanning proceeds to the next sub-image area until all sub-image areas of the layer have been examined. After a layer has been processed, the number of alignment marks it contains is therefore known.
In step S420, it is determined whether the number of alignment marks is greater than or equal to a target value.
When the number of the alignment marks is judged to be larger than or equal to the target value, the scanned image can be determined to be the target scanned image. The same judgment criterion may be adopted in step S124. When it is determined that the number of alignment marks is greater than or equal to the target value, step S440 is performed, otherwise step S430 is performed.
In step S430, it is determined whether the identification of all the sub-image regions in the current scanned image is completed.
If the identification of all the sub-image areas in the current scanned image is completed, step S440 is performed, otherwise, step S410 is performed.
Step S440 ends the scanning recognition of each sub-image area in the scanned image.
When the number of the alignment marks in the scanned image is judged to reach the target value, namely the isocenter in the initial alignment mark state can be determined, the subsequent sub-image area does not need to be scanned continuously, so that the scanning identification time is saved, and the whole processing efficiency is improved.
In an embodiment, after identifying that a certain sub-image region includes an alignment mark, that is, after step S410, the step of acquiring the position of the alignment mark is further performed. The step is specifically shown in fig. 5, and comprises the following steps:
step S510, obtaining the minimum value and the maximum value of all the pixels with value 1 in the sub-image region in the horizontal coordinate.
In this embodiment, each scanned image has its own two-dimensional coordinate system, and each sub-image region likewise has its own. The relationship between a sub-image region's coordinate system and that of the scanned image is known, so the coordinates of a point in the scanned image can be obtained from its coordinates in the sub-image region. Here the horizontal coordinate is the X-axis coordinate of the sub-image region's coordinate system and the vertical coordinate is its Y-axis coordinate. The minimum value of the horizontal coordinate is X_min, and the maximum value is X_max.
Step S520, obtaining the minimum value and the maximum value of all the pixels with value 1 in the sub-image region in the longitudinal coordinate.
The minimum value of the vertical coordinate is Y_min, and the maximum value is Y_max.
Step S530, determining coordinates of the alignment mark in the sub-image area according to the minimum and maximum values of the lateral coordinates and the minimum and maximum values of the longitudinal coordinates.
The horizontal coordinate of the alignment mark is: (X_min + X_max)/2.
The vertical coordinate of the alignment mark is: (Y_min + Y_max)/2.
In step S540, the position of the alignment mark in the scanned image is calculated based on the coordinate transformation from the sub-image area to the scanned image.
In other embodiments, the sub-image area, the coordinate system of the scanned image, and the coordinate system of the scanning bed are the same, so the obtained coordinates of the alignment mark in the sub-image area are the coordinates of the alignment mark in the scanned image, that is, the corresponding spatial position of the alignment mark, that is, the step S540 is not required to be performed.
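A minimal sketch of the in-region computation of steps S510-S530, assuming a binarized NumPy sub-image region (names are illustrative; the coordinate transform of S540 is omitted for the case where the coordinate systems coincide):

```python
import numpy as np

def mark_position_in_subimage(binary_region):
    """Mark coordinates in a binarized sub-image region from the min/max
    coordinates of all value-1 pixels (steps S510-S530)."""
    rows, cols = np.nonzero(binary_region)
    x = (cols.min() + cols.max()) / 2.0  # (X_min + X_max) / 2
    y = (rows.min() + rows.max()) / 2.0  # (Y_min + Y_max) / 2
    return x, y
```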
In an embodiment, the step S330 may also be implemented by a process as shown in fig. 6, including the following steps:
step S602, sequentially identifying each sub-image area, and performing marking processing on each sub-image area when the alignment mark is identified.
For each sub-image area, row by row or column by column, it is judged whether each pixel's gray level exceeds the gray-level threshold, and pixels above the threshold are marked. In an embodiment, the sub-image area may be binarized: a pixel whose gray level is below the threshold is set to 0, and otherwise to 1. In other embodiments, other marking symbols may be used for pixels above the threshold. Thus, whenever a sub-image region contains part or all of an initial alignment mark, it necessarily contains marked pixels; the marked pixels form a mark area that constitutes a complete initial alignment mark or part of one.
In step S604, the position of the mark region in the corresponding scanned image is calculated.
Specifically, for a marked sub-image area, the minimum X_min and maximum X_max of the horizontal coordinates of all marked pixels (i.e., the mark area) are obtained, together with the minimum Y_min and maximum Y_max of their vertical coordinates, and the coordinates of the alignment mark within the sub-image area are calculated from these values.
The horizontal coordinate of the alignment mark is: (X_min + X_max)/2.
The vertical coordinate of the alignment mark is: (Y_min + Y_max)/2.
In an embodiment, the sub-image area, the coordinate system of the scanned image, and the coordinate system of the scanning bed are the same, so the obtained coordinates of the alignment mark in the sub-image area are the coordinates of the alignment mark in the scanned image, that is, the corresponding spatial position of the alignment mark. In other embodiments, the sub-image area and the scanned image have different coordinate systems, so that after the coordinates of the alignment mark in the sub-image area are obtained, coordinate transformation is performed according to the coordinate correspondence relationship between the two to obtain the coordinates of the alignment mark in the scanned image.
Step S606, it is determined whether there are two mark regions in the scanned image whose distance is smaller than the size of the initial alignment mark.
When one initial alignment mark is divided into two or more sub-image regions, one initial alignment mark is recognized as two or more mark regions in the scanned image, and thus a merging process is required for the two mark regions, so that the number and the positions of the finally recognized initial alignment marks in the scanned image are accurate. When there are two mark regions having a distance smaller than the initial alignment mark size in the scanned image, step S608 is performed, otherwise step S610 is performed.
In step S608, the two mark regions whose distance is smaller than the size of the initial alignment mark are merged into one mark region, and the average of their positions is used as the position of the merged region.
After the merging of the mark areas is completed, the step S606 is executed again until there are no two mark areas in the scanned image, the distance of which is smaller than the size of the initial alignment mark, and each mark area is ensured to represent a complete initial alignment mark.
Step S610, identifying the mark region as an initial alignment mark.
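A sketch of the merging loop of steps S606-S610, assuming mark-region positions are 2-D coordinates in the scan image (names are illustrative):

```python
import numpy as np

def merge_mark_regions(positions, mark_size):
    """Merge any two mark regions closer than the mark size into one,
    positioned at the mean of the pair, until none remain (S606-S610)."""
    pts = [np.asarray(p, dtype=float) for p in positions]
    merged = True
    while merged:
        merged = False
        for i in range(len(pts)):
            for j in range(i + 1, len(pts)):
                if np.linalg.norm(pts[i] - pts[j]) < mark_size:
                    pts[i] = (pts[i] + pts[j]) / 2.0
                    del pts[j]
                    merged = True
                    break
            if merged:
                break
    return pts  # each remaining position is one initial alignment mark
```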
In one embodiment, between step S122 and step S124 there is a further step of taking the scanned images in which the number of alignment marks is greater than or equal to the target value as target scanned images. That is, after step S122 is completed, the number of initial alignment marks in each scanned image is compared with the target value, and any layer whose count is greater than or equal to the target value is taken as a target scanned image. The target value may equal the total number of laser beams used by the external laser lamp system. In this embodiment, the target value is 3, so a scan image containing three initial alignment marks is determined to be a target scan image. In other embodiments, the external laser lamp system may position with a different number of laser beams, so the target value can be set according to actual conditions; it is chosen so that the isocenter in the initial alignment mark state can ultimately be determined.
When the number of initial alignment marks in a scanned image is judged to be greater than or equal to the target value, that image is taken as a target scanned image. In step S124, the isocenter is determined from the initial alignment marks in the target scan image. When there are several target scan images, the isocenter coordinates determined from each are averaged to obtain the final isocenter. In general, there are 1-2 target scan images, depending on the CT slice thickness.
In an embodiment, the number of initial alignment marks is counted while they are identified in step S122, so that once the count for the current scanned image reaches the target value, that image can be taken as the target scanned image, identification of the remaining images in the sequence can be skipped, and the isocenter can be determined directly from the initial alignment marks in the target scanned image, improving processing efficiency.
In an embodiment, the method further comprises a step of acquiring the spatial position of the scanning bed on which the subject lies in the initial alignment mark state. The spatial position of the scanning bed can be read at any time through a hardware interface, and each scan image in the sequence has a corresponding spatial position. After finishing applying the initial alignment marks, the worker presses a "setup confirm" button on the machine, which triggers the system to record the current spatial position of the scanning bed (also called the bed value). Then, when step S120 is executed, not all images in the sequence need to be scanned: the image layers to be identified are selected from the spatial relationship between each image's position and the recorded bed position. For example, only the scan images within a preset spatial distance of the bed position, e.g., a preset range along the bed's in/out direction, need be scanned and identified to determine the isocenter, which greatly narrows the algorithm's search range and speeds up processing. In an embodiment, the step of acquiring the bed position in the initial alignment mark state is performed together with step S110.
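A minimal sketch of this slice pre-filter, assuming slice positions and the recorded bed value share one axis in millimeters; the 30 mm window is an illustrative assumption, not a value from the disclosure:

```python
def slices_near_couch(slice_z, couch_z, window_mm=30.0):
    """Indices of slices within the preset distance of the recorded bed
    value; only these are searched for initial alignment marks."""
    return [i for i, z in enumerate(slice_z) if abs(z - couch_z) <= window_mm]
```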
In an embodiment, step S130 is to input the scan image sequence into a neural network model, automatically identify the contour information of the target object region in the corresponding scan image through the neural network model, and calculate the central point of the target object according to the contour information.
In one embodiment, before step S130 is performed, the neural network model is trained with images carrying labeling information. An image with labeling information is a patient image whose contour and label information have been drawn manually by experts. The labeling information comprises at least one kind of feature information, which may include segmented contour information.
In an embodiment, the image with the labeling information may also be obtained by registering the patient image with a reference atlas sequence that carries labeling information; that is, a data preprocessing step precedes the training of the neural network model. The reference atlas sequence is typically built from data of normal patients; in this embodiment, it is a target object atlas sequence obtained by analyzing and integrating a large amount of patient data. The labeling information includes at least the segmented contour information (also called segmentation information). The contour information may include both the precise contour and the surrounding contour, as shown in Fig. 7, so the feature information in the reference atlas sequence includes at least contour information and gray-level information; the more feature information, the more accurate the recognition result. Because the patient image and the reference atlas sequence differ, for example in the morphological details of tissues and organs and in body posture during scanning, the segmentation information and labels of the reference atlas can only be used after the two are unified in the same coordinate system, i.e., after registration. In an embodiment, the patient image is related to the reference atlas sequence by a nonlinear spatial transformation, i.e., non-rigid registration: the patient image data are transformed into the coordinate space of the reference atlas sequence to obtain a transformed, registered image carrying the labeling information. In this embodiment, mutual information is used as the similarity measure of the non-rigid registration, and the spatial transformation is constrained by a Demons model to complete the registration. After registration, the labeling information of the reference atlas sequence can be applied directly to the patient sequence.
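For illustration, a plain intensity-based Demons pass via SimpleITK is sketched below as an approximation of this step; the disclosure combines a mutual-information similarity measure with a Demons-constrained transform, which this simple sketch does not reproduce, and the file names are placeholders:

```python
import SimpleITK as sitk

# Plain intensity-based Demons registration of a patient image to the
# reference atlas; file names are placeholders.
fixed = sitk.ReadImage("reference_atlas.mha", sitk.sitkFloat32)
moving = sitk.ReadImage("patient_ct.mha", sitk.sitkFloat32)

demons = sitk.DemonsRegistrationFilter()
demons.SetNumberOfIterations(50)
demons.SetStandardDeviations(1.0)  # smoothing of the displacement field
displacement = demons.Execute(fixed, moving)

# Warp the patient image into the atlas coordinate space so that the
# atlas labeling can be applied to the patient sequence.
transform = sitk.DisplacementFieldTransform(displacement)
registered = sitk.Resample(moving, fixed, transform, sitk.sitkLinear, 0.0)
```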
In one embodiment, before the step of training the neural network model with the labeled images, feature extraction is performed on those images, and the extracted feature data are fed to the established neural network model for training. The usable training features fall into two broad categories: gray-level information of the image and position information of boundary contours. The image gray levels are wavelet-decomposed, and for each frequency band the image mean and variance, as well as the cross-correlation coefficient with the decomposition of the corresponding reference atlas sequence, are computed. Boundary segmentation (e.g., the Canny operator or a level-set algorithm) is applied to the patient image; after removing abnormal contours, such as those with too small an area or that fail to close, the cross-correlation coefficient between each remaining contour and the contours in the reference atlas sequence is computed: the larger the coefficient, the higher the similarity between the two contours. Since normal contours and tumor contours are labeled differently in the atlas, this computation yields the suspected tumor contours in the patient image. The extracted feature data are input to the established neural network model, which is configured with 3-5 hidden layers and with weights and thresholds for the relevant nodes, and outputs whether the input features contain a suspected target object signal. The geometric center of gravity of a suspected target object's boundary contour then gives a candidate treatment isocenter.
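The classifier stage might be sketched with a small multilayer perceptron as follows; the feature dimensionality and synthetic data stand in for the wavelet-band statistics and contour correlation coefficients described above:

```python
import numpy as np
from sklearn.neural_network import MLPClassifier

rng = np.random.default_rng(0)
X = rng.normal(size=(200, 12))    # stand-in feature vectors: per-band
y = rng.integers(0, 2, size=200)  # gray stats + contour correlations

# Three hidden layers, within the 3-5 range given in the text.
clf = MLPClassifier(hidden_layer_sizes=(64, 32, 16), max_iter=500,
                    random_state=0)
clf.fit(X, y)
print(clf.predict(X[:5]))  # 1 = suspected target object signal present
```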
FIG. 8 illustrates the process of building the neural network model in an embodiment. Here both the atlas sequence with segmentation information and the atlas sequence with normal morphology refer to reference atlas sequences with labeling information; the difference is only that the analysis of signal features (i.e., image gray-level distribution features) relies on the gray-scale information of the atlas sequence rather than its labels, whereas the registration step requires the contour information carried by the segmentation.
In an embodiment, an imaging apparatus is also provided. The imaging apparatus includes a scanning device and a processor. The scanning device is configured to acquire a sequence of scan images of the subject in the initial alignment mark state. The processor is connected to the scanning device, acquires the sequence, and performs the steps of the alignment mark position determination method of any of the preceding embodiments. Once the imaging apparatus has determined the positions of the alignment marks, the marks are not moved, so the positioning of the examinee at the radiotherapy device (e.g., an RT system) is carried out according to the alignment mark positions.
According to the imaging device, the position of the alignment mark can be determined through the processing of the processor, so that the alignment mark is utilized to realize the positioning alignment of the examinee, the central point of the target object of the examinee is ensured to be consistent with the position of the isocenter determined by the alignment mark, namely the isocenter of the treatment head, and the treatment efficiency and the treatment effect are improved.
In an embodiment, a storage medium having a computer program stored thereon is also provided. The program is operable, when executed by a processor, to perform the steps of the method according to any of the preceding embodiments.
In an embodiment, there is also provided an alignment mark identification method, where a flowchart of the alignment mark identification method is shown in fig. 9, and the method includes the following steps:
in step S910, a scan image of the subject in the alignment mark state is acquired.
In step S920, the alignment mark in the scanned image is automatically identified.
Specifically, it is determined whether the gray scale of each pixel in the scanned image is greater than a gray scale threshold, and a pixel region having a gray scale greater than the gray scale threshold is automatically identified as an alignment mark.
According to the method for identifying the alignment mark, the system can automatically identify the alignment mark in the image by judging the gray scale of each pixel in the scanned image without manual identification by a technician, so that the processing efficiency is improved, and the error caused by manual operation is favorably reduced.
The technical features of the embodiments described above may be arbitrarily combined, and for the sake of brevity, all possible combinations of the technical features in the embodiments described above are not described, but should be considered as being within the scope of the present specification as long as there is no contradiction between the combinations of the technical features.
The above-mentioned embodiments only express several embodiments of the present invention, and the description thereof is more specific and detailed, but not construed as limiting the scope of the invention. It should be noted that, for a person skilled in the art, several variations and modifications can be made without departing from the inventive concept, which falls within the scope of the present invention. Therefore, the protection scope of the present patent shall be subject to the appended claims.
Claims (14)
1. A method of determining the position of an alignment mark, comprising:
acquiring a sequence of scan images of a subject in an initial alignment mark state;
determining an isocenter in an initial alignment mark state from the scan image sequence, comprising:
identifying initial alignment marks in each layer of scanned images in the sequence of scanned images; and
determining the isocenter from the initial alignment mark;
calculating the central point of the target object according to the scanning image sequence by using a neural network model; and
determining the position of the alignment mark according to the spatial offset between the isocenter and the center point of the target object.
2. The method of claim 1, wherein the step of identifying initial alignment marks in each layer of the sequence of scan images comprises performing the following steps for each layer of the sequence of scan images:
acquiring a contour of a subject in a scan image, the contour being composed of contour points;
dividing an image area where the contour is located into a plurality of sub-image areas along the contour, wherein each sub-image area at least comprises a partial contour; and
sequentially identifying each sub-image area and determining the initial alignment mark.
3. The method according to claim 2, wherein in the step of dividing the image area where the contour is located into a plurality of sub-image areas along the contour, the contour point is used as a center point of the sub-image areas for division, and a distance between center points of adjacent sub-image areas is greater than a size of the initial alignment mark.
4. The method of claim 2, wherein the step of sequentially identifying the sub-image regions and determining the initial alignment mark comprises:
sequentially identifying each sub-image area, marking the sub-image areas when the alignment marks are identified, and adding one to the number of the alignment marks;
judging whether the number of the alignment marks is larger than or equal to a target value;
ending the scan identification of the sub-image areas in the scanned image when the number of alignment marks is greater than or equal to the target value.
5. The method according to claim 4, wherein when identifying each sub-image region, determining whether the gray scale of each pixel in the sub-image region is greater than a gray scale threshold, and identifying the pixel region with the gray scale greater than the gray scale threshold as the alignment mark.
6. The method according to claim 4, wherein after the step of sequentially identifying each sub-image region and determining the initial alignment mark and before the step of determining the isocenter from the initial alignment mark, further comprising: taking the scanned images of which the number of the alignment marks is greater than or equal to a target value as target scanned images;
the step of determining the isocenter from the initial alignment marks is determining the isocenter from initial alignment marks in the target scan image.
7. The method of claim 6, wherein the step of determining the isocenter from initial alignment marks in the target scan image comprises:
acquiring the position of each initial alignment mark in the target scanning image in a corresponding sub-image area;
calculating the position of each initial alignment mark in the target scanning image based on the position relation between the sub-image area and the target scanning image; and
calculating the spatial position of the isocenter according to the position of each initial alignment mark in the target scan image.
8. The method of claim 2, wherein the step of sequentially identifying the sub-image regions and determining the initial alignment mark comprises:
sequentially identifying each sub-image area and marking each sub-image area when the alignment mark is identified;
calculating the position of the mark area in the corresponding scanning image;
judging whether two mark areas with the distance smaller than the size of the initial alignment mark exist in the scanned image or not;
if two mark areas whose distance is smaller than the size of the initial alignment mark exist, merging the two into one mark area whose position is the average of the two positions, and then returning to the step of judging whether two mark areas whose distance is smaller than the size of the initial alignment mark exist in the scanned image; and
if there are no two mark regions having a distance less than the size of the initial alignment mark, the mark regions are identified as the initial alignment marks.
9. The method according to claim 1, further comprising a step of acquiring a spatial position of a couch in which the subject is positioned in an initial alignment mark state;
in the step of determining the isocenter in the initial alignment mark state according to the scan image sequence, the scan image for determining the isocenter corresponding to the initial alignment mark is determined according to a spatial position relationship between a spatial position corresponding to each scan image in the scan image sequence and the scan bed.
10. The method of claim 1, wherein the step of calculating a center point of a target object from the sequence of scan images using a neural network model comprises:
inputting the scan image sequence into the neural network model, automatically identifying the contour information of the target object region in the corresponding scan image through the neural network model, and calculating the center point of the target object from the contour information.
11. The method of claim 10, further comprising the step of training the neural network model using images with labeling information; the labeling information comprises at least one characteristic information; the feature information includes segmented contour information.
12. An image forming apparatus, characterized by comprising:
a scanning device for acquiring a sequence of scan images of a subject in an initial alignment marker state; and
a processor connected to the scanning device for acquiring a sequence of scan images of a subject in an initial alignment marker state;
the processor is further configured to determine an isocenter in an initial alignment mark state from the sequence of scanned images, including: identifying initial alignment marks in each layer of scanned images in the sequence of scanned images; and determining the isocenter from the initial alignment mark;
the processor is further configured to calculate a center point of a target object according to the sequence of scanned images by using a neural network model, and determine a position of the alignment mark according to a spatial offset between the isocenter and the center point of the target object.
13. The imaging apparatus of claim 12, wherein the step of identifying initial alignment marks in each of the layers of the sequence of scan images comprises performing the following steps for each of the layers of the sequence of scan images:
acquiring a contour of a subject in a scan image, the contour being composed of contour points;
dividing an image area where the contour is located into a plurality of sub-image areas along the contour, wherein each sub-image area at least comprises a partial contour; and
sequentially identifying each sub-image area and determining the initial alignment mark.
14. A storage medium having a computer program stored thereon, the program being adapted to perform the steps of the method according to any of claims 1 to 11 when executed by a processor.
Priority Applications (1)
Application Number | Priority Date | Filing Date | Title |
---|---|---|---|
CN201710859487.2A CN107596578B (en) | 2017-09-21 | 2017-09-21 | Alignment mark recognition method, alignment mark position determination method, image forming apparatus, and storage medium |
Applications Claiming Priority (1)
Application Number | Priority Date | Filing Date | Title |
---|---|---|---|
CN201710859487.2A CN107596578B (en) | 2017-09-21 | 2017-09-21 | Alignment mark recognition method, alignment mark position determination method, image forming apparatus, and storage medium |
Publications (2)
Publication Number | Publication Date |
---|---|
CN107596578A CN107596578A (en) | 2018-01-19 |
CN107596578B true CN107596578B (en) | 2020-07-14 |
Family
ID=61061910
Family Applications (1)
Application Number | Title | Priority Date | Filing Date |
---|---|---|---|
CN201710859487.2A Active CN107596578B (en) | 2017-09-21 | 2017-09-21 | Alignment mark recognition method, alignment mark position determination method, image forming apparatus, and storage medium |
Country Status (1)
Country | Link |
---|---|
CN (1) | CN107596578B (en) |
Families Citing this family (12)
Publication number | Priority date | Publication date | Assignee | Title |
---|---|---|---|---|
CN108852400B (en) * | 2018-07-02 | 2022-02-18 | 东软医疗系统股份有限公司 | Method and device for realizing position verification of treatment center |
CN109145902B (en) * | 2018-08-21 | 2021-09-03 | 武汉大学 | Method for recognizing and positioning geometric identification by using generalized characteristics |
JP7252769B2 (en) * | 2019-02-01 | 2023-04-05 | 株式会社ディスコ | Alignment method |
CN109949260B (en) * | 2019-04-02 | 2021-02-26 | 晓智未来(成都)科技有限公司 | Method for automatically splicing images by adjusting height of x-ray detector |
CN110689521B (en) * | 2019-08-15 | 2022-07-29 | 福建自贸试验区厦门片区Manteia数据科技有限公司 | Automatic identification method and system for human body part to which medical image belongs |
CN112884820B (en) * | 2019-11-29 | 2024-06-25 | 杭州三坛医疗科技有限公司 | Image initial registration and neural network training method, device and equipment |
CN111062390A (en) * | 2019-12-18 | 2020-04-24 | 北京推想科技有限公司 | Region-of-interest labeling method, device, equipment and storage medium |
CN111419399A (en) * | 2020-03-17 | 2020-07-17 | 京东方科技集团股份有限公司 | Positioning tracking piece, positioning ball identification method, storage medium and electronic device |
US11311747B2 (en) | 2020-07-16 | 2022-04-26 | Uih America, Inc. | Systems and methods for isocenter calibration |
CN113438960B (en) * | 2021-04-02 | 2023-01-31 | 复旦大学附属肿瘤医院 | Target disposal method and system |
CN113520426B (en) * | 2021-06-28 | 2023-07-25 | 上海联影医疗科技股份有限公司 | Coaxiality measuring method, medical equipment rack adjusting method, equipment and medium |
CN116756045B (en) * | 2023-08-14 | 2023-10-31 | 海马云(天津)信息技术有限公司 | Application testing method and device, computer equipment and storage medium |
Family Cites Families (6)
Publication number | Priority date | Publication date | Assignee | Title |
---|---|---|---|---|
JP4585471B2 (en) * | 2006-03-07 | 2010-11-24 | 株式会社東芝 | Feature point detection apparatus and method |
CN101916443B (en) * | 2010-08-19 | 2012-10-17 | 中国科学院深圳先进技术研究院 | Processing method and system of CT image |
CN103829965B (en) * | 2012-11-27 | 2019-03-22 | Ge医疗系统环球技术有限公司 | The method and apparatus of CT scan is guided using marked body |
CN104414662B (en) * | 2013-09-04 | 2017-02-01 | 江苏瑞尔医疗科技有限公司 | Position calibration and error compensation device of imaging equipment and compensation method of position calibration and error compensation device |
DE102014219667B3 (en) * | 2014-09-29 | 2016-03-03 | Siemens Aktiengesellschaft | Method for selecting a recording area and system for selecting a recording area |
CN105678272A (en) * | 2016-03-25 | 2016-06-15 | 符锌砂 | Complex environment target detection method based on image processing |
- 2017-09-21: application CN201710859487.2A filed in China (CN); granted as patent CN107596578B, status active
Also Published As
Publication number | Publication date |
---|---|
CN107596578A (en) | 2018-01-19 |
Similar Documents
Publication | Publication Date | Title |
---|---|---|
CN107596578B (en) | Alignment mark recognition method, alignment mark position determination method, image forming apparatus, and storage medium | |
JP3932303B2 (en) | Organ dynamics quantification method, apparatus, organ position prediction method, apparatus, radiation irradiation method, apparatus, and organ abnormality detection apparatus | |
US9684961B2 (en) | Scan region determining apparatus | |
CN113506294B (en) | Medical image evaluation method, system, computer equipment and storage medium | |
US8787647B2 (en) | Image matching device and patient positioning device using the same | |
JP7486485B2 (en) | Apparatus for identifying regions in brain images | |
CN111127404B (en) | Medical image contour rapid extraction method | |
US11532101B2 (en) | Marker element and application method with ECG | |
CN113920114B (en) | Image processing method, image processing apparatus, computer device, storage medium, and program product | |
CN112132860A (en) | Patient motion tracking system configured to automatically generate a region of interest | |
CN111050650B (en) | Method, system and device for determining radiation dose | |
US20230177681A1 (en) | Method for determining an ablation region based on deep learning | |
JP4344825B2 (en) | Irradiation position verification system | |
CN110349151B (en) | Target identification method and device | |
US11830184B2 (en) | Medical image processing device, medical image processing method, and storage medium | |
JP2017111129A (en) | Contour extraction device, contour extraction method and program | |
CN112085698A (en) | Method and device for automatically analyzing left and right breast ultrasonic images | |
CN110215621B (en) | Outer contour extraction method and device, treatment system and computer storage medium | |
Jain et al. | A novel strategy for automatic localization of cephalometric landmarks | |
CN115880469B (en) | Registration method of surface point cloud data and three-dimensional image | |
US20240112331A1 (en) | Medical Image Data Processing Technique | |
US20240242400A1 (en) | Systems and methods for medical imaging | |
EP3968215A1 (en) | Determining target object type and position | |
Sewa | Motion Determination Of Lung Tumours Based On Cine-MR Images | |
CN114305471A (en) | Processing method and device for determining posture and pose, surgical system, surgical equipment and medium |
Legal Events
Date | Code | Title | Description |
---|---|---|---|
PB01 | Publication | |
SE01 | Entry into force of request for substantive examination | |
GR01 | Patent grant | |
CP01 | Change in the name or title of a patent holder | Address after: 201807 Shanghai City, north of the city of Jiading District Road No. 2258; Patentee after: Shanghai Lianying Medical Technology Co., Ltd. Address before: 201807 Shanghai City, north of the city of Jiading District Road No. 2258; Patentee before: SHANGHAI UNITED IMAGING HEALTHCARE Co.,Ltd. |