CN115631342A - Medical image feature point identification method, identification system and readable storage medium - Google Patents


Info

Publication number: CN115631342A
Application number: CN202210981446.1A
Authority: CN (China)
Prior art keywords: predicted, point, group, feature, image
Legal status: Pending (the legal status is an assumption and is not a legal conclusion)
Other languages: Chinese (zh)
Inventor: name withheld at the inventor's request
Current Assignee: Suzhou Xiaowei Changxing Robot Co ltd
Original Assignee: Suzhou Xiaowei Changxing Robot Co ltd
Application filed by Suzhou Xiaowei Changxing Robot Co ltd

Classifications

    • G06V 10/44 — Local feature extraction by analysis of parts of the pattern, e.g. by detecting edges, contours, loops, corners, strokes or intersections; connectivity analysis, e.g. of connected components
    • G06T 7/70 — Image analysis; determining position or orientation of objects or cameras
    • G06T 7/80 — Analysis of captured images to determine intrinsic or extrinsic camera parameters, i.e. camera calibration
    • G06V 10/26 — Image preprocessing; segmentation of patterns in the image field; cutting or merging of image elements to establish the pattern region, e.g. clustering-based techniques; detection of occlusion

Abstract

The invention provides a medical image feature point identification method, an identification system and a readable storage medium. The medical image feature point identification method comprises the following steps: providing a medical image with a feature point group, wherein the feature point group comprises a plurality of feature points, each feature point corresponds to a calibration developing piece, and the calibration developing pieces have a known relative position relationship; performing initial segmentation identification on the plurality of feature points in the medical image to obtain an initial feature point set; grouping the feature points in the initial feature point set to obtain a predicted point group set; for one predicted point group in the predicted point group set, obtaining a predicted local image group corresponding to the predicted point group based on the relative position relationship of the calibration developing pieces, and traversing all the predicted point groups in the predicted point group set to obtain a predicted local image group set; for one predicted local image group in the predicted local image group set, identifying the feature points in the predicted local image group based on a local segmentation identification algorithm, and traversing all the predicted local image groups in the predicted local image group set to obtain a predicted feature point sequence set; and screening all the predicted feature point sequences in the predicted feature point sequence set to obtain a final feature point identification sequence.

Description

Medical image feature point identification method, medical image feature point identification system and readable storage medium
Technical Field
The invention relates to the technical field of medical instruments, in particular to a medical image feature point identification method, an identification system and a readable storage medium.
Background
In the navigation and positioning process of some surgical robots, a perspective image of the patient needs to be captured by using a penetrating ray (such as an X-ray) that passes through a calibration scale containing development marks. The feature points of the development marks in the perspective image are identified, a conversion relationship between the surgical robot coordinate system and the surgical space coordinate system is then established according to the identified feature point coordinates, and surgical path planning as well as navigation and positioning are performed on this basis.
At present, the identification and positioning of feature points in a medical perspective image are generally realized by traditional image detection, such as circle detection algorithms based on the Hough transform; by matching against preset feature points and angles; or by dividing the feature points into a plurality of regions, first extracting a specific region, then generating sub-regions, and finally extracting the feature points from the sub-regions.
However, these identification methods have certain drawbacks or shortcomings, such as:
1. traditional image detection methods have low robustness under relatively strong interference, and missed or erroneous identification easily occurs in interference environments such as occlusion and noise;
2. the method based on preset feature points and angle matching places strict requirements on the included angle between the scale plane and the imaging plane; if the included angle is larger than 5 degrees, the identification cannot be completed;
3. methods such as dividing the feature points into regions are not suited to the design structure of the present scale and cannot complete the identification task.
Disclosure of Invention
The invention aims to provide a medical image feature point identification method, an identification system and a readable storage medium, which are used for solving the problems of the conventional feature point identification method.
In order to solve the above technical problem, a first aspect of the present invention provides a method for identifying feature points of a medical image, which is suitable for identifying feature points in a two-dimensional medical perspective image, and the method for identifying feature points of a medical image includes:
providing a medical image with a feature point group, wherein the feature point group comprises a plurality of feature points, each feature point corresponds to a calibration developing piece, and the plurality of calibration developing pieces have known relative position relations;
performing initial segmentation identification on a plurality of feature points in the medical image to obtain an initial feature point set;
grouping the characteristic points in the initial characteristic point set to obtain a prediction point group set;
for one predicted point group in the predicted point group set, obtaining a predicted local image group corresponding to the predicted point group based on the relative position relation of the plurality of calibrated developers; traversing all the prediction point groups in the prediction point group set to obtain a prediction local image group set;
for one predicted local image group in the predicted local image group set, identifying the characteristic points in the predicted local image group based on a local segmentation identification algorithm to obtain a predicted characteristic point sequence; traversing all the predicted local image groups in the predicted local image group set to obtain a predicted characteristic point sequence set;
and screening all the predicted characteristic point sequences in the predicted characteristic point sequence set to obtain a final characteristic point identification sequence.
Optionally, in the method for identifying feature points in medical images, for one predicted point group in the set of predicted point groups, based on the relative position relationship between the plurality of calibrated developers, the step of obtaining a predicted local image group corresponding to the predicted point group includes:
for one prediction point group in the prediction point group set, obtaining a prediction coordinate sequence group based on the relative position relation of the calibration developing piece;
and obtaining the predicted local image group according to the distance between the characteristic points in the predicted point group and the predicted coordinate sequence group.
Optionally, in the method for identifying feature points of a medical image, in the step of obtaining the predicted local image group according to the distance between the feature points in the predicted point group and the predicted coordinate series group, the step of obtaining a predicted local image includes:
acquiring the serial numbers of the feature points in the predicted point group in the predicted coordinate sequence group, and calculating the projection ratio of the distance between the feature points in the medical image to the actual distance between the calibration developing pieces, based on the serial numbers of the feature points and the relative position relationship of the calibration developing pieces corresponding to the feature points;
according to the projection proportion, the length and the width of a predicted local image corresponding to a certain characteristic point are obtained;
obtaining the central point of the predicted local image according to the relative position relation of the calibration developing piece corresponding to the characteristic point and the projection proportion;
and obtaining the predicted local image according to the central point, the length and the width of the predicted local image.
Optionally, in the method for identifying feature points of a medical image, each of the prediction point groups includes two feature points.
Optionally, in the medical image feature point identification method, the calibration developing piece is spherical; the step of performing initial segmentation and identification on the plurality of feature points in the medical image to obtain an initial feature point set comprises:
obtaining an initial segmentation threshold value according to the diameter and the number of the calibrated developing pieces and the image resolution;
detecting and segmenting the medical image according to an initial segmentation threshold value to obtain image coordinates of feature points;
and classifying the characteristic points into the initial characteristic point set according to the radius of the characteristic points obtained by identification.
Optionally, in the medical image feature point identification method, the step of obtaining an initial feature point set further includes:
counting the number of the feature points obtained by identification, if the ratio of the number of the feature points to the target number is smaller than a preset value, adjusting the initial segmentation threshold, and re-detecting and segmenting the medical image according to the adjusted initial segmentation threshold.
Optionally, in the method for identifying feature points of a medical image, the calibration developing piece is spherical, and the local segmentation identification algorithm includes:
obtaining a local segmentation threshold value according to the radius statistics of the detected feature points in the medical image;
segmenting the predicted local image according to the local segmentation threshold value to obtain a segmentation result;
traversing all the connected regions in the segmentation result, and statistically obtaining the aspect ratio, the roundness, the radius and the circle center of each connected region;
and if the aspect ratio, the roundness and the radius meet the preset requirements of the currently detected feature point, determining the connected region as a feature point, and adding the circle center of the feature point into a predicted feature point sequence.
Optionally, in the medical image feature point identification method, the step of screening all the predicted feature point sequences in the set of predicted feature point sequences to obtain a final feature point identification sequence includes:
traversing all the predicted feature point sequences in the set of predicted feature point sequences;
if the number of the characteristic points in one predicted characteristic point sequence is not matched with the number of the expected characteristic points, deleting the predicted characteristic point sequence;
calculating the average error of the predicted characteristic point sequence and the central point of the predicted local image;
and determining the predicted characteristic point sequence with the minimum average error as a final characteristic point identification sequence.
In order to solve the above technical problem, a second aspect of the present invention provides a readable storage medium, on which a program is stored, the program, when executed, implementing the steps of the medical image feature point identification method as described above.
In order to solve the above technical problem, a third aspect of the present invention provides a medical image feature point identification system, which includes a medical imaging device, a scale tool and the readable storage medium described above; the medical imaging device comprises a transmitting end and a receiving end, the scale tool comprises a plurality of calibration developing pieces, and a known relative position relationship exists among the plurality of calibration developing pieces; the scale tool is disposed between the transmitting end and the receiving end.
Optionally, in the medical image feature point identification system, the scale tool includes at least two planes and at least two calibration developing parts of different specifications, the calibration developing parts of the same specification are disposed on the same plane, and the number of the calibration developing parts of each specification is not less than 3.
Optionally, in the medical image feature point identification system, the arrangement modes of the calibration developing pieces on the two planes are different.
Optionally, in the system for recognizing characteristic points of medical images, the scale tool further includes a rod body, and the two planes are non-coplanar and distributed on two sides of the rod body.
In summary, in the medical image feature point identification method, the medical image feature point identification system and the readable storage medium provided by the present invention, the medical image feature point identification method includes: providing a medical image with a feature point group, wherein the feature point group comprises a plurality of feature points, each feature point corresponds to a calibration developing piece, and the known relative position relationship exists among the calibration developing pieces; performing initial segmentation identification on a plurality of feature points in the medical image to obtain an initial feature point set; grouping the characteristic points in the initial characteristic point set to obtain a prediction point group set; for one predicted point group in the predicted point group set, obtaining a predicted local image group corresponding to the predicted point group based on the relative position relation of the plurality of calibrated developers; traversing all the prediction point groups in the prediction point group set to obtain a prediction local image group set; for one predicted local image group in the predicted local image group set, identifying the characteristic points in the predicted local image group based on a local segmentation identification algorithm to obtain a predicted characteristic point sequence; traversing all the predicted local image groups in the predicted local image group set to obtain a predicted characteristic point sequence set; and screening all the predicted characteristic point sequences in the predicted characteristic point sequence set to obtain a final characteristic point identification sequence.
According to the configuration, the possible regions of the characteristic points in the medical image are predicted by utilizing the known relative position relation between the characteristic points, the predicted local image containing the characteristic points can be accurately obtained, the characteristic points in the range of the predicted local image are further subjected to local segmentation and identification, the influences of shielding, noise and different exposure intensities can be effectively eliminated, the accuracy and robustness of the identification method are improved, the missing identification of the characteristic points is avoided, the identification efficiency is improved, manual interaction is not needed, the smoothness of the operation process can be ensured, and the operation efficiency is improved.
Drawings
It will be appreciated by those skilled in the art that the drawings are provided for a better understanding of the invention and do not constitute any limitation to the scope of the invention. Wherein:
FIG. 1 is a schematic view of a surgical robotic system for registration using medical images in accordance with an embodiment of the present invention;
FIG. 2 is a schematic view of an end region of a robotic arm of an embodiment of the invention;
FIG. 3 is a schematic view of a scale tool of an embodiment of the invention;
FIG. 4 is a top view of a scale tool of an embodiment of the invention;
FIG. 5 is a schematic illustration of a medical image of an embodiment of the present invention;
FIG. 6 is a schematic diagram of an initial segmentation result according to an embodiment of the present invention;
FIG. 7 is a schematic diagram of a group of predicted local images corresponding to the metal balls L1 to L9 according to an embodiment of the present invention;
FIG. 8 is a schematic diagram of a group of predicted local images corresponding to the metal balls S1 to S9 according to an embodiment of the present invention;
fig. 9 is a schematic diagram of a local predicted image corresponding to the metal sphere L8 and a segmentation result corresponding to the local predicted image in the embodiment of the present invention;
fig. 10 is a schematic diagram of a local predicted image corresponding to the metal sphere S9 and a corresponding segmentation result thereof according to the embodiment of the present invention;
FIG. 11 is a schematic representation of the final feature point identification sequence of an embodiment of the present invention;
FIG. 12 is a schematic view of a scale tool according to another embodiment of the invention;
FIG. 13 is a top view of a scale tool according to another embodiment of the invention;
fig. 14 is a flowchart of a medical image feature point identification method according to an embodiment of the present invention.
Detailed Description
To further clarify the objects, advantages and features of the present invention, a more particular description of the invention will be rendered by reference to specific embodiments thereof which are illustrated in the appended drawings. It is to be noted that the drawings are in simplified form and are not to scale, but are provided for the purpose of facilitating and clearly illustrating the embodiments of the present invention. Further, the structures illustrated in the drawings are often only a part of the actual structures; in particular, different drawings are intended to show different emphases and are therefore sometimes drawn at different scales.
As used in this disclosure, the singular forms "a," "an," and "the" include plural referents, and the term "or" is generally employed in a sense including "and/or." The term "several" is generally employed in a sense including "at least one," and the term "a plurality" is generally employed in a sense including "at least two," i.e., two or more. Furthermore, the terms "first," "second," and "third" are used for descriptive purposes only and are not to be construed as indicating or implying relative importance or implying the number of indicated technical features; thus, a feature defined as "first," "second," or "third" may explicitly or implicitly include one or at least two of that feature. The terms "one end" and "the other end," as well as "proximal end" and "distal end," generally refer to the corresponding two parts, which include not only the end points. Furthermore, terms such as "mounted on," "connected to," and "disposed on" another element should be construed broadly and generally merely indicate that a connection, coupling, fit, or drive relationship exists between two elements, whether directly or indirectly through intervening elements; they should not be construed as indicating or implying any spatial relationship between the two elements, i.e., an element may be located in any orientation inside, outside, above, below, or to one side of another element, unless the content clearly indicates otherwise. The specific meanings of the above terms in the present invention can be understood by those of ordinary skill in the art according to the specific situation. Moreover, directional terminology, such as above, below, up, down, upward, downward, left, and right, is used with respect to the exemplary embodiments as shown in the figures, with the upward direction being toward the top of the corresponding figure and the downward direction being toward the bottom of the corresponding figure.
The invention aims to provide a medical image feature point identification method, an identification system and a readable storage medium, which are used for solving the problems of the conventional feature point identification method.
The following description refers to the accompanying drawings.
Referring to fig. 1, a surgical robotic system for registration using medical images is shown, comprising: a mechanical arm 1, a scale tool 2, a medical image device 3 and a navigation device 4; the medical imaging apparatus 3 comprises a transmitting end 31 and a receiving end 32, in an alternative example, the medical imaging apparatus 3 is an X-ray machine, the transmitting end 31 is an X-ray transmitting tube for transmitting X-rays to the side of the receiving end 32, and the receiving end 32 is an imaging flat plate; after passing through the scale tool 2 and the surgical object 5, the X-rays reach the receiving end 32, and medical images are obtained by imaging at the receiving end 32. Of course, the medical imaging device 3 is not limited to an X-ray machine, and those skilled in the art can configure it as a CT machine according to the prior art, and the invention is not limited thereto. The navigation device 4 includes a positioning device 41 (e.g., an optical positioning device) and a plurality of trackable members 42 (e.g., optical targets, etc.), and the positioning device 41 is paired with the trackable members 42 so that the positioning device 41 can track and acquire pose information of the trackable members 42. Of course, the positioning device 41 and the trackable member 42 are not limited to optical positioning devices and optical targets, and those skilled in the art can configure them as magnetic positioning devices according to the prior art, and the invention is not limited thereto. In one example, the trackable elements 42 may be mounted on the robotic arm 1 and the surgical object 5, respectively. So configured, the positioning device 41 is able to obtain the pose information of the robot arm 1 and the pose information of the surgical object 5 by tracking and acquiring the pose of the trackable member 42.
Further, referring to fig. 2, which shows an example of the end region of the robot arm 1, the scale tool 2 is mounted on the robot arm 1 and the trackable member 42 is also mounted on the robot arm 1, so that the relative positional relationship between the scale tool 2 and the trackable member 42 is known and fixed. Further, the scale tool 2 includes at least two planes 20 and at least two kinds of calibration developing members 21 of different specifications; the calibration developing members 21 of the same specification are disposed on the same plane 20, and the number of the calibration developing members 21 of each specification is not less than 3. Preferably, the calibration developing member 21 has a spherical shape, and calibration developing members 21 of different specifications have different diameters. Furthermore, each calibration developing member 21 has a fixed arrangement on its corresponding plane 20; preferably, the arrangement of the calibration developing members 21 on the two planes 20 is different, and more preferably, the two planes 20 are parallel to each other. Referring to fig. 3, an exemplary scale tool 2 is shown, which includes two planes 20 and calibration developing members 21 of two different specifications. In the medical image obtained at the receiving end 32, the images of the calibration developing members 21 of different specifications can be distinguished. Based on the principle that at least 3 points determine a plane, the relationship of the two planes 20 in the medical image can be determined according to the images of the calibration developing members 21 of the two specifications, so that the projection matrix of the medical image can be obtained by calculation; further, the calculation of three-dimensional space coordinates can be realized, the coordinate system of the medical image can be registered with the navigation coordinate system, and surgical path planning and the operation can be performed on this basis. In particular, the surgical object 5 may be a patient, but is not limited to a patient; it may also be a model prosthesis or the like, which may be used by an operator for training, calibration, or verification surgery, and the application scenario of the surgical robot system is not limited by the present invention.
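For reference, one standard way to compute such a projection matrix is the direct linear transform (DLT) applied to the identified 2D feature point coordinates and the known 3D coordinates of the calibration developing members 21 in the scale tool frame. The sketch below is illustrative only and is not taken from the patent; the function name and the assumption of at least six available 2D-3D correspondences are illustrative assumptions.

```python
import numpy as np

def estimate_projection_matrix(pts3d, pts2d):
    """Direct linear transform (DLT): estimate a 3x4 matrix P such that
    [u, v, 1]^T ~ P @ [X, Y, Z, 1]^T for each 3D marker / 2D feature point pair.
    Requires at least 6 non-degenerate correspondences."""
    rows = []
    for (X, Y, Z), (u, v) in zip(pts3d, pts2d):
        Xh = np.array([X, Y, Z, 1.0])
        rows.append(np.concatenate([Xh, np.zeros(4), -u * Xh]))
        rows.append(np.concatenate([np.zeros(4), Xh, -v * Xh]))
    # The least-squares solution is the right singular vector associated with
    # the smallest singular value of the stacked constraint matrix.
    _, _, vt = np.linalg.svd(np.asarray(rows))
    return vt[-1].reshape(3, 4)
```

Once the final feature point identification sequence described below is available, pts2d and pts3d can be paired by metal ball number before calling such a routine.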
An exemplary medical image acquisition process includes:
step Sa1: the surgical object 5 is arranged in place;
step Sa2: the mechanical arm 1, the medical imaging device 3 and the positioning device 41 are reasonably arranged;
step Sa3: trackable members 42 are fixedly mounted on the surgical object 5 and the robot arm 1, respectively; a scale tool 2 is fixedly arranged on the mechanical arm 1;
step Sa4: the C-shaped arm of the medical imaging device 3 is adjusted to a proper position according to the operation type and the part;
step Sa5: adjusting the mechanical arm 1, placing the scale tool 2 close to the surgical object 5, and enabling the plane 20 on the scale tool 2 to be parallel to the imaging flat plate of the receiving end 32 as much as possible;
step Sa6: and shooting to obtain a medical image.
After the medical image is obtained, the images of the calibration developing members 21 need to be segmented and extracted from it. In an ideal case, segmenting and extracting the medical image identifies the images of all the calibration developing members 21, so that the projection matrix of the medical image can be accurately calculated. In practice, however, due to interference such as occlusion and exposure noise, segmenting and extracting the medical image often fails to identify all the images of the calibration developing members 21, or includes a certain number of interference points; when the number of identified images of the calibration developing members 21 is less than the expected number, errors may be introduced, or the calculation may even become impossible.
To solve this problem, an embodiment of the present invention provides a method for identifying feature points of a medical image, which is suitable for identifying feature points in a two-dimensional medical perspective image. For convenience of description, the image of a calibration developing member 21 is abstracted as a feature point. It is understood that the feature point can be defined according to the features of the image of the calibration developing member 21; for example, in some embodiments the image of the calibration developing member 21 is a circle or an ellipse, and the feature point can refer to such a circular or elliptical image area. Furthermore, for convenience of description, the plurality of feature points corresponding to the plurality of calibration developing members 21 of the same specification in the medical image are defined as a feature point group. As shown in fig. 14, the method for identifying feature points of medical images includes:
step S1: providing a medical image with a feature point group, wherein the feature point group comprises a plurality of feature points, each feature point corresponds to one calibration developing piece 21, and a known relative position relationship exists among a plurality of calibration developing pieces 21;
step S2: performing initial segmentation identification on a plurality of feature points in the medical image to obtain an initial feature point set;
and step S3: grouping the characteristic points in the initial characteristic point set to obtain a prediction point group set; it will be appreciated that each group of predicted points contains at least two feature points.
And step S4: for one predicted point group in the predicted point group set, obtaining a predicted local image group corresponding to the predicted point group based on the relative position relationship of the plurality of calibration developers 21; traversing all the prediction point groups in the prediction point group set to obtain a prediction local image group set;
step S5: for one predicted local image group in the predicted local image group set, identifying the characteristic points in the predicted local image group based on a local segmentation identification algorithm to obtain a predicted characteristic point sequence; traversing all the predicted local image groups in the predicted local image group set to obtain a predicted characteristic point sequence set;
step S6: and screening all the predicted characteristic point sequences in the predicted characteristic point sequence set to obtain a final characteristic point identification sequence.
The following description is given by way of an example with reference to the accompanying drawings.
Referring to fig. 3 and 4, the scale tool 2 includes a scale base 22 and a rod 23. The scale base 22 is made of a material transparent to X-rays, and the rod 23 is connected to the scale base 22. The scale base 22 has two planes 20, a first plane 201 and a second plane 202; preferably, the two planes 20 are non-coplanar and distributed on two sides of the rod 23. The first plane 201 and the second plane 202 are each provided with 9 mounting holes for mounting the calibration developing members 21; the aperture of the mounting holes on the two planes 20 is different, and the arrangement of the mounting holes on the two planes 20 is also different. The calibration developing members 21 are metal balls of two different diameters, 9 of each, which are respectively installed in the mounting holes on the two planes 20. Further, the metal balls on the first plane 201 and the second plane 202 are numbered in a certain order, so that each metal ball has its own unique number. For convenience of description, the 9 metal balls on the first plane 201 are numbered L1 to L9, and the 9 metal balls on the second plane 202 are numbered S1 to S9. Thus, there is a fixed and known relative positional relationship between each metal ball and the other metal balls. After the medical image is obtained by scanning, the relative image coordinates of the feature points of the metal balls in the medical image are accordingly fixed, as shown in fig. 5. It is understood that step S1 is based on the scale tool 2 as described above, and the obtained medical image includes two feature point groups, where each feature point group includes 9 feature points.
Optionally, the step S2 of performing initial segmentation and identification on the plurality of feature points in the medical image to obtain an initial feature point set includes:
step S21: obtaining an initial segmentation threshold value according to the diameter, the number and the image resolution of the calibration developing pieces 21; step S21 may for example employ conventional image processing algorithms, such as adaptive threshold segmentation methods or target detection algorithms based on machine learning or deep learning, etc., which can be selected by the skilled person according to the prior art.
Step S22: detecting and segmenting the medical image according to the initial segmentation threshold to obtain image coordinates of feature points. Note that, since the calibration developing member 21 is spherical, each feature point should ideally be a circle (or an ellipse approximating a circle), and the image coordinates of a feature point include the center coordinates of the feature point and the radius of the feature point. The result of the initial segmentation is shown in fig. 6. It should be noted that the initial segmentation may identify only some of the feature points; it is not required that all feature points be identified. Fig. 6 shows an example in which the initial segmentation identifies 8 feature points corresponding to the metal balls L1, L2, L3, L4, L5, L6, L7 and L9 on the first plane 201, and 5 feature points corresponding to the metal balls S1, S2, S3, S4 and S5 on the second plane 202. The feature points corresponding to the metal balls L8 and S6 to S9 are not identified.
Step S23: classifying the feature points into the initial feature point set according to the radius of the identified feature points. It is understood that, since the calibration developing member 21 is spherical, the radius of each feature point is also obtained in the initial segmentation. If there is only one specification of calibration developing member 21, the radii of the feature points should be the same, and a single initial feature point set is obtained after classification. For the scale tool 2 of the above embodiment, which includes calibration developing members 21 of two different specifications, the identified feature points have two distinct radii, and the feature points are classified into two initial feature point sets according to radius.
Optionally, before step S21, a two-dimensional image histogram may be computed to obtain the initial segmentation threshold. Optionally, before step S23, a clustering algorithm may further be used to filter out noise points. Further, after step S23, the number of identified feature points may be counted; if the ratio of the number of feature points to the target number is smaller than a preset value (for example, 60% of the total number of calibration developing members 21), the initial segmentation threshold is adjusted, and the medical image is re-detected and segmented according to the adjusted initial segmentation threshold. If the ratio of the number of feature points to the target number is not less than the preset value, the initial segmentation is finished.
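As an illustration of steps S21 to S23 and the retry logic above, the following sketch uses OpenCV connected-component analysis as one possible detector. The threshold update rule, the 60% retry criterion and the roughly-circular blob filter are illustrative assumptions, not values prescribed by the patent, and all function and parameter names are hypothetical.

```python
import cv2
import numpy as np

def initial_segmentation(gray, nominal_radii_px, target_count,
                         min_ratio=0.6, init_thresh=None, max_retries=5):
    """Steps S21-S23 (sketch): threshold the X-ray image, detect roughly circular
    blobs, and classify them into one initial feature point set per ball size."""
    thresh = int(np.mean(gray)) if init_thresh is None else init_thresh
    points = []
    for _ in range(max_retries):
        # The metal balls are radio-opaque, i.e. darker than the background.
        _, binary = cv2.threshold(gray, thresh, 255, cv2.THRESH_BINARY_INV)
        n, _, stats, centroids = cv2.connectedComponentsWithStats(binary)
        points = []
        for i in range(1, n):                      # label 0 is the background
            w = stats[i, cv2.CC_STAT_WIDTH]
            h = stats[i, cv2.CC_STAT_HEIGHT]
            radius = np.sqrt(stats[i, cv2.CC_STAT_AREA] / np.pi)
            if 0.7 < w / h < 1.4:                  # keep roughly circular blobs only
                points.append((centroids[i][0], centroids[i][1], radius))
        if len(points) >= min_ratio * target_count:
            break
        thresh += 5                                # assumed adjustment rule, then retry S22
    # S23: assign each detected point to the closest nominal ball radius.
    sets = {r: [] for r in nominal_radii_px}
    for x, y, r in points:
        sets[min(nominal_radii_px, key=lambda nr: abs(nr - r))].append((x, y, r))
    return sets
```

Here nominal_radii_px would hold the expected image radii of the two ball specifications, so the returned dictionary contains the two initial feature point sets.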
In step S3 and step S4, for an initial feature point set, there is a known determined relative positional relationship between a plurality of feature points included in the initial feature point set, so that the positions of all other feature points can be predicted and obtained by using any two or more feature points and the relative positional relationship therebetween. Therefore, for the initial feature point set, the feature points included therein may be grouped, where each group of feature points is referred to as a predicted point group, and based on each predicted point group, the positions of all other feature points in the initial feature point set may be predicted. It will be appreciated that a minimum of two feature points may be used for prediction, and therefore it is preferred that each of the predicted point groups comprises two of the feature points. Of course, in other embodiments, a larger number of feature points may be used for prediction, and the invention is not limited thereto.
For example, the following description takes prediction from two feature points as an example, and assumes that the relative positional relationship between the two feature points corresponds to the relative positional relationship between two of the calibration developing members 21 (for convenience of description, referred to as an assumed correspondence). As described above, the relative positions of all the feature points corresponding to the calibration developing members 21 of each specification (for convenience of description, referred to as the predicted positions of the feature points) can be predicted based on the two feature points in the predicted point group. That is, one set of predicted positions of the feature points can be obtained from one assumed correspondence between the feature points and the calibration developing members 21 (i.e., from one predicted point group); further, by exhaustively traversing all assumed correspondences, all possible predicted positions can be obtained.
It will be appreciated that all possible predicted positions contain a large number of useless false results, which therefore need to be eliminated. To this end, a predicted local image group can be segmented from the medical image based on each set of predicted positions, so that the false results can subsequently be eliminated.
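As a purely illustrative sketch of this grouping and exhaustive traversal, the following enumerates every pair of initially detected feature points (a predicted point group) together with every ordered pair of metal ball numbers it might correspond to; the function and variable names are hypothetical.

```python
from itertools import combinations, permutations

def enumerate_hypotheses(initial_points, marker_ids):
    """Step S3 plus the exhaustive traversal of assumed correspondences: every
    unordered pair of detected feature points is paired with every ordered pair
    of metal ball numbers, e.g. a pair of image points paired with ('L1', 'L5')."""
    point_groups = list(combinations(initial_points, 2))     # predicted point groups
    correspondences = list(permutations(marker_ids, 2))      # assumed correspondences
    return [(group, corr) for group in point_groups for corr in correspondences]
```

In this illustrative counting, 8 detected points and 9 ball numbers yield 28 point groups and 72 ordered correspondences, i.e. 2016 hypotheses, which the local segmentation and screening steps below then prune down to a single valid sequence.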
Optionally, in step S4, for one predicted point group in the predicted point group set, based on the relative position relationship between the plurality of calibration developers 21, the step of obtaining the predicted local image group corresponding to the predicted point group includes:
step S41: for one prediction point group in the prediction point group set, obtaining a prediction coordinate sequence group based on the relative position relationship of the calibration developing part 21; based on the numbers of the calibration images 21 corresponding to the two feature points in the predicted point group, the numbers of the calibration images 21 corresponding to the other remaining 7 feature points can be predicted, that is, a predicted coordinate sequence group is obtained.
Step S42: and obtaining the predicted local image group according to the distance between the characteristic points in the predicted point group and the predicted coordinate sequence group. Based on the above description, it can be known that, according to the prediction coordinate series group, the positions of all other feature points can be predicted from two feature points in the prediction point group. And then obtaining the predicted local images corresponding to each feature point, and classifying the predicted local images into a predicted local image group.
Referring to fig. 7, a group of predicted local images corresponding to the metal balls L1 to L9 obtained in the above-described steps is shown, which includes 9 predicted local images L1 'to L9'. In an exemplary embodiment, in the initial segmentation of the foregoing step, the feature point corresponding to the metal ball L8 is not successfully identified, so the predicted local image L8' corresponding to the metal ball L8 is obtained by segmenting according to the predicted position of the feature point corresponding to the metal ball L8. Fig. 8 shows one of the predicted local image groups corresponding to the metal balls S1 to S9 obtained in the above-described steps, which includes 9 predicted local images S1 'to S9'. In one example, in the initial segmentation in the foregoing step, the feature points corresponding to the metal balls S6 to S9 are not successfully identified, and therefore the predicted local images S6 'to S9' corresponding to the metal balls S6 to S9 are segmented based on the predicted positions of the feature points corresponding to the metal balls S6 to S9.
Further, in step S42, the step of obtaining a predicted local image includes:
step S421: acquiring the serial numbers of the feature points in the predicted point group in the predicted coordinate series group, and calculating to obtain the projection ratio of the distance between the feature points in the medical image and the distance between the calibrated developing parts 21 in practice based on the serial numbers of the feature points and the relative position relation of the calibrated developing parts 21 corresponding to the feature points; in this step, the distance between two feature points in one predicted point group is used as an image distance, and assuming that the two feature points in the predicted point group are in one-to-one correspondence (i.e., assumed correspondence) with two calibration developers 21 in practice, the distance between the two calibration developers 21 is used as a template distance, and a projection ratio can be calculated by the image distance and the template distance.
Step S422: according to the projection proportion, the length and the width of a predicted local image corresponding to a certain characteristic point are obtained;
step S423: obtaining the central point of the predicted local image according to the relative position relation of the calibration developing part 21 corresponding to the characteristic point and the projection proportion;
step S424: and obtaining the predicted local image according to the central point, the length and the width of the predicted local image.
Furthermore, all the assumed correspondence relationships are traversed; that is, the same predicted point group is paired in turn with every two of the calibration developing members 21, and a plurality of projection ratios can be obtained by traversing such correspondences. For each such projection ratio, a predicted local image group corresponding to the predicted point group can be obtained. It can be understood that, by further traversing all the predicted point groups, a plurality of predicted local image groups can be obtained, which are then collected to form the predicted local image group set. As can be appreciated, the predicted local image group set contains a large number of useless false results. These useless false results can then be eliminated by locally segmenting and identifying the predicted local images with the local segmentation identification algorithm of step S5.
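The following sketch illustrates steps S421 to S424 under one assumed correspondence. The planar template coordinates, the margin factor used to size each predicted local image, and the modelling of the in-plane projection as a single 2D similarity transform are illustrative simplifications rather than requirements stated in the patent; all names are hypothetical.

```python
import numpy as np

def predict_rois(p1, p2, t1, t2, template, ball_diameter, margin=3.0):
    """Steps S421-S424 (sketch): from one predicted point group (p1, p2), assumed to
    correspond to the template markers at t1 and t2, predict a square local image
    (center point, length, width) for every marker of the same specification."""
    img_vec, tpl_vec = np.subtract(p2, p1), np.subtract(t2, t1)
    ratio = np.linalg.norm(img_vec) / np.linalg.norm(tpl_vec)    # projection ratio (S421)
    angle = np.arctan2(img_vec[1], img_vec[0]) - np.arctan2(tpl_vec[1], tpl_vec[0])
    c, s = np.cos(angle), np.sin(angle)
    rot = np.array([[c, -s], [s, c]])
    side = ratio * ball_diameter * margin                        # ROI length and width (S422)
    rois = {}
    for name, t in template.items():                             # e.g. {'L1': (x, y), ...}
        center = np.asarray(p1) + ratio * rot @ (np.asarray(t) - np.asarray(t1))  # S423
        rois[name] = (center, side, side)                        # ROI: center, length, width (S424)
    return rois
```

Cropping the medical image around each returned center with the returned length and width yields the predicted local image group for this hypothesis; repeating this over all assumed correspondences and all predicted point groups yields the predicted local image group set.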
Optionally, in step S5, the local segmentation recognition algorithm includes:
step S51: statistically obtaining a local segmentation threshold according to the radius (as identified in step S22) of the detected feature points in the medical image;
step S52: segmenting the predicted local image according to the local segmentation threshold value to obtain a segmentation result; it is understood that steps S51 and S52 may be circle detection methods based on hough transform, or other circle identification methods, and those skilled in the art can select the circle detection method according to the prior art.
Step S53: traversing all the connected regions in the segmentation result, and statistically obtaining the aspect ratio, the roundness, the radius and the circle center of each connected region;
Step S54: if the aspect ratio, the roundness and the radius meet the preset requirements of the currently detected feature point, determining the connected region as a feature point, and adding the circle center of the feature point into a predicted feature point sequence.
Referring to fig. 9 and fig. 10, fig. 9 shows the predicted local image L8' corresponding to the metal ball L8 (the left area of fig. 9) and its corresponding segmentation result L8'' (the right area of fig. 9); fig. 10 shows the predicted local image S9' corresponding to the metal ball S9 (the left area of fig. 10) and its corresponding segmentation result S9'' (the right area of fig. 10).
Optionally, in step S54, if the aspect ratio, the roundness and the radius do not meet the preset requirements of the currently detected feature point, it is determined that the segmentation fails, that is, no feature point and no corresponding circle center can be obtained. In this case, the predicted feature point sequence corresponding to this predicted local image group may only include the centers of the initial two feature points in the predicted point group. Optionally, before step S51, a histogram of the local two-dimensional image may be computed to obtain the local segmentation threshold.
In practice, steps S51 to S54 verify whether a feature point exists in each predicted local image obtained in the previous step, thereby eliminating invalid predicted local images. For example, among all possible predicted positions there are some invalid results, that is, cases in which the assumed correspondence between the two feature points in the predicted point group and the calibration developing members 21 does not hold, so that several predicted local images in the predicted local image group obtained according to the projection relationship may contain no feature point. In that case, executing steps S51 to S54 will not yield a valid result.
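A sketch of steps S51 to S54 for one predicted local image is given below. The use of Otsu thresholding as a stand-in for the radius-based threshold statistic, the circularity measure 4*pi*area/perimeter^2, and the tolerance values are illustrative assumptions; the function and parameter names are hypothetical.

```python
import cv2
import numpy as np

def local_segmentation(roi_gray, expected_radius, radius_tol=0.3,
                       min_roundness=0.7, max_aspect=1.4):
    """Steps S51-S54 (sketch): segment one predicted local image and return the
    circle center of the first connected region that matches the expected ball,
    or None if the segmentation fails."""
    # S51/S52: local threshold and segmentation (Otsu on the dark-ball ROI).
    _, binary = cv2.threshold(roi_gray, 0, 255, cv2.THRESH_BINARY_INV + cv2.THRESH_OTSU)
    contours, _ = cv2.findContours(binary, cv2.RETR_EXTERNAL, cv2.CHAIN_APPROX_SIMPLE)
    for cnt in contours:                               # S53: traverse connected regions
        area, perimeter = cv2.contourArea(cnt), cv2.arcLength(cnt, True)
        if perimeter == 0:
            continue
        x, y, w, h = cv2.boundingRect(cnt)
        aspect = max(w, h) / min(w, h)
        roundness = 4.0 * np.pi * area / perimeter ** 2
        radius = np.sqrt(area / np.pi)
        (cx, cy), _ = cv2.minEnclosingCircle(cnt)
        # S54: accept only regions matching the currently expected ball size and shape.
        if (aspect <= max_aspect and roundness >= min_roundness
                and abs(radius - expected_radius) <= radius_tol * expected_radius):
            return (cx, cy)
    return None
```

Running this over every predicted local image of a group and collecting the returned centers produces the predicted feature point sequence for that group.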
Further, step S6 includes:
step S61: traversing all the predicted characteristic point sequences in the predicted characteristic point sequence set;
step S62: if the number of the characteristic points in one predicted characteristic point sequence is not matched with the number of the expected characteristic points, deleting the predicted characteristic point sequence; if the number of feature points in a predicted feature point sequence is less than expected, it indicates that there is an invalid predicted local image in the predicted local image group corresponding to the predicted feature point sequence, i.e. the feature points are not segmented to obtain valid feature points, further indicates that the feature points are a group of invalid false results, and should be excluded.
Step S63: calculating the average error between the predicted feature point sequence and the center points of the predicted local images. It is understood that the data in a predicted feature point sequence are the circle centers of the connected regions obtained in the foregoing steps S53 and S54; the closer a circle center is to the center point of its predicted local image, the more accurate the previously predicted position of the feature point is, and the higher its confidence.
Step S64: and determining the predicted characteristic point sequence with the minimum average error as a final characteristic point identification sequence.
Since the predicted point groups and the assumed correspondences are obtained by traversal, several of them are correct, and therefore several valid predicted local image groups are obtained; the purpose of steps S63 and S64 is to determine the predicted feature point sequence corresponding to the predicted local image group with the smallest error as the final feature point identification sequence. In this way, the sequence of feature points that ultimately needs to be identified is obtained. The final feature point identification sequence is shown in fig. 11.
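A sketch of steps S61 to S64 is given below; each candidate is assumed, for illustration, to carry both the circle centers found by local segmentation and the corresponding predicted local image centers, and the names are hypothetical.

```python
import numpy as np

def select_best_sequence(candidates, expected_count):
    """Steps S61-S64 (sketch): drop incomplete candidate sequences, then keep the
    one whose detected circle centers lie closest, on average, to the centers of
    the predicted local images. Each candidate is (detected_centers, roi_centers)."""
    best_seq, best_err = None, np.inf
    for detected, roi_centers in candidates:        # S61: traverse all sequences
        if len(detected) != expected_count:         # S62: wrong count -> discard
            continue
        errors = [np.linalg.norm(np.subtract(d, c)) for d, c in zip(detected, roi_centers)]
        mean_err = float(np.mean(errors))           # S63: average error to ROI centers
        if mean_err < best_err:                     # S64: keep the minimum
            best_seq, best_err = detected, mean_err
    return best_seq
```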
It should be noted that the scale tool 2 described above, which contains two groups of 9 metal balls, is only used to illustrate the medical image feature point identification method provided in this embodiment and does not limit the specific form and structure of the scale tool 2. In other embodiments, the scale tool 2 may also comprise two groups of 5 metal balls, as shown in figures 12 and 13. It can be understood that fewer calibration developing members 21 produce fewer shadows in the medical image, reduce the occlusion of the patient region, and reduce the influence of the shadows of the calibration developing members 21 on diagnosis and surgical planning; of course, a smaller number of calibration developing members 21 may decrease the positioning accuracy. The specific number of calibration developing members 21 can be set according to the precision required by the actual application scenario. In particular, the number of planes 20 included in the scale tool 2 is not limited to two, and the number of calibration developing members 21 on each plane 20 is not required to be the same; those skilled in the art can configure these according to the actual application.
Based on the medical image feature point identification method, an embodiment of the present invention further provides a readable storage medium, on which a program is stored, and when the program is executed, the steps of the medical image feature point identification method are implemented. Furthermore, the embodiment of the present invention also provides a medical image feature point identification system, which includes a medical imaging apparatus 3, a ruler tool 2, and the readable storage medium as described above. It is understood that the readable storage medium may be disposed independently, or may be integrated into the medical image feature point identification system, such as the medical imaging apparatus 3, which is not limited in this respect.
In summary, in the medical image feature point identification method, the medical image feature point identification system and the readable storage medium provided by the present invention, the medical image feature point identification method includes: providing a medical image with a feature point group, wherein the feature point group comprises a plurality of feature points, each feature point corresponds to a calibration developing piece, and the plurality of calibration developing pieces have known relative position relations; performing initial segmentation identification on a plurality of feature points in the medical image to obtain an initial feature point set; grouping the characteristic points in the initial characteristic point set to obtain a prediction point group set; for one prediction point group in the prediction point group set, obtaining a prediction local image group corresponding to the prediction point group based on the relative position relation of the plurality of calibration development pieces; traversing all the prediction point groups in the prediction point group set to obtain a prediction local image group set; for one predicted local image group in the predicted local image group set, identifying the characteristic points in the predicted local image group based on a local segmentation identification algorithm to obtain a predicted characteristic point sequence; traversing all the predicted local image groups in the predicted local image group set to obtain a predicted characteristic point sequence set; and screening all the predicted characteristic point sequences in the predicted characteristic point sequence set to obtain a final characteristic point identification sequence. According to the configuration, the possible regions of the characteristic points in the medical image are predicted by utilizing the known relative position relation between the characteristic points, the predicted local image containing the characteristic points can be accurately obtained, the characteristic points in the range of the predicted local image are further subjected to local segmentation and identification, the influences of shielding, noise and different exposure intensities can be effectively eliminated, the accuracy and robustness of the identification method are improved, the missing identification of the characteristic points is avoided, the identification efficiency is improved, manual interaction is not needed, the smoothness of the operation process can be ensured, and the operation efficiency is improved.
It should be noted that the above embodiments may be combined with each other. The above description is only for the purpose of describing the preferred embodiments of the present invention, and is not intended to limit the scope of the present invention, and any variations and modifications made by those skilled in the art based on the above disclosure are within the scope of the appended claims.

Claims (13)

1. A medical image feature point identification method is suitable for feature point identification in a two-dimensional medical perspective image, and is characterized by comprising the following steps:
providing a medical image with a feature point group, wherein the feature point group comprises a plurality of feature points, each feature point corresponds to a calibration developing piece, and the known relative position relationship exists among the calibration developing pieces;
performing initial segmentation identification on a plurality of feature points in the medical image to obtain an initial feature point set;
grouping the characteristic points in the initial characteristic point set to obtain a prediction point group set;
for one predicted point group in the predicted point group set, obtaining a predicted local image group corresponding to the predicted point group based on the relative position relation of the plurality of calibrated developers; traversing all the prediction point groups in the prediction point group set to obtain a prediction local image group set;
for one predicted local image group in the predicted local image group set, identifying the characteristic points in the predicted local image group based on a local segmentation identification algorithm to obtain a predicted characteristic point sequence; traversing all the predicted local image groups in the predicted local image group set to obtain a predicted characteristic point sequence set;
and screening all the predicted characteristic point sequences in the predicted characteristic point sequence set to obtain a final characteristic point identification sequence.
2. The method according to claim 1, wherein the step of obtaining a predicted local image group corresponding to the predicted point group based on the relative position relationship of the plurality of calibrated developers for one of the predicted point groups in the predicted point group set comprises:
for one predicted point group in the predicted point group set, obtaining a predicted coordinate sequence group based on the relative position relation of the calibration developing piece;
and obtaining the predicted local image group according to the distance between the characteristic points in the predicted point group and the predicted coordinate sequence group.
3. The method of claim 2, wherein the step of obtaining a predicted local image in the predicted local image group according to the distance between the feature points in the predicted point group and the predicted coordinate series group comprises:
acquiring the serial numbers of the feature points in the prediction point group in the prediction coordinate series group, and calculating to obtain the projection ratio of the distance between the feature points in the medical image and the distance between the calibration developing pieces in practice based on the serial numbers of the feature points and the relative position relation of the calibration developing pieces corresponding to the feature points;
according to the projection proportion, the length and the width of a predicted local image corresponding to a certain characteristic point are obtained;
obtaining the central point of the predicted local image according to the relative position relation of the calibrated developing part corresponding to the characteristic point and the projection proportion;
and obtaining the predicted local image according to the central point, the length and the width of the predicted local image.
4. The method according to claim 1, wherein each of the predicted point groups includes two of the feature points.
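Since claim 4 fixes each predicted point group at two feature points, the grouping step in claim 1 can be as simple as enumerating unordered pairs of candidates, as in the short sketch below (the patent may prune pairs further, e.g. by plausible spacing; that is not shown here).

```python
from itertools import combinations

def group_candidates_into_pairs(candidates):
    """Enumerate every unordered pair of candidate feature points.

    candidates: list of (x, y) image coordinates from the initial segmentation.
    Returns a list of 2-tuples; each tuple is one predicted point group.
    """
    return list(combinations(candidates, 2))

# Example: 4 candidates yield C(4, 2) = 6 predicted point groups.
groups = group_candidates_into_pairs([(10, 12), (40, 15), (80, 90), (33, 70)])
print(len(groups))  # 6
```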
5. The method according to claim 1, wherein the calibration developing piece is spherical, and the step of performing initial segmentation identification on the plurality of feature points in the medical image to obtain an initial feature point set comprises:
obtaining an initial segmentation threshold value according to the diameter and the number of the calibration developing pieces and the image resolution;
detecting and segmenting the medical image according to the initial segmentation threshold value to obtain the image coordinates of the feature points;
and classifying the feature points into the initial feature point set according to the radii of the feature points obtained by identification.
6. The method according to claim 5, wherein the step of obtaining the initial feature point set further comprises:
counting the number of the feature points obtained by identification; and if the ratio of the number of the feature points to a target number is smaller than a preset value, adjusting the initial segmentation threshold and re-detecting and segmenting the medical image according to the adjusted initial segmentation threshold.
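Claims 5 and 6 describe a thresholded blob search with a retry loop: blobs are kept when their size is plausible for the projection of one spherical marker, and the threshold is relaxed when too few candidates are found. The Python sketch below illustrates one way this could look; the starting threshold, the relaxation factor, the size band, and the assumption that the markers appear brighter than the background are all illustrative choices, not values taken from the patent.

```python
import numpy as np
from scipy import ndimage

def initial_segmentation(image, marker_diameter_px, n_markers,
                         intensity_thresh=None, min_found_ratio=0.5,
                         max_retries=5):
    """Coarse detection of candidate feature points (sketch of claims 5-6)."""
    image = np.asarray(image, dtype=float)
    if intensity_thresh is None:
        intensity_thresh = image.mean() + image.std()   # assumed starting point

    # Expected projected area of one spherical marker, from its diameter.
    expected_area = np.pi * (marker_diameter_px / 2.0) ** 2

    points = []
    for _ in range(max_retries):
        # Markers are assumed brighter than background here; invert the
        # comparison if they appear darker in the fluoroscopic image.
        mask = image > intensity_thresh
        labels, n = ndimage.label(mask)
        points = []
        for i in range(1, n + 1):
            ys, xs = np.nonzero(labels == i)
            area = xs.size
            # Keep blobs within a loose band around the expected marker area.
            if 0.25 * expected_area <= area <= 4.0 * expected_area:
                radius = float(np.sqrt(area / np.pi))
                points.append((float(xs.mean()), float(ys.mean()), radius))
        # Claim 6: if too few points were found, relax the threshold and retry.
        if len(points) / n_markers >= min_found_ratio:
            break
        intensity_thresh *= 0.9
    return points
```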
7. The method according to claim 1, wherein the calibration developing piece is spherical, and the local segmentation identification algorithm comprises:
obtaining a local segmentation threshold value according to statistics of the radii of the feature points already detected in the medical image;
segmenting the predicted local image according to the local segmentation threshold value to obtain a segmentation result;
traversing all the connected regions in the segmentation result, and calculating the aspect ratio, the roundness, the radius and the circle center of each connected region;
and if the aspect ratio, the roundness and the radius meet the preset requirements for the feature point currently being detected, determining the connected region to be a feature point, and adding the circle center of the feature point to the predicted feature point sequence.
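A connected-component pass with simple shape tests is one straightforward reading of claim 7. In the sketch below the roundness is approximated as the filled area over the area of the bounding circle (close to 1 for a disc), and the acceptance thresholds (`max_aspect`, `min_roundness`, `radius_tol`) are illustrative assumptions rather than values from the patent.

```python
import numpy as np
from scipy import ndimage

def local_segment_identify(patch, expected_radius, local_thresh,
                           radius_tol=0.5, max_aspect=1.5, min_roundness=0.6):
    """Identify feature points inside one predicted local patch (claim-7 sketch)."""
    mask = np.asarray(patch, dtype=float) > local_thresh
    labels, _ = ndimage.label(mask)
    accepted = []
    for i, sl in enumerate(ndimage.find_objects(labels), start=1):
        if sl is None:
            continue
        region = labels[sl] == i
        h, w = region.shape
        area = int(region.sum())

        aspect = max(w, h) / max(min(w, h), 1)
        radius = np.sqrt(area / np.pi)
        # Roundness approximation: filled area / area of the bounding circle.
        roundness = area / (np.pi * (max(w, h) / 2.0) ** 2)

        if (aspect <= max_aspect and roundness >= min_roundness
                and abs(radius - expected_radius) <= radius_tol * expected_radius):
            ys, xs = np.nonzero(region)
            # Circle centre, converted back to patch coordinates.
            accepted.append((xs.mean() + sl[1].start, ys.mean() + sl[0].start))
    return accepted
```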
8. The method according to claim 1, wherein the step of screening all the predicted feature point sequences in the predicted feature point sequence set to obtain a final feature point identification sequence comprises:
traversing all the predicted feature point sequences in the predicted feature point sequence set;
if the number of the feature points in a predicted feature point sequence does not match the expected number of feature points, deleting the predicted feature point sequence;
calculating the average error between the predicted feature point sequence and the central points of the corresponding predicted local images;
and determining the predicted feature point sequence with the minimum average error as the final feature point identification sequence.
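Claim 8 is a two-stage filter: reject sequences with the wrong point count, then keep the sequence whose points lie closest on average to the centres of their predicted local patches. A minimal sketch, assuming each candidate carries its own list of predicted patch centres aligned index-by-index with its points:

```python
import numpy as np

def screen_sequences(candidates, n_expected):
    """Select the final feature point sequence (claim-8 sketch).

    candidates : list of (points, patch_centers) pairs, one per predicted
                 point group; both are lists of (x, y) coordinates.
    n_expected : expected number of feature points in a valid sequence.
    """
    best_points, best_err = None, np.inf
    for points, centers in candidates:
        if len(points) != n_expected:        # wrong point count: discard
            continue
        pts = np.asarray(points, dtype=float)
        ctr = np.asarray(centers, dtype=float)
        # Average Euclidean error between detected points and patch centres.
        err = float(np.linalg.norm(pts - ctr, axis=1).mean())
        if err < best_err:
            best_points, best_err = points, err
    return best_points, best_err
```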
9. A readable storage medium, on which a program is stored, wherein the program, when executed, implements the steps of the medical image feature point identification method according to any one of claims 1 to 8.
10. A medical image feature point identification system, comprising a medical imaging device and a scale tool, wherein the medical imaging device comprises a transmitting end and a receiving end; the scale tool comprises a plurality of calibration developing pieces, and a known relative position relationship exists among the plurality of calibration developing pieces; and the scale tool is disposed between the transmitting end and the receiving end.
11. The system according to claim 10, wherein the scale tool comprises at least two planes and calibration developing pieces of at least two different specifications, the calibration developing pieces of the same specification are disposed on the same plane, and the number of the calibration developing pieces of each specification is not less than 3.
12. The system according to claim 11, wherein the arrangements of the calibration developing pieces on the two planes are different.
13. The system according to claim 11, wherein the scale tool further comprises a shaft, and the two planes are non-coplanar and are distributed on two sides of the shaft.
CN202210981446.1A 2022-08-15 2022-08-15 Medical image feature point identification method, identification system and readable storage medium Pending CN115631342A (en)

Priority Applications (1)

Application Number Priority Date Filing Date Title
CN202210981446.1A CN115631342A (en) 2022-08-15 2022-08-15 Medical image feature point identification method, identification system and readable storage medium

Applications Claiming Priority (1)

Application Number Priority Date Filing Date Title
CN202210981446.1A CN115631342A (en) 2022-08-15 2022-08-15 Medical image feature point identification method, identification system and readable storage medium

Publications (1)

Publication Number Publication Date
CN115631342A true CN115631342A (en) 2023-01-20

Family

ID=84902368

Family Applications (1)

Application Number Title Priority Date Filing Date
CN202210981446.1A Pending CN115631342A (en) 2022-08-15 2022-08-15 Medical image feature point identification method, identification system and readable storage medium

Country Status (1)

Country Link
CN (1) CN115631342A (en)

Cited By (2)

* Cited by examiner, † Cited by third party
Publication number Priority date Publication date Assignee Title
CN116350352A (en) * 2023-02-23 2023-06-30 北京纳通医用机器人科技有限公司 Surgical robot marker bit identification positioning method, device and equipment
CN116350352B (en) * 2023-02-23 2023-10-20 北京纳通医用机器人科技有限公司 Surgical robot marker bit identification positioning method, device and equipment

Similar Documents

Publication Publication Date Title
US6359960B1 (en) Method for identifying and locating markers in a 3D volume data set
EP1278458B1 (en) Fluoroscopic tracking and visualization system
US11224763B2 (en) Tracking device for radiation treatment, position detection device, and method for tracking moving body
US9314214B2 (en) Calibration of radiographic images
CN107708568B (en) Registered fiducial markers, systems, and methods
US8988505B2 (en) Imaging system using markers
US6856827B2 (en) Fluoroscopic tracking and visualization system
US6856826B2 (en) Fluoroscopic tracking and visualization system
US20060245628A1 (en) Systems and methods for determining geometric parameters of imaging devices
Livyatan et al. Robust automatic C-arm calibration for fluoroscopy-based navigation: a practical approach
Dang et al. Robust methods for automatic image‐to‐world registration in cone‐beam CT interventional guidance
CN109363770A (en) A kind of surgical navigational robot index point automatic identification localization method
CN115631342A (en) Medical image feature point identification method, identification system and readable storage medium
CN211178436U (en) System for magnetometer spatial localization
EP1923756B1 (en) Method and system for region of interest calibration parameter adjustment of tracking systems
Schaller et al. Time-of-flight sensor for patient positioning
EP4067817A1 (en) System and method for spatial positioning of magnetometers
CN116883471B (en) Line structured light contact-point-free cloud registration method for chest and abdomen percutaneous puncture
CN114404041B (en) C-arm imaging parameter calibration system and method
EP4202831A1 (en) Image-based motion detection method
CN113066126A (en) Positioning method for puncture needle point
CN116849806A (en) Medical image mark point identification method and surgical robot system
CN113100967B (en) Wearable surgical tool positioning device and positioning method
Wei et al. Determining the position of a patient reference from C-Arm views for image guided navigation
JP2006254934A (en) Irradiation field judgement device, irradiation field judgement method and its program

Legal Events

Date Code Title Description
PB01 Publication
SE01 Entry into force of request for substantive examination