CN111461049A - Space registration identification method, device, equipment and computer readable storage medium

Space registration identification method, device, equipment and computer readable storage medium

Info

Publication number
CN111461049A
CN111461049A
Authority
CN
China
Prior art keywords
registration
type
image
marking
marking device
Prior art date
Legal status
Granted
Application number
CN202010284146.9A
Other languages
Chinese (zh)
Other versions
CN111461049B (en)
Inventor
吴东 (Wu Dong)
Current Assignee
Wuhan United Imaging Zhirong Medical Technology Co Ltd
Original Assignee
Wuhan United Imaging Zhirong Medical Technology Co Ltd
Priority date
Filing date
Publication date
Application filed by Wuhan United Imaging Zhirong Medical Technology Co Ltd
Priority to CN202010284146.9A
Publication of CN111461049A
Application granted
Publication of CN111461049B
Legal status: Active
Anticipated expiration

Classifications

    • G06V 20/10 — Scenes; scene-specific elements; terrestrial scenes
    • G06F 18/241 — Pattern recognition; classification techniques relating to the classification model, e.g. parametric or non-parametric approaches
    • G06N 20/00 — Machine learning
    • G06T 19/003 — Manipulating 3D models or images for computer graphics; navigation within 3D models or images
    • G06T 7/10 — Image analysis; segmentation; edge detection
    • G06T 7/33 — Determination of transform parameters for the alignment of images (image registration) using feature-based methods
    • G06T 2207/30004 — Indexing scheme for image analysis or enhancement; biomedical image processing
    • G06V 2201/033 — Recognition of patterns in medical or anatomical images of skeletal patterns
    • G06V 2201/034 — Recognition of patterns in medical or anatomical images of medical instruments

Abstract

The application relates to a spatial registration identification method, apparatus, device, and computer-readable storage medium. The spatial registration identification method comprises: acquiring a scanned image of a marking device fixed on the site to be operated on; judging the mark type of the marking device from the scanned image; and selecting the spatial registration method corresponding to that mark type and completing spatial registration with it. The method thus selects a different spatial registration method for each mark type of marking device. Because the selected spatial registration method matches the characteristics of the marking device, such as its material, shape, and installation mode, the accuracy of coordinate system registration is improved.

Description

Space registration identification method, device, equipment and computer readable storage medium
Technical Field
The present application relates to the field of surgical technologies, and in particular, to a method, an apparatus, a device, and a computer-readable storage medium for spatial registration identification.
Background
When a surgical robot navigation and positioning system assists a doctor in performing an operation, spatial registration of the site to be operated on must be performed first, to register the coordinate systems involved. The user is currently required to select the registration mode manually during spatial registration.
Common registration modes include bone nail contact registration, optical registration, bone nail non-contact registration, and frame registration. How to improve the accuracy of coordinate system registration is an urgent problem to be solved.
Disclosure of Invention
Based on this, it is necessary to provide a spatial registration identification method, apparatus, device, and computer-readable storage medium that address the problem of improving the accuracy of coordinate system registration.
A spatial registration identification method comprises the following steps:
S100, acquiring a scanned image of the marking device fixed on the site to be operated on.
S200, judging the mark type of the marking device according to the scanned image.
S300, selecting the spatial registration method corresponding to the mark type, and completing spatial registration using that method.
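Step S300 amounts to a lookup from mark type to registration routine. Below is a minimal sketch in Python; all type names and handler functions are hypothetical, since the patent does not prescribe an implementation:

```python
# Sketch of step S300: map a recognized mark type to its matching spatial
# registration method. Handler bodies are placeholders that only report
# which registration method was selected.

def register_bone_nail(image):      # bone nail contact registration
    return "bone_nail_contact"

def register_frame(image):          # frame registration
    return "frame"

def register_optical_bead(image):   # optical registration
    return "optical"

REGISTRATION_METHODS = {
    "bone_nail": register_bone_nail,
    "frame": register_frame,
    "optical_bead": register_optical_bead,
}

def select_registration(mark_type, image):
    """Select and run the registration method for the given mark type."""
    try:
        return REGISTRATION_METHODS[mark_type](image)
    except KeyError:
        raise ValueError(f"no registration method for mark type {mark_type!r}")
```

A table-driven dispatch like this keeps the mark-type-to-method correspondence in one place, which matches the patent's premise that each mark type has exactly one matching registration method.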
In one embodiment, after S100, the spatial registration identification method further includes:
S110, segmenting from the scanned image a scanned image containing only the marking device.
In one embodiment, S300 includes:
S310, judging whether the mark type belongs to the first mark type.
S320, if the mark type belongs to the first mark type, selecting a first registration method to complete spatial registration.
In one embodiment, the method further includes, after S320, the step of:
S330, if the mark type does not belong to the first mark type, judging whether the mark type belongs to the second mark type.
S340, if the mark type belongs to the second mark type, selecting a second registration method to complete spatial registration.
S350, traversing through to the Nth mark type according to steps S310–S340, and completing spatial registration.
In one embodiment, S110 includes:
S111, obtaining a plurality of images of the site to be operated on, together with a plurality of training images, in one-to-one correspondence with them, of the marking devices fixed on that site.
S112, respectively inputting the images of the site to be operated on and the training images into a machine learning algorithm for training, to obtain a segmentation network model.
S113, inputting the scanned image into the segmentation network model to obtain a scanned image containing only the marking device.
In one embodiment, the marking device comprises a first marking device, and S200 comprises:
S201, acquiring a plurality of scanned images containing the first marking device, together with the first type label corresponding to the first marking device.
S202, inputting the plurality of scanned images containing the first marking device, together with the first type label, into a machine learning algorithm for training, to obtain a classification model.
S203, inputting the scanned image into the classification model to obtain the mark type of the scanned image.
In one embodiment, the marking devices include the first to the Nth marking devices, and after S202 and before S203 the method further includes:
S2021, obtaining, according to step S201, a plurality of scanned images containing the second to the Nth marking devices, together with the second to the Nth type labels in one-to-one correspondence with those marking devices.
S2022, inputting, according to step S202, the scanned images of the second to the Nth marking devices and the second to the Nth type labels into the machine learning algorithm for training, to obtain the classification model.
In one embodiment, when the mark type is the optical bead, the spatial registration identification method further includes:
S3010, judging whether the HU value of the optical bead base is greater than a set value; if so, selecting the bone nail non-contact method to complete spatial registration, and if not, selecting the optical registration method to complete spatial registration.
In one embodiment, before S3010, the spatial registration identification method further includes:
S3001, segmenting the optical bead image and the optical bead base image from the scanned image.
A spatial registration identification system comprises an acquisition module, a judgment module, and a registration module. The acquisition module acquires a scanned image of the marking device fixed on the site to be operated on. The judgment module judges the mark type of the marking device according to the scanned image. The registration module selects the spatial registration method corresponding to the mark type and completes spatial registration using it.
In one embodiment, the judgment module comprises a segmentation sub-module and a judging sub-module. The segmentation sub-module segments from the scanned image a scanned image containing only the marking device. The judging sub-module judges the mark type of the marking device according to that segmented image.
In one embodiment, the segmentation sub-module comprises a segmentation network model, and inputs the scanned image into that model to obtain a scanned image containing only the marking device.
In one embodiment, the judging sub-module comprises a classification model, and inputs the scanned image into that model to obtain the mark type of the marking device in the scanned image.
A device comprises one or more processors and a memory storing one or more programs. When the one or more programs are executed by the one or more processors, they cause the one or more processors to implement the spatial registration identification method of any one of the embodiments above.
A computer-readable storage medium, on which a computer program is stored which, when executed by a processor, implements the spatial registration identification method according to any one of the above embodiments.
The spatial registration identification method provided by the embodiments of the application comprises: acquiring a scanned image of a marking device fixed on the site to be operated on; judging the mark type of the marking device from the scanned image; and selecting the spatial registration method corresponding to that mark type and completing spatial registration with it. The method thus selects a different spatial registration method for each mark type of marking device. Because the selected spatial registration method matches the characteristics of the marking device, such as its material, shape, and installation mode, the accuracy of coordinate system registration is improved.
Drawings
To illustrate the technical solutions in the embodiments of the present application or in conventional technologies more clearly, the drawings used in the descriptions of the embodiments or the conventional technologies are briefly introduced below. Obviously, the drawings described below show only some embodiments of the present application; those skilled in the art can obtain other drawings from them without creative effort.
Fig. 1 is a schematic flowchart of the spatial registration identification method provided in an embodiment of the present application;
Fig. 2 is a schematic flowchart of the spatial registration identification method provided in another embodiment of the present application.
Detailed Description
In order to make the aforementioned objects, features, and advantages of the present application more comprehensible, embodiments of the present application are described in detail below with reference to the accompanying drawings. In the following description, numerous specific details are set forth to provide a thorough understanding of the present application. The application can, however, be embodied in many forms other than those described herein, and those skilled in the art can make similar modifications without departing from its spirit; it is therefore not limited to the embodiments disclosed below.
Ordinal terms such as "first" and "second" are used herein only to distinguish the objects described and carry no sequential or technical meaning. The terms "connected" and "coupled", unless otherwise indicated, include both direct and indirect connections (couplings). Terms indicating orientation or positional relationship, such as "upper", "lower", "front", "rear", "left", "right", "vertical", "horizontal", "top", "bottom", "inner", "outer", "clockwise", and "counterclockwise", are based on the orientations shown in the drawings. They are used only for convenience and simplicity of description, and do not indicate or imply that the devices or elements referred to must have a particular orientation or be constructed and operated in a particular orientation; they are therefore not to be construed as limiting the present application.
In this application, unless expressly stated or limited otherwise, a first feature being "on" or "under" a second feature may mean that the two features are in direct contact, or in indirect contact through an intervening medium. A first feature "on", "over", or "above" a second feature may be directly or obliquely above it, or may simply be at a higher level than it. A first feature "under", "below", or "beneath" a second feature may be directly or obliquely below it, or may simply be at a lower level than it.
When a surgical robot navigation and positioning system assists a doctor in performing an operation, spatial registration of the site to be operated on must be performed first, to register the coordinate systems involved. Common registration modes include bone nail contact registration, frame registration, optical registration, and bone nail non-contact registration. In all of these modes, a marking device is placed at the site to be operated on for medical image scanning, the coordinates of the marking points on the marking device are extracted from the image, and registration between coordinate systems is then completed through those marking points.
Different registration modes use marking devices of different materials, shapes, and installation modes; they produce different scanned images and correspond to different spatial registration methods.
After the user selects a registration mode, the surgical robot navigation and positioning system guides the user, through mode-specific graphics and text, to mount the corresponding instrument end on the mechanical arm and perform spatial registration.
Bone nail contact registration mode: the marking devices are bone nails. Several bone nails are first implanted and fixed in the bone of the site to be operated on; image scanning is then performed, and the sphere-center coordinates of the bone nail grooves are extracted. Finally, coordinate system registration is carried out through these sphere-center coordinates. This registration mode corresponds to the bone nail contact registration method.
Frame registration mode: the marking device is a frame tool carrying several marker bodies. The frame tool is fixed on the site to be operated on; image scanning is then performed, and the sphere-center coordinates of the marker bodies are extracted. Finally, coordinate system registration is carried out through these sphere-center coordinates. This registration mode corresponds to the frame registration method.
Optical registration mode: the marking devices are optical beads. Several optical beads fixed on plastic bases are adhered to the skin surface of the site to be operated on; image scanning is then performed, and the sphere-center coordinates of the optical beads are extracted. Finally, coordinate system registration is carried out through these sphere-center coordinates. This registration mode corresponds to the optical registration method.
Bone nail non-contact mode: the marking devices are bone nails with optical beads. Several bone nails are first implanted and fixed in the bone of the site to be operated on; optical beads are then fixed, one per nail, on the end of each bone nail facing away from the body; image scanning is then performed, and the sphere-center coordinates of the optical beads are extracted. Finally, coordinate system registration is carried out through these sphere-center coordinates. This registration mode corresponds to the bone nail non-contact method.
Here, coordinate system registration refers to registering the coordinate system of the site to be operated on with the coordinate system of the mechanical arm.
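All four modes end by registering the two coordinate systems through paired sphere-center coordinates, but the patent does not name a specific algorithm for this step. A standard choice for paired-point rigid registration is the SVD-based Kabsch solution, sketched below as an assumption rather than as the patent's actual method:

```python
import numpy as np

def rigid_registration(src, dst):
    """Least-squares rigid transform (R, t) mapping src points onto dst.

    src, dst: (N, 3) arrays of corresponding marker sphere-center
    coordinates, e.g. in the image and mechanical-arm coordinate systems.
    Classic Kabsch/SVD solution; this is one standard technique, not
    necessarily the one used in the patent.
    """
    src = np.asarray(src, float)
    dst = np.asarray(dst, float)
    c_src, c_dst = src.mean(axis=0), dst.mean(axis=0)
    # 3x3 cross-covariance of the centered point sets
    H = (src - c_src).T @ (dst - c_dst)
    U, _, Vt = np.linalg.svd(H)
    # guard against a reflection solution
    d = np.sign(np.linalg.det(Vt.T @ U.T))
    R = Vt.T @ np.diag([1.0, 1.0, d]) @ U.T
    t = c_dst - R @ c_src
    return R, t
```

Given at least three non-collinear marker centers in both coordinate systems, this recovers the rotation and translation exactly in the noise-free case and in the least-squares sense otherwise.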
Referring to Fig. 1, an embodiment of the present application provides a spatial registration identification method, comprising:
S100, acquiring a scanned image of the marking device fixed on the site to be operated on.
S200, judging the mark type of the marking device according to the scanned image.
S300, selecting the spatial registration method corresponding to the mark type, and completing spatial registration using that method.
The spatial registration method comprises a sphere-center coordinate extraction method and a coordinate system registration method. Registration of the coordinate systems can only be completed when the mark type of the marking device matches the spatial registration method: different mark types require different spatial registration methods.
The spatial registration identification method provided by this application selects a different spatial registration method for each mark type of marking device, so that the method matched to the mark type is chosen and coordinate system registration is completed accurately.
The marking device comprises one or more of a bone nail, a frame, or an optical bead, or a combination thereof. The marking device may also be any other device that serves a marking function.
Registration modes matched with the marking device include, but are not limited to, the bone nail contact registration mode, the frame registration mode, the optical registration mode, and the bone nail non-contact registration mode. Other registration modes matched with the marking device are also possible.
The spatial registration methods include, but are not limited to, the bone nail contact registration method, the frame registration method, the optical registration method, and the bone nail non-contact registration method.
Referring to Fig. 2, in one embodiment, after S100 the spatial registration identification method further includes:
S110, segmenting from the scanned image a scanned image containing only the marking device.
The spatial registration identification method first performs image segmentation to obtain a scanned image containing only the marking device, which effectively avoids interference from the image of the site to be operated on. The segmented image has far fewer pixels than the original scanned image, so less data must be processed when judging the mark type. This improves the efficiency of judging the mark type and, in turn, the efficiency of selecting the registration method.
In one embodiment, when the spatial registration identification method only needs to determine whether the mark type of the marking device in the scanned image belongs to a first mark type, S300 comprises:
S310, judging whether the mark type belongs to the first mark type.
S320, if the mark type belongs to the first mark type, selecting a first registration method to complete spatial registration.
In one embodiment, the first mark type is one of a bone nail, a frame, or an optical bead. The first mark type may also be a combination of several of these, or another mark type compatible with the marking device.
In one embodiment, there are a plurality of mark types, the first to the Nth mark type, where N is an integer of 2 or more. After S320, the spatial registration identification method further includes:
S330, if the mark type does not belong to the first mark type, judging whether the mark type belongs to the second mark type.
S340, if the mark type belongs to the second mark type, selecting a second registration method to complete spatial registration.
S350, traversing the third to the Nth mark types according to steps S310–S340, and completing spatial registration.
In one embodiment, the mark types include, but are not limited to, bone nails, frames, and optical beads.
Steps S310 to S350 find the mark type of the marking device in the scanned image, and spatial registration is completed according to that mark type.
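The S310–S350 traversal is a one-by-one elimination over candidate mark types. A minimal sketch, with the predicate list purely illustrative:

```python
# Sketch of the S310-S350 traversal: test candidate mark types in order
# and stop at the first match. The checks and scan representation are
# hypothetical stand-ins for the patent's per-type judgments.

def classify_by_elimination(scan, checks):
    """checks: ordered list of (mark_type, predicate) pairs."""
    for mark_type, predicate in checks:
        if predicate(scan):
            return mark_type
    return None  # no known mark type matched

# Illustrative predicate list; here a scan is modeled as a dict of flags.
checks = [
    ("bone_nail",    lambda s: s.get("has_nail", False)),
    ("frame",        lambda s: s.get("has_frame", False)),
    ("optical_bead", lambda s: s.get("has_bead", False)),
]
```

The ordering of `checks` fixes the first to Nth mark types; the loop returns as soon as a type matches, mirroring S320 and S340.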
In one embodiment, S110 includes:
S111, obtaining a plurality of images of the site to be operated on, together with a plurality of training images, in one-to-one correspondence with them, of the marking devices fixed on that site.
The images of the site to be operated on and the training images are the training samples, with the two sets in one-to-one correspondence.
S112, respectively inputting the images of the site to be operated on and the training images into a deep learning algorithm for training, to obtain a segmentation network model. The deep learning algorithms include the U-Net algorithm.
S113, inputting the scanned image into the segmentation network model to obtain a scanned image containing only the marking device.
The segmentation network model segments from the scanned image a scanned image containing only the marking device. The segmented image has fewer pixels than the original, reducing the amount of data processed when judging the mark type; this improves the efficiency of judging the mark type and, in turn, the efficiency of selecting the registration method.
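One way the segmented image ends up with fewer pixels is by cropping the scan to the bounding box of the segmentation mask. The sketch below assumes the mask has already been produced by the trained segmentation network (e.g. a U-Net); the cropping step itself is an illustration, not a detail taken from the patent:

```python
import numpy as np

def crop_to_marker(scan, marker_mask):
    """Crop a 2D scan to the bounding box of the segmented marking device.

    marker_mask: boolean array from the segmentation network (assumed
    given here). The crop keeps only the marker region, so downstream
    classification processes far fewer pixels.
    """
    ys, xs = np.nonzero(marker_mask)
    if ys.size == 0:
        raise ValueError("no marking device found in mask")
    return scan[ys.min():ys.max() + 1, xs.min():xs.max() + 1]
```

For a 3D CT volume the same idea applies per axis; only the 2D case is shown for brevity.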
In one embodiment, S200 includes:
S201, acquiring a plurality of scanned images containing the first marking device, together with the first type label corresponding to the first marking device.
S202, inputting the plurality of scanned images containing the first marking device, together with the first type label, into a machine learning algorithm for training, to obtain a classification model. The machine learning algorithms include deep learning algorithms, as well as the SVM (Support Vector Machine) and Random Forest algorithms.
S203, inputting the scanned image into the classification model to obtain the mark type of the scanned image.
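The patent leaves the choice of classifier open (deep learning, SVM, or Random Forest). As a dependency-free stand-in for the S201–S203 pipeline, the sketch below trains a nearest-centroid classifier on precomputed image features; the feature vectors and class names are hypothetical:

```python
import numpy as np

class NearestCentroidClassifier:
    """Stand-in for the patent's classification model (S201-S203).

    The patent suggests SVM or Random Forest; a nearest-centroid
    classifier over simple image features keeps this sketch minimal.
    """

    def fit(self, features, labels):
        # one centroid per class, computed from the labeled training set
        self.labels_ = sorted(set(labels))
        feats = np.asarray(features, float)
        labs = np.asarray(labels)
        self.centroids_ = np.stack(
            [feats[labs == lab].mean(axis=0) for lab in self.labels_])
        return self

    def predict(self, feature):
        # assign the class whose centroid is nearest in feature space
        d = np.linalg.norm(self.centroids_ - np.asarray(feature, float),
                           axis=1)
        return self.labels_[int(np.argmin(d))]
```

In practice the features would be derived from the segmented scanned images of S110, and an SVM or Random Forest would replace this classifier with the same fit/predict interface.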
In one embodiment, the marking devices include the first to the Nth marking devices, and after S202 and before S203 the method further includes:
S2021, obtaining, according to step S201, a plurality of scanned images containing the second to the Nth marking devices, together with the second to the Nth type labels in one-to-one correspondence with those marking devices.
S2022, inputting, according to step S202, the scanned images of the second to the Nth marking devices and the second to the Nth type labels into the machine learning algorithm for training, to obtain the classification model.
Images that are not easily distinguished by the classification model can instead be distinguished by screening HU (Hounsfield unit) values.
Images whose mark type is an optical bead are among those a classification model does not easily distinguish. In images formed in the bone nail non-contact mode, the visibility of the bone nail varies with how deeply it is implanted in the bone: when a nail is implanted deeply, carries an optical bead on its surface, and exposes only a small portion, it cannot be clearly identified in the scanned image. An image containing optical beads may therefore also contain bone nails.
In one embodiment, when the mark type is the optical bead, the spatial registration identification method further includes:
S3010, judging whether the HU value of the optical bead base is greater than a set value; if so, selecting the bone nail non-contact method to complete spatial registration, and if not, selecting the optical registration method to complete spatial registration.
In one embodiment, the set value is 1000.
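The S3010 decision can then be sketched as follows. Interpreting "the HU value of the base" as the maximum HU over the segmented base pixels is an assumption; the patent speaks of traversing the base pixels and comparing against the set value:

```python
def select_bead_registration(base_hu_values, threshold=1000):
    """S3010: choose between bone nail non-contact and optical registration.

    base_hu_values: HU values of the optical-bead base pixels segmented
    in S3001. The patent's example set value is 1000; a metal bone nail
    beneath the bead drives the base HU above it, a plastic base does not.
    Taking the maximum over the traversed pixels is an assumption.
    """
    if max(base_hu_values) > threshold:
        return "bone_nail_non_contact"
    return "optical"
```
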
In one embodiment, before the step of determining whether the HU value of the optical sphere base is greater than a set value, the method further includes:
S3001, segmenting the optical bead image and the optical bead base image from the scanned image.
In a specific embodiment, the marking devices include bone nails, frames, optical beads, and optical beads with bone nails. The "bone nail" mark type covers the "bone nail" marking device; the "frame" mark type covers the "frame" marking device; and the "optical bead" mark type covers both the "optical bead" marking device and the "optical bead with bone nail" marking device.
In a specific embodiment, the S200 further includes:
Respectively obtaining a plurality of bone nail images, frame images, and optical bead images, together with the category labels in one-to-one correspondence with them.
Inputting the bone nail images, the frame images, the optical bead images, and their corresponding category labels into a machine learning algorithm for training, to obtain a classification model.
And inputting the scanned image into the classification model to obtain the mark type of the scanned image.
In a specific embodiment, the S300 further includes:
and judging whether the mark type belongs to the bone nail.
And if the mark type belongs to the bone nail, selecting a bone nail contact type registration method to complete space registration.
And if the mark type does not belong to the bone nail, judging whether the mark type belongs to the frame.
And if the mark type belongs to the frame, selecting a frame registration method to finish space registration.
And if the mark type does not belong to the frame, segmenting the optical ball image and the base image from the scanned image by using an image algorithm.
And traversing the HU value of the base pixel of the base image, and judging whether the HU value is larger than a set value.
If the HU value is larger than the set value, the mark type is an optical ball with a bone nail, and a bone nail non-contact method is selected to complete space registration.
And if the HU value is smaller than the set value, selecting an optical registration method to finish spatial registration.
This spatial registration identification method first distinguishes bone nail images, frame images, and optical bead images, and then distinguishes the optical registration mode from the bone nail non-contact mode. It improves the accuracy with which registration modes are distinguished, and thus the accuracy with which the spatial registration method is selected.
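The full decision chain of this specific embodiment can be sketched end to end. Here `classify` and `base_hu` are hypothetical callables standing in for the classification model and for the HU measurement of the segmented bead base described above:

```python
# End-to-end sketch of the specific embodiment, by one-by-one elimination:
# the classifier result is consulted first, then the HU check separates
# the two optical-bead cases. classify(scan) returns a mark type string;
# base_hu(scan) returns the HU value measured on the segmented bead base.

def spatial_registration_method(scan, classify, base_hu, hu_threshold=1000):
    mark_type = classify(scan)
    if mark_type == "bone_nail":
        return "bone_nail_contact"
    if mark_type == "frame":
        return "frame"
    # remaining case: optical beads, with or without bone nails underneath
    if base_hu(scan) > hu_threshold:
        return "bone_nail_non_contact"
    return "optical"
```

This mirrors the flow above: bone nail and frame are resolved by classification alone, while the two bead-based modes are resolved by the HU screen.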
In one embodiment, step S300 is implemented in computer code, and the selection of the registration method in S300 may be implemented by one-by-one elimination.
The embodiment of the application further provides a spatial registration identification apparatus comprising an acquisition module, a judgment module, and a registration module. The acquisition module acquires a scanned image of the marking device fixed on the site to be operated on. The judgment module judges the mark type of the marking device according to the scanned image. The registration module selects the spatial registration method corresponding to the mark type and completes spatial registration using it.
The space registration identification apparatus provided in the embodiment of the present application selects different space registration methods according to different types of marks of the marking device. The space registration method is matched with the characteristics of the marking device such as material shape, installation mode, shape and the like, and the accuracy of coordinate system registration is improved.
In one embodiment, the judging module includes a segmentation sub-module and a judging sub-module. The segmentation sub-module is used for segmenting a scanned image containing only the marking device from the scanned image. The judging sub-module is used for judging the mark type of the marking device according to the scanned image containing only the marking device.
In one embodiment, the segmentation sub-module comprises a segmentation network model. The segmentation sub-module is used for inputting the scanned image into the segmentation network model to obtain the scanned image containing only the marking device.
Because the scanned image containing only the marking device is obtained by image segmentation through the segmentation sub-module, interference from the image of the part to be operated is effectively avoided. The scanned image containing only the marking device has fewer pixels than the full scanned image, so the amount of data processed when the judging module judges the mark type is reduced. The smaller processing load improves the efficiency of judging the mark type and, in turn, the efficiency with which the spatial registration identification apparatus selects the registration method.
In one embodiment, the segmentation network model is formed as follows:
First, a plurality of images of the part to be operated are acquired, together with a plurality of training images of the marking devices fixed on the part to be operated.
The images of the part to be operated and the training images are the training samples, and they correspond to one another one to one.
Then, the images of the part to be operated and the training images are input into a deep learning model algorithm for training, yielding the segmentation network model. The deep learning model algorithm comprises a U-Net algorithm.
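The one-to-one pairing of training samples described above can be sketched as follows. This is a minimal sketch only: the function name and the string stand-ins for images are illustrative, and the actual U-Net training step is not shown.

```python
def build_training_samples(site_images, marker_masks):
    """Pair each image of the part to be operated with the corresponding
    training image of the marking device fixed on it (one-to-one), forming
    the supervised training set for the U-Net-style segmentation model."""
    if len(site_images) != len(marker_masks):
        raise ValueError("images and training images must correspond one to one")
    return list(zip(site_images, marker_masks))
```

The explicit length check enforces the one-to-one correspondence the embodiment requires before any training begins.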
In one embodiment, the judging module includes a classification model, and the judging module is configured to input the scanned image into the classification model to obtain the mark type of the marking device in the scanned image.
In one embodiment, the marking device comprises a first marking device, and the classification model is formed as follows:
A plurality of scanned images containing the first marking device, and a first type label corresponding to the first marking device, are acquired respectively. The plurality of scanned images containing the first marking device and the first type label are then input into a machine learning algorithm for training, yielding the classification model.
In a specific embodiment, the plurality of marking devices includes the first marking device to the Nth marking device, and the classification model is formed as follows:
A plurality of scanned images containing the second marking device to the Nth marking device, and second type labels to Nth type labels in one-to-one correspondence with the second marking device to the Nth marking device, are acquired respectively.
The plurality of scanned images of the second marking device to the Nth marking device and the second type labels to the Nth type labels are then input into a machine learning algorithm for training, yielding the classification model.
In a specific embodiment, the classification model is formed by the steps of:
First, a plurality of bone nail images, a plurality of frame images, a plurality of optical ball images, and type labels in one-to-one correspondence with them are acquired respectively. The bone nail images, the frame images, the optical ball images and the corresponding type labels are then input into a machine learning algorithm for training, yielding the classification model.
The bone nail image is a scanned image containing only bone nails, without an image of the part to be operated. The frame image is a scanned image containing only the frame, without an image of the part to be operated. The optical ball image is an image containing an optical ball, but it may also include an image of a bone nail.
For an image formed by bone nails in the non-contact mode, the bone nails differ in recognizability because they are implanted into the bone to different depths. When a bone nail is implanted deep into the bone, an optical ball is mounted on its surface, and only a small portion of the nail is exposed, the bone nail cannot be clearly identified in the scanned image.
The type label corresponding to the bone nail image is the bone nail. The type label corresponding to the frame image is the frame. The type label corresponding to the optical ball image is the optical ball.
The machine learning algorithm includes an SVM (Support Vector Machine) algorithm, a Random Forest algorithm, or the like.
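The training of the classification model can be sketched with the SVM named above, here via scikit-learn's `SVC` (assuming scikit-learn is available). The `(mean_HU, size)` feature vectors and numeric values are hypothetical stand-ins for features extracted from the segmented scanned images; the application does not specify a feature representation.

```python
from sklearn.svm import SVC

# Hypothetical (mean_HU, size) features for the three marker classes.
features = [
    (2800.0, 0.4), (2900.0, 0.5),   # bone nail images
    (1500.0, 9.0), (1400.0, 8.5),   # frame images
    (60.0, 1.2),   (80.0, 1.1),     # optical ball images
]
labels = ["bone_nail", "bone_nail", "frame", "frame",
          "optical_ball", "optical_ball"]

# Train the SVM classifier on the labeled samples (steps S201-S202).
classifier = SVC(kernel="linear")
classifier.fit(features, labels)

def predict_mark_type(feature_vector):
    """Step S203: feed features of a new scanned image to the trained model."""
    return classifier.predict([feature_vector])[0]
```

In practice the feature extraction, cross-validation and the alternative Random Forest model would replace these toy inputs, but the fit/predict structure is the same.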
An embodiment of the present application provides an apparatus that includes one or more processors and a memory. The memory is used to store one or more programs. When the one or more programs are executed by the one or more processors, the one or more processors are caused to implement the spatial registration identification method according to any one of the embodiments above.
Through the processor and the memory, the apparatus provided in the embodiment of the present application automatically completes the selection of the spatial registration method according to the scanned image, and further automatically completes the spatial registration. The apparatus thus reduces the probability of error inherent in manual selection.
The embodiment of the present application provides a computer-readable storage medium on which a computer program is stored; when executed by a processor, the computer program implements the spatial registration identification method according to any one of the above embodiments. Through the stored computer program, the computer-readable storage medium automatically completes the selection of the spatial registration method according to the scanned image, and further automatically completes the spatial registration, reducing the probability of error inherent in manual selection.
The technical features of the embodiments described above may be combined arbitrarily. For the sake of brevity, not all possible combinations of these technical features are described; nevertheless, as long as a combination of technical features contains no contradiction, it should be considered to be within the scope of the present specification.
The above-described examples merely represent several embodiments of the present application and are not to be construed as limiting the scope of the claims. It should be noted that a person skilled in the art can make several variations and modifications without departing from the concept of the present application, and such variations and modifications fall within the protection scope of the present application. Therefore, the protection scope of this patent shall be subject to the appended claims.

Claims (15)

1. A spatial registration identification method is characterized by comprising the following steps:
s100, acquiring a scanning image of a marking device fixed on a to-be-operated part;
s200, judging the marking type of the marking device according to the scanning image;
s300, selecting a space registration method corresponding to the mark type according to the mark type, and finishing space registration by using the space registration method.
2. The spatial registration identification method of claim 1, further comprising, after S100:
and S110, segmenting the scanned image only comprising the marking device from the scanned image.
3. The spatial registration recognition method of claim 1, wherein S300 comprises:
s310, judging whether the mark type belongs to a first mark type;
s320, if the mark type belongs to the first mark type, selecting a first registration method to complete spatial registration.
4. The spatial registration recognition method of claim 3, wherein there are a plurality of mark types, which are respectively the first mark type through the Nth mark type, and after S320, the method further comprises:
s330, if the mark type does not belong to the first mark type, judging whether the mark type belongs to a second mark type;
s340, if the mark type belongs to the second mark type, selecting a second registration method to complete space registration;
and S350, traversing to the Nth mark type according to the steps S310-S340, and finishing spatial registration.
5. The spatial registration recognition method of claim 2, wherein S110 comprises:
s111, acquiring a plurality of images of the part to be operated and a plurality of training images of the marking devices which are in one-to-one correspondence with the plurality of images of the part to be operated and fixed on the part to be operated;
s112, respectively inputting the plurality of to-be-operated position images and the plurality of training images into a machine learning algorithm for learning training to obtain a segmentation network model;
and S113, inputting the scanned image into the segmentation network model to obtain a scanned image only containing the marking device.
6. The spatial registration recognition method of claim 1, wherein the marking device comprises a first marking device, S200 comprises:
s201, respectively acquiring a plurality of scanned images containing the first marking device and a first type label corresponding to the first marking device;
s202, inputting a plurality of scanned images containing the first marking devices and the first type labels into a machine learning algorithm for learning training to obtain a classification model;
s203, inputting the scanned image into the classification model to obtain the mark type of the scanned image.
7. The spatial registration recognition method of claim 6, wherein the plurality of marking devices includes the first to nth marking devices, and after S202 and before S203, the method further comprises:
s2021, obtaining a plurality of the scanned images including a second marker device to the nth marker device and second type labels to nth type labels corresponding to the second marker device to the nth marker device one by one according to the step S201;
and S2022, inputting the plurality of scanned images of the second marking device to the Nth marking device and the labels of the second kind to the Nth kind into a machine learning algorithm for learning and training according to the step S202 to obtain the classification model.
8. The spatial registration recognition method of claim 1, wherein the mark type is an optical bead, the spatial registration recognition method further comprising:
s3010, judging whether the HU value of the optical small ball base is larger than a set value or not, if so, selecting a bone nail non-contact method to finish space registration, and if not, selecting an optical registration method to finish space registration.
9. The spatial registration identification method of claim 8, further comprising, before S3010:
s3001, dividing the optical ball image and the optical ball base image from the scanning image.
10. A spatial registration recognition system, comprising:
the acquisition module is used for acquiring a scanning image of the marking device fixed on the part to be operated;
the judging module is used for judging the marking type of the marking device according to the scanning image;
and the registration module is used for selecting a space registration method corresponding to the mark type according to the mark type and finishing space registration by utilizing the space registration method.
11. The spatial registration recognition system of claim 10, wherein the determination module comprises:
a division submodule configured to divide a scanned image including only the marking device from the scanned image;
a judging sub-module for judging the marking type of the marking device according to the scanned image only including the marking device.
12. The spatial registration recognition system of claim 11, wherein the segmentation sub-module comprises a segmentation network model, the segmentation sub-module being configured to input the scan image into the segmentation network model to obtain a scan image containing only the marking devices.
13. The spatial registration recognition system of claim 10, wherein the judging module comprises a classification model, and the judging module is configured to input the scanned image into the classification model to obtain the mark type of the marking device of the scanned image.
14. An apparatus, characterized in that the apparatus comprises:
one or more processors;
a memory for storing one or more programs;
wherein the one or more programs, when executed by the one or more processors, cause the one or more processors to implement the spatial registration recognition method of any one of claims 1-9.
15. A computer-readable storage medium, on which a computer program is stored, which, when being executed by a processor, carries out the spatial registration recognition method according to any one of claims 1 to 9.
CN202010284146.9A 2020-04-13 2020-04-13 Space registration identification method, device, equipment and computer readable storage medium Active CN111461049B (en)

Priority Applications (1)

Application Number Priority Date Filing Date Title
CN202010284146.9A CN111461049B (en) 2020-04-13 2020-04-13 Space registration identification method, device, equipment and computer readable storage medium


Publications (2)

Publication Number Publication Date
CN111461049A true CN111461049A (en) 2020-07-28
CN111461049B CN111461049B (en) 2023-08-22

Family

ID=71685263

Family Applications (1)

Application Number Title Priority Date Filing Date
CN202010284146.9A Active CN111461049B (en) 2020-04-13 2020-04-13 Space registration identification method, device, equipment and computer readable storage medium

Country Status (1)

Country Link
CN (1) CN111461049B (en)

Citations (14)

* Cited by examiner, † Cited by third party
Publication number Priority date Publication date Assignee Title
US20080242978A1 (en) * 2007-03-29 2008-10-02 Medtronic Navigation, Inc. Method and apparatus for registering a physical space to image space
US20110069867A1 (en) * 2009-09-21 2011-03-24 Stryker Leibinger Gmbh & Co. Kg Technique for registering image data of an object
US20140049629A1 (en) * 2011-04-29 2014-02-20 The Johns Hopkins University Sytem and method for tracking and navigation
CN104146767A (en) * 2014-04-24 2014-11-19 薛青 Intraoperative navigation method and system for assisting in surgery
CN107133637A (en) * 2017-03-31 2017-09-05 精劢医疗科技南通有限公司 A kind of surgical navigational image registers equipment and method automatically
CN107440797A (en) * 2017-08-21 2017-12-08 上海霖晏医疗科技有限公司 Registration system and method for surgical navigational
CN107481272A (en) * 2016-06-08 2017-12-15 瑞地玛医学科技有限公司 A kind of radiotherapy treatment planning image registration and the method and system merged
CN107874832A (en) * 2017-11-22 2018-04-06 合肥美亚光电技术股份有限公司 Bone surgery set navigation system and method
CN108647588A (en) * 2018-04-24 2018-10-12 广州绿怡信息科技有限公司 Goods categories recognition methods, device, computer equipment and storage medium
CN109091228A (en) * 2018-07-04 2018-12-28 首都医科大学 A kind of more instrument optical positioning methods and system
CN109346159A (en) * 2018-11-13 2019-02-15 平安科技(深圳)有限公司 Case image classification method, device, computer equipment and storage medium
CN110547872A (en) * 2019-09-23 2019-12-10 重庆博仕康科技有限公司 Operation navigation registration system
CN110582247A (en) * 2017-04-28 2019-12-17 美敦力导航股份有限公司 automatic identification of instruments
CN110664484A (en) * 2019-09-27 2020-01-10 江苏工大博实医用机器人研究发展有限公司 Space registration method and system for robot and image equipment


Non-Patent Citations (1)

* Cited by examiner, † Cited by third party
Title
Lan Kun; Zhang Yan; Shen Xukun: "Design and Implementation of a Vision-Based Surgical Navigation System", no. 09, pages 2025 - 2042 *

Also Published As

Publication number Publication date
CN111461049B (en) 2023-08-22

Similar Documents

Publication Publication Date Title
CN110390269B (en) PDF document table extraction method, device, equipment and computer readable storage medium
CN111178250B (en) Object identification positioning method and device and terminal equipment
US7916919B2 (en) System and method for segmenting chambers of a heart in a three dimensional image
US20180336683A1 (en) Multi-Label Semantic Boundary Detection System
US8780110B2 (en) Computer vision CAD model
US8600165B2 (en) Optical mark classification system and method
US8160322B2 (en) Joint detection and localization of multiple anatomical landmarks through learning
US8988200B2 (en) Printed label-to-RFID tag data translation apparatus and method
CN111583188A (en) Operation navigation mark point positioning method, storage medium and computer equipment
CN107481276B (en) Automatic identification method for marker point sequence in three-dimensional medical image
US20110188706A1 (en) Redundant Spatial Ensemble For Computer-Aided Detection and Image Understanding
CN110556179A (en) Method and system for marking whole spine image by using deep neural network
CN112307786B (en) Batch positioning and identifying method for multiple irregular two-dimensional codes
CN104268552B (en) One kind is based on the polygonal fine classification sorting technique of part
CN111190595A (en) Method, device, medium and electronic equipment for automatically generating interface code based on interface design drawing
CN104641381B (en) Method and system for detecting 2D barcode in a circular label
CN114549603B (en) Method, system, equipment and medium for converting labeling coordinate of cytopathology image
CN115752683A (en) Weight estimation method, system and terminal based on depth camera
CN111461049B (en) Space registration identification method, device, equipment and computer readable storage medium
JP6303347B2 (en) Sample image management system and sample image management program
CN110795987B (en) Pig face recognition method and device
CN112150398B (en) Image synthesis method, device and equipment
CN110555850A (en) method and device for identifying rib region in image, electronic equipment and storage medium
CN111046848B (en) Gait monitoring method and system based on animal running platform
CN107610105B (en) Method, device and equipment for positioning ROI and machine-readable storage medium

Legal Events

Date Code Title Description
PB01 Publication
SE01 Entry into force of request for substantive examination
GR01 Patent grant