CN111461049B - Space registration identification method, device, equipment and computer readable storage medium - Google Patents


Info

Publication number
CN111461049B
Authority
CN
China
Prior art keywords
registration
marking
image
type
mark type
Prior art date
Legal status
Active
Application number
CN202010284146.9A
Other languages
Chinese (zh)
Other versions
CN111461049A (en)
Inventor
吴东 (Wu Dong)
Current Assignee
Wuhan United Imaging Zhirong Medical Technology Co Ltd
Original Assignee
Wuhan United Imaging Zhirong Medical Technology Co Ltd
Priority date
Filing date
Publication date
Application filed by Wuhan United Imaging Zhirong Medical Technology Co Ltd
Priority to CN202010284146.9A
Publication of CN111461049A
Application granted
Publication of CN111461049B
Legal status: Active


Classifications

    • G06V 20/10 Scenes; scene-specific elements; terrestrial scenes
    • G06F 18/241 Classification techniques relating to the classification model, e.g. parametric or non-parametric approaches
    • G06N 20/00 Machine learning
    • G06T 19/003 Navigation within 3D models or images
    • G06T 7/10 Segmentation; edge detection
    • G06T 7/33 Determination of transform parameters for the alignment of images (image registration) using feature-based methods
    • G06T 2207/30004 Biomedical image processing
    • G06V 2201/033 Recognition of patterns in medical or anatomical images of skeletal patterns
    • G06V 2201/034 Recognition of patterns in medical or anatomical images of medical instruments

Abstract

The application relates to a spatial registration identification method, device, equipment, and computer-readable storage medium. The spatial registration identification method acquires a scanned image of a marking device secured to the part to be operated on, determines the mark type of the marking device from the scanned image, selects the spatial registration method corresponding to that mark type, and completes spatial registration with the selected method. Different spatial registration methods are thus selected for different mark types of marking devices. Because the selected method matches the characteristics of the marking device (material, mounting manner, shape, and so on), the accuracy of coordinate-system registration is improved.

Description

Space registration identification method, device, equipment and computer readable storage medium
Technical Field
The present application relates to the field of surgical technology, and in particular to a spatial registration identification method, apparatus, device, and computer-readable storage medium.
Background
When a surgical robot navigation and positioning system assists a doctor during an operation, spatial registration of the part to be operated on must be performed first, so as to complete registration between the coordinate systems involved. At present, the user has to select the registration mode manually during spatial registration.
Common registration modes include bone-screw contact registration, optical registration, bone-screw non-contact registration, and frame registration. How to improve the accuracy of coordinate-system registration is the problem to be solved.
Disclosure of Invention
Based on this, it is necessary to provide a spatial registration identification method, apparatus, device, and computer-readable storage medium that address the problem of improving the accuracy of coordinate-system registration.
A spatial registration identification method, comprising:
S100: acquiring a scanned image of a marking device fixed to the part to be operated on.
S200: determining the mark type of the marking device from the scanned image.
S300: selecting the spatial registration method corresponding to the mark type, and completing spatial registration using that method.
In one embodiment, after S100, the spatial registration identification method further includes:
S110: segmenting, from the scanned image, an image containing only the marking device.
In one embodiment, S300 includes:
S310: judging whether the mark type belongs to a first mark type.
S320: if the mark type belongs to the first mark type, selecting a first registration method to complete spatial registration.
In one embodiment, there are a plurality of mark types, namely the first to Nth mark types, and after S320 the method further includes:
S330: if the mark type does not belong to the first mark type, judging whether it belongs to a second mark type.
S340: if the mark type belongs to the second mark type, selecting a second registration method to complete spatial registration.
S350: traversing up to the Nth mark type according to steps S310 to S340, and completing spatial registration.
In one embodiment, S110 includes:
S111: acquiring a plurality of images of the part to be operated on, and a plurality of training images, in one-to-one correspondence with those images, of marking devices fixed to the part to be operated on.
S112: inputting the images of the part to be operated on and the training images into a machine learning algorithm for training, to obtain a segmentation network model.
S113: inputting the scanned image into the segmentation network model to obtain an image containing only the marking device.
In one embodiment, the marking device comprises a first marking device, and S200 comprises:
S201: acquiring a plurality of scanned images containing the first marking device, and a first type label corresponding to the first marking device.
S202: inputting the scanned images containing the first marking device and the first type label into a machine learning algorithm for training, to obtain a classification model.
S203: inputting the scanned image into the classification model to obtain the mark type of the scanned image.
In one embodiment, there are a plurality of marking devices, namely the first to Nth marking devices; after S202 and before S203, the method further includes:
S2021: acquiring, according to step S201, a plurality of scanned images containing the second to Nth marking devices, and the second to Nth type labels corresponding one-to-one to those devices.
S2022: inputting, according to step S202, the scanned images of the second to Nth marking devices and the second to Nth type labels into the machine learning algorithm for training, to obtain the classification model.
In one embodiment, the mark type is an optical ball, and the spatial registration identification method further comprises:
S3010: judging whether the HU value of the optical-ball base is greater than a set value; if so, selecting the bone-screw non-contact method to complete spatial registration, and if not, selecting the optical registration method.
In one embodiment, before S3010, the spatial registration identification method further includes:
S3001: segmenting the scanned image into an optical-ball image and an optical-ball base image.
A spatial registration identification system comprises an acquisition module, a judgment module, and a registration module. The acquisition module acquires a scanned image of the marking device fixed to the part to be operated on. The judgment module determines the mark type of the marking device from the scanned image. The registration module selects the spatial registration method corresponding to the mark type and completes spatial registration using that method.
In one embodiment, the judgment module includes a segmentation sub-module and a judgment sub-module. The segmentation sub-module segments, from the scanned image, an image containing only the marking device. The judgment sub-module determines the mark type of the marking device from the image containing only the marking device.
In one embodiment, the segmentation sub-module includes a segmentation network model and inputs the scanned image into that model to obtain the image containing only the marking device.
In one embodiment, the judgment sub-module includes a classification model and inputs the scanned image into that model to obtain the mark type of the marking device in the scanned image.
An apparatus comprising one or more processors and a memory storing one or more programs which, when executed by the one or more processors, cause the one or more processors to implement the spatial registration identification method of any one of the embodiments above.
A computer-readable storage medium having stored thereon a computer program which, when executed by a processor, implements the spatial registration identification method of any one of the embodiments above.
The spatial registration identification method provided by the embodiments of the application acquires a scanned image of a marking device fixed to the part to be operated on, determines the mark type of the marking device from the scanned image, selects the spatial registration method corresponding to that mark type, and completes spatial registration with the selected method. Because the selected method matches the characteristics of the marking device (material, mounting manner, shape, and so on), the accuracy of coordinate-system registration is improved.
Drawings
In order to more clearly illustrate the technical solutions of the embodiments of the present application, the drawings required in the description of the embodiments are briefly introduced below. The drawings illustrate only some embodiments of the present application; other drawings may be obtained from them by those skilled in the art without inventive effort.
Fig. 1 is a schematic flow chart of the spatial registration identification method according to an embodiment of the present application;
Fig. 2 is a flow chart of the spatial registration identification method according to another embodiment of the present application.
Detailed Description
In order that the above objects, features, and advantages of the application may be readily understood, a detailed description of the application is given below with reference to the accompanying drawings. Numerous specific details are set forth in order to provide a thorough understanding of the present application; the application may, however, be embodied in many other forms, and those skilled in the art may make similar modifications without departing from its spirit. The application is therefore not limited to the specific embodiments disclosed below.
The numbering of components, e.g. "first", "second", etc., is used herein only to distinguish the objects described and carries no sequential or technical meaning. The term "coupled" as used herein includes both direct and indirect coupling, unless otherwise indicated. In the description of the present application, terms such as "upper", "lower", "front", "rear", "left", "right", "vertical", "horizontal", "top", "bottom", "inner", "outer", "clockwise", and "counterclockwise" indicate orientations or positional relationships based on those shown in the drawings. They are used merely for convenience and simplicity of description, do not indicate or imply that the device or element in question must have a specific orientation or be configured and operated in a specific orientation, and should not be construed as limiting the present application.
In the present application, unless expressly stated or limited otherwise, a first feature being "on" or "under" a second feature may mean that the two features are in direct contact, or in indirect contact via an intervening medium. Moreover, a first feature being "above", "over", or "on" a second feature may mean that the first feature is directly or obliquely above the second feature, or simply that the first feature is at a higher level than the second feature; a first feature being "under", "below", or "beneath" a second feature may mean that the first feature is directly or obliquely below the second feature, or simply that the first feature is at a lower level than the second feature.
When a surgical robot navigation and positioning system assists a doctor during an operation, spatial registration of the part to be operated on must be performed first, so as to complete registration between the coordinate systems involved. Common registration modes include bone-screw contact registration, frame registration, optical registration, and bone-screw non-contact registration. Every registration mode requires a marking device to be placed on the part to be operated on before medical image scanning; the coordinates of the mark points on the marking device are then extracted from the image, and registration between coordinate systems is completed through those mark points.
Different registration modes use marking devices of different materials, shapes, and installation manners, produce different scanned images, and correspond to different spatial registration methods.
After the user selects a registration mode, the surgical robot navigation and positioning system guides the user, with mode-specific graphics and text, to install the corresponding instrument end on the mechanical arm for spatial registration.
Bone-screw contact registration mode: the marking device is a bone screw. A plurality of bone screws are first implanted into the bone of the part to be operated on for fixation; an image scan is then performed, and the sphere-center coordinates of the bone-screw grooves are extracted. Finally, coordinate-system registration is performed through the sphere-center coordinates of the grooves. This mode corresponds to the bone-screw contact registration method.
Frame registration mode: the marking device is a frame tool carrying a plurality of mark bodies. The frame tool is fixed to the part to be operated on; an image scan is then performed, and the sphere-center coordinates of the mark bodies are extracted. Finally, coordinate-system registration is performed through those sphere-center coordinates. This mode corresponds to the frame registration method.
Optical registration mode: the marking device is an optical ball. A plurality of optical balls are fixed on a plastic base and adhered to the skin surface of the part to be operated on; an image scan is then performed, and the sphere-center coordinates of the optical balls are extracted. Finally, coordinate-system registration is performed through those sphere-center coordinates. This mode corresponds to the optical registration method.
Bone-screw non-contact mode: the marking device is a bone screw with an optical ball. A plurality of bone screws are first implanted into the bone of the part to be operated on for fixation; optical balls are then fixed, one per screw, on the screw surfaces facing away from the body; an image scan is then performed, and the sphere-center coordinates of the optical balls are extracted. Finally, coordinate-system registration is performed through those sphere-center coordinates. This mode corresponds to the bone-screw non-contact registration method.
Coordinate-system registration refers to registration between the coordinate system of the part to be operated on and the coordinate system of the mechanical arm.
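The sphere-center-based coordinate-system registration described here can be illustrated with a minimal sketch: given matched sphere-center coordinates expressed in the image coordinate system and in the mechanical-arm coordinate system, a rigid transform can be estimated with the Kabsch (SVD) method. This is a generic landmark-registration sketch under that assumption, not the patent's specific implementation; the function name and test points are illustrative.

```python
import numpy as np

def rigid_register(src, dst):
    """Estimate the rotation R and translation t mapping the mark-point
    coordinates `src` (e.g. sphere centers in the image coordinate system)
    onto `dst` (the same points in the mechanical-arm coordinate system),
    using the Kabsch/SVD method on matched landmark pairs."""
    src, dst = np.asarray(src, float), np.asarray(dst, float)
    sc, dc = src.mean(axis=0), dst.mean(axis=0)
    H = (src - sc).T @ (dst - dc)            # 3x3 cross-covariance of centered pairs
    U, _, Vt = np.linalg.svd(H)
    # Correct for a possible reflection so that det(R) = +1.
    D = np.diag([1.0, 1.0, np.sign(np.linalg.det(Vt.T @ U.T))])
    R = Vt.T @ D @ U.T
    t = dc - R @ sc
    return R, t
```

A point `p` in the image frame is then mapped into the arm frame as `R @ p + t`; at least three non-collinear mark points are needed for a unique solution.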
Referring to Fig. 1, an embodiment of the present application provides a spatial registration identification method, including:
S100: acquiring a scanned image of the marking device fixed to the part to be operated on.
S200: determining the mark type of the marking device from the scanned image.
S300: selecting the spatial registration method corresponding to the mark type, and completing spatial registration using that method.
The spatial registration method comprises a method for extracting the sphere-center coordinates and a method for coordinate-system registration. Coordinate-system registration can be completed only when the mark type of the marking device matches the spatial registration method; different mark types call for different spatial registration methods.
The spatial registration identification method provided by the application selects different spatial registration methods according to the mark types of different marking devices, so that the method matched to the mark type of the marking device accurately completes coordinate-system registration.
The marking device comprises one or more of a bone screw, a frame, or an optical ball, or a combination thereof; it may also be any other device that serves as a mark.
Registration modes matching the above marking devices include, but are not limited to, the bone-screw contact registration mode, frame registration mode, optical registration mode, and bone-screw non-contact registration mode; other registration modes matching the marking device are also possible.
The spatial registration methods correspondingly include, but are not limited to, the bone-screw contact registration method, frame registration method, optical registration method, and bone-screw non-contact registration method.
Referring to Fig. 2, in one embodiment, after S100, the spatial registration identification method further includes:
S110: segmenting, from the scanned image, an image containing only the marking device.
Performing image segmentation first to obtain an image containing only the marking device effectively avoids interference from the image of the part to be operated on. The marker-only image has fewer pixels than the original scanned image, so the amount of data to process when determining the mark type is smaller, which improves the efficiency of determining the mark type and, in turn, of selecting a registration method.
In one embodiment, when the spatial registration identification method is used only to determine whether the mark type of the marking device in the scanned image belongs to a first mark type, S300 includes:
S310: judging whether the mark type belongs to the first mark type.
S320: if the mark type belongs to the first mark type, selecting a first registration method to complete spatial registration.
In one embodiment, the first mark type is one of a bone screw, a frame, or an optical ball. The first mark type may also be a combination of several of these, or another mark type compatible with the marking device.
In one embodiment, there are a plurality of mark types, namely the first to Nth mark types, where N is an integer greater than or equal to 2. After S320, the spatial registration identification method further includes:
S330: if the mark type does not belong to the first mark type, judging whether it belongs to a second mark type.
S340: if the mark type belongs to the second mark type, selecting a second registration method to complete spatial registration.
S350: traversing the third to Nth mark types according to steps S310 to S340, and completing spatial registration.
In one embodiment, the mark types include, but are not limited to, bone screws, frames, or optical balls.
Steps S310 to S350 find the mark type of the marking device in the scanned image and complete spatial registration according to that mark type.
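The S310 to S350 traversal amounts to checking the detected mark type against the first to Nth candidate types one by one; a minimal sketch (the type and method names are illustrative, not from the patent):

```python
def traverse_and_register(mark_type, candidates):
    """Sketch of steps S310-S350: test the detected mark type against the
    first to Nth candidate types in order, and return the registration
    method matched to it. `candidates` is an ordered list of
    (mark_type, registration_method) pairs."""
    for candidate_type, method in candidates:
        if mark_type == candidate_type:
            return method  # S320/S340: the matched registration method runs here
    raise ValueError(f"unrecognised mark type: {mark_type!r}")
```

For example, with candidates `[("bone screw", ...), ("frame", ...), ("optical ball", ...)]`, a detected "frame" type selects the frame registration method on the second iteration.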
In one embodiment, S110 includes:
S111: acquiring a plurality of images of the part to be operated on, and a plurality of training images, in one-to-one correspondence with those images, of marking devices fixed to the part to be operated on.
The images of the part to be operated on and the training images together form the training samples, with a one-to-one correspondence between them.
S112: inputting the images of the part to be operated on and the training images into a deep learning algorithm for training, to obtain a segmentation network model. The deep learning algorithm includes a U-Net algorithm.
S113: inputting the scanned image into the segmentation network model to obtain an image containing only the marking device.
The segmentation network model segments, from the scanned image, an image containing only the marking device. The marker-only image has fewer pixels than the scanned image, which reduces the amount of data processed when determining the mark type and so improves the efficiency of determining the mark type and, in turn, of selecting a registration method.
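The patent trains a U-Net-style segmentation network to produce the marker-only image; training such a network is beyond a short sketch, but the masking and cropping step that follows the network's prediction, and that yields the pixel reduction described above, can be shown directly. The binary mask below stands in for the segmentation network's output; the function name is illustrative.

```python
import numpy as np

def marker_only_image(scan, marker_mask):
    """Zero out everything outside the predicted marker mask, then crop to
    the mask's bounding box. `marker_mask` stands in for the binary output
    of the trained segmentation network (e.g. a U-Net). Assumes the mask
    contains at least one marker pixel."""
    masked = np.where(marker_mask, scan, 0)       # suppress non-marker anatomy
    coords = np.argwhere(marker_mask)
    lo, hi = coords.min(axis=0), coords.max(axis=0) + 1
    # Tight bounding-box crop: this is what reduces the pixel count fed to
    # the later mark-type classification step.
    return masked[tuple(slice(a, b) for a, b in zip(lo, hi))]
```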
In one embodiment, S200 includes:
S201: acquiring a plurality of scanned images containing the first marking device, and a first type label corresponding to the first marking device.
S202: inputting the images containing the first marking device and the first type label into a machine learning algorithm for training, to obtain a classification model. The machine learning algorithm may be a deep learning algorithm, an SVM (support vector machine), a random forest algorithm, or the like.
S203: inputting the scanned image into the classification model to obtain the mark type of the scanned image.
In one embodiment, there are a plurality of marking devices, namely the first to Nth marking devices; after S202 and before S203, the method further includes:
S2021: acquiring, according to step S201, a plurality of scanned images containing the second to Nth marking devices, and the second to Nth type labels corresponding one-to-one to those devices.
S2022: inputting, according to step S202, the scanned images of the second to Nth marking devices and the second to Nth type labels into the machine learning algorithm for training, to obtain the classification model.
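The patent names SVM and random-forest algorithms for the classification model; as a dependency-free stand-in for those, the sketch below trains a nearest-centroid classifier over the first to Nth type labels. All names, features (plain pixel flattening), and data are illustrative assumptions, not the patent's pipeline.

```python
import numpy as np

class MarkerTypeClassifier:
    """Minimal stand-in for the classification model of S201-S2022:
    one mean feature vector (centroid) is stored per mark type, and a
    scanned image is assigned the type of the nearest centroid."""
    def fit(self, images, labels):
        # Flatten each equally-sized marker-only image into a feature vector.
        X = np.stack([np.asarray(im, float).ravel() for im in images])
        y = np.asarray(labels)
        self.classes_ = np.unique(y)
        self.centroids_ = np.stack([X[y == c].mean(axis=0) for c in self.classes_])
        return self
    def predict(self, image):
        x = np.asarray(image, float).ravel()
        d = np.linalg.norm(self.centroids_ - x, axis=1)
        return self.classes_[np.argmin(d)]
```

A real implementation would replace the flattening with task-appropriate image features and the centroid rule with the SVM or random forest the patent mentions.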
Images that the classification model cannot easily distinguish can be separated by screening on HU values.
Images whose mark type is an optical ball are not easily distinguished by the classification model. In images produced in the bone-screw non-contact mode, the visibility of the bone screws varies with the depth to which they are implanted in the bone: when a screw is implanted deeply, carries an optical ball on its surface, and has only a small exposed portion, it cannot be clearly identified in the scanned image. An image containing optical balls may therefore also contain bone screws.
In one embodiment, the mark type is an optical ball, and the spatial registration identification method further comprises:
S3010: judging whether the HU value of the optical-ball base is greater than a set value; if so, selecting the bone-screw non-contact method to complete spatial registration, and if not, selecting the optical registration method.
In one embodiment, the set value is 1000.
In one embodiment, before judging whether the HU value of the optical-ball base is greater than the set value, the method further comprises:
S3001: segmenting the scanned image into an optical-ball image and an optical-ball base image.
In a specific embodiment, the marking devices comprise a bone screw, a frame, an optical ball, and an optical ball with a bone screw. The "bone screw" mark type covers the "bone screw" marking device; the "frame" mark type covers the "frame" marking device; and the "optical ball" mark type covers both the "optical ball" marking device and the "optical ball with bone screw" marking device.
In a specific embodiment, S200 further includes:
acquiring a plurality of bone-screw images, frame images, and optical-ball images, together with the type labels corresponding one-to-one to those images;
inputting the bone-screw images, frame images, optical-ball images, and their one-to-one type labels into a machine learning algorithm for training, to obtain a classification model;
inputting the scanned image into the classification model to obtain the mark type of the scanned image.
In a specific embodiment, S300 further includes:
judging whether the mark type belongs to the bone screw;
if the mark type belongs to the bone screw, selecting the bone-screw contact registration method to complete spatial registration;
if the mark type does not belong to the bone screw, judging whether it belongs to the frame;
if the mark type belongs to the frame, selecting the frame registration method to complete spatial registration;
if the mark type does not belong to the frame, segmenting the optical-ball image and the base image from the scanned image with an image algorithm;
traversing the HU values of the pixels of the base image and judging whether any HU value is greater than the set value;
if the HU value is greater than the set value, the mark type is an optical ball with a bone screw, and the bone-screw non-contact method is selected to complete spatial registration;
if the HU value is not greater than the set value, the optical registration method is selected to complete spatial registration.
This spatial registration identification method distinguishes bone-screw images, frame images, and optical-ball images, and then distinguishes the optical registration mode from the bone-screw non-contact mode. It improves the accuracy with which registration modes are distinguished and, in turn, the accuracy with which the spatial registration method is selected.
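The HU-value screening above can be sketched directly. The set value of 1000 is the one given in the embodiment; the function name is illustrative, and "the HU value is greater than the set value" is read here as "any base pixel exceeds the set value", matching the pixel traversal described.

```python
import numpy as np

HU_SET_VALUE = 1000  # the set value from the embodiment above

def screen_optical_ball(base_image_hu):
    """S3001/S3010 screening: traverse the HU values of the segmented
    optical-ball base image. If any value exceeds the set value, metal
    (a bone screw) sits beneath the optical ball and the bone-screw
    non-contact method is selected; otherwise optical registration is
    selected."""
    if np.any(np.asarray(base_image_hu) > HU_SET_VALUE):
        return "bone-screw non-contact registration"
    return "optical registration"
```

This works because metal implants produce far higher HU values than the plastic base or soft tissue, so a single threshold separates the two cases.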
In one embodiment, step S300 is implemented as a computer program. When S300 runs, the registration method may be selected by a one-by-one elimination process.
The embodiment of the application provides a space registration recognition device which comprises an acquisition module, a judgment module and a registration module. The acquisition module is used for acquiring a scanning image of the marking device fixed on the part to be operated. The judging module is used for judging the marking type of the marking device according to the scanning image. The registration module is used for selecting a space registration method corresponding to the mark type according to the mark type, and completing space registration by using the space registration method.
The spatial registration recognition device provided by the embodiment of the application selects different spatial registration methods according to the marking types of different marking devices. The space registration method is matched with the characteristics of the marking device, such as material shape, mounting mode, shape and the like, so that the accuracy of coordinate system registration is improved.
In one embodiment, the judgment module includes a segmentation sub-module and a judgment sub-module. The segmentation sub-module is configured to segment, from the scan image, a scan image containing only the marking device. The judgment sub-module is configured to judge the mark type of the marking device from that segmented image.
In one embodiment, the segmentation sub-module includes a segmentation network model, and is configured to input the scan image into the segmentation network model to obtain the scan image containing only the marking device.
Segmenting the image with the segmentation sub-module effectively avoids interference from the image of the site to be operated on. The scan image containing only the marking device has fewer pixels than the full scan image, so the judgment module processes less data when judging the mark type. The smaller processing load makes judging the mark type more efficient, and hence makes the apparatus's selection of a registration method more efficient.
In one embodiment, the segmentation network model is formed as follows:
First, a plurality of images of the site to be operated on are acquired, together with a plurality of training images of the marking devices fixed to that site, the training images corresponding one-to-one with the site images.
The plurality of site images and the plurality of training images constitute the training samples, with the site images and the training images in one-to-one correspondence.
Then, the plurality of site images and the plurality of training images are input into a deep-learning algorithm for training, yielding the segmentation network model. The deep-learning algorithm includes the U-Net algorithm.
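The training and inference flow above might be sketched as follows; `model` stands in for a trained segmentation network such as U-Net, whose actual training code (with some deep-learning framework) is assumed and not shown.

```python
import numpy as np

def make_training_pairs(site_images, marker_masks):
    """S111: pair each surgical-site image with its marker-only mask,
    one-to-one, to form the training samples."""
    if len(site_images) != len(marker_masks):
        raise ValueError("site images and masks must correspond one-to-one")
    return list(zip(site_images, marker_masks))

def segment_markers(model, scan_image):
    """S113: apply the trained segmentation model and keep only the
    pixels it predicts as belonging to the marking device."""
    mask = model(scan_image)              # boolean mask from the network
    return np.where(mask, scan_image, 0)  # suppress everything else
```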
In one embodiment, the judgment module includes a classification model, and the judgment module is configured to input the scan image into the classification model to obtain the mark type of the marking device in the scan image.
In a specific embodiment, the marking device includes a first marking device, and the classification model is formed as follows:
A plurality of scan images containing the first marking device and the first-category label corresponding to the first marking device are acquired. The plurality of scan images containing the first marking device and the first-category label are input into a machine learning algorithm for training, yielding the classification model.
In a specific embodiment, there are a plurality of marking devices, from the first marking device to the Nth marking device, and the classification model is formed as follows:
A plurality of scan images containing the second to Nth marking devices are acquired, together with second-category to Nth-category labels corresponding one-to-one with the second to Nth marking devices.
The plurality of scan images of the second to Nth marking devices and the second-category to Nth-category labels are input into a machine learning algorithm for training, yielding the classification model.
In a specific embodiment, the classification model is formed as follows:
First, a plurality of bone-screw images, frame images and optical-ball images are acquired, together with the category labels corresponding one-to-one with them. The bone-screw images, frame images, optical-ball images and their one-to-one category labels are then input into a machine learning algorithm for training, yielding the classification model.
A bone-screw image is a scan image containing only bone screws, without the image of the site to be operated on. A frame image is a scan image containing only the frame, without the image of the site to be operated on. An optical-ball image is an image containing an optical ball, and may also contain an image of a bone screw.
In images formed when bone screws are used in the non-contact mode, the recognizability of the bone screws varies because the depth to which they are implanted in the bone varies. When a bone screw is implanted deeply, carries an optical ball on its surface, and has only a small exposed portion, it cannot be clearly identified in the scan image.
The category label corresponding to a bone-screw image is "bone screw". The category label corresponding to a frame image is "frame". The category label corresponding to an optical-ball image is "optical ball".
The machine learning algorithm includes an SVM (support vector machine), a Random Forest algorithm, or the like.
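The text names SVM or Random Forest; as a dependency-free stand-in with the same train-then-classify shape, here is a minimal nearest-centroid sketch in NumPy. The flattened-pixel features and label strings are illustrative assumptions, not the patent's actual feature design.

```python
import numpy as np

def train_classifier(images, labels):
    """Fit one mean feature vector (centroid) per category label
    from marker-only training images."""
    feats = np.array([np.asarray(im, float).ravel() for im in images])
    return {
        c: feats[np.array([l == c for l in labels])].mean(axis=0)
        for c in set(labels)
    }

def classify(centroids, image):
    """Return the category label whose centroid is nearest."""
    f = np.asarray(image, float).ravel()
    return min(centroids, key=lambda c: np.linalg.norm(centroids[c] - f))
```

A production system would swap these two functions for `fit`/`predict` on a real SVM or Random Forest; the surrounding pipeline is unchanged.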
An embodiment of the present application provides a device comprising one or more processors and a memory. The memory is configured to store one or more programs. When executed by the one or more processors, the one or more programs cause the processors to implement the spatial registration identification method of any of the embodiments above.
In this device, the processor and memory automatically complete the selection of the spatial registration method from the scan image, and thus automatically complete spatial registration, reducing the probability of error from manual selection.
An embodiment of the present application further provides a computer-readable storage medium on which a computer program is stored; when executed by a processor, the computer program implements the spatial registration identification method of any of the embodiments above. Through the stored computer program, the storage medium likewise automatically completes the selection of the spatial registration method from the scan image, and thus spatial registration, reducing the probability of error from manual selection.
The technical features of the above embodiments may be combined arbitrarily. For brevity, not every possible combination is described; however, any combination of these technical features that involves no contradiction should be considered within the scope of this description.
The examples above represent only a few embodiments of the present application and are not to be construed as limiting its scope. It should be noted that those skilled in the art can make several variations and improvements without departing from the concept of the application, all of which fall within its scope of protection. Accordingly, the scope of protection of the present application shall be determined by the appended claims.

Claims (12)

1. A spatial registration recognition method, comprising:
s100, acquiring a scanning image of a marker device fixed at a part to be operated;
s200, judging the marking type of the marking device according to the scanning image;
s300, selecting a space registration method corresponding to the mark type according to the mark type, and completing space registration by using the space registration method;
the selecting a space registration method corresponding to the mark type according to the mark type, and using the space registration method to complete space registration includes:
s310, judging whether the mark type belongs to a first mark type;
s320, if the mark type belongs to the first mark type, selecting a first registration method to finish space registration;
s330, if the label category does not belong to the first label category, judging whether the label category belongs to a second label category;
s340, if the mark type belongs to the second mark type, selecting a second registration method to finish space registration;
s350, traversing to the Nth mark type according to the steps S310-S340, and finishing space registration;
the judging the marking type of the marking device according to the scanned image comprises the following steps:
s203, inputting the scanned image into a classification model to obtain the mark type of the scanned image.
2. The spatial registration identification method according to claim 1, further comprising, after S100:
s110, segmenting a scanning image only comprising the marking device from the scanning image.
3. The spatial registration identification method according to claim 2, wherein S110 includes:
s111, acquiring a plurality of images of a part to be operated and a plurality of training images of marking devices which are in one-to-one correspondence with the images of the part to be operated and are fixed on the part to be operated;
s112, respectively inputting a plurality of images of the part to be operated and a plurality of training images into a machine learning algorithm for learning training to obtain a segmentation network model;
and S113, inputting the scanning image into the segmentation network model to obtain a scanning image only containing the marking device.
4. The spatial registration identification method according to claim 1, wherein the marking device comprises a first marking device, S200 comprising:
s201, respectively acquiring a plurality of scanning images containing the first marking device and a first type label corresponding to the first marking device;
s202, inputting a plurality of scanned images containing the first marking device and the first type of labels into a machine learning algorithm for learning training to obtain a classification model.
5. The spatial registration identification method according to claim 4, wherein the plurality of marking devices includes the first to Nth marking devices, and wherein, after S202 and before S203, the method further comprises:
S2021, acquiring, according to step S201, a plurality of scan images containing the second to Nth marking devices and second-category to Nth-category labels corresponding one-to-one with the second to Nth marking devices;
S2022, inputting, according to step S202, the scan images of the second to Nth marking devices and the second-category to Nth-category labels into a machine learning algorithm for training to obtain the classification model.
6. The spatial registration identification method according to claim 1, wherein the mark type is an optical ball, the method further comprising:
s3010, judging whether the HU value of the optical ball base is larger than a set value, if so, selecting a bone screw non-contact method to complete space registration, and if not, selecting an optical registration method to complete space registration.
7. The spatial registration identification method according to claim 6, further comprising, prior to S3010:
s3001, dividing the scanned image into an optical ball image and an optical ball base image.
8. A spatial registration recognition system, comprising:
the acquisition module is used for acquiring a scanning image of the marking device fixed on the part to be operated;
the judging module is used for judging the marking type of the marking device according to the scanning image;
the registration module is used for selecting a space registration method corresponding to the mark type according to the mark type, and completing space registration by using the space registration method;
the registration module is specifically configured to:
s310, judging whether the mark type belongs to a first mark type;
s320, if the mark type belongs to the first mark type, selecting a first registration method to finish space registration;
s330, if the label category does not belong to the first label category, judging whether the label category belongs to a second label category;
s340, if the mark type belongs to the second mark type, selecting a second registration method to finish space registration;
s350, traversing to the Nth mark type according to the steps S310-S340, and finishing space registration;
the judging the marking type of the marking device according to the scanned image comprises the following steps:
s203, inputting the scanned image into a classification model to obtain the mark type of the scanned image.
9. The spatial registration identification system of claim 8 wherein the determination module comprises:
a segmentation sub-module for segmenting a scan image including only the marker device from the scan image;
and the judging sub-module is used for judging the marking type of the marking device according to the scanning image only comprising the marking device.
10. The spatial registration identification system of claim 9 wherein the segmentation sub-module includes a segmentation network model, the segmentation sub-module being operable to input the scan image into the segmentation network model to obtain a scan image containing only the marker device.
11. An apparatus, the apparatus comprising:
one or more processors;
a memory for storing one or more programs;
the one or more programs, when executed by the one or more processors, cause the one or more processors to implement the spatial registration identification method of any of claims 1-7.
12. A computer readable storage medium, on which a computer program is stored, characterized in that the computer program, when being executed by a processor, implements the spatial registration identification method according to any one of claims 1-7.
CN202010284146.9A 2020-04-13 2020-04-13 Space registration identification method, device, equipment and computer readable storage medium Active CN111461049B (en)

Priority Applications (1)

Application Number Priority Date Filing Date Title
CN202010284146.9A CN111461049B (en) 2020-04-13 2020-04-13 Space registration identification method, device, equipment and computer readable storage medium

Applications Claiming Priority (1)

Application Number Priority Date Filing Date Title
CN202010284146.9A CN111461049B (en) 2020-04-13 2020-04-13 Space registration identification method, device, equipment and computer readable storage medium

Publications (2)

Publication Number Publication Date
CN111461049A CN111461049A (en) 2020-07-28
CN111461049B true CN111461049B (en) 2023-08-22

Family

ID=71685263

Family Applications (1)

Application Number Title Priority Date Filing Date
CN202010284146.9A Active CN111461049B (en) 2020-04-13 2020-04-13 Space registration identification method, device, equipment and computer readable storage medium

Country Status (1)

Country Link
CN (1) CN111461049B (en)

Citations (11)

* Cited by examiner, † Cited by third party
Publication number Priority date Publication date Assignee Title
CN104146767A (en) * 2014-04-24 2014-11-19 薛青 Intraoperative navigation method and system for assisting in surgery
CN107133637A (en) * 2017-03-31 2017-09-05 精劢医疗科技南通有限公司 A kind of surgical navigational image registers equipment and method automatically
CN107440797A (en) * 2017-08-21 2017-12-08 上海霖晏医疗科技有限公司 Registration system and method for surgical navigational
CN107481272A (en) * 2016-06-08 2017-12-15 瑞地玛医学科技有限公司 A kind of radiotherapy treatment planning image registration and the method and system merged
CN107874832A (en) * 2017-11-22 2018-04-06 合肥美亚光电技术股份有限公司 Bone surgery set navigation system and method
CN108647588A (en) * 2018-04-24 2018-10-12 广州绿怡信息科技有限公司 Goods categories recognition methods, device, computer equipment and storage medium
CN109091228A (en) * 2018-07-04 2018-12-28 首都医科大学 A kind of more instrument optical positioning methods and system
CN109346159A (en) * 2018-11-13 2019-02-15 平安科技(深圳)有限公司 Case image classification method, device, computer equipment and storage medium
CN110547872A (en) * 2019-09-23 2019-12-10 重庆博仕康科技有限公司 Operation navigation registration system
CN110582247A (en) * 2017-04-28 2019-12-17 美敦力导航股份有限公司 automatic identification of instruments
CN110664484A (en) * 2019-09-27 2020-01-10 江苏工大博实医用机器人研究发展有限公司 Space registration method and system for robot and image equipment

Family Cites Families (3)

* Cited by examiner, † Cited by third party
Publication number Priority date Publication date Assignee Title
US8150494B2 (en) * 2007-03-29 2012-04-03 Medtronic Navigation, Inc. Apparatus for registering a physical space to image space
EP2298223A1 (en) * 2009-09-21 2011-03-23 Stryker Leibinger GmbH & Co. KG Technique for registering image data of an object
US10426554B2 (en) * 2011-04-29 2019-10-01 The Johns Hopkins University System and method for tracking and navigation


Non-Patent Citations (1)

* Cited by examiner, † Cited by third party
Title
Lan Kun; Zhang Yan; Shen Xukun. Design and implementation of a vision-based surgical navigation system. Journal of System Simulation. 2017, (09), pp. 2025-2042. *

Also Published As

Publication number Publication date
CN111461049A (en) 2020-07-28

Similar Documents

Publication Publication Date Title
US7916919B2 (en) System and method for segmenting chambers of a heart in a three dimensional image
US8780110B2 (en) Computer vision CAD model
US8160322B2 (en) Joint detection and localization of multiple anatomical landmarks through learning
CN110556179B (en) Method and system for marking whole spine image by using deep neural network
CN111583188A (en) Operation navigation mark point positioning method, storage medium and computer equipment
CN101542532B (en) A method, an apparatus and a computer program for data processing
WO2008154314A1 (en) Salient object detection
CN1846216A (en) Locally storing biological specimen data to a slide
CN114187277B (en) Detection method for thyroid cytology multiple cell types based on deep learning
US8035687B2 (en) Image processing apparatus and program
CN111190595A (en) Method, device, medium and electronic equipment for automatically generating interface code based on interface design drawing
CN110832542B (en) Identification processing device, identification processing method, and program
CN112307786B (en) Batch positioning and identifying method for multiple irregular two-dimensional codes
CN110555860A (en) Method, electronic device and storage medium for marking rib region in medical image
US11633235B2 (en) Hybrid hardware and computer vision-based tracking system and method
CN104641381B (en) Method and system for detecting 2D barcode in a circular label
CN111461049B (en) Space registration identification method, device, equipment and computer readable storage medium
CN115752683A (en) Weight estimation method, system and terminal based on depth camera
CN110619621A (en) Method and device for identifying rib region in image, electronic equipment and storage medium
CN105868815A (en) Superposed-type combined identification based on same identification equipment, and generation method and system
CN101313333B (en) Method for creating a model of a structure
CN110555850A (en) method and device for identifying rib region in image, electronic equipment and storage medium
CN112150398B (en) Image synthesis method, device and equipment
CN107610105B (en) Method, device and equipment for positioning ROI and machine-readable storage medium
CN114141337A (en) Method, system and application for constructing image automatic annotation model

Legal Events

Date Code Title Description
PB01 Publication
SE01 Entry into force of request for substantive examination
GR01 Patent grant