CN111419399A - Positioning tracking piece, positioning ball identification method, storage medium and electronic device - Google Patents

Positioning tracking piece, positioning ball identification method, storage medium and electronic device Download PDF

Info

Publication number
CN111419399A
Authority
CN
China
Prior art keywords
voxels
positioning
dimensional model
layer
determining
Prior art date
Legal status (The legal status is an assumption and is not a legal conclusion. Google has not performed a legal analysis and makes no representation as to the accuracy of the status listed.)
Pending
Application number
CN202010186662.8A
Other languages
Chinese (zh)
Inventor
聂丽萍
Current Assignee (The listed assignees may be inaccurate. Google has not performed a legal analysis and makes no representation or warranty as to the accuracy of the list.)
BOE Technology Group Co Ltd
Original Assignee
BOE Technology Group Co Ltd
Priority date (The priority date is an assumption and is not a legal conclusion. Google has not performed a legal analysis and makes no representation as to the accuracy of the date listed.)
Filing date
Publication date
Application filed by BOE Technology Group Co Ltd filed Critical BOE Technology Group Co Ltd
Priority to CN202010186662.8A priority Critical patent/CN111419399A/en
Publication of CN111419399A publication Critical patent/CN111419399A/en
Priority to PCT/CN2021/081170 priority patent/WO2021185260A1/en
Priority to US17/639,220 priority patent/US20220405965A1/en
Pending legal-status Critical Current

Links

Images

Classifications

    • GPHYSICS
    • G06COMPUTING; CALCULATING OR COUNTING
    • G06TIMAGE DATA PROCESSING OR GENERATION, IN GENERAL
    • G06T7/00Image analysis
    • G06T7/70Determining position or orientation of objects or cameras
    • G06T7/73Determining position or orientation of objects or cameras using feature-based methods
    • G06T7/75Determining position or orientation of objects or cameras using feature-based methods involving models
    • GPHYSICS
    • G06COMPUTING; CALCULATING OR COUNTING
    • G06TIMAGE DATA PROCESSING OR GENERATION, IN GENERAL
    • G06T7/00Image analysis
    • G06T7/10Segmentation; Edge detection
    • G06T7/11Region-based segmentation
    • AHUMAN NECESSITIES
    • A61MEDICAL OR VETERINARY SCIENCE; HYGIENE
    • A61BDIAGNOSIS; SURGERY; IDENTIFICATION
    • A61B34/00Computer-aided surgery; Manipulators or robots specially adapted for use in surgery
    • A61B34/20Surgical navigation systems; Devices for tracking or guiding surgical instruments, e.g. for frameless stereotaxis
    • AHUMAN NECESSITIES
    • A61MEDICAL OR VETERINARY SCIENCE; HYGIENE
    • A61BDIAGNOSIS; SURGERY; IDENTIFICATION
    • A61B34/00Computer-aided surgery; Manipulators or robots specially adapted for use in surgery
    • A61B34/10Computer-aided planning, simulation or modelling of surgical operations
    • AHUMAN NECESSITIES
    • A61MEDICAL OR VETERINARY SCIENCE; HYGIENE
    • A61BDIAGNOSIS; SURGERY; IDENTIFICATION
    • A61B90/00Instruments, implements or accessories specially adapted for surgery or diagnosis and not covered by any of the groups A61B1/00 - A61B50/00, e.g. for luxation treatment or for protecting wound edges
    • A61B90/39Markers, e.g. radio-opaque or breast lesions markers
    • GPHYSICS
    • G06COMPUTING; CALCULATING OR COUNTING
    • G06TIMAGE DATA PROCESSING OR GENERATION, IN GENERAL
    • G06T17/00Three dimensional [3D] modelling, e.g. data description of 3D objects
    • GPHYSICS
    • G06COMPUTING; CALCULATING OR COUNTING
    • G06TIMAGE DATA PROCESSING OR GENERATION, IN GENERAL
    • G06T7/00Image analysis
    • G06T7/10Segmentation; Edge detection
    • G06T7/136Segmentation; Edge detection involving thresholding
    • GPHYSICS
    • G06COMPUTING; CALCULATING OR COUNTING
    • G06TIMAGE DATA PROCESSING OR GENERATION, IN GENERAL
    • G06T7/00Image analysis
    • G06T7/10Segmentation; Edge detection
    • G06T7/149Segmentation; Edge detection involving deformable models, e.g. active contour models
    • GPHYSICS
    • G06COMPUTING; CALCULATING OR COUNTING
    • G06TIMAGE DATA PROCESSING OR GENERATION, IN GENERAL
    • G06T7/00Image analysis
    • G06T7/10Segmentation; Edge detection
    • G06T7/187Segmentation; Edge detection involving region growing; involving region merging; involving connected component labelling
    • GPHYSICS
    • G06COMPUTING; CALCULATING OR COUNTING
    • G06VIMAGE OR VIDEO RECOGNITION OR UNDERSTANDING
    • G06V10/00Arrangements for image or video recognition or understanding
    • G06V10/20Image preprocessing
    • G06V10/26Segmentation of patterns in the image field; Cutting or merging of image elements to establish the pattern region, e.g. clustering-based techniques; Detection of occlusion
    • GPHYSICS
    • G06COMPUTING; CALCULATING OR COUNTING
    • G06VIMAGE OR VIDEO RECOGNITION OR UNDERSTANDING
    • G06V20/00Scenes; Scene-specific elements
    • G06V20/60Type of objects
    • G06V20/64Three-dimensional objects
    • AHUMAN NECESSITIES
    • A61MEDICAL OR VETERINARY SCIENCE; HYGIENE
    • A61BDIAGNOSIS; SURGERY; IDENTIFICATION
    • A61B34/00Computer-aided surgery; Manipulators or robots specially adapted for use in surgery
    • A61B34/10Computer-aided planning, simulation or modelling of surgical operations
    • A61B2034/101Computer-aided simulation of surgical operations
    • A61B2034/102Modelling of surgical devices, implants or prosthesis
    • AHUMAN NECESSITIES
    • A61MEDICAL OR VETERINARY SCIENCE; HYGIENE
    • A61BDIAGNOSIS; SURGERY; IDENTIFICATION
    • A61B34/00Computer-aided surgery; Manipulators or robots specially adapted for use in surgery
    • A61B34/20Surgical navigation systems; Devices for tracking or guiding surgical instruments, e.g. for frameless stereotaxis
    • A61B2034/2046Tracking techniques
    • A61B2034/2055Optical tracking systems
    • AHUMAN NECESSITIES
    • A61MEDICAL OR VETERINARY SCIENCE; HYGIENE
    • A61BDIAGNOSIS; SURGERY; IDENTIFICATION
    • A61B34/00Computer-aided surgery; Manipulators or robots specially adapted for use in surgery
    • A61B34/20Surgical navigation systems; Devices for tracking or guiding surgical instruments, e.g. for frameless stereotaxis
    • A61B2034/2046Tracking techniques
    • A61B2034/2059Mechanical position encoders
    • AHUMAN NECESSITIES
    • A61MEDICAL OR VETERINARY SCIENCE; HYGIENE
    • A61BDIAGNOSIS; SURGERY; IDENTIFICATION
    • A61B90/00Instruments, implements or accessories specially adapted for surgery or diagnosis and not covered by any of the groups A61B1/00 - A61B50/00, e.g. for luxation treatment or for protecting wound edges
    • A61B90/36Image-producing devices or illumination devices not otherwise provided for
    • A61B2090/364Correlation of different images or relation of image positions in respect to the body
    • A61B2090/367Correlation of different images or relation of image positions in respect to the body creating a 3D dataset from 2D images using position information
    • AHUMAN NECESSITIES
    • A61MEDICAL OR VETERINARY SCIENCE; HYGIENE
    • A61BDIAGNOSIS; SURGERY; IDENTIFICATION
    • A61B90/00Instruments, implements or accessories specially adapted for surgery or diagnosis and not covered by any of the groups A61B1/00 - A61B50/00, e.g. for luxation treatment or for protecting wound edges
    • A61B90/39Markers, e.g. radio-opaque or breast lesions markers
    • A61B2090/3937Visible markers
    • AHUMAN NECESSITIES
    • A61MEDICAL OR VETERINARY SCIENCE; HYGIENE
    • A61BDIAGNOSIS; SURGERY; IDENTIFICATION
    • A61B90/00Instruments, implements or accessories specially adapted for surgery or diagnosis and not covered by any of the groups A61B1/00 - A61B50/00, e.g. for luxation treatment or for protecting wound edges
    • A61B90/39Markers, e.g. radio-opaque or breast lesions markers
    • A61B2090/3983Reference marker arrangements for use with image guided surgery
    • GPHYSICS
    • G06COMPUTING; CALCULATING OR COUNTING
    • G06TIMAGE DATA PROCESSING OR GENERATION, IN GENERAL
    • G06T2207/00Indexing scheme for image analysis or image enhancement
    • G06T2207/10Image acquisition modality
    • G06T2207/10072Tomographic images
    • G06T2207/10081Computed x-ray tomography [CT]
    • GPHYSICS
    • G06COMPUTING; CALCULATING OR COUNTING
    • G06TIMAGE DATA PROCESSING OR GENERATION, IN GENERAL
    • G06T2207/00Indexing scheme for image analysis or image enhancement
    • G06T2207/30Subject of image; Context of image processing
    • G06T2207/30004Biomedical image processing
    • GPHYSICS
    • G06COMPUTING; CALCULATING OR COUNTING
    • G06TIMAGE DATA PROCESSING OR GENERATION, IN GENERAL
    • G06T2207/00Indexing scheme for image analysis or image enhancement
    • G06T2207/30Subject of image; Context of image processing
    • G06T2207/30204Marker
    • GPHYSICS
    • G06COMPUTING; CALCULATING OR COUNTING
    • G06TIMAGE DATA PROCESSING OR GENERATION, IN GENERAL
    • G06T2207/00Indexing scheme for image analysis or image enhancement
    • G06T2207/30Subject of image; Context of image processing
    • G06T2207/30204Marker
    • G06T2207/30208Marker matrix
    • GPHYSICS
    • G06COMPUTING; CALCULATING OR COUNTING
    • G06VIMAGE OR VIDEO RECOGNITION OR UNDERSTANDING
    • G06V2201/00Indexing scheme relating to image or video recognition or understanding
    • G06V2201/03Recognition of patterns in medical or anatomical images

Landscapes

  • Engineering & Computer Science (AREA)
  • Health & Medical Sciences (AREA)
  • Physics & Mathematics (AREA)
  • Life Sciences & Earth Sciences (AREA)
  • Surgery (AREA)
  • General Physics & Mathematics (AREA)
  • Theoretical Computer Science (AREA)
  • Computer Vision & Pattern Recognition (AREA)
  • Animal Behavior & Ethology (AREA)
  • Veterinary Medicine (AREA)
  • Nuclear Medicine, Radiotherapy & Molecular Imaging (AREA)
  • Public Health (AREA)
  • General Health & Medical Sciences (AREA)
  • Molecular Biology (AREA)
  • Biomedical Technology (AREA)
  • Heart & Thoracic Surgery (AREA)
  • Medical Informatics (AREA)
  • Software Systems (AREA)
  • Pathology (AREA)
  • Oral & Maxillofacial Surgery (AREA)
  • Robotics (AREA)
  • Multimedia (AREA)
  • Geometry (AREA)
  • Computer Graphics (AREA)
  • Apparatus For Radiation Diagnosis (AREA)

Abstract

The present disclosure provides a positioning and tracking member, a method of identifying a positioning ball, a storage medium, and an electronic device. Because the positioning and tracking member is placed directly on the patient's body, no rigid connection between the member and the body is required, which avoids damage to the body. In addition, the positioning ball identification algorithm compares each voxel in the three-dimensional model with the actual size of the positioning ball, so the positioning balls are identified quickly in image space, shortening the identification period and improving identification accuracy.

Description

Positioning tracking piece, positioning ball identification method, storage medium and electronic device
Technical Field
The present disclosure relates to the field of medical navigation, and in particular, to a positioning tracking member, a method for identifying a positioning ball, a storage medium, and an electronic device.
Background
The workflow of a navigation system in the medical field generally includes medical imaging, preoperative surgical path planning, intraoperative patient image-space registration, intraoperative positioning and navigation, and postoperative effect assessment. Among these, intraoperative image-space registration of the patient, also called surgical registration, is one of the key technologies of navigation, and its precision directly affects the final treatment result of the operation.
A traditional marker ball is fixed through the skin to the bone of the patient, so that the marker ball is rigidly connected to the human body and independent of it, and its position can be determined directly in the image captured by the imaging device. However, to obtain the coordinates of each marker ball in image space, the commonly used approach is for a doctor to mark the points on the body manually with a dedicated tool and record the three-dimensional coordinates of each marker point. This method takes a long time, the process is tedious, and it cannot easily meet clinical requirements; in addition, the manual identification process introduces errors, which are amplified step by step in the subsequent registration process and degrade the final precision.
Disclosure of Invention
An object of the embodiments of the present disclosure is to provide a positioning tracking member, a method for identifying a positioning ball, a storage medium, and an electronic device, so as to solve the prior-art problems that the rigid connection between a marker ball and a human body is complex to implement, the coordinate identification process is cumbersome, and the resulting error is large.
In order to solve the above technical problem, the embodiments of the present disclosure adopt the following technical solution: a position tracking member comprising a preset number of positioning balls and an object placing plate used for placing all the positioning balls.
Further, the object placing plate is made of polyvinyl chloride.
The embodiment of the present disclosure further provides a method for identifying a location ball, including: acquiring a two-dimensional scanning image of a tracking positioning piece, and determining a three-dimensional model of the tracking positioning piece according to the two-dimensional scanning image; traversing each layer of the three-dimensional model, and determining a connected region set corresponding to each layer; determining all voxels in the three-dimensional model and geometric information of all the voxels according to all the connected region sets; and according to the geometric information of all the voxels, screening out marked voxels with geometric information meeting preset conditions from all the voxels, and determining all the marked voxels to be images of the positioning spheres in the image space.
Further, before traversing each layer of the three-dimensional model and determining a connected region set corresponding to each layer, the method further includes: and segmenting the three-dimensional model based on a preset threshold value.
Further, before traversing each layer of the three-dimensional model and determining a connected region set corresponding to each layer, the method further includes: and carrying out filtering processing on the three-dimensional model.
Further, traversing each layer of the three-dimensional model, and determining a connected region set corresponding to each layer, includes: traversing each layer of the three-dimensional model, and marking all non-zero areas in each layer; and determining a connected region set corresponding to each image layer based on all the non-zero regions in each image layer.
Further, the determining all voxels in the three-dimensional model and the geometric information of all voxels according to all the connected region sets includes: based on the coordinates of all the connected regions in the connected region set in the image space, connecting the connected regions of adjacent layers to determine all voxels in the three-dimensional model; determining geometric information of all voxels based on their coordinates in image space.
Further, the step of screening out labeled voxels with geometric information meeting preset conditions from all the voxels includes: comparing the geometric information of each voxel with the actual size of the positioning sphere; and screening out voxels with the difference value between the geometric information and the actual size within a preset threshold from all the voxels to serve as marked voxels.
The embodiment of the present disclosure further provides a storage medium, which stores a computer program, and the computer program, when executed by a processor, implements the steps of the method in any one of the above technical solutions.
An embodiment of the present disclosure further provides an electronic device, which at least includes a memory and a processor, where the memory stores a computer program, and the processor implements the steps of the method in any one of the above technical solutions when executing the computer program on the memory.
The beneficial effects of the embodiments of the present disclosure lie in: the positioning tracking piece is placed directly on the patient's body, so no rigid connection between the positioning tracking piece and the human body is required, and damage to the human body is avoided; moreover, the positioning ball identification algorithm compares each voxel in the three-dimensional model with the actual size of the positioning ball, so the positioning balls are identified quickly in image space, shortening the identification period and improving identification accuracy.
Drawings
In order to more clearly illustrate the embodiments of the present disclosure or the technical solutions in the prior art, the drawings needed for the description of the embodiments or the prior art are briefly introduced below. Obviously, the drawings in the following description cover only some embodiments of the present disclosure, and those skilled in the art can derive other drawings from them without creative effort.
FIG. 1 shows a schematic view of a positioning tracker in a first embodiment of the present disclosure;
FIG. 2 shows a schematic view of a detent ball in a first embodiment of the present disclosure;
FIG. 3 is a schematic view of an image of a positioning and tracking member according to a first embodiment of the present disclosure;
fig. 4 shows a flow chart of a method of identifying a location ball in a second embodiment of the present disclosure;
fig. 5 shows a schematic structural diagram of an electronic device in a fourth embodiment of the present disclosure.
Detailed Description
Various aspects and features of the disclosure are described herein with reference to the drawings.
It will be understood that various modifications may be made to the embodiments of the present application. Accordingly, the foregoing description should not be construed as limiting, but merely as exemplifications of embodiments. Other modifications will occur to those skilled in the art within the scope and spirit of the disclosure.
The accompanying drawings, which are incorporated in and constitute a part of the specification, illustrate embodiments of the disclosure and, together with a general description of the disclosure given above, and the detailed description of the embodiments given below, serve to explain the principles of the disclosure.
These and other characteristics of the present disclosure will become apparent from the following description of preferred forms of embodiment, given as non-limiting examples, with reference to the attached drawings.
It should also be understood that, although the present disclosure has been described with reference to some specific examples, a person of skill in the art shall certainly be able to achieve many other equivalent forms of the disclosure, having the characteristics as set forth in the claims and hence all coming within the field of protection defined thereby.
The above and other aspects, features and advantages of the present disclosure will become more apparent in view of the following detailed description when taken in conjunction with the accompanying drawings.
Specific embodiments of the present disclosure are described hereinafter with reference to the accompanying drawings; however, it is to be understood that the disclosed embodiments are merely exemplary of the disclosure, which may be embodied in various forms. Well-known and/or repeated functions and structures have not been described in detail so as not to obscure the present disclosure with unnecessary detail. Therefore, specific structural and functional details disclosed herein are not to be interpreted as limiting, but merely as a basis for the claims and as a representative basis for teaching one skilled in the art to variously employ the present disclosure in virtually any appropriately detailed structure.
The specification may use the phrases "in one embodiment," "in another embodiment," "in yet another embodiment," or "in other embodiments," which may each refer to one or more of the same or different embodiments in accordance with the disclosure.
A first embodiment of the present disclosure provides a positioning tracking member that mainly includes an object placing plate 10 and a preset number of positioning balls 20. In this embodiment, 100 positioning balls provided on the object placing plate 10 are taken as an example. As shown in fig. 1, the 100 positioning balls 20 are arranged on the object placing plate 10 in a preset manner, preferably an array (for example, the 10 × 10 array in fig. 1). In other use cases, the preset number of positioning balls 20 may instead be arranged as concentric circles of different radii according to the area of the patient's portion to be detected and the size of the object placing plate 10, or in any other required arrangement, which is not limited in this embodiment.
Because the positioning tracking member in this embodiment is placed on the patient's portion to be detected when in use (for example, if the back of the patient is to be operated on, the member can be placed on the back of the patient), it is scanned by the medical imaging device and tracked by the optical tracking device at the same time. In order to distinguish the positioning tracking member from the tissue structure of the human body in the image captured by the medical imaging device, in this embodiment the object placing plate 10 is preferably made of polyvinyl chloride (PVC). The HU value (Hounsfield unit) of PVC differs greatly from that of the human body, so the skin of the human body and the positioning tracking member can be clearly distinguished during later recognition, which improves the accuracy of image recognition. The positioning ball 20 is made of a material that can be identified by both the medical imaging device and the optical tracking device; alternatively, an imaging material such as metal can be used for the core of the ball and a reflective material coated on its outer surface, so that the ball can be identified by both devices. Identifying the same positioning tracking member under the two different coordinate systems improves the accuracy when the coordinate systems of the two devices are aligned.
It should be noted that the positioning ball 20 may be a regular sphere, or an irregular positioning ball as shown in fig. 2, with a protrusion disposed on its spherical outer surface and a corresponding groove disposed in the object placing plate 10 to facilitate installation of the positioning ball 20; alternatively, the positioning ball 20 may be attached to the object placing plate 10 with glue or the like. The image of the irregular positioning ball after scanning and recognition is shown in fig. 3 (only the image of the positioning ball 20 is shown in fig. 3); the irregular shape makes it easier to distinguish the ball from human tissue in the image and also facilitates subsequent registration.
This embodiment therefore provides an improved positioning tracking member that can serve simultaneously as a positioning element for the medical imaging device and the optical tracking device. In use, it is simply placed on the patient's body, causing no damage to the body, which improves the patient's experience and leads to a better surgical outcome.
A second embodiment of the present disclosure provides a method for identifying a positioning ball, based on the positioning tracking member of the first embodiment, which identifies the position of each positioning ball of the member in image space. The image space in this embodiment is the coordinate space of the image captured by the medical imaging device; once the positions and coordinates of the positioning balls in image space are accurately determined, the subsequent registration with the actual surgical space is facilitated, ensuring accurate navigation of equipment such as a surgical robot. The flowchart of the identification method provided in this embodiment is shown in fig. 4 and mainly includes steps S1 to S4:
and S1, acquiring a two-dimensional scanning image of the tracking positioner, and determining a three-dimensional model of the tracking positioner according to the two-dimensional scanning image.
In practical use, the tracking positioning piece is placed near the part of the human body to be operated on, for example on the back of the patient, and the medical imaging device then scans it to obtain two-dimensional scanning images of the patient's portion to be detected together with the tracking positioning piece. In general, the imaging device may capture a plurality of images simultaneously, or it may scan slowly from one end of the portion to the other and capture an image at short intervals during the movement, finally producing a plurality of images of the portion to be detected, which necessarily include two-dimensional scanning images of the positioning and tracking member.
Further, three-dimensional reconstruction is performed on the plurality of two-dimensional scanning images, so that a three-dimensional model of the tracking positioning piece is obtained.
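The description does not specify how the reconstruction is implemented; as a minimal sketch (an assumption, not part of the original text), if the ordered two-dimensional scanning images are available as equally sized NumPy arrays, the reconstruction can be approximated by stacking them into a volume whose slice index becomes the Z axis of image space:

    import numpy as np

    def build_volume(slices):
        # Stack the ordered 2D scanning images into one 3D array; the slice
        # index becomes the Z axis of the image space.
        return np.stack(slices, axis=0)   # shape: (num_slices, height, width)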
S2, traversing each layer of the three-dimensional model, and determining a connected region set corresponding to each layer.
In this embodiment, each layer of the three-dimensional model may be one of the two-dimensional scanning images that form the model, or the layers may be re-divided from the reconstructed model. Each layer is then traversed to determine all the connected regions it contains, forming the connected region set corresponding to that layer. For example, in the first layer of the three-dimensional model, the captured content includes the patient's skin, bones and organs as well as the positioning balls and the object placing plate; because of their density differences, they present different HU values in the layer, that is, image boundaries appear between different tissues or components, and each of them appears in the layer as a connected region. A positioning ball, for instance, may appear in the layer as a circle, and an organ as some cross-sectional shape. It should be understood that the coordinates of any point in each connected region can be determined from the coordinate system of the image space, and from the coordinates of the points the number of points in a connected region, and hence the area of each connected region, can be determined. Specifically, when traversing each layer, the non-zero regions in the layer (that is, the regions with non-zero area) are first marked to reduce the number of regions handled in subsequent processing, and the connected region set corresponding to the layer is then determined from all its non-zero regions.
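A possible per-layer implementation is sketched below using SciPy's connected-component labelling; the specific routine and the "non-zero value" criterion are assumptions, since the description does not prescribe them:

    from scipy import ndimage

    def label_each_layer(volume):
        # For every layer, mark the non-zero areas and collect the
        # connected regions found in that layer.
        layer_region_sets = []
        for z in range(volume.shape[0]):
            nonzero = volume[z] != 0                 # mark all non-zero areas
            labeled, count = ndimage.label(nonzero)  # connected regions of this layer
            layer_region_sets.append((labeled, count))
        return layer_region_sets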
In practical implementation, before the traversal the reconstructed three-dimensional model can be segmented based on preset band-pass thresholds T_low and T_high, whose specific values are adjusted according to the HU value that the material of the object placing plate presents. Segmenting by the band-pass thresholds separates the part belonging to the tracking positioning piece from the patient's body, based on the higher HU values of the object placing plate and the positioning balls, and thus reduces the amount of data in subsequent processing. For example, if the HU value of human skin is 100, the HU value of the positioning ball is 150 and the HU value of the object placing plate is 190, the band-pass thresholds can be set to T_low = 140 and T_high = 200, i.e., the part of the three-dimensional model whose HU values lie between 140 and 200 is segmented out according to T_low and T_high. The segmented model then consists mainly of the positioning balls and the object placing plate, but it may also include other components or other tissue structures of the human body whose values lie between 140 and 200, so the subsequent processing steps are still required. On this basis, filtering may also be applied to the three-dimensional model to remove the influence of noise points in the segmented model on subsequent processing and improve identification accuracy; the filtering used in this embodiment is median filtering. It should be understood that, in actual use, the three-dimensional model may be segmented first and then filtered, or filtered first and then segmented, and this embodiment is not limited in this respect.
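The preprocessing described above can be sketched as follows; the threshold values 140 and 200 are the illustrative figures from the example, and the 3×3×3 median window is an assumption:

    import numpy as np
    from scipy import ndimage

    def preprocess(volume_hu, t_low=140.0, t_high=200.0, median_size=3):
        # Band-pass segmentation on HU values, keeping only the range that
        # corresponds to the positioning balls and the object placing plate,
        # followed by median filtering to suppress isolated noise points.
        mask = (volume_hu >= t_low) & (volume_hu <= t_high)
        segmented = np.where(mask, volume_hu, 0)
        return ndimage.median_filter(segmented, size=median_size)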
S3, determining all voxels in the three-dimensional model and the geometric information of all the voxels according to all the connected region sets.
Based on the sequential relationship among the layers of the three-dimensional model, after all layers have been traversed and their connected region sets obtained, the connected region sets of the layers are combined, using the coordinates of the connected regions, into three-dimensionally connected voxels; the geometric information of each voxel is then determined from the voxel and its corresponding coordinates, the geometric information mainly including the length, width and height of the voxel. Specifically, when determining the voxels in the three-dimensional model, the connected regions of adjacent layers are connected mainly on the basis of the coordinates of all the connected regions in image space, and the connected regions that in reality belong to the same tissue or component form the voxel corresponding to that tissue or component in the three-dimensional model. Taking a positioning ball as an example: after scanning, the physical ball necessarily appears in several consecutive layers, and the image presented in each layer is a circular connected region. The X-axis and Y-axis coordinates of these circular connected regions in the three-dimensional model may differ, but they share a unique Z-axis coordinate (for example, the Z-axis coordinate corresponding to the center of the positioning ball) and the same HU value; such connected regions in several layers are then considered connected regions belonging to the same tissue or component. The connected regions of all layers of the three-dimensional model are associated in this way to determine all voxels in the model.
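As a sketch of this step, a single three-dimensional connected-component labelling pass over the segmented volume is used below as an equivalent of linking the per-layer connected regions by their image-space coordinates; this substitution is an assumption rather than the literal procedure of the description:

    from scipy import ndimage

    def connect_layers(segmented):
        # Group non-zero points that touch across neighbouring layers into
        # three-dimensional connected components (the "voxels" of the
        # description), together with the number of components found.
        components, count = ndimage.label(segmented != 0)
        return components, count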
After all voxels have been determined, the geometric information corresponding to each voxel can be determined from the coordinates of each point in its connected regions. It should be noted that, in the three-dimensional model, the shapes and sizes of the voxels are irregular, and no single standard representation can describe all voxel shapes; therefore this embodiment uses length, width and height as the geometric information: the extent of a voxel's vertical projection on the X axis is taken as its length, the extent of its vertical projection on the Y axis as its width, and the extent of its vertical projection on the Z axis as its height. Representing the geometric information of irregularly shaped voxels in this form facilitates the subsequent screening of the voxels.
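A sketch of this measurement, assuming the components come from the labelling step above and that the physical spacing (millimetres per image-space step along Z, Y and X) is known from the scan parameters:

    from scipy import ndimage

    def component_extents(components, spacing=(1.0, 1.0, 1.0)):
        # Extent of each labelled component along Z, Y and X (its vertical
        # projections), converted to millimetres with the given spacing.
        extents = {}
        for label, slc in enumerate(ndimage.find_objects(components), start=1):
            if slc is None:
                continue
            extents[label] = tuple((s.stop - s.start) * sp
                                   for s, sp in zip(slc, spacing))  # (height, width, length)
        return extents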
S4, according to the geometric information of all the voxels, screening out from all the voxels the labeled voxels whose geometric information meets a preset condition, and determining all the labeled voxels to be the images of the positioning balls in image space.
In order to identify, among all the voxels of the three-dimensional model, the voxels corresponding to the positioning balls, this embodiment checks in turn, based on the actual size of the positioning ball, whether the geometric information of each voxel meets a preset condition, and the voxels meeting the condition are screened out as labeled voxels. The labeled voxels so determined are the voxels corresponding to the positioning balls, that is, the images of the positioning balls in image space.
Specifically, when detecting and screening the labeled voxels that meet the preset condition, the geometric information of each voxel is compared with the actual size of the positioning ball. It should be noted that in this embodiment the actual size of the positioning ball is represented in the same way as the geometric information of a voxel, that is, by the extents of the ball's vertical projections on the X, Y and Z axes. During comparison, the differences between the length, width and height of the voxel and the actual length, width and height of the positioning ball can be computed, and only a voxel whose differences lie within a preset threshold is taken as a labeled voxel. In practical use, taking the positioning ball shaped as shown in fig. 2 as an example, the radius of its upper hemisphere is 4.573 mm, the thickness of the protruding part in the middle is 2 mm with a diameter of 14.55 mm, and the radius of the lower hemisphere is 6.725 mm, so the extents of its vertical projections on the X, Y and Z axes are 14.55 mm, 14.55 mm and 13.298 mm respectively. Since the size of the voxel of the positioning ball in the three-dimensional model is usually the same as the actual size of the ball, the preset threshold can be set to 2 mm in this embodiment; that is, when the differences between the geometric information of a voxel and the actual size of the positioning ball are within 2 mm, the voxel is regarded as the image corresponding to a positioning ball and taken as a labeled voxel.
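A sketch of the screening step; the 13.298/14.55/14.55 mm extents and the 2 mm tolerance are the illustrative values quoted above, and the (Z, Y, X) ordering matches the extents computed in the previous sketch:

    def screen_marker_voxels(extents, ball_extents=(13.298, 14.55, 14.55), tol=2.0):
        # Keep components whose Z/Y/X extents each differ from the actual
        # ball extents by no more than `tol` millimetres; these labeled
        # voxels are taken as the images of the positioning balls.
        return [label for label, ext in extents.items()
                if all(abs(e - b) <= tol for e, b in zip(ext, ball_extents))]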
To facilitate the subsequent registration process, after the image and the specific position in image space of the voxel corresponding to each positioning ball have been identified, the coordinates of the center of the positioning ball in image space may be used to represent the specific position of that ball in image space; of course, the coordinates of other positions may also be chosen according to the actual situation, and this embodiment is not limited in this respect.
In the embodiment, the positioning and tracking piece is directly arranged on the body of the patient, and the rigid connection between the positioning and tracking piece and the human body is not required, so that the damage to the human body is avoided; and the identification of the positioning ball in the image space is quickly realized by combining the identification algorithm of the positioning ball and comparing each voxel in the three-dimensional model with the actual size of the positioning ball, so that the identification period is shortened and the identification accuracy is improved.
A third embodiment of the present disclosure provides a storage medium, which is a computer-readable medium storing a computer program that, when executed by a processor, implements the method provided by the second embodiment of the present disclosure, including the following steps S11 to S14:
S11, acquiring a two-dimensional scanning image of the tracking positioning piece, and determining a three-dimensional model of the tracking positioning piece according to the two-dimensional scanning image;
S12, traversing each layer of the three-dimensional model, and determining a connected region set corresponding to each layer;
S13, determining all voxels and geometric information of all voxels in the three-dimensional model according to all connected region sets;
S14, according to the geometric information of all the voxels, screening the labeled voxels of which the geometric information meets the preset condition from all the voxels, and determining all the labeled voxels to be the images of the positioning balls in the image space.
Before the computer program, when executed by the processor, traverses each layer of the three-dimensional model and determines the connected region set corresponding to each layer, the following step is also executed: segmenting the three-dimensional model based on a preset threshold value.
Before the computer program, when executed by the processor, traverses each layer of the three-dimensional model and determines the connected region set corresponding to each layer, the following step is also executed: filtering the three-dimensional model.
When the computer program is executed by the processor to traverse each layer of the three-dimensional model and determine the connected region set corresponding to each layer, the following steps are specifically executed by the processor: traversing each layer of the three-dimensional model, and marking all non-zero areas in each layer; and determining a connected region set corresponding to each image layer based on all non-zero regions in each image layer.
When the computer program is executed by the processor to determine all voxels and geometric information of all voxels in the three-dimensional model according to all connected region sets, the processor specifically executes the following steps: based on the coordinates of all the connected regions in the image space in all the connected region sets, connecting the connected regions of adjacent layers, and determining all voxels in the three-dimensional model; based on the coordinates of all voxels in image space, the geometric information of all voxels is determined.
When the computer program is executed by the processor to screen out the marked voxels with the geometric information meeting the preset condition from all the voxels, the processor specifically executes the following steps: comparing the geometric information of each voxel with the actual size of the positioning sphere; and screening out voxels with the difference value between the geometric information and the actual size within a preset threshold value from all the voxels to serve as marking voxels.
In the embodiment, the positioning and tracking piece is directly arranged on the body of the patient, and the rigid connection between the positioning and tracking piece and the human body is not required, so that the damage to the human body is avoided; and the identification of the positioning ball in the image space is quickly realized by combining the identification algorithm of the positioning ball and comparing each voxel in the three-dimensional model with the actual size of the positioning ball, so that the identification period is shortened and the identification accuracy is improved.
A fourth embodiment of the present disclosure provides an electronic device, whose schematic structural diagram may be as shown in fig. 5. The electronic device includes at least a memory 100 and a processor 200, the memory 100 stores a computer program, and the processor 200 implements the method provided in any embodiment of the present disclosure when executing the computer program on the memory 100. Illustratively, the computer program of the electronic device includes the following steps S21 to S24:
S21, acquiring a two-dimensional scanning image of the tracking positioning piece, and determining a three-dimensional model of the tracking positioning piece according to the two-dimensional scanning image;
S22, traversing each layer of the three-dimensional model, and determining a connected region set corresponding to each layer;
S23, determining all voxels and geometric information of all voxels in the three-dimensional model according to all connected region sets;
S24, according to the geometric information of all the voxels, screening the labeled voxels of which the geometric information meets the preset condition from all the voxels, and determining all the labeled voxels to be the images of the positioning balls in the image space.
Before the processor executes traversal on each layer of the three-dimensional model stored in the memory and determines a connected region set corresponding to each layer, the processor further executes the following steps: and segmenting the three-dimensional model based on a preset threshold value.
Before the processor executes traversal on each layer of the three-dimensional model stored in the memory and determines a connected region set corresponding to each layer, the processor further executes the following steps: and carrying out filtering processing on the three-dimensional model.
When the processor executes traversal on each layer of the three-dimensional model stored in the memory and determines a connected region set corresponding to each layer, the following computer program is specifically executed: traversing each layer of the three-dimensional model, and marking all non-zero areas in each layer; and determining a connected region set corresponding to each image layer based on all non-zero regions in each image layer.
When the processor determines all voxels in the three-dimensional model and the geometric information of all voxels according to all connected region sets stored in the memory, the following computer program is specifically executed: based on the coordinates of all the connected regions in the image space in all the connected region sets, connecting the connected regions of adjacent layers, and determining all voxels in the three-dimensional model; based on the coordinates of all voxels in image space, the geometric information of all voxels is determined.
When the processor executes the labeled voxels, which are stored in the memory and whose geometric information meets the preset condition, from all the voxels, the following computer program is specifically executed: comparing the geometric information of each voxel with the actual size of the positioning sphere; and screening out voxels with the difference value between the geometric information and the actual size within a preset threshold value from all the voxels to serve as marking voxels.
In practical implementation, the electronic device may be a medical imaging device such as a CT machine and an X-ray machine, or an electronic device such as another computer terminal and a tablet terminal that performs data communication with the medical imaging device, as long as the above method can be implemented correspondingly.
In the embodiment, the positioning and tracking piece is directly arranged on the body of the patient, and the rigid connection between the positioning and tracking piece and the human body is not required, so that the damage to the human body is avoided; and the identification of the positioning ball in the image space is quickly realized by combining the identification algorithm of the positioning ball and comparing each voxel in the three-dimensional model with the actual size of the positioning ball, so that the identification period is shortened and the identification accuracy is improved.
In some embodiments, the clients and servers may communicate using any currently known or future developed network protocol, such as HTTP (Hypertext Transfer Protocol), and may be interconnected with any form or medium of digital data communication (e.g., a communications network). Examples of communications networks include local area networks (LAN), wide area networks (WAN), the Internet, and peer-to-peer networks (e.g., ad hoc peer-to-peer networks), as well as any currently known or future developed networks.
The storage medium may be included in the electronic device; or may exist separately without being assembled into the electronic device.
The storage medium carries one or more programs that, when executed by the electronic device, cause the electronic device to: acquiring at least two internet protocol addresses; sending a node evaluation request comprising at least two internet protocol addresses to node evaluation equipment, wherein the node evaluation equipment selects the internet protocol addresses from the at least two internet protocol addresses and returns the internet protocol addresses; receiving an internet protocol address returned by the node evaluation equipment; wherein the obtained internet protocol address indicates an edge node in the content distribution network.
Alternatively, the storage medium carries one or more programs that, when executed by the electronic device, cause the electronic device to: receiving a node evaluation request comprising at least two internet protocol addresses; selecting an internet protocol address from at least two internet protocol addresses; returning the selected internet protocol address; wherein the received internet protocol address indicates an edge node in the content distribution network.
Computer program code for carrying out operations of the present disclosure may be written in any combination of one or more programming languages, including but not limited to an object oriented programming language such as Java, Smalltalk, C++, and conventional procedural programming languages, such as the "C" programming language or similar programming languages.
It should be noted that the storage media described above in this disclosure can be computer readable signal media or computer readable storage media or any combination of the two. A computer readable storage medium may be, for example, but not limited to, an electronic, magnetic, optical, electromagnetic, infrared, or semiconductor system, apparatus, or device, or any combination of the foregoing. More specific examples of the computer readable storage medium may include, but are not limited to: an electrical connection having one or more wires, a portable computer diskette, a hard disk, a Random Access Memory (RAM), a read-only memory (ROM), an erasable programmable read-only memory (EPROM or flash memory), an optical fiber, a portable compact disc read-only memory (CD-ROM), an optical storage device, a magnetic storage device, or any suitable combination of the foregoing. In the present disclosure, a computer readable storage medium may be any tangible medium that can contain, or store a program for use by or in connection with an instruction execution system, apparatus, or device. In contrast, in the present disclosure, a computer readable signal medium may comprise a propagated data signal with computer readable program code embodied therein, either in baseband or as part of a carrier wave. Such a propagated data signal may take many forms, including, but not limited to, electro-magnetic, optical, or any suitable combination thereof. A computer readable signal medium may also be any storage medium that can communicate, propagate, or transport a program for use by or in connection with an instruction execution system, apparatus, or device. Program code embodied on a storage medium may be transmitted using any appropriate medium, including but not limited to: electrical wires, optical cables, RF (radio frequency), etc., or any suitable combination of the foregoing.
The flowchart and block diagrams in the figures illustrate the architecture, functionality, and operation of possible implementations of systems, methods and computer program products according to various embodiments of the present disclosure. In this regard, each block in the flowchart or block diagrams may represent a module, segment, or portion of code, which comprises one or more executable instructions for implementing the specified logical function(s). It should also be noted that, in some alternative implementations, the functions noted in the block may occur out of the order noted in the figures. For example, two blocks shown in succession may, in fact, be executed substantially concurrently, or the blocks may sometimes be executed in the reverse order, depending upon the functionality involved. It will also be noted that each block of the block diagrams and/or flowchart illustration, and combinations of blocks in the block diagrams and/or flowchart illustration, can be implemented by special purpose hardware-based systems which perform the specified functions or acts, or combinations of special purpose hardware and computer instructions.
The units described in the embodiments of the present disclosure may be implemented by software or hardware. Where the name of an element does not in some cases constitute a limitation on the element itself.
For example, without limitation, exemplary types of hardware logic that may be used include Field Programmable Gate Arrays (FPGAs), Application Specific Integrated Circuits (ASICs), Application Specific Standard Products (ASSPs), systems on a chip (SOCs), complex programmable logic devices (CPLDs), and so forth.
In the context of this disclosure, a machine-readable medium may be a tangible medium that can contain, or store a program for use by or in connection with an instruction execution system, apparatus, or device. The machine-readable medium may be a machine-readable signal medium or a machine-readable storage medium. A machine-readable medium may include, but is not limited to, an electronic, magnetic, optical, electromagnetic, infrared, or semiconductor system, apparatus, or device, or any suitable combination of the foregoing. More specific examples of a machine-readable storage medium would include an electrical connection based on one or more wires, a portable computer diskette, a hard disk, a Random Access Memory (RAM), a read-only memory (ROM), an erasable programmable read-only memory (EPROM or flash memory), an optical fiber, a portable compact disc read-only memory (CD-ROM), an optical storage device, a magnetic storage device, or any suitable combination of the foregoing.
The foregoing description is only exemplary of the preferred embodiments of the disclosure and is illustrative of the principles of the technology employed. It will be appreciated by those skilled in the art that the scope of the disclosure herein is not limited to the particular combination of features described above, but also encompasses other embodiments formed by any combination of the above features or their equivalents without departing from the spirit of the disclosure, for example embodiments in which the above features are interchanged with (but not limited to) features disclosed in this disclosure having similar functions.
Further, while operations are depicted in a particular order, this should not be understood as requiring that such operations be performed in the particular order shown or in sequential order. Under certain circumstances, multitasking and parallel processing may be advantageous. Likewise, while several specific implementation details are included in the above discussion, these should not be construed as limitations on the scope of the disclosure. Certain features that are described in the context of separate embodiments can also be implemented in combination in a single embodiment. Conversely, various features that are described in the context of a single embodiment can also be implemented in multiple embodiments separately or in any suitable subcombination.
Although the subject matter has been described in language specific to structural features and/or methodological acts, it is to be understood that the subject matter defined in the appended claims is not necessarily limited to the specific features or acts described above. Rather, the specific features and acts described above are disclosed as example forms of implementing the claims.
While the present disclosure has been described in detail with reference to the embodiments, the present disclosure is not limited to the specific embodiments, and those skilled in the art can make various modifications and alterations based on the concept of the present disclosure, and the modifications and alterations should fall within the scope of the present disclosure as claimed.

Claims (10)

1. A position tracking member, comprising:
the device comprises a preset number of positioning balls and an object placing plate used for placing all the positioning balls.
2. The position tracking member of claim 1, wherein the object placing plate is made of polyvinyl chloride.
3. A method for identifying a location ball, comprising:
acquiring a two-dimensional scanning image of a tracking positioning piece, and determining a three-dimensional model of the tracking positioning piece according to the two-dimensional scanning image;
traversing each layer of the three-dimensional model, and determining a connected region set corresponding to each layer;
determining all voxels in the three-dimensional model and geometric information of all the voxels according to all the connected region sets;
and according to the geometric information of all the voxels, screening out marked voxels with geometric information meeting preset conditions from all the voxels, and determining all the marked voxels to be images of the positioning spheres in the image space.
4. The identification method according to claim 3, wherein before traversing each layer of the three-dimensional model and determining the set of connected regions corresponding to each layer, the method further comprises:
and segmenting the three-dimensional model based on a preset threshold value.
5. The identification method according to claim 3, wherein before traversing each layer of the three-dimensional model and determining the set of connected regions corresponding to each layer, the method further comprises:
and carrying out filtering processing on the three-dimensional model.
6. The identification method according to claim 3, wherein the traversing each layer of the three-dimensional model to determine the connected region set corresponding to each layer comprises:
traversing each layer of the three-dimensional model, and marking all non-zero areas in each layer;
and determining a connected region set corresponding to each image layer based on all the non-zero regions in each image layer.
7. The method according to claim 3, wherein said determining all voxels in said three-dimensional model and geometric information of said all voxels according to said set of connected regions comprises:
based on the coordinates of all the connected regions in the connected region set in the image space, connecting the connected regions of adjacent layers to determine all voxels in the three-dimensional model;
determining geometric information of all voxels based on their coordinates in image space.
8. The identification method according to any one of claims 3 to 7, wherein the screening out, from all the voxels, marked voxels whose geometric information meets the preset condition comprises:
comparing the geometric information of each voxel with an actual size of the positioning ball; and
screening out, from all the voxels, voxels for which a difference between the geometric information and the actual size is within a preset threshold, as the marked voxels.
9. A storage medium storing a computer program, wherein the computer program, when executed by a processor, implements the steps of the method of any one of claims 3 to 8.
10. An electronic device comprising at least a memory and a processor, the memory having a computer program stored thereon, wherein the processor, when executing the computer program on the memory, is configured to carry out the steps of the method of any one of claims 3 to 8.
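
The claims above specify the identification flow only at the level of method steps. The following sketch is not part of the patent text; it is a minimal, hypothetical Python illustration of how the steps of claims 3 to 8 could be prototyped, assuming the scanned slices have already been reconstructed into a NumPy volume. The use of NumPy/SciPy, the function name, and every parameter value (intensity threshold, ball diameter, size tolerance) are assumptions, not details taken from the disclosure.

```python
import numpy as np
from scipy import ndimage


def identify_positioning_balls(volume, ball_diameter_mm, voxel_spacing_mm,
                               intensity_threshold=1200.0, size_tolerance_mm=1.0):
    """Return image-space centroids of candidate positioning balls."""
    # Claim 4: segment the three-dimensional model with a preset threshold.
    mask = volume > intensity_threshold

    # Claim 5: filter the model (a small morphological opening removes speckle).
    mask = ndimage.binary_opening(mask)

    # Claim 6: traverse each layer and mark all non-zero regions per layer.
    layer_region_sets = []
    for z in range(mask.shape[0]):
        labels, _ = ndimage.label(mask[z])
        layer_region_sets.append(labels)

    # Claim 7: connect regions of adjacent layers into 3-D candidates. Stacking
    # the per-layer label maps and re-labelling in 3-D joins regions that overlap
    # between neighbouring slices (default 6-connectivity).
    candidates_volume, _ = ndimage.label(np.stack(layer_region_sets) > 0)
    bounding_boxes = ndimage.find_objects(candidates_volume)

    centroids = []
    for index, box in enumerate(bounding_boxes, start=1):
        # Geometric information: physical extent of the candidate on each axis.
        extent_mm = [(s.stop - s.start) * spacing
                     for s, spacing in zip(box, voxel_spacing_mm)]
        # Claim 8: keep candidates whose extent matches the known ball diameter.
        if all(abs(e - ball_diameter_mm) <= size_tolerance_mm for e in extent_mm):
            centroids.append(ndimage.center_of_mass(candidates_volume == index))
    return centroids
```

For example, for positioning balls a few millimetres in diameter one might call `identify_positioning_balls(volume, ball_diameter_mm=5.0, voxel_spacing_mm=(1.0, 0.5, 0.5))`. A real implementation would likely also check sphericity or voxel count rather than bounding-box extent alone, and would convert the returned image-space centroids to physical coordinates using the scanner geometry.
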
CN202010186662.8A 2020-03-17 2020-03-17 Positioning tracking piece, positioning ball identification method, storage medium and electronic device Pending CN111419399A (en)

Priority Applications (3)

Application Number Priority Date Filing Date Title
CN202010186662.8A CN111419399A (en) 2020-03-17 2020-03-17 Positioning tracking piece, positioning ball identification method, storage medium and electronic device
PCT/CN2021/081170 WO2021185260A1 (en) 2020-03-17 2021-03-16 Positioning tracking member, method for recognizing marker, storage medium, and electronic device
US17/639,220 US20220405965A1 (en) 2020-03-17 2021-03-16 Positioning and tracking member, method for recognizing marker, storage medium, and electronic device

Applications Claiming Priority (1)

Application Number Priority Date Filing Date Title
CN202010186662.8A CN111419399A (en) 2020-03-17 2020-03-17 Positioning tracking piece, positioning ball identification method, storage medium and electronic device

Publications (1)

Publication Number Publication Date
CN111419399A (en) 2020-07-17

Family

ID=71547976

Family Applications (1)

Application Number Title Priority Date Filing Date
CN202010186662.8A Pending CN111419399A (en) 2020-03-17 2020-03-17 Positioning tracking piece, positioning ball identification method, storage medium and electronic device

Country Status (3)

Country Link
US (1) US20220405965A1 (en)
CN (1) CN111419399A (en)
WO (1) WO2021185260A1 (en)

Cited By (1)

* Cited by examiner, † Cited by third party
Publication number Priority date Publication date Assignee Title
CN115272288A (en) * 2022-08-22 2022-11-01 杭州微引科技有限公司 Medical image mark point automatic identification method, electronic equipment and storage medium

Families Citing this family (3)

* Cited by examiner, † Cited by third party
Publication number Priority date Publication date Assignee Title
CN114612536B (en) * 2022-03-22 2022-11-04 北京诺亦腾科技有限公司 Method, device and equipment for identifying three-dimensional model of object and readable storage medium
CN116051553B (en) * 2023-03-30 2023-06-09 天津医科大学朱宪彝纪念医院(天津医科大学代谢病医院、天津代谢病防治中心) Method and device for marking inside three-dimensional medical model
CN117316393B (en) * 2023-11-30 2024-02-20 北京维卓致远医疗科技发展有限责任公司 Method, apparatus, device, medium and program product for precision adjustment

Family Cites Families (4)

* Cited by examiner, † Cited by third party
Publication number Priority date Publication date Assignee Title
US10368838B2 (en) * 2008-03-31 2019-08-06 Intuitive Surgical Operations, Inc. Surgical tools for laser marking and laser cutting
US9675419B2 (en) * 2013-08-21 2017-06-13 Brachium, Inc. System and method for automating medical procedures
WO2018067794A1 (en) * 2016-10-05 2018-04-12 Nuvasive, Inc. Surgical navigation system and related methods
CN111388092B (en) * 2020-03-17 2023-04-07 京东方科技集团股份有限公司 Positioning tracking piece, registration method, storage medium and electronic equipment

Patent Citations (34)

* Cited by examiner, † Cited by third party
Publication number Priority date Publication date Assignee Title
CN1711968A (en) * 2005-05-26 2005-12-28 西安理工大学 Rapid progressive three-dimensional reconstructing method of CT image from direct volume rendering
CN1758284A (en) * 2005-10-17 2006-04-12 浙江大学 Method for quickly rebuilding-up three-D jaw model from tomographic sequence
US20080242978A1 (en) * 2007-03-29 2008-10-02 Medtronic Navigation, Inc. Method and apparatus for registering a physical space to image space
US8548563B2 (en) * 2007-03-29 2013-10-01 Medtronic Navigation, Inc. Method for registering a physical space to image space
US20080319313A1 (en) * 2007-06-22 2008-12-25 Michel Boivin Computer-assisted surgery system with user interface
CN101533518A (en) * 2009-04-21 2009-09-16 山东大学 Method for re-establishing surface of three dimensional target object by unparallel dislocation image sequence
CN102147919A (en) * 2010-02-10 2011-08-10 昆明医学院第一附属医院 Intraoperative registration method for correcting preoperative three-dimensional image and device
CN102319116A (en) * 2011-05-26 2012-01-18 上海交通大学 Method for increasing three-dimensional positioning accuracy of surgical instrument by using mechanical structure
CN103654965A (en) * 2013-12-03 2014-03-26 华南理工大学 Mark point used for optical surgical navigation system and image extraction method
CN103679810A (en) * 2013-12-26 2014-03-26 海信集团有限公司 Method for three-dimensional reconstruction of liver computed tomography (CT) image
CN105078577A (en) * 2014-05-14 2015-11-25 斯瑞克欧洲控股I公司 Navigation system for and method of tracking the position of a work target
CN204219049U (en) * 2014-11-06 2015-03-25 上海逸动医学科技有限公司 Reflective ball cover and witch ball
CN104331924A (en) * 2014-11-26 2015-02-04 西安冉科信息技术有限公司 Three-dimensional reconstruction method based on single camera SFS algorithm
CN105055021A (en) * 2015-06-30 2015-11-18 华南理工大学 Calibration device and calibration method for surgical navigation puncture needle
CN105055022A (en) * 2015-06-30 2015-11-18 华南理工大学 Surgical navigation general marking structure and image position obtaining method thereof
CN205054433U (en) * 2015-08-31 2016-03-02 北京天智航医疗科技股份有限公司 A optical tracking instrument for navigating operation
CN205215355U (en) * 2015-12-22 2016-05-11 仲恺农业工程学院 Mark point applied to optical operation navigation
CN107182200A (en) * 2015-12-24 2017-09-19 中国科学院深圳先进技术研究院 Minimally invasive operation navigating system
CN205514897U (en) * 2016-01-28 2016-08-31 北京柏惠维康科技有限公司 A formation of image tag for operation navigation location
CN107468351A (en) * 2016-06-08 2017-12-15 北京天智航医疗科技股份有限公司 A kind of surgery positioning device, alignment system and localization method
CN106139423A (en) * 2016-08-04 2016-11-23 梁月强 A kind of image based on photographic head guides seeds implanted system
CN106388849A (en) * 2016-10-25 2017-02-15 安徽优尼科医疗科技有限公司 Medical image in-vitro positioning identification point with compatibility
CN106890031A (en) * 2017-04-11 2017-06-27 东北大学 A kind of label identification and locating mark points method and operation guiding system
CN107596578A (en) * 2017-09-21 2018-01-19 上海联影医疗科技有限公司 The identification and location determining method of alignment mark, imaging device and storage medium
CN108053433A (en) * 2017-11-28 2018-05-18 浙江工业大学 A kind of multi-modal arteria carotis MRI method for registering based on physical alignment and outline
CN108852496A (en) * 2018-05-15 2018-11-23 杭州三坛医疗科技有限公司 The Attitude Display System of guide channel and guide channel
CN108670301A (en) * 2018-06-06 2018-10-19 西北工业大学 A kind of backbone transverse process localization method based on ultrasonic image
CN208974098U (en) * 2018-07-18 2019-06-14 南京博峰精准医疗科技有限公司 For point data acquisition probe at the operation of operation on hip joint
CN109091229A (en) * 2018-09-13 2018-12-28 上海逸动医学科技有限公司 The flexible positioning device and air navigation aid to navigate suitable for robotic surgery under X-ray
CN209673111U (en) * 2018-12-28 2019-11-22 北京诺亦腾科技有限公司 A kind of position and attitude navigation utensil
CN110163867A (en) * 2019-04-02 2019-08-23 成都真实维度科技有限公司 A method of divided automatically based on lesion faulted scanning pattern
CN110400286A (en) * 2019-06-05 2019-11-01 山东科技大学 The detection localization method of metal needle in a kind of X ray CT image
CN110570515A (en) * 2019-09-03 2019-12-13 天津工业大学 method for carrying out human skeleton three-dimensional modeling by utilizing CT (computed tomography) image
CN110706241A (en) * 2019-09-30 2020-01-17 东软医疗系统股份有限公司 Three-dimensional focus area extraction method and device

Also Published As

Publication number Publication date
US20220405965A1 (en) 2022-12-22
WO2021185260A1 (en) 2021-09-23

Similar Documents

Publication Publication Date Title
CN111419399A (en) Positioning tracking piece, positioning ball identification method, storage medium and electronic device
US9672607B2 (en) Identification and registration of multi-marker jig
CA2651437C (en) Methods and systems for segmentation using boundary reparameterization
CN111388092B (en) Positioning tracking piece, registration method, storage medium and electronic equipment
CN106890031B (en) Marker identification and marking point positioning method and operation navigation system
US20100080347A1 (en) Method for Defining an Individual Coordination System for a Breast of a Female Patient
US20210374452A1 (en) Method and device for image processing, and elecrtonic equipment
CN110464462B (en) Image navigation registration system for abdominal surgical intervention and related device
CN108778134B (en) System and method for characterizing the central axis of a bone from a 3D anatomical image
CN106794051A (en) Judge the method and system of operative site position of probe
JP2014104354A (en) Method and device for navigating ct scanning by a marker
CA2940256A1 (en) Methods and systems for performing segmentation and registration of images using neutrosophic similarity scores
CN111297480A (en) Tracking positioning part, registration method, storage medium and electronic equipment
CN105556567B (en) Method and system for vertebral location detection
CN116883471B (en) Line structured light contact-point-free cloud registration method for chest and abdomen percutaneous puncture
CN114565517A (en) Image denoising method and device for infrared camera and computer equipment
CN108670301A (en) A kind of backbone transverse process localization method based on ultrasonic image
CN107845106B (en) Utilize the medical image registration method of improved NNDR strategy
US20200305837A1 (en) System and method for guided ultrasound imaging
CN111345886A (en) Magnetic resonance image and ultrasonic transducer coordinate system conversion method, device, equipment and storage medium
Malian et al. Development of a robust photogrammetric metrology system for monitoring the healing of bedsores
CN117437267A (en) Image registration method based on kidney characteristics and related equipment
CN115797416A (en) Image reconstruction method, device and equipment based on point cloud image and storage medium
CN112244884B (en) Bone image acquisition method, device, console equipment and CT system
CN105678738A (en) Positioning method and device for datum point in medical image

Legal Events

Date Code Title Description
PB01 Publication
SE01 Entry into force of request for substantive examination