CN112053400A - Data processing method and robot navigation system - Google Patents

Data processing method and robot navigation system

Info

Publication number
CN112053400A
Authority
CN
China
Prior art keywords
point
lesion
dimensional
bone
model
Prior art date
Legal status (The legal status is an assumption and is not a legal conclusion. Google has not performed a legal analysis and makes no representation as to the accuracy of the status listed.)
Granted
Application number
CN202010943601.1A
Other languages
Chinese (zh)
Other versions
CN112053400B (en)
Inventor
张宇卉
谢永召
Current Assignee (The listed assignees may be inaccurate. Google has not performed a legal analysis and makes no representation or warranty as to the accuracy of the list.)
Beijing Baihui Weikang Technology Co Ltd
Original Assignee
Beijing Baihui Weikang Technology Co Ltd
Priority date (The priority date is an assumption and is not a legal conclusion. Google has not performed a legal analysis and makes no representation as to the accuracy of the date listed.)
Filing date
Publication date
Application filed by Beijing Baihui Weikang Technology Co Ltd
Priority to CN202010943601.1A
Publication of CN112053400A
Application granted
Publication of CN112053400B
Legal status: Active



Classifications

    • G PHYSICS
    • G06 COMPUTING; CALCULATING OR COUNTING
    • G06T IMAGE DATA PROCESSING OR GENERATION, IN GENERAL
    • G06T7/00 Image analysis
    • G06T7/70 Determining position or orientation of objects or cameras
    • A HUMAN NECESSITIES
    • A61 MEDICAL OR VETERINARY SCIENCE; HYGIENE
    • A61B DIAGNOSIS; SURGERY; IDENTIFICATION
    • A61B34/00 Computer-aided surgery; Manipulators or robots specially adapted for use in surgery
    • A61B34/10 Computer-aided planning, simulation or modelling of surgical operations
    • A HUMAN NECESSITIES
    • A61 MEDICAL OR VETERINARY SCIENCE; HYGIENE
    • A61B DIAGNOSIS; SURGERY; IDENTIFICATION
    • A61B34/00 Computer-aided surgery; Manipulators or robots specially adapted for use in surgery
    • A61B34/20 Surgical navigation systems; Devices for tracking or guiding surgical instruments, e.g. for frameless stereotaxis
    • A HUMAN NECESSITIES
    • A61 MEDICAL OR VETERINARY SCIENCE; HYGIENE
    • A61B DIAGNOSIS; SURGERY; IDENTIFICATION
    • A61B34/00 Computer-aided surgery; Manipulators or robots specially adapted for use in surgery
    • A61B34/30 Surgical robots
    • A HUMAN NECESSITIES
    • A61 MEDICAL OR VETERINARY SCIENCE; HYGIENE
    • A61B DIAGNOSIS; SURGERY; IDENTIFICATION
    • A61B34/00 Computer-aided surgery; Manipulators or robots specially adapted for use in surgery
    • A61B34/70 Manipulators specially adapted for use in surgery
    • G PHYSICS
    • G06 COMPUTING; CALCULATING OR COUNTING
    • G06T IMAGE DATA PROCESSING OR GENERATION, IN GENERAL
    • G06T17/00 Three dimensional [3D] modelling, e.g. data description of 3D objects
    • G PHYSICS
    • G06 COMPUTING; CALCULATING OR COUNTING
    • G06T IMAGE DATA PROCESSING OR GENERATION, IN GENERAL
    • G06T7/00 Image analysis
    • G06T7/10 Segmentation; Edge detection
    • G06T7/11 Region-based segmentation
    • G PHYSICS
    • G06 COMPUTING; CALCULATING OR COUNTING
    • G06T IMAGE DATA PROCESSING OR GENERATION, IN GENERAL
    • G06T7/00 Image analysis
    • G06T7/90 Determination of colour characteristics
    • A HUMAN NECESSITIES
    • A61 MEDICAL OR VETERINARY SCIENCE; HYGIENE
    • A61B DIAGNOSIS; SURGERY; IDENTIFICATION
    • A61B34/00 Computer-aided surgery; Manipulators or robots specially adapted for use in surgery
    • A61B34/10 Computer-aided planning, simulation or modelling of surgical operations
    • A61B2034/101 Computer-aided simulation of surgical operations
    • A61B2034/105 Modelling of the patient, e.g. for ligaments or bones
    • A HUMAN NECESSITIES
    • A61 MEDICAL OR VETERINARY SCIENCE; HYGIENE
    • A61B DIAGNOSIS; SURGERY; IDENTIFICATION
    • A61B34/00 Computer-aided surgery; Manipulators or robots specially adapted for use in surgery
    • A61B34/10 Computer-aided planning, simulation or modelling of surgical operations
    • A61B2034/107 Visualisation of planned trajectories or target regions
    • A HUMAN NECESSITIES
    • A61 MEDICAL OR VETERINARY SCIENCE; HYGIENE
    • A61B DIAGNOSIS; SURGERY; IDENTIFICATION
    • A61B34/00 Computer-aided surgery; Manipulators or robots specially adapted for use in surgery
    • A61B34/20 Surgical navigation systems; Devices for tracking or guiding surgical instruments, e.g. for frameless stereotaxis
    • A61B2034/2046 Tracking techniques
    • A61B2034/2065 Tracking using image or pattern recognition
    • A HUMAN NECESSITIES
    • A61 MEDICAL OR VETERINARY SCIENCE; HYGIENE
    • A61B DIAGNOSIS; SURGERY; IDENTIFICATION
    • A61B34/00 Computer-aided surgery; Manipulators or robots specially adapted for use in surgery
    • A61B34/20 Surgical navigation systems; Devices for tracking or guiding surgical instruments, e.g. for frameless stereotaxis
    • A61B2034/2068 Surgical navigation systems; Devices for tracking or guiding surgical instruments, e.g. for frameless stereotaxis using pointers, e.g. pointers having reference marks for determining coordinates of body points
    • G PHYSICS
    • G06 COMPUTING; CALCULATING OR COUNTING
    • G06T IMAGE DATA PROCESSING OR GENERATION, IN GENERAL
    • G06T2207/00 Indexing scheme for image analysis or image enhancement
    • G06T2207/10 Image acquisition modality
    • G06T2207/10072 Tomographic images
    • G06T2207/10081 Computed x-ray tomography [CT]
    • G PHYSICS
    • G06 COMPUTING; CALCULATING OR COUNTING
    • G06T IMAGE DATA PROCESSING OR GENERATION, IN GENERAL
    • G06T2207/00 Indexing scheme for image analysis or image enhancement
    • G06T2207/10 Image acquisition modality
    • G06T2207/10072 Tomographic images
    • G06T2207/10088 Magnetic resonance imaging [MRI]

Landscapes

  • Engineering & Computer Science (AREA)
  • Health & Medical Sciences (AREA)
  • Life Sciences & Earth Sciences (AREA)
  • Surgery (AREA)
  • Physics & Mathematics (AREA)
  • Biomedical Technology (AREA)
  • Veterinary Medicine (AREA)
  • Robotics (AREA)
  • Theoretical Computer Science (AREA)
  • General Physics & Mathematics (AREA)
  • Nuclear Medicine, Radiotherapy & Molecular Imaging (AREA)
  • Heart & Thoracic Surgery (AREA)
  • Medical Informatics (AREA)
  • Molecular Biology (AREA)
  • Animal Behavior & Ethology (AREA)
  • General Health & Medical Sciences (AREA)
  • Public Health (AREA)
  • Computer Vision & Pattern Recognition (AREA)
  • Computer Graphics (AREA)
  • Geometry (AREA)
  • Software Systems (AREA)
  • Apparatus For Radiation Diagnosis (AREA)
  • Magnetic Resonance Imaging Apparatus (AREA)

Abstract

The invention provides a data processing method and a robot navigation system. The data processing method comprises the following steps: acquiring a medical image sequence, the medical image sequence comprising a plurality of two-dimensional medical images; determining a three-dimensional bone model and a three-dimensional lesion model according to the gray values of the pixel points in the plurality of medical images and the spatial position information of the medical images; and mapping the three-dimensional lesion model along a lesion mapping path to obtain a projection region of the three-dimensional lesion model on the three-dimensional bone model, so that a surgical position can be determined according to the projection region. The method facilitates determining the surgical position.

Description

Data processing method and robot navigation system
Technical Field
The present application relates to the technical field of medical devices, and in particular to a data processing method and a robot navigation system.
Background
In modern medicine, surgery is a very important means of preserving a patient's life and health: a doctor can treat abnormal conditions inside the patient's body surgically and thereby give the patient appropriate treatment.
In the prior art, the doctor manually inputs the coordinate position of the scalpel, and the robot navigation system controls the movement of the scalpel according to the input coordinate position. In this way, however, the doctor can determine the coordinate position of the operation only from two-dimensional medical images (such as CT images and MRI images) combined with personal experience. The result therefore depends heavily on individual experience, and it can easily happen that the injury to the patient is larger than necessary or that the treatment efficiency falls short of expectations.
Disclosure of Invention
To address the above problem, embodiments of the present application provide a data processing method that at least partially solves it.
According to one aspect of the present application, there is provided a data processing method comprising: acquiring a medical image sequence, the medical image sequence comprising a plurality of two-dimensional medical images; determining a three-dimensional bone model and a three-dimensional lesion model according to the gray values of the pixel points in the plurality of medical images and the spatial position information of the medical images; and mapping the three-dimensional lesion model along a lesion mapping path to obtain a projection region of the three-dimensional lesion model on the three-dimensional bone model, so that a surgical position can be determined according to the projection region.
According to another aspect of the present application, there is provided a robot navigation system comprising: a processor for transmitting and receiving control signals; a memory for storing at least one executable command that causes the processor to perform the operations of the data processing method described above; and an actuator for receiving the control signals sent by the processor and marking an operation region on the patient according to the control signals.
According to the embodiments of the present application, the medical images of the patient are segmented, a three-dimensional lesion model and a three-dimensional bone model are generated, and the three-dimensional lesion model is mapped onto the three-dimensional bone model to obtain the projection region. This alleviates the problems that arise when a doctor can determine the surgical position only from two-dimensional medical images and personal experience, namely excessive injury to the patient and treatment efficiency that falls short of expectations.
Drawings
To illustrate the embodiments of the present application or the technical solutions in the prior art more clearly, the drawings needed in the description of the embodiments or the prior art are briefly introduced below. The drawings described below are obviously only some embodiments of the present application, and those skilled in the art can derive other drawings from them.
Fig. 1 shows a flowchart of the steps of a data processing method according to the first embodiment of the present application;
Fig. 2 shows a flowchart of the steps of a data processing method according to the second embodiment of the present application;
Fig. 3 shows a segmentation strategy diagram of a bounding box according to the second embodiment of the present application;
Fig. 4 illustrates a specific splitting sequence of the segmentation strategy according to the second embodiment of the present application;
Fig. 5 shows the projection of a three-dimensional lesion model on a three-dimensional bone model according to the second embodiment of the present application.
Detailed Description
To help those skilled in the art better understand the technical solutions in the embodiments of the present application, these solutions are described below clearly and completely with reference to the drawings of the embodiments. The described embodiments are obviously only a part, not all, of the embodiments of the present application. All other embodiments obtained by a person of ordinary skill in the art based on the embodiments of the present application shall fall within the protection scope of the embodiments of the present application.
First Embodiment
The data processing method in this embodiment includes the following steps:
step S102: acquiring a medical image sequence, wherein the medical image sequence comprises a plurality of two-dimensional medical images.
A medical image is an image formed by scanning or imaging a human body or a part of it with medical imaging equipment; common examples include CT (Computed Tomography) images and MRI (Magnetic Resonance Imaging) images.
Because different tissues of the human body generally appear differently in a medical image (for example, the gray values of the pixel points corresponding to bone tissue, skin, muscle and so on generally differ), a doctor can obtain the relative positions of the internal physiological tissues and any health problems by analyzing the information in the medical image. For example, if the scanned region is the head (for convenience, the head is also used as the example in the remaining steps of this embodiment), the doctor can clearly see the relative positions of the bone (such as the skull) and the brain in the medical image; and because a lesion region differs in gray value on the medical image, the doctor can determine from a possible tumor or inflammation whether the human body has a lesion.
In this embodiment, the medical image sequence comprises a plurality of two-dimensional medical images. CT and MRI scans, for example, are tomographic, and each scan generally yields a plurality of two-dimensional medical images which together describe, layer by layer, the internal condition of the scanned region.
This embodiment does not limit the type or number of the acquired medical image sequences, nor the number of two-dimensional medical images. Illustratively, however, the medical image sequence includes at least one of a CT image sequence and an MRI image sequence: the CT image sequence contains the bone information, and the MRI image sequence contains the lesion information. Different medical image sequences can thus be selected for processing according to the clarity with which each displays its target. Taking a CT image sequence of the head as an example, it includes two-dimensional medical images of a plurality of different slices of the head.
Step S104: determining a three-dimensional bone model and a three-dimensional lesion model according to the gray values of the pixel points in the plurality of medical images and the spatial position information of the medical images.
A medical image, like any other image, displays its different areas through the gray values of its pixel points, and the gray values of the pixel points corresponding to a lesion and to bone differ markedly. Because the slice position scanned by each medical image in the sequence corresponds to a slice position of the actual scanned object, the spatial position information of each pixel point can be determined from the spatial position information of the medical image, that is, from the slice position scanned by the image and the position of the pixel point within the image. In this embodiment, the spatial position information of a medical image may be represented by its coordinates in a set coordinate system, which may be any suitable coordinate system, such as the coordinate system of a surgical robotic arm or of the physical three-dimensional space. To see the relative position of the lesion and the bone in the scanned region more completely and thus determine the surgical position, this embodiment determines the three-dimensional bone model and the three-dimensional lesion model according to the gray values of the pixel points in the plurality of medical images and the spatial position information of the images. For example, bone points are determined from the gray values of the pixel points in the CT medical images, and the three-dimensional bone model is generated by combining the position information of the bone points; lesion points are determined from the gray values of the pixel points in the MRI medical images, and the three-dimensional lesion model is generated by combining the position information of the lesion points.
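For illustration only (this sketch is not part of the claimed method), the combination of an in-plane pixel position with the per-slice spatial position information can be expressed as follows; the function and parameter names are hypothetical, and the slice origin, axis directions and pixel spacing would come from the image metadata.

    import numpy as np

    def pixel_to_world(i, j, slice_origin, row_dir, col_dir, spacing):
        """Map pixel (row i, column j) of one slice to a 3-D point, using the
        slice's spatial position information (origin, axes, pixel spacing)."""
        return (np.asarray(slice_origin, float)
                + i * spacing[0] * np.asarray(row_dir, float)
                + j * spacing[1] * np.asarray(col_dir, float))

    # A bone or lesion point kept by the gray-value condition is stored
    # together with its world coordinates computed this way.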
With the three-dimensional bone model and the three-dimensional lesion model, the doctor can see the relative position of the lesion and the bone intuitively, conveniently and accurately.
Step S106: mapping the three-dimensional lesion model along the lesion mapping path to obtain a projection region of the three-dimensional lesion model on the three-dimensional bone model, so that the surgical position can be determined according to the projection region.
Because the three-dimensional bone model and the three-dimensional lesion model of the embodiments of the present application are essentially consistent with the actual sizes and positions of the bone and the lesion in the scanned region, the projection region obtained by mapping the lesion model onto the bone model lets the doctor relate the projection of the actual lesion to a region on the skull of the actual patient, which makes the surgical position convenient to determine.
This embodiment does not limit the shape of the lesion mapping path; it may, for example, be a straight line. Nor does it limit the rule for selecting the path: the doctor may select a path, or a path meeting a given condition, such as the shortest path between the three-dimensional lesion model and the three-dimensional bone model, may be selected automatically.
Once the projection region of the three-dimensional lesion model on the three-dimensional bone model is obtained, the doctor can accurately determine the surgical position from it. This lets the doctor determine a better surgical position, reduces the dependence on personal experience, and avoids the excessive injury to the patient and unsatisfactory treatment efficiency that an oversized surgical opening would cause when the lesion is removed at the determined position.
Obviously, the doctor may appropriately adjust the region marked on the head according to the actual situation, as long as the region still meets the requirements of the operation; this embodiment places no limit on such adjustment.
According to this embodiment, a three-dimensional bone model and a three-dimensional lesion model are determined based on the gray values of the pixel points in a plurality of medical images and the spatial position information of the images; the three-dimensional lesion model is then mapped along the lesion mapping path to obtain its projection region on the three-dimensional bone model, and the surgical position is determined according to that region. This alleviates the problems of excessive injury to the patient and unsatisfactory treatment efficiency that arise when the doctor can determine the surgical position only from two-dimensional medical images and personal experience.
Second Embodiment
Referring to fig. 2, a flowchart of steps of a data processing method according to a second embodiment of the present application is shown.
In this embodiment, the data processing method includes the steps of:
step S202: acquiring a medical image sequence, wherein the medical image sequence comprises a plurality of two-dimensional medical images.
The medical image sequence can show information about the lesion and the bone, such as their positional relationship and sizes. For convenience of description, this embodiment is described using a lesion inside the cranium as an example. The acquired medical image sequence may be a CT image sequence, an MRI image sequence, and so on; this embodiment does not limit the type or number of the sequences, as long as they meet the actual requirements. As an illustrative example, the acquired medical image sequence is a combination of a CT image sequence and an MRI image sequence; on this basis, to display the lesion and the bone region more conveniently and clearly, this embodiment may rely mainly on the CT image sequence for the bone and mainly on the MRI image sequence for the lesion.
Step S204: determining a three-dimensional bone model and a three-dimensional lesion model according to the gray values of the pixel points in the plurality of medical images and the spatial position information of the medical images.
Specifically, step S204 includes the following substeps:
Substep S2041: segmenting the plurality of medical images according to their spatial position information and the gray values of their pixel points to obtain a three-dimensional bone point set and a three-dimensional lesion point set.
Since the coordinate systems used by the medical images may differ (in particular, the CT medical images corresponding to the bone and the MRI medical images corresponding to the lesion may use different coordinate systems), the medical images need to be registered to ensure that their spatial position information is accurate, so that the segmentation based on gray values and spatial position information is more accurate.
In a possible approach, substep S2041 includes process I and process II.
Process I: performing position registration on the plurality of medical images to obtain the spatial position information of each medical image.
Because the coordinate systems corresponding to different medical images may differ, the medical images can be registered to ensure that the generated three-dimensional bone model and three-dimensional lesion model belong to the same coordinate system and that their positions are accurate. Take a CT medical image sequence and an MRI medical image sequence as an example, where the CT sequence is used to acquire the three-dimensional bone point set and the MRI sequence to acquire the three-dimensional lesion point set: if the two sequences were left in different coordinate systems, the two point sets could not be placed in the correct relative position. To avoid this problem, this embodiment aligns the images through position registration, so that the three-dimensional bone point set and the three-dimensional lesion point set can be determined more reliably.
The position registration of the embodiments of the present application brings the pixel points of all the medical images into correspondence and places all the medical images in the same coordinate system. The present application does not limit the specific steps of the position registration; as an example, the correspondence may be established through a functional conversion relationship or a coordinate transformation.
For example, a reference point a is specified in medical image A, a reference point b corresponding to it is specified in medical image B, and the coordinate transformation is determined from the coordinates of reference point a and reference point b. After registration, the medical images can be segmented.
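As an illustrative sketch of such a coordinate transformation (the patent does not prescribe a particular algorithm), a rigid transform can be estimated with the Kabsch method, assuming at least three corresponding reference points have been picked in the two images:

    import numpy as np

    def rigid_transform(points_a, points_b):
        """Least-squares rotation R and translation t with R @ a + t ~ b,
        from corresponding reference points (n >= 3) in the two images."""
        a = np.asarray(points_a, float)
        b = np.asarray(points_b, float)
        ca, cb = a.mean(axis=0), b.mean(axis=0)
        # Kabsch algorithm on the centred point pairs.
        u, _, vt = np.linalg.svd((a - ca).T @ (b - cb))
        d = np.sign(np.linalg.det(u @ vt))           # avoid a reflection
        r = (u @ np.diag([1.0, 1.0, d]) @ vt).T
        return r, cb - r @ ca

Applying the returned rotation and translation to every pixel point of one image places both images in the same coordinate system.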
Process II: segmenting the medical images according to the gray values of the pixel points and preset segmentation conditions, and determining the three-dimensional bone point set and the three-dimensional lesion point set according to the segmented target points and their position information.
The medical images can be segmented in different ways, such as automatic segmentation, threshold segmentation and manual segmentation. Depending on the target to be segmented, a target point may be a bone point or a lesion point: when a CT medical image is segmented, the target points are bone points; likewise, when an MRI medical image is segmented, the target points are lesion points.
For example, when automatic segmentation or threshold segmentation is used, the segmentation conditions include a bone segmentation condition for the bone and a lesion segmentation condition for the lesion: the bone segmentation condition comprises a gray threshold or a gray threshold range for bone points, and/or the lesion segmentation condition comprises a gray threshold or a gray threshold range for lesion points.
If automatic segmentation is used, a gray threshold is first set, all points in the region of interest are traversed, the points whose gray value equals the threshold are collected, and these points are segmented out of the medical images to obtain the three-dimensional bone point set and the three-dimensional lesion point set. Taking the bone as a further example, if the bone segmentation condition is a gray threshold for bone points, the three-dimensional bone point set is obtained as follows: a region of interest defined in the model coordinate space is obtained (the region of interest is the space in which, after analysis, the doctor believes a lesion may exist and an operation may be performed; it contains the whole lesion region and at least part of the bone region, and this embodiment does not limit its shape, which may be a cube or a sphere, for example); all pixel points in the region of interest are traversed; the points whose gray value equals the bone-point gray threshold are collected; and these points are segmented out of the medical images to form the three-dimensional bone point set.
If threshold segmentation is used, a gray threshold range is first set, all points in the region of interest are traversed, the points whose gray value lies within the range are collected, and these points are segmented out of the medical images to obtain the three-dimensional bone point set and the three-dimensional lesion point set. When the gray threshold range changes, the segmentation result is updated in real time. Taking the lesion as a further example, if the lesion segmentation condition is a gray threshold range for lesion points, all pixel points in the region of interest are traversed, the points whose gray value lies within the lesion range are collected, and these points are segmented out of the medical images to form the three-dimensional lesion point set. In practice, the doctor usually sets the gray threshold range according to the actual situation and will often set different ranges several times (for example, the range may first be set to 50-75 and then, if it proves unsuitable, reset to 75-100; these values are only an illustration and do not represent actual use). Whenever the range changes, the collected points are updated accordingly, so the doctor can regulate the gray threshold range conveniently.
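A minimal sketch of this threshold traversal, assuming the registered images have been stacked into one gray-value volume (the threshold values shown are placeholders, not values from the patent):

    import numpy as np

    def threshold_segment(volume, low, high, roi=None):
        """Collect the (slice, row, column) indices of voxels whose gray
        value lies in [low, high], optionally restricted to a region-of-
        interest mask drawn by the doctor."""
        mask = (volume >= low) & (volume <= high)
        if roi is not None:
            mask &= roi
        return np.argwhere(mask)

    # Re-running the call after the doctor adjusts the range, e.g. from
    # (50, 75) to (75, 100), updates the collected point set accordingly.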
Alternatively, when manual segmentation is used, the medical images may be segmented with an interpolation algorithm, a region growing algorithm, the Marching Cubes algorithm, a watershed algorithm, and so on. In this embodiment, the medical images are segmented with a graph-based segmentation algorithm.
Since bone tissue is clearer in CT medical images and lesion tissue is clearer in MRI medical images, in one embodiment the CT medical images may serve as the main source of the three-dimensional bone point set and the MRI medical images as the main source of the three-dimensional lesion point set. When an MRI medical image is segmented, the doctor may, to reduce the amount of computation, first diagnose the possible lesion region from experience and draw in the MRI image a region of interest containing it, so that only the drawn region of interest needs to be segmented. When a CT medical image is segmented, the complete image may be processed directly, or the doctor may likewise draw a region of interest containing the possible bone region; this embodiment does not limit the choice.
In this embodiment, when manual segmentation is performed on the medical images, process II may be implemented as follows:
Sub-process A: determining a first region and a second region from the plurality of medical images.
The first region comprises a plurality of first pixel points and the second region a plurality of second pixel points, and the first pixel points are distinct from the second pixel points.
Specifically, sub-process A includes the following stages:
Stage I: obtaining the contour regions in the plurality of medical images.
To reduce the amount of computation, the doctor may delineate the target contour in the medical images, for example delineating the bone in the CT medical images and delineating the lesion in the MRI medical images.
Because the sequence contains many medical images, contours may be delineated on only some of the image slices; the slices without contours would then prevent a complete three-dimensional point set from being formed, so an interpolation algorithm can generate the contours on the remaining images. This embodiment does not limit the implementation of the interpolation algorithm; as an example, the contour on another medical image may be generated by fitting a polynomial interpolation between two delineated medical images.
The three-dimensional bone point set and the three-dimensional lesion point set can then be obtained by segmenting the contoured regions delineated on the medical images.
Stage II: taking a pixel point in the contour region as the reference first pixel point and, according to its position information, determining the N first pixel points closest to it to form the first region, where N is greater than or equal to 1.
For example, a reference first pixel point is selected in the contour region, and its 26 adjacent pixel points are selected to form the first region.
The embodiments of the present application do not limit the size of N, and other values may be chosen in other embodiments. As an example, N is 26 in this embodiment, with the 26 points distributed in the neighborhood of the reference first pixel point; that is, the reference first pixel point lies at the center of a 3 × 3 × 3 pixel region and the remaining 26 points surround it. Some of the 26 points may lie outside the lesion contour region.
Stage III: selecting, from the plurality of medical images, M second pixel points whose distance from the reference point is smaller than a set distance threshold to form the second region, where M is greater than or equal to 1.
Like N above, the size of M is not limited in this embodiment. The distance threshold can be determined as needed, as long as it is ensured that the second region and the first region share no pixel point.
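An illustrative sketch of assembling the two regions (the point representation, distance metric and helper names are assumptions for illustration):

    import numpy as np
    from itertools import product

    def first_region(ref):
        """The reference pixel plus its 26 neighbours in the 3 x 3 x 3 block."""
        offsets = [o for o in product((-1, 0, 1), repeat=3) if o != (0, 0, 0)]
        return [tuple(ref)] + [tuple(np.asarray(ref) + o) for o in offsets]

    def second_region(ref, candidates, first, dist_threshold, m):
        """Up to m pixels within dist_threshold of ref, disjoint from first."""
        taken = set(first)
        return [tuple(p) for p in candidates
                if tuple(p) not in taken
                and np.linalg.norm(np.asarray(p) - np.asarray(ref)) < dist_threshold][:m]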
Sub-process B: determining whether to merge the first region and the second region according to the gray values and position information of the first pixel points and the second pixel points.
Sub-process B comprises the following stages:
Stage 1: determining the first maximum gray difference of the first region according to the gray values of the first pixel points.
The gray difference between two adjacent points in the first region is taken as the dissimilarity measure w(e) of the edge e joining them. The first maximum gray difference of the first region R is defined as the weight of the largest edge of the minimum spanning tree of the region, that is:
Int(R) = max_{e ∈ MST(R, E)} w(e),
where Int(R) is the weight of the largest edge of the minimum spanning tree, MST(R, E) is the minimum spanning tree of R over its edge set E, and w(e) is the weight of each edge in it.
Stage 2: determining the second maximum gray difference of the second region according to the gray values of the second pixel points.
The second maximum gray difference is calculated in the same way as the first maximum gray difference, so the calculation is not repeated here.
Stage 3: determining the inter-region gray difference between the first region and the second region according to the position information and gray values of the first pixel points and of the second pixel points.
Denote the first region R1 and the second region R2. At least one edge connects a pixel point of R1 with a pixel point of R2, and the inter-region gray difference of R1 and R2 is calculated from the position information and gray values of the pixel points on the two sides. It is defined as the weight of the minimum-weight edge linking the two regions, that is:
Dif(R1, R2) = min_{v1 ∈ R1, v2 ∈ R2, (v1, v2) ∈ E} w(v1, v2),
where v1 and v2 are points in R1 and R2 respectively. This value represents the gray difference between the R1 and R2 regions.
Stage 4: merging the first region and the second region if the inter-region gray difference is smaller than the smaller of the first maximum gray difference and the second maximum gray difference.
To judge more accurately whether the two regions can be merged, and to reflect better the maximum gray difference of the pixels within the first region R1 and within the second region R2, a region penalty parameter τ(R) may be added to each region; the penalty is set according to the actual situation, and this embodiment does not limit its size. The penalized maximum gray differences within R1 and within R2 are compared, and the smaller is taken as the intra-region gray difference of R1 and R2, that is:
MInt(R1, R2) = min(Int(R1) + τ(R1), Int(R2) + τ(R2)).
The intra-region and inter-region gray differences of R1 and R2 are then compared, and if the inter-region gray difference is smaller than the intra-region gray difference, the R1 and R2 regions are merged.
If the regions are merged, sub-process C is executed; otherwise, a new second region is determined and sub-process B is executed again.
Sub-process C: if the regions are merged, updating the first region with the merged region and determining a new second region.
That is, if R1 and R2 are merged, the merged R1 and R2 become the new first region, a second region is determined anew, and whether the new first region and the newly determined second region satisfy the merging condition continues to be judged until a termination condition is satisfied. The termination condition is that every pixel point in the contour region belongs to either the first region or the second region.
When the termination condition is satisfied, sub-process D is executed.
Sub-process D: taking the first pixel points contained in the first region when the termination condition is satisfied as the target points, and determining the three-dimensional bone point set and the three-dimensional lesion point set according to the target points and their position information.
For the lesion, for example, the first pixel points contained in the first region when the termination condition is satisfied are taken as the target points, and the three-dimensional lesion point set is finally determined from the target points and their position information.
In this embodiment, the minimum spanning trees within the contour region are computed to determine the intra-region gray differences of two selected adjacent regions as well as their inter-region gray difference. When the inter-region gray difference is smaller than the intra-region gray differences, the two regions differ from each other no more than each differs internally, so they can be regarded as the same region; the regions are merged rapidly in this way, and the segmentation yields the three-dimensional lesion point set.
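This merge criterion matches the graph-based segmentation of Felzenszwalb and Huttenlocher. A minimal sketch with a union-find structure, assuming the region penalty takes the common form τ(R) = k/|R| (the patent leaves τ(R) unspecified):

    class Regions:
        """Union-find over pixel indices, tracking Int(R) per region."""
        def __init__(self, n, k):
            self.parent = list(range(n))
            self.size = [1] * n
            self.internal = [0.0] * n   # Int(R): largest MST edge so far
            self.k = k                  # assumed penalty tau(R) = k / |R|

        def find(self, x):
            while self.parent[x] != x:
                self.parent[x] = self.parent[self.parent[x]]  # path halving
                x = self.parent[x]
            return x

        def merge_if_similar(self, a, b, w):
            """Merge the regions of a and b when the inter-region gray
            difference w is smaller than MInt(R1, R2)."""
            ra, rb = self.find(a), self.find(b)
            if ra == rb:
                return False
            mint = min(self.internal[ra] + self.k / self.size[ra],
                       self.internal[rb] + self.k / self.size[rb])
            if w >= mint:               # regions too different: keep separate
                return False
            self.parent[rb] = ra
            self.size[ra] += self.size[rb]
            # With edges processed in ascending weight order, w is the
            # largest MST edge of the merged region.
            self.internal[ra] = max(self.internal[ra], self.internal[rb], w)
            return True

    # Edges (a, b, |gray_a - gray_b|) between neighbouring pixels are sorted
    # by weight and fed to merge_if_similar in ascending order.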
Likewise, the three-dimensional bone point set is acquired by a process similar to that of the three-dimensional lesion point set, which is not repeated here.
It should be noted that, although this embodiment gives a detailed process for obtaining the three-dimensional lesion point set and the three-dimensional bone point set from the medical images, this is not a limitation of the present application: an interpolation algorithm, a region growing algorithm, the Marching Cubes algorithm, a watershed algorithm or another further optimized algorithm may also be used, as long as it can correctly segment the lesion and bone three-dimensional point sets out of the medical images.
Substep S2042: determining the three-dimensional bone model and the three-dimensional lesion model according to the three-dimensional bone point set and the three-dimensional lesion point set.
Substep S2042 may be implemented as: performing modeling and smoothing on the three-dimensional bone point set and the three-dimensional lesion point set to obtain the three-dimensional bone model and the three-dimensional lesion model.
The modeling turns the three-dimensional point sets from discrete pixel points into continuous surfaces, making them easier to view and better matched to the actual state of the person's head; the smoothing removes unnecessary fine nodes from the initial models obtained by modeling and generates smoother models that are closer to the actual state.
The modeling process and the smoothing process may be realized as follows:
Process 1: connecting the bone points in the three-dimensional bone point set to obtain the initial bone model.
For example, a bone point A in the three-dimensional bone point set is selected; the bone point B closest to A is selected and connected to A; the bone point C closest to that connecting line is then selected and connected to both A and B, forming a bone patch. Repeating this until all bone points in the three-dimensional bone point set have been traversed yields the bone patch set, which forms the initial bone model.
The generated bone patches ensure the stability of the initial bone model and fix the connection relationships of all the bone points. It should be understood that this does not limit the embodiments of the present application: any connection method that meets the requirements for generating the initial bone model is considered within their scope.
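A naive illustration of the patch rule just described (point A, its nearest neighbour B, then the point C closest to segment AB); this is a sketch with an O(n) scan per patch, not an optimized or guaranteed-watertight triangulation:

    import numpy as np

    def nearest_patch(points, a_idx):
        """One triangular patch from point A, its nearest neighbour B, and
        the point C closest to the segment AB."""
        pts = np.asarray(points, float)
        a = pts[a_idx]
        dists = np.linalg.norm(pts - a, axis=1)
        dists[a_idx] = np.inf
        b_idx = int(np.argmin(dists))
        ab = pts[b_idx] - a

        def dist_to_segment(p):
            t = np.clip(np.dot(p - a, ab) / np.dot(ab, ab), 0.0, 1.0)
            return np.linalg.norm(p - (a + t * ab))

        seg = np.array([dist_to_segment(p) for p in pts])
        seg[[a_idx, b_idx]] = np.inf
        return a_idx, b_idx, int(np.argmin(seg))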
Process 2: connecting the lesion points in the three-dimensional lesion point set to obtain the initial lesion model.
For example, a lesion point a in the three-dimensional lesion point set is selected; the lesion point b closest to a is connected to a; the lesion point c closest to that connecting line is then selected and connected to both a and b, forming a lesion patch. Repeating this until all lesion points in the three-dimensional lesion point set have been traversed yields the lesion patch set, which forms the initial lesion model.
The generated lesion patches ensure the stability of the initial lesion model and fix the connection relationships of all the lesion points. It should be understood that this does not limit the embodiments of the present application: any connection method that meets the requirements for generating the initial lesion model is considered within their scope.
Process 3: smoothing the initial bone model and the initial lesion model respectively to obtain the three-dimensional bone model and the three-dimensional lesion model.
Process 3 can be realized through the following stages:
Stage A: calculating the curvature of each bone point of the initial bone model and the curvature of each lesion point of the initial lesion model.
Since the connection relationships of all contour points are fixed once the bone patches and lesion patches are generated, an implicit function with respect to the coordinate system can be fitted to contain all the bone points and lesion points when their curvatures are calculated; this embodiment does not limit the number of fitted implicit functions.
For example, an implicit function expression of the bone patch set with respect to the coordinate system is established from the bone patch set constructed by connecting the bone points, and an implicit function expression of the lesion patch set is established from the lesion patch set constructed by connecting the lesion points. The first-order and second-order partial derivatives of the implicit function expressions corresponding to the three-dimensional bone model and the three-dimensional lesion model are calculated. The curvature of a bone point is then calculated with the curvature formula from the first- and second-order partial derivatives corresponding to the bone point and its coordinates, and the curvature of a lesion point is calculated in the same way from the derivatives and coordinates of the lesion point.
Specifically, with H the mean curvature and K the Gaussian curvature of a bone point or lesion point, the curvature is given by
k = H ± sqrt(H^2 − K).
Writing φ for the fitted implicit function, φx, φy, φz for its first-order partial derivatives and φxx, φyy, φzz, φxy, φxz, φyz for its second-order partial derivatives with respect to the coordinate system, the mean curvature is calculated according to
H = (φx^2(φyy + φzz) + φy^2(φxx + φzz) + φz^2(φxx + φyy) − 2φxφyφxy − 2φxφzφxz − 2φyφzφyz) / (2(φx^2 + φy^2 + φz^2)^(3/2)),
and the Gaussian curvature according to
K = (∇φ · adj(Hφ) · ∇φ) / (φx^2 + φy^2 + φz^2)^2,
where ∇φ = (φx, φy, φz) and adj(Hφ) is the adjugate of the Hessian matrix of φ. Substituting the coordinates of a pixel point into these expressions yields its curvature, and in this way the curvature of each bone point and of each lesion point can be calculated.
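A numerical sketch of this computation, assuming the first- and second-order partial derivatives of the implicit function have already been evaluated at the point (the adjugate shortcut additionally assumes an invertible Hessian):

    import numpy as np

    def curvatures(grad, hess):
        """Mean, Gaussian and principal curvatures of the implicit surface
        phi = 0 at a point, from the gradient (3,) and Hessian (3, 3)."""
        g = np.asarray(grad, float)
        h = np.asarray(hess, float)
        n2 = float(g @ g)                            # |grad phi|^2
        mean = (g @ h @ g - n2 * np.trace(h)) / (2.0 * n2 ** 1.5)
        adjugate = np.linalg.det(h) * np.linalg.inv(h)
        gauss = (g @ adjugate @ g) / n2 ** 2
        disc = max(mean * mean - gauss, 0.0)         # guard rounding noise
        return mean, gauss, (mean + disc ** 0.5, mean - disc ** 0.5)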
Stage B: determining the abnormal bone points to be removed according to a preset bone curvature threshold and the curvatures of the bone points, and determining the abnormal lesion points to be removed according to a preset lesion curvature threshold and the curvatures of the lesion points.
Stage C: removing the abnormal bone points from the initial bone model to obtain the three-dimensional bone model, and removing the abnormal lesion points from the initial lesion model to obtain the three-dimensional lesion model.
In the embodiments of the present application, points of small curvature add unnecessary detail to the models. Suitable bone and lesion curvature thresholds are therefore preset and compared respectively with the calculated curvatures of the bone points and the lesion points, and the abnormal bone points and abnormal lesion points whose curvature is smaller than the threshold are removed, so that the initial bone model and the initial lesion model become smoother and the three-dimensional bone model and the three-dimensional lesion model are obtained. The resulting models correspond to the actual condition of the patient to a great extent, provide the conditions for the doctor to plan the operation region, and make further processing more convenient.
Step S206: mapping the three-dimensional lesion model along the lesion mapping path to obtain a projection region of the three-dimensional lesion model on the three-dimensional bone model, so that the surgical position can be determined according to the projection region.
Because the three-dimensional bone model and the three-dimensional lesion model of the embodiments of the present application are essentially consistent with the actual sizes and positions of the bone and the lesion in the scanned region, the projection region obtained by mapping the lesion model onto the bone model lets the doctor relate the projection of the actual lesion to a region on the skull of the actual patient, which makes the surgical position convenient to determine.
This embodiment does not limit the shape of the lesion mapping path; it may, for example, be a straight line. Nor does it limit the rule for selecting the path: the doctor may select a path, or a path meeting a given condition, such as the shortest path between the three-dimensional lesion model and the three-dimensional bone model, may be selected automatically.
Once the projection region of the three-dimensional lesion model on the three-dimensional bone model is obtained, the doctor can accurately determine the surgical position from it. This lets the doctor determine a better surgical position, reduces the dependence on personal experience, and avoids the excessive injury to the patient and unsatisfactory treatment efficiency that an oversized surgical opening would cause when the lesion is removed at the determined position.
Obviously, the doctor may appropriately adjust the region marked on the head according to the actual situation, as long as the region still meets the requirements of the operation; this embodiment places no limit on such adjustment.
For example, during actual surgical preparation the doctor may specify an operation path according to the actual situation, for instance one that avoids the most physiological tissue on the way to the lesion. Operating along such a path largely protects normal tissue from surgical interference and protects the patient better, so it is useful to take such an operation path as the lesion mapping path along which the lesion is mapped onto the skull.
In this embodiment, specifically, step S206 may include the following substeps:
Substep S2061: acquiring the lesion mapping path.
The lesion mapping path determines the direction in which the three-dimensional lesion model is mapped onto the three-dimensional bone model and the position to which it can be mapped, and is therefore important in the embodiments of the present application. In this embodiment a specified path may be used as the lesion mapping path. The shape of the path is not limited; it may, for example, be a straight line (for convenience of explanation, the specified path is taken below to be a straight-line path by default).
In the embodiments of the present application, the doctor may specify the path by designating a pixel point of the three-dimensional lesion model as the starting point and a pixel point of the three-dimensional bone model as the end point; the path is determined from the positions of these two points.
Substep S2062: calculating, according to the lesion mapping path, the intersection points of the three-dimensional bone model with the lesion points of the three-dimensional lesion model.
In one possible implementation, substep S2062 includes process 1 and process 2 below.
Process 1: determining, according to the lesion mapping path, the mapping ray corresponding to each lesion point of the three-dimensional lesion model.
Clearly, once the coordinates of two pixel points are determined, a straight-line path is determined, and a parallel straight line can always be found through any other point of the model coordinate space. A ray along the direction of the straight-line path can therefore be drawn from any pixel point taken as its starting point; such a ray is a mapping ray. That is, from every lesion point of the three-dimensional lesion model a mapping ray parallel to the specified straight-line path can be drawn, and as long as the ray intersects the three-dimensional bone model, a bone point corresponding to that lesion point can be found.
Therefore, if the coordinates of two points on the specified path are (xe, ye, ze) and (xt, yt, zt), the direction of the straight-line path can be represented by the unit direction vector dNormal:
dNormal = (xt − xe, yt − ye, zt − ze) / |(xt − xe, yt − ye, zt − ze)|.
If the pixel point set of the three-dimensional lesion model is N = {N0, N1, N2, …, Nn−1}, the mapping ray Li emitted from any point Ni along the dNormal direction can be expressed by the ray formula:
Li = Ni + dNormal * Length,
where Length is the length of the mapping ray. This embodiment does not limit the length, as long as a ray emitted from a pixel point of the three-dimensional lesion model can intersect the three-dimensional bone model. Illustratively, Length is 400 mm: according to existing records, the tallest person in the world is 2.72 m, and the ratio of head length to body height is about 1:7, so the length of a human head cannot in theory exceed 400 mm, and this value ensures that every ray intersects the three-dimensional bone model.
Optionally, if the specified path is the shortest path, that is, the path of shortest distance from a fixed pixel point of the three-dimensional lesion model to any pixel point of the three-dimensional bone model, then a mapping ray is emitted from the fixed point, all points of the three-dimensional bone model are traversed, the distances from the fixed point to them are measured, the closest point is found, and the direction of that shortest distance is taken as the mapping path direction. This embodiment does not limit the position of the fixed pixel point; as an example, it may be the central point of the three-dimensional lesion model. This improves accuracy when the doctor needs the shortest path for the operation.
Process 2: calculating the intersection points of the mapping rays with the three-dimensional bone model.
Because every lesion point of the three-dimensional lesion model needs an intersection point with the three-dimensional bone model, searching for the intersections directly, point by point, involves a large amount of computation and cannot quickly determine the position of the projection region. By contrast, the way an optional embodiment of the present application finds the intersection of a mapping ray with the three-dimensional bone model can significantly increase the speed of the search.
Optionally, process 2 includes sub-processes 1, 2 and 3 below.
Sub-process 1: for each mapping ray, taking the original bounding box that encloses the whole three-dimensional bone model as the target bounding box, and determining whether the mapping ray intersects the target bounding box.
Taking the mapping ray emitted from one lesion point as an example, the outermost original bounding box enclosing the three-dimensional bone model is first generated in the space outside the model; this original bounding box is taken as the target bounding box, and whether the mapping ray passes through it is judged. The original bounding box is used as the first target bounding box because it encloses the entire space containing the three-dimensional bone model and must therefore be passed through for the later operations.
Sub-process 2: if the mapping ray intersects the target bounding box, splitting the target bounding box to obtain child bounding boxes.
Obviously, if the mapping ray passes through the target bounding box, an intersection point is generated, and the target bounding box and the space it encloses are then split.
According to the splitting strategy of the embodiments of the present application, the outermost original bounding box enclosing the three-dimensional bone model is generated in the space outside the model and is then split step by step, two parts at a time, according to the splitting strategy diagram shown in fig. 3, producing subspaces. The splitting is repeated recursively on the subspaces according to the same strategy, and each resulting subspace is guaranteed to contain several bone patches of the three-dimensional bone model.
The splitting strategy diagram is shown in fig. 3, in which the target bounding box and its internal space are cut with three planes parallel to the coordinate planes, and every intermediate state of the bounding box is shown in the diagram. Obviously, the intermediate splitting results vary with the order in which the cutting planes are used. As an example, one order performs the first cut with a plane parallel to the y-z plane, the second with a plane parallel to the x-y plane, and the third with a plane parallel to the x-z plane; another performs the first cut with a plane parallel to the x-y plane, the second with a plane parallel to the x-z plane, and the third with a plane parallel to the y-z plane. Although the final result is the same, the traversal order of the rays varies with the intermediate results. This embodiment does not limit the order of the cutting planes; for convenience of description, the following uses the splitting order of fig. 4 as an example.
Sub-process three: determining, from among the sub-bounding boxes, a sub-bounding box that intersects the mapping ray and taking it as the new target bounding box; returning to the step of splitting the target bounding box to obtain sub-bounding boxes; and continuing in this way until the intersection point of the mapping ray and the skeleton three-dimensional model is determined.
As shown in fig. 4, the first bounding box 11 is the outermost original bounding box; the second bounding box 121 and the third bounding box 122 are child boxes of the first bounding box 11; the fourth bounding box 131 and the fifth bounding box 132 are child boxes of the second bounding box 121; the sixth bounding box 133 is a child box of the third bounding box 122; and the fifteenth bounding box 148 is a child box of the sixth bounding box 133.
Starting from the outermost original bounding box 11, the mapping ray traverses the space split according to the splitting strategy. It is first determined whether the mapping ray passes through the first bounding box 11; if so, the box is split further and it is determined whether the ray passes through the second bounding box 121. If not, the search backtracks to the third bounding box 122, and the fourth bounding box 131, the fifth bounding box 132 and so on are not searched. If the mapping ray passes through the third bounding box 122, that box is split further and the sixth bounding box 133 is searched next; and so on, until the ray finally passes through the fifteenth bounding box 148 on a leaf node. At this point, if the number of skeleton patches in the subspace of the fifteenth bounding box 148 is greater than 1, the fifteenth bounding box 148 and the subspace it encloses continue to be split according to the previous splitting plan, until the bounding box the mapping ray passes through contains exactly one skeleton patch. All pixel points on that mesh patch are then traversed until the intersection point of the mapping ray and the skeleton three-dimensional model is found.
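The traversal just described is, in effect, a depth-first search over the tree of bounding boxes, descending only into boxes the ray passes through and backtracking otherwise. The following minimal Python sketch builds on ray_intersects_aabb and split_box above; Ray, ray_triangle_intersection (e.g. a Möller–Trumbore test returning the intersection point, or None) and patch_overlaps_box are hypothetical helpers assumed for this example, not interfaces defined by the application.

    from dataclasses import dataclass
    import numpy as np

    @dataclass
    class Ray:
        origin: np.ndarray
        direction: np.ndarray

    def find_intersection(ray, box, patches, depth=0):
        # Backtrack immediately if the box is empty or the ray misses it.
        if not patches or not ray_intersects_aabb(ray.origin, ray.direction,
                                                  box[0], box[1]):
            return None
        # Leaf: exactly one skeleton patch left, so test it directly.
        if len(patches) == 1:
            return ray_triangle_intersection(ray, patches[0])
        left, right = split_box(box[0], box[1], depth)
        # Visit the child nearer to the ray origin first, so the first hit
        # found is also (up to boundary cases) the nearest along the ray.
        axis = depth % 3
        children = (left, right) if ray.direction[axis] >= 0 else (right, left)
        for child in children:
            inside = [p for p in patches if patch_overlaps_box(p, child)]
            hit = find_intersection(ray, child, inside, depth + 1)
            if hit is not None:
                return hit
        return None

Splitting lazily during the descent, as here, matches the description above: a box is only subdivided once the ray is known to pass through it.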
The above intersection-finding process is repeated for all the focus points, yielding the set of intersection points between the mapping rays of all the focus points and the skeleton three-dimensional model.
Substep S2063: obtaining the projection region according to the set of intersection points.
Obviously, the intersection points are pixel points on the skeleton three-dimensional model, and together the set of these pixel points forms the projection region of the focus three-dimensional model on the skeleton three-dimensional model.
Applying the above steps to all the points on the focus three-dimensional model yields all the intersection points, and hence the projection region of the focus three-dimensional model on the skeleton three-dimensional model. Compared with other methods for finding the intersection of a ray with the skeleton three-dimensional model, this method significantly reduces the amount of calculation, increases the operation speed, and determines the projection region accurately and quickly.
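Putting the pieces together, the projection region is simply the set of hit points collected over all focus points. A sketch under the same assumptions as above, reusing Ray and find_intersection, and taking the mapping path to be a single designated direction shared by all focus points (one of the options the application allows):

    def project_focus_onto_skeleton(focus_points, mapping_direction,
                                    skeleton_box, skeleton_patches):
        # Cast one mapping ray per focus point and collect every
        # intersection with the skeleton model; the resulting set of
        # points forms the projection region.
        region = []
        for point in focus_points:
            ray = Ray(origin=np.asarray(point, dtype=float),
                      direction=np.asarray(mapping_direction, dtype=float))
            hit = find_intersection(ray, skeleton_box, skeleton_patches)
            if hit is not None:
                region.append(hit)
        return region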
Obviously, after the projection region of the focus three-dimensional model on the skeleton three-dimensional model is obtained, a doctor can accurately determine the operation position according to the projection region and, on that basis, choose a better operation position. This reduces the dependence on personal experience and avoids, when the focus is removed according to the determined operation position, the situations in which an oversized surgical opening causes excessive injury to the patient or the treatment efficiency falls short of expectations.
Obviously, the doctor can appropriately adjust the region drawn on the head according to the actual situation, as long as the adjusted region still meets the operation requirements; this embodiment does not limit this.
In conclusion, the data processing method of this embodiment avoids the problems of excessive injury to the patient and unsatisfactory treatment efficiency that arise when a doctor can determine the operation position only from two-dimensional medical images and personal experience.
EXAMPLE III
According to an embodiment of the present application, there is provided a robot navigation system, including: a processor for transmitting and receiving control signals; a memory for storing at least one executable command that causes the processor to perform the operations of the data processing method described above; and an execution mechanism for receiving a control signal sent by the processor and drawing an operation region on the patient according to the control signal.
The execution mechanism can be a surgical robot. The processor transmits the coordinates corresponding to the mapping region to the surgical robot processor, and the surgical robot processor drives the mechanical arm to move according to the coordinates of the mapping region and draws the corresponding region on the human body, that is, the region where the surgery starts. Because an ID is assigned to each point as the pixel points are traversed when the mapping region is determined, the surgical robot processor drives the mechanical arm to move according to the IDs of the coordinates. The mapping region is shown in fig. 5.
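As a final, purely illustrative sketch of how the execution mechanism might consume this data: the points of the projection region are visited in the order of the IDs assigned during traversal, and the arm is driven through them. The arm object and its move_to call are hypothetical placeholders; the application does not define this interface.

    def draw_projection_region(arm, region):
        # `region` is a list of (point_id, coordinate) pairs; the IDs were
        # assigned while traversing the pixel points of the mapping region.
        for point_id, coordinate in sorted(region, key=lambda pair: pair[0]):
            arm.move_to(coordinate)  # hypothetical actuator call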
By the above method, the shape, position and size of the focus mapped onto the scalp can be simulated along a designated direction, or along the shortest direction from the focus to the skull, which makes it convenient for a doctor to plan an operation path, improves the accuracy of the operation and reduces its risk. Moreover, because the mapped contour of the focus is drawn directly on the patient's scalp, the doctor can easily determine the position and size of the incision.
Finally, it should be noted that the above embodiments are only used to illustrate the technical solutions of the embodiments of the present application, and are not intended to limit them. Although the present application has been described in detail with reference to the foregoing embodiments, those of ordinary skill in the art should understand that the technical solutions described in the foregoing embodiments may still be modified, or some technical features may be equivalently replaced, and such modifications or substitutions do not depart from the spirit and scope of the corresponding technical solutions of the embodiments of the present application.

Claims (16)

1. A data processing method, comprising:
acquiring a medical image sequence, wherein the medical image sequence comprises a plurality of two-dimensional medical images;
determining a skeleton three-dimensional model and a focus three-dimensional model according to gray values of pixel points in a plurality of medical images and space position information of the medical images;
and mapping the focus three-dimensional model according to a focus mapping path to obtain a projection area of the focus three-dimensional model on the skeleton three-dimensional model, so as to determine an operation position according to the projection area.
2. The method of claim 1, wherein determining a three-dimensional model of a bone and a three-dimensional model of a lesion based on gray values of pixels in a plurality of the medical images and spatial location information of the medical images comprises:
according to the spatial position information of the medical images and the gray values of the pixel points in the medical images, carrying out segmentation processing on the medical images, and obtaining a skeleton three-dimensional point set and a focus three-dimensional point set;
and determining the skeleton three-dimensional model and the focus three-dimensional model according to the skeleton three-dimensional point set and the focus three-dimensional point set.
3. The method according to claim 2, wherein segmenting the plurality of medical images according to the spatial position information of the plurality of medical images and the gray values of the pixel points in each of the medical images, and obtaining a skeleton three-dimensional point set and a lesion three-dimensional point set comprises:
carrying out position registration processing on the plurality of medical images to obtain spatial position information of each medical image;
and according to the gray value of the pixel point and a preset segmentation condition, performing segmentation processing on the plurality of medical images, and determining the skeleton three-dimensional point set and the focus three-dimensional point set according to the segmented target point and the position information of the target point.
4. The method of claim 3, wherein the segmenting the medical images according to the gray values of the pixels and the preset segmentation conditions, and determining the skeleton three-dimensional point set and the focus three-dimensional point set according to the segmented target points and the position information of the target points comprises:
determining a first region and a second region from the plurality of medical images, wherein the first region comprises a plurality of first pixel points, the second region comprises a plurality of second pixel points, and the first pixel points are different from the second pixel points;
determining whether the first region and the second region are combined or not according to the gray value and the position information of the first pixel point and the gray value and the position information of the second pixel point;
if yes, updating the first area by using the merged area, and determining a new second area;
returning to the step of determining whether the first region and the second region are combined according to the gray value and the position information of the first pixel point and the gray value and the position information of the second pixel point until a termination condition is met;
and determining the skeleton three-dimensional point set and the focus three-dimensional point set according to the target point and the position information of the target point by taking a first pixel point contained in a first region when a termination condition is met as the target point.
5. The method of claim 4, wherein determining the first region and the second region from the plurality of medical images comprises:
obtaining contour regions in the plurality of medical images;
taking a pixel point in the contour region as a reference point, and determining, according to the position information of the reference point, N first pixel points closest to the reference point to form the first region, wherein N is greater than or equal to 1;
and M second pixel points with the distance from the reference point smaller than a set distance threshold value are selected from the plurality of medical images to form the second area, wherein M is larger than or equal to 1.
6. The method of claim 5, wherein the termination condition comprises that each pixel in the outline region belongs to the first region or the second region.
7. The method of claim 4, wherein the determining whether the first region and the second region are merged according to the gray value and the position information of the first pixel point and the gray value and the position information of the second pixel point comprises:
determining a first maximum gray difference of the first area according to the gray value of each first pixel point;
determining a second maximum gray difference of the second area according to the gray value of each second pixel point;
determining inter-region gray level difference between the first region and the second region according to the position information and the gray level value of the first pixel points and the position information and the gray level value of the second pixel points;
and if the gray difference between the areas is smaller than the smaller one of the first maximum gray difference and the second maximum gray difference, combining the first area and the second area.
8. The method of claim 3, wherein the segmentation condition comprises a bone segmentation condition corresponding to bone and a lesion segmentation condition corresponding to a lesion, the bone segmentation condition comprising a gray threshold or a gray threshold range of a bone point; and/or the lesion segmentation condition comprises a gray threshold or a gray threshold range of a lesion point.
9. The method of claim 2, wherein determining the bone three-dimensional model and the lesion three-dimensional model from the set of bone three-dimensional points and the set of lesion three-dimensional points comprises:
and carrying out modeling processing and smoothing processing on the skeleton three-dimensional point set and the focus three-dimensional point set to obtain the skeleton three-dimensional model and the focus three-dimensional model.
10. The method of claim 9, wherein said modeling and smoothing said set of bone three-dimensional points and said set of lesion three-dimensional points to obtain said bone three-dimensional model and said lesion three-dimensional model comprises:
connecting the skeleton points in the skeleton three-dimensional point set to obtain a skeleton initial model;
connecting the focus points in the focus three-dimensional point set to obtain a focus initial model;
and respectively carrying out smoothing treatment on the initial skeleton model and the initial focus model, and obtaining the three-dimensional skeleton model and the three-dimensional focus model.
11. The method of claim 10, wherein the smoothing the initial bone model and the initial lesion model and obtaining the three-dimensional bone model and the three-dimensional lesion model respectively comprises:
respectively calculating the curvature of each bone point of the initial bone model and the curvature of each focus point of the initial focus model;
determining an abnormal bone point to be removed according to a preset bone curvature threshold and the curvature of the bone point, and determining an abnormal focus point to be removed according to a preset focus curvature threshold and the curvature of the focus point;
and removing abnormal bone points in the initial bone model, obtaining the three-dimensional bone model, removing abnormal focus points in the initial focus model, and obtaining the three-dimensional focus model.
12. The method of claim 11, wherein said separately calculating a curvature of each bone point of said initial bone model and a curvature of each lesion point of said initial lesion model comprises:
establishing an implicit function expression of the skeleton patch set relative to a coordinate system according to the skeleton patch set constructed by the skeleton point connection, and establishing an implicit function expression of the focus patch set relative to the coordinate system according to the focus patch set constructed by the focus point connection;
respectively calculating a first order partial derivative and a second order partial derivative of the implicit function expression corresponding to the skeleton three-dimensional model and the focus three-dimensional model;
calculating the curvature of the bone point using a curvature formula according to the first and second partial derivatives corresponding to the bone point and the coordinates of the bone point, and calculating the curvature of the lesion point using a curvature formula according to the first and second partial derivatives corresponding to the lesion point and the coordinates of the lesion point.
13. The method of claim 1, wherein said mapping said lesion three-dimensional model according to a lesion mapping path comprises:
acquiring the focus mapping path;
calculating an intersection point with the skeleton three-dimensional model according to the focus mapping path and a focus point in the focus three-dimensional model;
and obtaining the projection area according to the set of the intersection points.
14. The method of claim 13, wherein said calculating an intersection point with said three-dimensional model of bone from said lesion mapping path and a lesion point in said three-dimensional model of lesion comprises:
according to a focus mapping path, aiming at each focus point in the focus three-dimensional model, determining a mapping ray corresponding to the focus point;
and respectively calculating the intersection point of each mapping line and the three-dimensional skeleton model.
15. The method of claim 14, wherein said separately calculating an intersection of each said mapping line with said three-dimensional model of bone comprises:
for each mapping ray, taking an original bounding box surrounding the whole bone three-dimensional model as a target bounding box, and determining whether the mapping ray intersects with the target bounding box;
if the mapping ray intersects with the target bounding box, splitting the target bounding box to obtain sub bounding boxes;
and determining, from the sub bounding boxes, a sub bounding box intersected with the mapping ray as a new target bounding box, returning to the step of splitting the target bounding box to obtain sub bounding boxes, and continuing until the intersection point of the mapping ray and the bone three-dimensional model is determined.
16. A robotic navigation system, comprising:
a processor for transmitting and receiving control signals;
a memory for storing at least one executable command that causes the processor to perform the operations of the data processing method of any one of claims 1-15;
and the execution mechanism is used for receiving the control signal sent by the processor and drawing an operation area on the patient according to the control signal.
CN202010943601.1A 2020-09-09 2020-09-09 Data processing method and robot navigation system Active CN112053400B (en)

Priority Applications (1)

Application Number Priority Date Filing Date Title
CN202010943601.1A CN112053400B (en) 2020-09-09 2020-09-09 Data processing method and robot navigation system

Publications (2)

Publication Number Publication Date
CN112053400A true CN112053400A (en) 2020-12-08
CN112053400B CN112053400B (en) 2022-04-05

Family

ID=73611644

Family Applications (1)

Application Number Title Priority Date Filing Date
CN202010943601.1A Active CN112053400B (en) 2020-09-09 2020-09-09 Data processing method and robot navigation system

Country Status (1)

Country Link
CN (1) CN112053400B (en)

Citations (5)

* Cited by examiner, † Cited by third party
Publication number Priority date Publication date Assignee Title
CN102068281A (en) * 2011-01-20 2011-05-25 深圳大学 Processing method for space-occupying lesion ultrasonic images
CN105342701A (en) * 2015-12-08 2016-02-24 中国科学院深圳先进技术研究院 Focus virtual puncture system based on image information fusion
CN106529188A (en) * 2016-11-25 2017-03-22 苏州国科康成医疗科技有限公司 Image processing method applied to surgical navigation
CN109758233A (en) * 2019-01-21 2019-05-17 上海益超医疗器械有限公司 A kind of diagnosis and treatment integrated operation robot system and its navigation locating method
CN109993733A (en) * 2019-03-27 2019-07-09 上海宽带技术及应用工程研究中心 Detection method, system, storage medium, terminal and the display system of pulmonary lesions

Cited By (12)

* Cited by examiner, † Cited by third party
Publication number Priority date Publication date Assignee Title
CN112419309A (en) * 2020-12-11 2021-02-26 上海联影医疗科技股份有限公司 Medical image phase determination method, apparatus, computer device and storage medium
CN112700551A (en) * 2020-12-31 2021-04-23 青岛海信医疗设备股份有限公司 Virtual choledochoscope interventional operation planning method, device, equipment and storage medium
CN113100934A (en) * 2021-04-06 2021-07-13 德智鸿(上海)机器人有限责任公司 Operation assisting method, device, computer equipment and storage medium
CN116740309A (en) * 2022-03-04 2023-09-12 武汉迈瑞科技有限公司 Medical image processing system, medical image processing method and computer equipment
CN115798725A (en) * 2022-10-27 2023-03-14 佛山读图科技有限公司 Method for making lesion-containing human body simulation image data for nuclear medicine
CN115798725B (en) * 2022-10-27 2024-03-26 佛山读图科技有限公司 Method for manufacturing human body simulation image data with lesion for nuclear medicine
CN115618694A (en) * 2022-12-15 2023-01-17 博志生物科技(深圳)有限公司 Image-based cervical vertebra analysis method, device, equipment and storage medium
CN115953555A (en) * 2022-12-29 2023-04-11 南京鼓楼医院 Adenomyosis modeling method based on ultrasonic measured value
CN115953555B (en) * 2022-12-29 2023-08-22 南京鼓楼医院 Uterine adenomyosis modeling method based on ultrasonic measurement value
CN117115159A (en) * 2023-10-23 2023-11-24 北京壹点灵动科技有限公司 Bone lesion determination device, electronic device, and storage medium
CN117115159B (en) * 2023-10-23 2024-03-15 北京壹点灵动科技有限公司 Bone lesion determination device, electronic device, and storage medium
CN117930381A (en) * 2024-03-25 2024-04-26 海南中南标质量科学研究院有限公司 Port non-radiation perspective wave pass inspection system based on big data of Internet of things

Also Published As

Publication number Publication date
CN112053400B (en) 2022-04-05

Similar Documents

Publication Publication Date Title
CN112053400B (en) Data processing method and robot navigation system
US10357316B2 (en) Systems and methods for planning hair transplantation
US9474583B2 (en) Systems and methods for planning hair transplantation
US11547499B2 (en) Dynamic and interactive navigation in a surgical environment
US7953260B2 (en) Predicting movement of soft tissue of the face in response to movement of underlying bone
US7379062B2 (en) Method for determining a path along a biological object with a lumen
US7630750B2 (en) Computer aided treatment planning
US9514533B2 (en) Method for determining bone resection on a deformed bone surface from few parameters
CN107067398B (en) Completion method and device for missing blood vessels in three-dimensional medical model
US20120323547A1 (en) Method for intracranial aneurysm analysis and endovascular intervention planning
AU2018256649A1 (en) System and method for surgical planning
CN117115150B (en) Method, computing device and medium for determining branch vessels
CN113081257B (en) Automatic planning method for operation path
CN113891691A (en) Automatic planning of shoulder stability enhancement surgery
CN116523975A (en) Six-dimensional transformation-based robot global optimal solution space registration and calibration algorithm
US11452566B2 (en) Pre-operative planning for reorientation surgery: surface-model-free approach using simulated x-rays
WO2001056491A2 (en) Computer aided treatment planning
US11826109B2 (en) Technique for guiding acquisition of one or more registration points on a patient's body
US20210015620A1 (en) Digital bone reconstruction method
CN116777751A (en) File grinding area planning method and device, electronic equipment and storage medium

Legal Events

Date Code Title Description
PB01 Publication
PB01 Publication
SE01 Entry into force of request for substantive examination
SE01 Entry into force of request for substantive examination
GR01 Patent grant
GR01 Patent grant
CP03 Change of name, title or address

Address after: 100191 Room 501, floor 5, building 9, No. 35 Huayuan North Road, Haidian District, Beijing

Patentee after: Beijing Baihui Weikang Technology Co.,Ltd.

Address before: Room 502, Building No. 3, Garden East Road, Haidian District, Beijing, 100191

Patentee before: Beijing Baihui Wei Kang Technology Co.,Ltd.

CP03 Change of name, title or address