CN112382359B - Patient registration method and device, electronic equipment and computer readable medium

Info

Publication number: CN112382359B
Application number: CN202011449427.1A
Authority: CN (China)
Prior art keywords: point cloud, image, coordinate system, patient, probe
Legal status: Active (granted)
Other languages: Chinese (zh)
Other versions: CN112382359A (en)
Inventors: 王棋, 谢永召, 宫明波
Assignee: Beijing Baihui Weikang Technology Co Ltd
Application filed by Beijing Baihui Weikang Technology Co Ltd; priority to CN202011449427.1A
Publication of application CN112382359A; application granted; publication of grant CN112382359B

Classifications

    • G - PHYSICS
    • G16 - INFORMATION AND COMMUNICATION TECHNOLOGY [ICT] SPECIALLY ADAPTED FOR SPECIFIC APPLICATION FIELDS
    • G16H - HEALTHCARE INFORMATICS, i.e. INFORMATION AND COMMUNICATION TECHNOLOGY [ICT] SPECIALLY ADAPTED FOR THE HANDLING OR PROCESSING OF MEDICAL OR HEALTHCARE DATA
    • G16H 10/00 - ICT specially adapted for the handling or processing of patient-related medical or healthcare data
    • G16H 10/60 - ICT specially adapted for the handling or processing of patient-related medical or healthcare data for patient-specific data, e.g. for electronic patient records
    • G - PHYSICS
    • G06 - COMPUTING; CALCULATING OR COUNTING
    • G06V - IMAGE OR VIDEO RECOGNITION OR UNDERSTANDING
    • G06V 40/00 - Recognition of biometric, human-related or animal-related patterns in image or video data
    • G06V 40/10 - Human or animal bodies, e.g. vehicle occupants or pedestrians; Body parts, e.g. hands
    • G06V 40/16 - Human faces, e.g. facial parts, sketches or expressions
    • G06V 40/161 - Detection; Localisation; Normalisation


Abstract

The embodiments of the present application provide a patient registration method, a patient registration apparatus, an electronic device and a computer readable medium, relating to the field of artificial intelligence. The method comprises: acquiring a probe point cloud formed, in the camera coordinate system of a vision sensor of a medical robot, by a probe tip sliding on the face of a patient; acquiring an image point cloud formed, in an image coordinate system, by the facial image of the patient extracted from the medical image of the patient; registering the probe point cloud and the image point cloud to obtain a transformation relation between the camera coordinate system and the image coordinate system; and determining the transformation relation between the camera coordinate system and the image coordinate system as the patient registration result. With the embodiments of the present application, the patient can be registered conveniently and quickly, and the registration accuracy can be effectively improved.

Description

Patient registration method and device, electronic equipment and computer readable medium
Technical Field
The embodiment of the application relates to the field of artificial intelligence, in particular to a patient registration method, a patient registration device, an electronic device and a computer readable medium.
Background
Since the 1990s, robot-assisted surgery has gradually become a significant trend. The successful clinical application of a large number of surgical robot systems has attracted great interest in medical and scientific circles worldwide. Currently, minimally invasive surgical robots are becoming the leading edge of the international robotics field and a research hotspot. A minimally invasive surgical robot system integrates a number of emerging disciplines and makes surgery minimally invasive, intelligent and digital. To date, minimally invasive surgical robots have been widely used all over the world, in specialties including urology, obstetrics and gynecology, cardiac surgery, thoracic surgery, hepatobiliary surgery, gastrointestinal surgery and otorhinolaryngology.
When a minimally invasive surgical robot assists in surgery, the surgeon stands at a console tens of centimeters away from the operating table and studies, through a stereoscopic viewer, a three-dimensional image sent by a camera inside the patient's body. The three-dimensional image shows the surgical site and the surgical instruments attached to the ends of the instrument rods. The surgeon operates the surgical instruments using control handles located directly below the screen. When the surgeon moves a control handle, the computer sends an electronic signal to the corresponding surgical instrument, which moves in synchronism with the handle.
To realize this synchronized movement, patient registration must first be completed, so that the relative positional relationship between the minimally invasive surgical robot and the patient can be obtained in real time. In the prior art, patient registration is realized either by positioning the patient's head with a fixation tool or by pasting registration markers on the patient's head. These prior-art patient registration methods are complicated, and errors caused by external factors reduce the registration accuracy and in turn affect the surgical accuracy. Therefore, how to perform patient registration simply while effectively improving its accuracy is a technical problem to be solved at present.
Disclosure of Invention
The present application aims to provide a patient registration method, a patient registration apparatus, an electronic device and a computer-readable medium, so as to solve the technical problem in the prior art of how to perform patient registration simply while effectively improving its accuracy.
According to a first aspect of embodiments herein, a patient registration method is provided. The method comprises the following steps: acquiring a probe point cloud formed by a probe point sliding on the face of a patient in a camera coordinate system of a visual sensor of the medical robot; acquiring an image point cloud formed by the facial image of the patient extracted from the medical image of the patient in an image coordinate system; registering the probe point cloud and the image point cloud to obtain a transformation relation between the camera coordinate system and the image coordinate system; and determining the transformation relation between the camera coordinate system and the image coordinate system as a patient registration result.
According to a second aspect of embodiments herein, a patient registration apparatus is provided. The device comprises: the first acquisition module is used for acquiring a probe point cloud formed by a probe point sliding on the face of a patient in a camera coordinate system of a visual sensor of the medical robot; the second acquisition module is used for acquiring an image point cloud formed by the facial image of the patient extracted from the medical image of the patient in an image coordinate system; the registration module is used for registering the probe point cloud and the image point cloud so as to obtain a transformation relation between the camera coordinate system and the image coordinate system; a first determining module, configured to determine a transformation relationship between the camera coordinate system and the image coordinate system as a patient registration result.
According to a third aspect of embodiments of the present application, there is provided an electronic device, including: one or more processors; and a storage configured to store one or more programs that, when executed by the one or more processors, cause the one or more processors to implement the patient registration method described in the first aspect of the embodiments herein.
According to a fourth aspect of embodiments of the present application, there is provided a computer readable storage medium having stored thereon a computer program which, when executed by a processor, implements a patient registration method as described in the first aspect of embodiments of the present application.
According to the patient registration technical solution provided by the embodiments of the present application, a probe point cloud formed, in the camera coordinate system of a vision sensor of a medical robot, by a probe tip sliding on the face of a patient is acquired; an image point cloud formed, in an image coordinate system, by the facial image of the patient extracted from the medical image of the patient is acquired; and the probe point cloud and the image point cloud are registered to obtain the transformation relation between the camera coordinate system and the image coordinate system. Compared with other existing approaches, the patient's head does not need to be positioned by a fixation tool, no markers need to be pasted on the patient's head, and no marker points need to be added to the patient's face, so the patient can be registered conveniently and quickly. In addition, the influence of external factors is eliminated, and the registration accuracy is effectively improved.
Drawings
Some specific embodiments of the present application will be described in detail hereinafter by way of illustration and not limitation with reference to the accompanying drawings. The same reference numbers in the drawings identify the same or similar elements or components. Those skilled in the art will appreciate that the drawings are not necessarily drawn to scale. In the drawings:
Fig. 1A is a flowchart illustrating the steps of a patient registration method according to a first embodiment of the present application;
Fig. 1B is a schematic diagram of an anatomical coordinate system provided according to an embodiment of the present application;
Fig. 2 is a flowchart illustrating the steps of a patient registration method according to a second embodiment of the present application;
Fig. 3 is a schematic structural diagram of a patient registration apparatus according to a third embodiment of the present application;
Fig. 4 is a schematic structural diagram of an electronic device according to a fourth embodiment of the present application;
Fig. 5 is a schematic diagram of the hardware structure of an electronic device according to a fifth embodiment of the present application.
Detailed Description
In order to make those skilled in the art better understand the technical solutions in the embodiments of the present application, these technical solutions will be described clearly and completely below with reference to the drawings; obviously, the described embodiments are only some of the embodiments of the present application, rather than all of them. All other embodiments obtained by a person of ordinary skill in the art based on the embodiments in the present application shall fall within the protection scope of the embodiments of the present application.
During a surgical procedure, the medical robot generally needs to obtain visual information related to the patient from markers or from scene modeling and to correlate that visual information with information in the medical image, so as to realize positioning or navigation functions for the patient. The visual information (such as the pose information of markers) is obtained by vision sensors and is expressed in the coordinate system of the vision sensor itself, referred to as the camera coordinate system for short; the space represented by this coordinate system is referred to as the camera space. The information in the medical image is obtained by processing the medical image on a computer. Medical images (such as CT, MR, etc.) are obtained by scanning the patient before or during surgery and carry a reference coordinate system attached to the patient, called the image coordinate system; the correspondingly represented space is called the image space. Patient registration is the process of solving the transformation relationship between these two spatial coordinate systems. The patient registration method provided in the embodiments of the present application is described in detail below.
Referring to fig. 1A, a flowchart illustrating steps of a patient registration method according to a first embodiment of the present application is shown.
Specifically, the patient registration method provided in this embodiment includes the following steps:
in step S101, a probe point cloud in which a probe tip sliding on the face of a patient is formed in a camera coordinate system of a vision sensor of a medical robot is acquired.
In this embodiment, the probe tip is placed on the patient's face and its position is continuously adjusted while keeping the tip on the face, and the coordinate data of the probe tip in the camera coordinate system are collected to form the probe point cloud. Since placing the probe tip on the patient's face is likely to deform it, so that the collected coordinate data of the probe tip in the camera coordinate system do not coincide with the coordinate data of the face in its normal state, the touch positions of the probe tip are usually chosen on the forehead and nose, which are not easily deformed. The probe tip touches the patient's face, the probe is moved so that the tip slides over the face, and the marker at the probe tail is kept within the field of view of the vision sensor and identified at all times. The coordinate data of the probe tip in the camera coordinate system are calculated and stored in a set A. After a period of time, the collected set A forms the probe point cloud. It should be understood that the above description is only exemplary, and the present embodiment is not limited thereto.
In some optional embodiments, when acquiring the probe point cloud formed, in the camera coordinate system of the vision sensor of the medical robot, by the probe tip sliding on the patient's face, coordinate offset data of the probe tip relative to the marker at the probe tail are acquired in the coordinate system of that marker; a coordinate transformation matrix from the coordinate system of the marker to the camera coordinate system, obtained by the vision sensor identifying the marker, is acquired; the coordinate data of the probe tip in the camera coordinate system are determined according to the coordinate offset data and the coordinate transformation matrix; and the probe point cloud is formed from the coordinate data of the probe tip in the camera coordinate system. Thereby, the coordinate data of the probe tip in the camera coordinate system can be accurately determined from the coordinate offset data and the coordinate transformation matrix, and the probe point cloud can in turn be effectively constructed from those coordinate data. It should be understood that the above description is only exemplary, and the present embodiment is not limited thereto.
In one specific example, the probe is a needle-type tool with a needle tip (the probe tip) at its end and a fixed marker, recognizable by the vision sensor, at its tail. The vision sensor obtains the pose data of the marker by identifying it, and the coordinate data of the probe tip in the camera coordinate system are then calculated through the calibrated offset relation between the probe tip and the marker. There are three coordinate systems in the environment in total: the coordinate system of the marker at the probe tail, denoted {Marker}; the coordinate system of the vision sensor, i.e. the camera coordinate system {Camera}; and the image coordinate system {Patient}. The coordinate offset of the probe tip in the {Marker} coordinate system, ${}^{M}p_{tip}$, is known from calibration, and the data directly obtained from the vision sensor at a given moment is the coordinate transformation matrix from {Marker} to {Camera}, ${}^{C}_{M}T$. The coordinate data of the probe tip under {Camera} at that moment can then be deduced:

$${}^{C}p_{tip} = {}^{C}_{M}T \, {}^{M}p_{tip}$$

It should be understood that the above description is only exemplary, and the present embodiment is not limited thereto.
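As an illustration of this step (a minimal sketch, not part of the claimed method), the following applies the marker-to-camera transform to the calibrated tip offset; it assumes the vision sensor reports ${}^{C}_{M}T$ as a 4x4 homogeneous matrix, and all names are hypothetical NumPy-based placeholders:

```python
import numpy as np

def probe_tip_in_camera(T_marker_to_camera: np.ndarray,
                        p_tip_marker: np.ndarray) -> np.ndarray:
    """Map the calibrated tip offset from {Marker} into {Camera}.

    T_marker_to_camera: 4x4 homogeneous transform reported by the vision
                        sensor when it recognizes the tail marker.
    p_tip_marker:       3-vector, the calibrated tip offset in {Marker}.
    """
    p_h = np.append(p_tip_marker, 1.0)        # homogeneous coordinates
    return (T_marker_to_camera @ p_h)[:3]

# Sliding the tip over the forehead and nose yields set A (the probe
# point cloud), one sample per recognized marker pose, e.g.:
# probe_cloud = np.array([probe_tip_in_camera(T_k, p_tip) for T_k in poses])
```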
In step S102, an image point cloud formed, in an image coordinate system, by the facial image of the patient extracted from the medical image of the patient is acquired.
In some optional embodiments, when acquiring the image point cloud formed, in the image coordinate system, by the facial image of the patient extracted from the medical image, the facial image is extracted to obtain a face model corresponding to the facial image; rays are emitted toward the face model according to the direction vector of the patient's face provided by the facial image, so as to obtain the intersection points of the rays and the face model; and the image point cloud is formed from the coordinate data, in the image coordinate system, of those intersection points. The image point cloud can thus be effectively formed from the face model corresponding to the facial image and the direction vector of the patient's face provided by the facial image. It should be understood that the above description is only exemplary, and the present embodiment is not limited thereto.
In a specific example, when a ray is emitted to the face model according to a direction vector of the face of the patient provided by the face image to obtain an intersection of the ray and the face model, a ray position is moved on a plane perpendicular to the direction vector of the face of the patient by a specified step size to emit the ray to the face model from different ray positions and obtain a plurality of intersections of the ray and the face model. It should be understood that the above description is only exemplary, and the present embodiment is not limited thereto.
In one specific example, a facial image is extracted from the patient image to form the image point cloud. The patient image to be registered is selected, and a face model is extracted from the facial image by either an automatic method (e.g., the Marching Cubes method) or a manual method (e.g., thresholding), denoted SurfB. A direction vector of the patient's face is determined from information provided by the facial image, e.g. the A->P (anterior-to-posterior) direction of the anatomical coordinate system shown in fig. 1B. Rays are cast from infinity along the direction vector of the patient's face, and each intersection with SurfB is taken as a facial point. The ray position is shifted, with a specified density (step size), in the plane perpendicular to the direction vector of the patient's face to obtain a plurality of intersection points, which are stored in a set B. The collection of intersection points stops when the ray range exceeds the area of the facial image, and the set B at that moment forms the image point cloud. It should be understood that the above description is only exemplary, and the present embodiment is not limited thereto.
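A sketch of this ray-casting step follows, assuming the trimesh library purely for mesh and ray utilities; the grid extent, step size, and the use of the mesh centroid to stand in for "infinity" are illustrative assumptions:

```python
import numpy as np
import trimesh  # assumed here purely for mesh/ray utilities

def face_point_cloud(mesh: trimesh.Trimesh, view_dir: np.ndarray,
                     step: float = 2.0) -> np.ndarray:
    """Cast parallel rays along the face direction (e.g. A->P) from a grid
    in the plane perpendicular to view_dir; the first hits form set B."""
    d = view_dir / np.linalg.norm(view_dir)
    u = np.cross(d, [0.0, 0.0, 1.0])              # span the grid plane
    if np.linalg.norm(u) < 1e-6:
        u = np.cross(d, [0.0, 1.0, 0.0])
    u /= np.linalg.norm(u)
    v = np.cross(d, u)
    extent = np.linalg.norm(mesh.bounds[1] - mesh.bounds[0])
    start = mesh.centroid - d * extent            # origin well outside SurfB
    grid = np.arange(-extent / 2, extent / 2, step)
    origins = np.array([start + a * u + b * v for a in grid for b in grid])
    dirs = np.tile(d, (len(origins), 1))
    hits, _, _ = mesh.ray.intersects_location(origins, dirs,
                                              multiple_hits=False)
    return hits                                    # image point cloud, set B
```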
In step S103, the probe point cloud and the image point cloud are registered to obtain a transformation relationship between the camera coordinate system and the image coordinate system.
In some optional embodiments, in registering the probe point cloud and the image point cloud, coarsely registering the probe point cloud and the image point cloud according to coordinate data of points in the probe point cloud and the image point cloud to obtain a coarse registration matrix for transforming the camera coordinate system and the image coordinate system; according to the rough registration matrix, performing fine registration on the probe point cloud and the image point cloud to obtain a fine registration matrix for transforming the camera coordinate system and the image coordinate system; and determining the fine registration matrix as a transformation relation between the camera coordinate system and the image coordinate system. Therefore, the probe point cloud and the image point cloud are roughly registered through the coordinate data of the points in the probe point cloud and the image point cloud, and the rough registration matrix can be accurately obtained. In addition, the probe point cloud and the image point cloud are precisely registered through the coarse registration matrix, so that the precise registration matrix can be accurately obtained, and the transformation relation between the camera coordinate system and the image coordinate system can be accurately determined. It should be understood that the above description is only exemplary, and the present embodiment is not limited thereto.
In some optional embodiments, when coarsely registering the probe point cloud and the image point cloud according to the coordinate data of the points in the probe point cloud and the image point cloud, determining normal vectors of the points in the probe point cloud and the image point cloud according to the coordinate data of the points in the probe point cloud and the image point cloud; determining characteristic values of points in the probe point cloud and the image point cloud according to normal vectors of the points in the probe point cloud and the image point cloud; and carrying out coarse registration on the probe point cloud and the image point cloud according to the characteristic values of the points in the probe point cloud and the image point cloud so as to obtain a coarse registration matrix. Therefore, the probe point cloud and the image point cloud are roughly registered through the characteristic values of the points in the probe point cloud and the image point cloud, and the rough registration matrix can be accurately obtained. It should be understood that the above description is only exemplary, and the present embodiment is not limited thereto.
In one specific example, the feature values of the points in the probe point cloud and the image point cloud may be fast point feature histograms (FPFH) of those points. When determining the fast point feature histogram of a point, the relative relationship between the point to be calculated and each of its k neighborhood points is first computed from their normal vectors to establish a simplified point feature histogram; the simplified point feature histograms of the k neighborhood points are then computed; and finally the fast point feature histogram is obtained by the calculation

$$F(p_q) = S(p_q) + \frac{1}{k}\sum_{i=1}^{k}\frac{1}{w_i}\,S(p_i)$$

where $S(p_q)$ denotes the simplified point feature histogram of the point to be calculated $p_q$, $F(p_q)$ denotes the fast point feature histogram of $p_q$, $S(p_i)$ denotes the simplified point feature histogram of the i-th neighborhood point $p_i$, and $w_i$ denotes the weight of the simplified point feature histogram of the i-th neighborhood point (typically the distance between $p_q$ and $p_i$). It should be understood that the above description is only exemplary, and the present embodiment is not limited thereto.
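To make the coarse registration step concrete, here is a sketch using Open3D as one possible library choice; the function names and parameters (compute_fpfh_feature, registration_ransac_based_on_feature_matching, the voxel size, radii and RANSAC settings) follow its pipelines.registration module as assumed for illustration and are not mandated by the method:

```python
import open3d as o3d  # one common library choice; assumed, not mandated

def coarse_register(source, target, voxel=3.0):
    """FPFH + RANSAC coarse registration of two o3d point clouds,
    returning the 4x4 coarse registration matrix T0."""
    def preprocess(pcd):
        down = pcd.voxel_down_sample(voxel)
        down.estimate_normals(o3d.geometry.KDTreeSearchParamHybrid(
            radius=2 * voxel, max_nn=30))
        fpfh = o3d.pipelines.registration.compute_fpfh_feature(
            down, o3d.geometry.KDTreeSearchParamHybrid(
                radius=5 * voxel, max_nn=100))
        return down, fpfh

    src, src_fpfh = preprocess(source)
    tgt, tgt_fpfh = preprocess(target)
    result = o3d.pipelines.registration.registration_ransac_based_on_feature_matching(
        src, tgt, src_fpfh, tgt_fpfh, True,        # mutual feature filter
        1.5 * voxel,                               # max correspondence distance
        o3d.pipelines.registration.TransformationEstimationPointToPoint(False),
        3,                                         # ransac_n
        [],                                        # no extra pruning checkers
        o3d.pipelines.registration.RANSACConvergenceCriteria(100000, 0.999))
    return result.transformation                   # coarse matrix T0
```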
In some optional embodiments, when the probe point cloud and the image point cloud are precisely aligned according to the coarse registration matrix, respectively initializing an optimal rotation matrix and an optimal translation vector according to a rotation matrix and a translation vector included in the coarse registration matrix; iteratively updating the initialized optimal rotation matrix and the optimal translation vector according to the coordinate data of the points in the probe point cloud and the image point cloud; and if the iteration termination condition is met, determining the fine registration matrix according to the optimal rotation matrix and the optimal translation vector. Therefore, the initialized optimal rotation matrix and the initialized optimal translation vector are iteratively updated through the coordinate data of the points in the probe point cloud and the image point cloud, and the fine registration matrix can be accurately determined. It should be understood that the above description is only exemplary, and the present embodiment is not limited thereto.
In a specific example, when the initialized optimal rotation matrix and the optimal translation vector are iteratively updated according to the coordinate data of the points in the probe point cloud and the image point cloud, the probe point cloud is transformed according to the optimal rotation matrix and the optimal translation vector, and the transformed probe point cloud is compared with the image point cloud to find out the nearest neighbor point of the points in the probe point cloud in the image point cloud; under the condition that the nearest neighbor point of a point in the probe point cloud in the image point cloud is found, respectively removing the mass center of the probe point cloud and the image point cloud, and determining the covariance matrix of the probe point cloud after the mass center is removed and the covariance matrix of the image point cloud after the mass center is removed; and carrying out singular value decomposition on the covariance matrix, and updating the optimal rotation matrix and the optimal translation vector according to a left singular matrix and a right singular matrix obtained by decomposition. Wherein the iteration termination condition comprises at least one of: the variation of the optimal rotation matrix obtained by the current iteration updating relative to the optimal rotation matrix obtained by the last iteration updating is smaller than a first preset value, and the variation of the optimal translation vector obtained by the current iteration updating relative to the optimal translation vector obtained by the last iteration updating is smaller than a second preset value; and the iteration updating times of the optimal rotation matrix and the optimal translation vector reach the preset maximum iteration times. The first preset value and the second preset value can be set by a person skilled in the art according to actual needs, and this embodiment does not limit this. It should be understood that the above description is only exemplary, and the present embodiment is not limited thereto.
In a specific example, the probe point cloud and the image point cloud are registered, and the transformation obtained after registration is the transformation relation between the camera coordinate system and the image coordinate system; that is, the registration is complete. The size of the probe point cloud is usually smaller than that of the image point cloud, so the probe point cloud in the camera coordinate system is taken as the source point cloud $P_s$ and the image point cloud in the image coordinate system as the target point cloud $P_t$. The point cloud registration problem can then be described as

$$R^*, t^* = \underset{R,\,t}{\arg\min}\;\frac{1}{|P_s|}\sum_{i=1}^{|P_s|}\bigl\|\,p_t^i - (R\,p_s^i + t)\,\bigr\|^2$$

where $p_s$ and $p_t$ are corresponding points in the source and target point clouds respectively, and $|P_s|$ is the size of the source point cloud, i.e. the number of points. There are many methods for registering the two point clouds, generally divided into a coarse registration step and a fine registration step. Coarse registration usually computes the normal vector of each point in the two point clouds, computes feature values (such as FPFH) from the normal vectors, matches the two point clouds according to the feature values, and obtains the coarse registration matrix $T_0$ (composed of a rotation matrix $R_0$ and a translation vector $t_0$). $T_0$ is then input as the initial value to a fine registration algorithm (such as ICP), which iterates to obtain the fine registration matrix $T^*$ (composed of a rotation matrix $R^*$ and a translation vector $t^*$). The algorithm flow of ICP is as follows. Initialization: let $k$ be the iteration count, initialized to 0, and let the initial transformation be $R_0$, $t_0$. a. Using the initial transformation $R_0$, $t_0$, or $R_{k-1}$, $t_{k-1}$ from the previous iteration, as the current optimal transformation, transform the source point cloud to obtain a temporary transformed point cloud, compare it with the target point cloud, and find the nearest neighbor in the target point cloud of each point in the source point cloud. b. With the point correspondences known, let

$$\bar{p}_s = \frac{1}{|P_s|}\sum_{i} p_s^i, \qquad \bar{p}_t = \frac{1}{|P_s|}\sum_{i} p_t^i$$

denote the centroids of the source and target point clouds respectively, and compute the centroid-removed point clouds $q_s^i = p_s^i - \bar{p}_s$ and $q_t^i = p_t^i - \bar{p}_t$. Then compute the covariance matrix

$$H = \sum_{i} q_s^i \,(q_t^i)^{\mathsf T}$$

The covariance matrix $H$ is a 3x3 matrix; performing singular value decomposition on it gives $H = U\Sigma V^{\mathsf T}$, and the optimal rotation corresponding to the current correspondences is $R_k = V U^{\mathsf T}$, with the optimal translation $t_k = \bar{p}_t - R_k\,\bar{p}_s$. c. Iterate steps a and b until an iteration termination condition (i.e. a convergence state) is met: the variation of $R_k$, $t_k$ is smaller than a certain value, or the set maximum number of iterations is reached. The final result $R^*$, $t^*$ is the $R_k$, $t_k$ at convergence. It should be understood that the above description is only exemplary, and the present embodiment is not limited thereto.
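The ICP loop just described can be sketched compactly in NumPy/SciPy; the reflection guard on the SVD result is a standard safeguard not spelled out in the text, and the tolerance and iteration cap are illustrative:

```python
import numpy as np
from scipy.spatial import cKDTree

def icp(src, tgt, R0, t0, max_iter=50, tol=1e-6):
    """Point-to-point ICP: src, tgt are Nx3/Mx3 arrays; (R0, t0) is the
    coarse initialization from the FPFH step."""
    R, t = R0, t0
    tree = cKDTree(tgt)
    for _ in range(max_iter):
        moved = src @ R.T + t                      # step a: transform source
        _, idx = tree.query(moved)                 # nearest neighbors in target
        corr = tgt[idx]
        cs, ct = src.mean(axis=0), corr.mean(axis=0)   # step b: centroids
        H = (src - cs).T @ (corr - ct)             # 3x3 covariance matrix
        U, _, Vt = np.linalg.svd(H)
        R_new = Vt.T @ U.T
        if np.linalg.det(R_new) < 0:               # guard against reflections
            Vt[2] *= -1
            R_new = Vt.T @ U.T
        t_new = ct - R_new @ cs
        converged = (np.linalg.norm(R_new - R) < tol and
                     np.linalg.norm(t_new - t) < tol)  # step c
        R, t = R_new, t_new
        if converged:
            break
    return R, t                                     # R*, t* at convergence
```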
In step S104, the transformation relationship between the camera coordinate system and the image coordinate system is determined as a patient registration result.
In this embodiment, the transformation relationship between the camera coordinate system and the image coordinate system may be the fine registration matrix. Thus, the fine registration matrix may be determined as a patient registration result. It should be understood that the above description is only exemplary, and the present embodiment is not limited thereto.
In some optional embodiments, after obtaining the transformation relationship between the camera coordinate system and the image coordinate system, the method further comprises: determining registration errors of the probe point cloud and the image point cloud according to a transformation relation between the camera coordinate system and the image coordinate system, coordinate data of points in the probe point cloud, and coordinate data of points in the image point cloud; and verifying the transformation relation between the camera coordinate system and the image coordinate system according to the registration error of the probe point cloud and the image point cloud so as to obtain a verification result of the transformation relation between the camera coordinate system and the image coordinate system. Therefore, the conversion relation between the camera coordinate system and the image coordinate system is verified through the registration error of the probe point cloud and the image point cloud, and the verification result of the conversion relation between the camera coordinate system and the image coordinate system can be accurately obtained. It should be understood that the above description is only exemplary, and the present embodiment is not limited thereto.
In a specific example, if the registration error satisfies a condition, the registration is successful; otherwise, the registration fails. The registration error is typically measured by the root mean square error Err: the smaller Err is, the higher the similarity of the two point clouds and the more successful the registration. Of course, some registration algorithms may have their own error metric, which is not described in detail here.

$$\mathrm{Err}(R^*, t^*) = \sqrt{\frac{1}{|P_s|}\sum_{i=1}^{|P_s|}\bigl\|\,R^*\,p_s^i + t^* - p_t^{i'}\,\bigr\|^2}$$

where $p_t^{i'}$ is the nearest neighbor in the target point cloud of the transformed source point $R^*\,p_s^i + t^*$. It should be understood that the above description is only exemplary, and the present embodiment is not limited thereto.
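A minimal sketch of this Err metric, finding the nearest-neighbor correspondences with a k-d tree (the helper name is hypothetical):

```python
import numpy as np
from scipy.spatial import cKDTree

def registration_rmse(src, tgt, R, t):
    """Root mean square nearest-neighbor error after applying (R, t)."""
    moved = src @ R.T + t
    d, _ = cKDTree(tgt).query(moved)    # distance to nearest neighbor
    return np.sqrt(np.mean(d ** 2))
```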
According to the patient registration method provided by the embodiments of the present application, a probe point cloud formed, in the camera coordinate system of the vision sensor of a medical robot, by a probe tip sliding on the face of a patient is acquired; an image point cloud formed, in an image coordinate system, by the facial image of the patient extracted from the medical image of the patient is acquired; and the probe point cloud and the image point cloud are registered to obtain the transformation relation between the camera coordinate system and the image coordinate system. Compared with other existing approaches, the patient's head does not need to be positioned by a fixation tool, no markers need to be pasted on the patient's head, and no marker points need to be added to the patient's face, so the patient can be registered conveniently and quickly. In addition, the influence of external factors is eliminated, and the registration accuracy is effectively improved.
The patient registration method provided by the present embodiment may be performed by any suitable device having data processing capabilities, including but not limited to: a camera, a terminal, a mobile terminal, a PC, a server, an in-vehicle device, an entertainment device, an advertising device, a Personal Digital Assistant (PDA), a tablet computer, a notebook computer, a handheld game console, smart glasses, a smart watch, a wearable device, a virtual display device, a display enhancement device, or the like.
Referring to fig. 2, a flowchart illustrating steps of a patient registration method according to a second embodiment of the present application is shown.
Specifically, the patient registration method provided in this embodiment includes the following steps:
in step S201, a probe point cloud in which a probe tip sliding on the face of a patient is configured in a camera coordinate system of a vision sensor of a medical robot is acquired.
Since the specific implementation of step S201 is similar to the specific implementation of step S101 in the first embodiment, it is not repeated herein.
In step S202, an image point cloud formed, in an image coordinate system, by the facial image of the patient extracted from the medical image of the patient is acquired.
Since the specific implementation of step S202 is similar to the specific implementation of step S102 in the first embodiment, it is not repeated herein.
In step S203, outliers are removed from the points in the image point cloud to obtain a filtered and denoised point cloud.
In this embodiment, an MCMD (Maximum Consistency with Minimum Distance)-Z automatic denoising algorithm may be adopted:

For any point $p$ in the image point cloud $P_x$, compute its k-neighborhood point cloud set $P_k \subset P_x$, where the k-neighborhood point cloud is the point cloud formed by the k points nearest to $p$;

Compute the center point $\bar{p}$ and the normal vector $\hat{n}$ of the plane fitted to the neighborhood;

Traverse the neighborhood point cloud set $P_k$ and compute the orthogonal distance OD from each point to the fitting plane, giving the set of orthogonal distances

$$OD_j = (p_j - \bar{p})\cdot\hat{n}, \qquad j = 1,\dots,N(P_k)$$

where $p_j$ denotes any point in the point cloud set $P_k$ and $N(P_k)$ denotes the number of points in $P_k$;

Compute the Rz-score of all points in the neighborhood:

$$Rz_j = \frac{OD_j - \overline{OD}}{\mathrm{MAD}}, \qquad \mathrm{MAD} = \frac{1}{N(P_k)}\sum_{j}\bigl|OD_j - \overline{OD}\bigr|$$

where MAD denotes the mean of absolute deviations, $\overline{OD}$ denotes the average of the OD values of all points in the neighborhood, and the Rz-score is a custom variable used to measure whether a point is an outlier;

Judge the Rz-score: if it is less than 2.5 the point is determined to be a normal point, otherwise it is determined to be an outlier. After the traversal is complete, all outliers are eliminated, giving the filtered and denoised point cloud $P_m$. It should be understood that the above description is only exemplary, and the present embodiment is not limited thereto.
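A sketch of this denoising pass follows, under the reading above that MAD is the mean of absolute deviations; the SVD plane fit, the choice k = 20, and the absolute-value test against 2.5 are illustrative assumptions:

```python
import numpy as np
from scipy.spatial import cKDTree

def mcmd_z_denoise(points, k=20, threshold=2.5):
    """Remove outliers with the Rz-score test described above."""
    tree = cKDTree(points)
    keep = np.ones(len(points), dtype=bool)
    for i, p in enumerate(points):
        _, idx = tree.query(p, k=k + 1)       # k nearest neighbors (+ self)
        nbrs = points[idx]
        center = nbrs.mean(axis=0)
        # plane normal = right singular vector of smallest singular value
        _, _, Vt = np.linalg.svd(nbrs - center)
        normal = Vt[2]
        od = (nbrs - center) @ normal          # orthogonal distances OD_j
        mad = np.mean(np.abs(od - od.mean()))  # mean of absolute deviations
        rz = (od[0] - od.mean()) / mad if mad > 0 else 0.0
        if abs(rz) >= threshold:               # od[0] belongs to p itself
            keep[i] = False
    return points[keep]                         # filtered point cloud P_m
```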
In step S204, the filtered and denoised point cloud is down-sampled to obtain a simplified point cloud.
In this embodiment, a three-dimensional voxel grid is created for the input filtered and denoised point cloud. The coordinate values of all points are searched to find the maximum values $X_{max}$, $Y_{max}$, $Z_{max}$ and minimum values $X_{min}$, $Y_{min}$, $Z_{min}$ along the X, Y, Z directions, which determine the side length $L$ of the large cubic grid. If the side length $L$ of the large cubic grid is larger than a preset side length $L_0$, the grid is divided into several voxel grids along the X, Y, Z directions. A point cloud number threshold $N_0$ is preset, and the number of points $n$ in each voxel grid is compared with this preset threshold in turn; if $n$ exceeds the preset value, the following steps are executed, otherwise the voxel grid is deleted.

The side lengths $L_i$ of the several small cubic grids are again compared with the preset side length $L_0$; if $L_i > L_0$, the grid is further divided into several small cubes; if $L_i \le L_0$, the points in the voxel grid are traversed and the center of gravity of the voxel grid approximately replaces the other points in the grid. The center of gravity is calculated as follows:

$$d_{min} = \min_{0 \le i \le n} d_i$$

where $d_i$ denotes the distance from point $(x_i, y_i, z_i)$ to the region center of the voxel grid and $d_{min}$ denotes the minimum value of the distance; the point $(x_i, y_i, z_i)$, $0 \le i \le n$, at which the minimum is attained is taken as the center of gravity $(x_0, y_0, z_0)$;

$$d_{max} = \max_{0 \le j \le n-1} d_j$$

where $d_j$ denotes the distance from point $(x_j, y_j, z_j)$ to the region center of gravity $(x_0, y_0, z_0)$ of the voxel grid, and $d_{max}$ denotes the maximum value of the distance, the corresponding point being the farthest point found.

Further, the center of gravity $(x_0, y_0, z_0)$ in each voxel grid is preserved, and all voxel grids are processed to obtain the simplified point cloud. A threshold $\tau$ is set: if $\tau \le d_{max}$, the j-th point corresponding to $d_{max}$ is retained in addition to the center-of-gravity point; otherwise only the center-of-gravity point is retained. The remaining points are thus the center-of-gravity points and the points satisfying the distance maximum. It should be understood that the above description is only exemplary, and the present embodiment is not limited thereto.
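A flat (single-level) sketch of this down-sampling follows; it keeps, per voxel, the point nearest the voxel center as the "center of gravity" and optionally the farthest point when $\tau \le d_{max}$, but it omits the recursive subdivision against $L_0$ described above, which is assumed handled by the choice of voxel_size:

```python
import numpy as np

def voxel_downsample(points, voxel_size, tau=None, min_points=1):
    """Simplify an Nx3 point cloud on a voxel grid, as described above."""
    keys = np.floor(points / voxel_size).astype(np.int64)
    out = []
    for key in np.unique(keys, axis=0):
        cell = points[np.all(keys == key, axis=1)]
        if len(cell) < min_points:
            continue                               # sparse voxel: delete it
        center = (key + 0.5) * voxel_size          # region center of voxel
        rep = cell[np.argmin(np.linalg.norm(cell - center, axis=1))]
        out.append(rep)                            # "center of gravity" point
        d = np.linalg.norm(cell - rep, axis=1)
        if tau is not None and tau <= d.max():
            out.append(cell[np.argmax(d)])         # keep farthest point too
    return np.array(out)                            # simplified point cloud
```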
In step S205, the probe point cloud and the simplified point cloud are registered to obtain a transformation relationship between the camera coordinate system and the image coordinate system.
Since the specific implementation of step S205 is similar to the specific implementation of step S103 in the first embodiment, it is not repeated herein.
In step S206, the transformation relationship between the camera coordinate system and the image coordinate system is determined as a patient registration result.
Since the specific implementation of step S206 is similar to the specific implementation of step S104 in the first embodiment, it is not repeated herein.
In some optional embodiments, after obtaining the transformation relationship between the camera coordinate system and the image coordinate system, the method further comprises: acquiring first coordinate data of a marker of the head of the patient touched by the probe tip in the camera coordinate system and second coordinate data of the marker of the head of the patient in the medical image in the image coordinate system; determining a registration error of the probe point cloud and the image point cloud according to a transformation relation between the camera coordinate system and the image coordinate system, the first coordinate data and the second coordinate data; and verifying the transformation relation between the camera coordinate system and the image coordinate system according to the registration error of the probe point cloud and the image point cloud so as to obtain a verification result of the transformation relation between the camera coordinate system and the image coordinate system. Therefore, the conversion relation between the camera coordinate system and the image coordinate system is verified through the registration error of the probe point cloud and the image point cloud, and the verification result of the conversion relation between the camera coordinate system and the image coordinate system can be accurately obtained. It should be understood that the above description is only exemplary, and the present embodiment is not limited thereto.
In one specific example, the error metric is implemented by adding a verification point. Before the medical image of the patient is scanned, a marker that can be identified in the medical image may be attached to the patient's head, so that the marker can be clearly identified in the medical image of the patient and its three-dimensional position in image space, ${}^{P}p_{verf}$, can be recovered. After the registration is finished, the marker is touched with the probe tip, and its three-dimensional position in camera space, ${}^{C}p_{verf}$, is obtained by the vision sensor. The error is measured by the transformation between the two points:

$$\mathrm{Err}(R^*, t^*) = \bigl\|\,R^*\,{}^{C}p_{verf} + t^* - {}^{P}p_{verf}\,\bigr\|_2$$

It should be understood that the above description is only exemplary, and the present embodiment is not limited thereto.
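This verification-point error reduces to a single transformed-distance check; a minimal sketch, assuming T is the 4x4 fine registration matrix from camera space to image space:

```python
import numpy as np

def verification_error(T, p_verf_camera, p_verf_image):
    """||R* @ Cp_verf + t* - Pp_verf||_2 for the added verification marker."""
    moved = T[:3, :3] @ p_verf_camera + T[:3, 3]
    return np.linalg.norm(moved - p_verf_image)
```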
On the basis of the first embodiment, abnormal points in the image point cloud are removed to obtain a filtered and denoised point cloud, the filtered and denoised point cloud is downsampled to obtain a simplified point cloud, and the probe point cloud and the simplified point cloud are registered to obtain a transformation relation between the camera coordinate system and the image coordinate system. In addition, the filtered and denoised point cloud is subjected to down-sampling, so that the filtered and denoised point cloud can be effectively simplified, and the registration speed of the simplified point cloud is effectively improved.
The patient registration method provided by the present embodiment may be performed by any suitable device having data processing capabilities, including but not limited to: a camera, a terminal, a mobile terminal, a PC, a server, an in-vehicle device, an entertainment device, an advertising device, a Personal Digital Assistant (PDA), a tablet computer, a notebook computer, a handheld game console, smart glasses, a smart watch, a wearable device, a virtual display device, a display enhancement device, or the like.
Referring to fig. 3, a schematic structural diagram of a patient registration apparatus according to a third embodiment of the present application is shown.
The patient registration apparatus provided in this embodiment includes: a first acquisition module 301, configured to acquire a probe point cloud formed by a probe point sliding on a patient's face in a camera coordinate system of a vision sensor of a medical robot; a second obtaining module 302, configured to obtain an image point cloud formed in an image coordinate system of a facial image of the patient extracted from a medical image of the patient; a registration module 303, configured to register the probe point cloud and the image point cloud to obtain a transformation relationship between the camera coordinate system and the image coordinate system; a first determining module 304, configured to determine a transformation relationship between the camera coordinate system and the image coordinate system as a patient registration result.
Optionally, the first obtaining module 301 is specifically configured to: acquiring coordinate offset data of a probe tip relative to a marker in a coordinate system of the marker at the tail of the probe; acquiring a coordinate transformation matrix from a coordinate system of the marker to a coordinate system of the camera, wherein the coordinate transformation matrix is obtained by identifying the marker by the visual sensor; determining coordinate data of the probe tip in the camera coordinate system according to the coordinate offset data and the coordinate transformation matrix; and forming the probe point cloud according to the coordinate data of the probe point in the camera coordinate system.
Optionally, the second obtaining module 302 includes: the extraction submodule is used for extracting the face image to obtain a face model corresponding to the face image; the transmitting sub-module is used for transmitting rays to the face model according to the direction vector of the face of the patient provided by the face image so as to obtain an intersection point of the rays and the face model; and the forming submodule is used for forming the image point cloud according to the coordinate data of the intersection point of the ray and the face model in the image coordinate system.
Optionally, the transmitting submodule is specifically configured to: move the ray position, in a plane perpendicular to the direction vector of the patient's face, by a specified step size, so as to emit rays toward the face model from different ray positions and obtain a plurality of intersection points of the rays with the face model.
Optionally, after the second obtaining module 302, the apparatus further includes: the removing module is used for removing abnormal points from the points in the image point cloud to obtain the point cloud after filtering and denoising; a down-sampling module, configured to down-sample the filtered and denoised point cloud to obtain a simplified point cloud, where the registration module 303 is specifically configured to: and registering the probe point cloud and the simplified point cloud to obtain a transformation relation between the camera coordinate system and the image coordinate system.
Optionally, the registration module 303 includes: the rough registration sub-module is used for carrying out rough registration on the probe point cloud and the image point cloud according to the coordinate data of the points in the probe point cloud and the image point cloud so as to obtain a rough registration matrix for transforming the camera coordinate system and the image coordinate system; the fine registration sub-module is used for performing fine registration on the probe point cloud and the image point cloud according to the coarse registration matrix so as to obtain a fine registration matrix for transforming the camera coordinate system and the image coordinate system; and the determining submodule is used for determining the fine registration matrix as a transformation relation between the camera coordinate system and the image coordinate system.
Optionally, the coarse registration sub-module is specifically configured to: determining normal vectors of points in the probe point cloud and the image point cloud according to coordinate data of the points in the probe point cloud and the image point cloud; determining characteristic values of points in the probe point cloud and the image point cloud according to normal vectors of the points in the probe point cloud and the image point cloud; and carrying out coarse registration on the probe point cloud and the image point cloud according to the characteristic values of the points in the probe point cloud and the image point cloud so as to obtain a coarse registration matrix.
Optionally, the fine registration sub-module comprises: the initialization unit is used for respectively initializing an optimal rotation matrix and an optimal translation vector according to the rotation matrix and the translation vector included by the coarse registration matrix; the iterative updating unit is used for iteratively updating the initialized optimal rotation matrix and the optimal translation vector according to the coordinate data of the points in the probe point cloud and the image point cloud; and the determining unit is used for determining the fine registration matrix according to the optimal rotation matrix and the optimal translation vector if the iteration termination condition is met.
Optionally, the iteration update unit is specifically configured to: transforming the probe point cloud according to the optimal rotation matrix and the optimal translation vector, and comparing the transformed probe point cloud with the image point cloud to find out the nearest neighbor point of the point in the probe point cloud in the image point cloud; under the condition that the nearest neighbor point of a point in the probe point cloud in the image point cloud is found, respectively removing the mass center of the probe point cloud and the image point cloud, and determining the covariance matrix of the probe point cloud after the mass center is removed and the covariance matrix of the image point cloud after the mass center is removed; and carrying out singular value decomposition on the covariance matrix, and updating the optimal rotation matrix and the optimal translation vector according to a left singular matrix and a right singular matrix obtained by decomposition.
Optionally, the iteration termination condition comprises at least one of: the variation of the optimal rotation matrix obtained by the current iteration updating relative to the optimal rotation matrix obtained by the last iteration updating is smaller than a first preset value, and the variation of the optimal translation vector obtained by the current iteration updating relative to the optimal translation vector obtained by the last iteration updating is smaller than a second preset value; and the iteration updating times of the optimal rotation matrix and the optimal translation vector reach the preset maximum iteration times.
Optionally, after the registration module 303, the apparatus further comprises: a second determining module, configured to determine a registration error between the probe point cloud and the image point cloud according to a transformation relationship between the camera coordinate system and the image coordinate system, coordinate data of a point in the probe point cloud, and coordinate data of a point in the image point cloud; the first verification module is used for verifying the transformation relation between the camera coordinate system and the image coordinate system according to the registration error of the probe point cloud and the image point cloud so as to obtain a verification result of the transformation relation between the camera coordinate system and the image coordinate system.
Optionally, after the registration module 303, the apparatus further comprises: the third acquisition module is used for acquiring first coordinate data of a marker of the head of the patient, which is touched by the probe tip, in the camera coordinate system and second coordinate data of the marker of the head of the patient in the medical image in the image coordinate system; a third determining module, configured to determine a registration error between the probe point cloud and the image point cloud according to a transformation relationship between the camera coordinate system and the image coordinate system, the first coordinate data, and the second coordinate data; and the second verification module is used for verifying the transformation relation between the camera coordinate system and the image coordinate system according to the registration error of the probe point cloud and the image point cloud so as to obtain a verification result of the transformation relation between the camera coordinate system and the image coordinate system.
The patient registration apparatus of this embodiment is used to implement the corresponding patient registration method in the foregoing method embodiments, and has the beneficial effects of the corresponding method embodiments, which are not described herein again.
Fig. 4 is a schematic structural diagram of an electronic device according to a fourth embodiment of the present application; the electronic device may include:
one or more processors 401;
a computer-readable medium 402, configured to store one or more programs,

wherein the one or more programs, when executed by the one or more processors, cause the one or more processors to implement the patient registration method described in the first or second embodiment above.
Fig. 5 is a hardware structure of an electronic device according to a fifth embodiment of the present application; as shown in fig. 5, the hardware structure of the electronic device may include: a processor 501, a communication interface 502, a computer-readable medium 503, and a communication bus 504;
wherein the processor 501, the communication interface 502 and the computer readable medium 503 are communicated with each other through a communication bus 504;
alternatively, the communication interface 502 may be an interface of a communication module, such as an interface of a GSM module;
the processor 501 may be specifically configured to: acquiring a probe point cloud formed by a probe point sliding on the face of a patient in a camera coordinate system of a visual sensor of the medical robot; acquiring an image point cloud formed by the facial image of the patient extracted from the medical image of the patient in an image coordinate system; registering the probe point cloud and the image point cloud to obtain a transformation relation between the camera coordinate system and the image coordinate system; and determining the transformation relation between the camera coordinate system and the image coordinate system as a patient registration result.
The processor 501 may be a general-purpose processor, including a Central Processing Unit (CPU), a Network Processor (NP), and the like; it may also be a Digital Signal Processor (DSP), an Application Specific Integrated Circuit (ASIC), a Field-Programmable Gate Array (FPGA) or other programmable logic device, a discrete gate or transistor logic device, or a discrete hardware component. The various methods, steps, and logic blocks disclosed in the embodiments of the present application may be implemented or performed by such a processor. A general-purpose processor may be a microprocessor, or the processor may be any conventional processor or the like.
The computer-readable medium 503 may be, but is not limited to, a Random Access Memory (RAM), a Read-Only Memory (ROM), a Programmable Read-Only Memory (PROM), an Erasable Programmable Read-Only Memory (EPROM), an Electrically Erasable Programmable Read-Only Memory (EEPROM), and the like.
In particular, according to an embodiment of the present disclosure, the processes described above with reference to the flowcharts may be implemented as computer software programs. For example, embodiments of the present disclosure include a computer program product comprising a computer program embodied on a computer-readable medium, the computer program comprising program code configured to perform the method illustrated by the flowchart. In such an embodiment, the computer program may be downloaded and installed from a network via the communication section, and/or installed from a removable medium. The computer program, when executed by a Central Processing Unit (CPU), performs the above-described functions defined in the method of the present application. It should be noted that the computer-readable medium described herein may be a computer-readable signal medium or a computer-readable storage medium, or any combination of the two. The computer-readable storage medium may be, for example, but is not limited to, an electronic, magnetic, optical, electromagnetic, infrared, or semiconductor system, apparatus, or device, or any combination of the foregoing. More specific examples of the computer-readable storage medium may include, but are not limited to: an electrical connection having one or more wires, a portable computer diskette, a hard disk, a random access memory (RAM), a read-only memory (ROM), an erasable programmable read-only memory (EPROM or flash memory), an optical fiber, a portable compact disc read-only memory (CD-ROM), an optical storage device, a magnetic storage device, or any suitable combination of the foregoing. In the present application, a computer-readable storage medium may be any tangible medium that can contain or store a program for use by or in connection with an instruction execution system, apparatus, or device. In the present application, however, a computer-readable signal medium may include a propagated data signal with computer-readable program code embodied therein, for example, in baseband or as part of a carrier wave. Such a propagated data signal may take many forms, including, but not limited to, electromagnetic, optical, or any suitable combination thereof. A computer-readable signal medium may be any computer-readable medium that is not a computer-readable storage medium and that can communicate, propagate, or transport a program for use by or in connection with an instruction execution system, apparatus, or device. Program code embodied on a computer-readable medium may be transmitted using any appropriate medium, including but not limited to: wireless, wireline, optical fiber cable, RF, etc., or any suitable combination of the foregoing.
Computer program code configured to carry out operations of the present application may be written in any combination of one or more programming languages, including object-oriented programming languages such as Java, Smalltalk, and C++, as well as conventional procedural programming languages such as the "C" language or similar programming languages. The program code may execute entirely on the user's computer, partly on the user's computer, as a stand-alone software package, partly on the user's computer and partly on a remote computer, or entirely on the remote computer or server. In the latter scenario, the remote computer may be connected to the user's computer through any type of network, including a Local Area Network (LAN) or a Wide Area Network (WAN), or the connection may be made to an external computer (for example, through the Internet using an Internet service provider).
The flowchart and block diagrams in the figures illustrate the architecture, functionality, and operation of possible implementations of systems, methods, and computer program products according to various embodiments of the present application. In this regard, each block in the flowchart or block diagrams may represent a module, segment, or portion of code, which comprises one or more executable instructions for implementing the specified logical function(s). The precedence relationships given in the above embodiments are only exemplary; in particular implementations, there may be fewer or more steps, or the execution order may be modified. That is, in some alternative implementations, the functions noted in the blocks may occur out of the order noted in the figures. For example, two blocks shown in succession may, in fact, be executed substantially concurrently, or the blocks may sometimes be executed in the reverse order, depending upon the functionality involved. It will also be noted that each block of the block diagrams and/or flowchart illustration, and combinations of blocks in the block diagrams and/or flowchart illustration, can be implemented by special-purpose hardware-based systems that perform the specified functions or acts, or by combinations of special-purpose hardware and computer instructions.
The modules described in the embodiments of the present application may be implemented by software or hardware. The described modules may also be provided in a processor, which may be described as: a processor comprising a first acquisition module, a second acquisition module, a registration module, and a first determination module. The names of these modules do not, in some cases, constitute a limitation on the modules themselves; for example, the first acquisition module may also be described as "a module that acquires a probe point cloud formed by a probe tip sliding on a patient's face in the camera coordinate system of a vision sensor of a medical robot".
As another aspect, the present application further provides a computer-readable medium on which a computer program is stored, which program, when executed by a processor, implements the patient registration method as described in the above first or second embodiment.
As another aspect, the present application also provides a computer-readable medium, which may be contained in the apparatus described in the above embodiments, or may exist separately without being assembled into the apparatus. The computer-readable medium carries one or more programs which, when executed by the apparatus, cause the apparatus to: acquire a probe point cloud formed by a probe tip sliding on the face of a patient in the camera coordinate system of a vision sensor of the medical robot; acquire an image point cloud formed, in an image coordinate system, by the facial image of the patient extracted from the medical image of the patient; register the probe point cloud and the image point cloud to obtain a transformation relation between the camera coordinate system and the image coordinate system; and determine the transformation relation between the camera coordinate system and the image coordinate system as the patient registration result.
The expressions "first", "second", "said first" or "said second" used in various embodiments of the present disclosure may modify various components regardless of order and/or importance, but these expressions do not limit the respective components. The above description is only configured for the purpose of distinguishing an element from other elements. For example, the first user equipment and the second user equipment represent different user equipment, although both are user equipment. For example, a first element could be termed a second element, and, similarly, a second element could be termed a first element, without departing from the scope of the present disclosure.
When an element (e.g., a first element) is referred to as being (operably or communicatively) "coupled" or "connected" to another element (e.g., a second element), it is understood that the element may be connected to the other element directly, or indirectly via yet another element (e.g., a third element). In contrast, when an element (e.g., a first element) is referred to as being "directly connected" or "directly coupled" to another element (e.g., a second element), no element (e.g., a third element) is interposed therebetween.
The above description is only a preferred embodiment of the application and is illustrative of the principles of the technology employed. It will be appreciated by those skilled in the art that the scope of the invention herein disclosed is not limited to the particular combination of features described above, but also encompasses other arrangements formed by any combination of the above features or their equivalents without departing from the spirit of the invention. For example, the above features may be replaced with (but not limited to) features having similar functions disclosed in the present application.

Claims (11)

1. A method of patient registration, the method comprising:
acquiring a probe point cloud formed by a probe tip sliding on the face of a patient in a camera coordinate system of a vision sensor of a medical robot;
acquiring an image point cloud formed by the facial image of the patient extracted from the medical image of the patient in an image coordinate system;
removing abnormal points from the points in the image point cloud to obtain a filtered and denoised point cloud;
down-sampling the filtered and denoised point cloud to obtain a simplified point cloud;
determining normal vectors of the points in the probe point cloud and the simplified point cloud according to the coordinate data of the points in the probe point cloud and the simplified point cloud;
determining characteristic values of points in the probe point cloud and the simplified point cloud according to normal vectors of the points in the probe point cloud and the simplified point cloud;
according to the characteristic values of the points in the probe point cloud and the simplified point cloud, carrying out coarse registration on the probe point cloud and the simplified point cloud so as to obtain a coarse registration matrix for transforming the camera coordinate system and the image coordinate system;
according to the rough registration matrix, performing fine registration on the probe point cloud and the simplified point cloud to obtain a fine registration matrix for transforming the camera coordinate system and the image coordinate system;
determining the fine registration matrix as a transformation relation between the camera coordinate system and the image coordinate system;
determining a transformation relation between the camera coordinate system and the image coordinate system as a patient registration result;
wherein down-sampling the filtered and denoised point cloud to obtain the simplified point cloud comprises: creating a three-dimensional voxel grid over the points of the filtered and denoised point cloud; searching the coordinate values of all the points to find the maxima $X_{\max}$, $Y_{\max}$, $Z_{\max}$ and the minima $X_{\min}$, $Y_{\min}$, $Z_{\min}$ along the X, Y, Z directions; determining the side length $L$ of the large cubic grid; if $L$ is larger than a preset side length $L_0$, dividing the grid into a plurality of voxel grids along the X, Y, Z directions; presetting a point cloud number threshold $N_0$, and sequentially comparing the number of points $n$ in each of the voxel grids with the preset threshold: if $n$ exceeds the preset value, executing the following steps, otherwise deleting the voxel grid;
comparing the side lengths $L_i$ of the small cubic grids with the preset side length $L_0$ again: if a side length is greater than $L_0$, continuing to divide into a plurality of smaller cubes; if it is less than or equal to $L_0$, traversing the points in the voxel grid and approximately replacing the other points in the voxel grid with its center of gravity, the center of gravity being determined by:
$$d_{\min} = \min\{d_i\}, \quad 0 \le i \le n$$
wherein $d_i$ denotes the distance from the point $(x_i, y_i, z_i)$ to the region center of its voxel grid, $d_{\min}$ denotes the minimum of these distances, the point $(x_i, y_i, z_i)$ at which $d_i$ attains the minimum is the center of gravity, and $\min\{d_i\}$ denotes the minimum of $\{d_i\}$ over $0 \le i \le n$;
$$d_{\max} = \max\{d_j\}, \quad 0 \le j \le n-1$$
wherein $d_j$ denotes the distance from the point $(x_j, y_j, z_j)$ to the region center of gravity $(x_0, y_0, z_0)$ of its voxel grid, $d_{\max}$ denotes the maximum of these distances, the corresponding point being the farthest point found, and $\max\{d_j\}$ denotes the maximum of $d_j$ over $0 \le j \le n-1$;
preserving the center-of-gravity point $(x_0, y_0, z_0)$ within each voxel grid and processing all voxel grids to obtain the simplified point cloud; setting a threshold $\tau$: if $\tau \le d_{\max}$, retaining both the $j$-th point corresponding to $d_{\max}$ and the center-of-gravity point, otherwise retaining only the center-of-gravity point; the center-of-gravity point and the point satisfying the maximum distance are the reserved points;
when determining the fast point feature histograms of the points in the probe point cloud and the image point cloud, first, for each point to be calculated, the relative relationship between the point and its k neighborhood points is calculated according to the normal vectors of the point and the k neighborhood points, and a simplified point feature histogram is established; then the simplified point feature histograms of the k neighborhood points are calculated; and finally the fast point feature histogram is obtained by weighted calculation, the calculation expression being
$$F(p_q) = S(p_q) + \frac{1}{k}\sum_{t=1}^{k}\frac{1}{w_t}\,S(p_t)$$
wherein $S(p_q)$ denotes the simplified point feature histogram of the point $p_q$ to be calculated, $F(p_q)$ denotes the fast point feature histogram of $p_q$, $S(p_t)$ denotes the simplified point feature histogram of the $t$-th neighborhood point, and $w_t$ denotes the weight of the simplified point feature histogram of the $t$-th neighborhood point;
when matching the probe point cloud and the image point cloud, the size of the probe point cloud is smaller than that of the image point cloud; taking the probe point cloud in the camera coordinate system as the source point cloud $p_s$ and the image point cloud in the image coordinate system as the target point cloud $p_t$, the point cloud registration problem is described as:
$$(R^*, t^*) = \arg\min_{R,\,t} \frac{1}{|p_s|} \sum_{m=1}^{|p_s|} \left\| p_t^m - \left( R \cdot p_s^m + t \right) \right\|^2$$
wherein $p_s^m$ and $p_t^m$ are corresponding points in the source point cloud and the target point cloud, $|p_s|$ is the size of the source point cloud, i.e., the number of points, $p_t^m$ denotes the $m$-th point in the target point cloud, $p_s^m$ denotes the $m$-th point in the source point cloud, $R^*$ denotes the rotation matrix, and $t^*$ denotes the translation vector.
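For readers who want to see the voxel down-sampling of claim 1 in executable form, the following is a minimal NumPy sketch. It deliberately collapses the recursive subdivision of over-long grids into a single fixed voxel size, and the parameter names (`L0`, `n_min`, `tau`) are illustrative assumptions rather than the claimed implementation:

```python
import numpy as np

def voxel_downsample(points: np.ndarray, L0: float,
                     n_min: int = 5, tau: float = 1.0) -> np.ndarray:
    """Simplify a point cloud with a voxel grid of side length L0.

    Per voxel: voxels with fewer than n_min points are deleted; the point
    nearest the voxel center (the "center of gravity" point) is kept, and
    when tau <= d_max the point farthest from it is kept as well.
    """
    mins = points.min(axis=0)
    idx = np.floor((points - mins) / L0).astype(np.int64)  # voxel index per point
    kept = []
    for key in np.unique(idx, axis=0):
        cell = points[np.all(idx == key, axis=1)]
        if len(cell) < n_min:
            continue                                # delete sparse voxel grid
        center = mins + (key + 0.5) * L0            # region center of the voxel
        d = np.linalg.norm(cell - center, axis=1)   # distances d_i
        cog = cell[d.argmin()]                      # minimal d_i -> center of gravity
        kept.append(cog)
        d_j = np.linalg.norm(cell - cog, axis=1)    # distances d_j to the cog
        if tau <= d_j.max():                        # also keep the farthest point
            kept.append(cell[d_j.argmax()])
    return np.asarray(kept)
```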
2. The patient registration method according to claim 1, wherein the acquiring of a probe point cloud formed by a probe tip sliding on a patient's face in a camera coordinate system of a vision sensor of a medical robot comprises:
acquiring coordinate offset data of the probe tip relative to a marker at the tail of the probe, in the coordinate system of the marker;
acquiring a coordinate transformation matrix from a coordinate system of the marker to a coordinate system of the camera, wherein the coordinate transformation matrix is obtained by identifying the marker by the visual sensor;
determining coordinate data of the probe tip in the camera coordinate system according to the coordinate offset data and the coordinate transformation matrix;
and forming the probe point cloud according to the coordinate data of the probe tip in the camera coordinate system.
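A minimal sketch of the tip localization recited in claim 2, assuming the vision sensor already delivers the marker pose as a 4x4 homogeneous matrix; the names below are illustrative, not part of the claim:

```python
import numpy as np

def probe_tip_in_camera(offset_marker: np.ndarray,
                        T_marker_to_camera: np.ndarray) -> np.ndarray:
    """Map the tip offset (marker coordinate system) into the camera frame.

    offset_marker:      (3,) offset of the probe tip relative to the marker
                        at the tail of the probe, in the marker frame.
    T_marker_to_camera: (4, 4) transform obtained when the vision sensor
                        identifies the marker.
    """
    p = np.append(offset_marker, 1.0)      # homogeneous coordinates
    return (T_marker_to_camera @ p)[:3]    # tip coordinates in the camera frame

# Collecting one tip coordinate per identified marker pose while the tip
# slides over the face yields the probe point cloud:
# probe_cloud = np.array([probe_tip_in_camera(offset, T) for T in poses])
```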
3. The method of claim 1, wherein the obtaining of the image point cloud in the image coordinate system of the facial image of the patient extracted from the medical image of the patient comprises:
extracting the facial image to obtain a facial model corresponding to the facial image;
according to the direction vector of the face of the patient provided by the face image, emitting rays to the face model to obtain an intersection point of the rays and the face model;
and forming the image point cloud according to the coordinate data of the intersection point of the ray and the face model in the image coordinate system.
4. The patient registration method according to claim 3, wherein the emitting a ray to the face model according to a direction vector of the face of the patient provided by the face image to obtain an intersection of the ray and the face model comprises:
the method includes moving a ray position in a plane perpendicular to a direction vector of the patient's face by a specified step size to emit rays from different ray positions toward the face model and obtaining a plurality of intersections of the rays with the face model.
5. The patient registration method of claim 1, wherein the fine registration of the probe point cloud and the image point cloud according to the coarse registration matrix comprises:
respectively initializing an optimal rotation matrix and an optimal translation vector according to the rotation matrix and the translation vector included by the coarse registration matrix;
iteratively updating the initialized optimal rotation matrix and the optimal translation vector according to the coordinate data of the points in the probe point cloud and the image point cloud;
and if the iteration termination condition is met, determining the fine registration matrix according to the optimal rotation matrix and the optimal translation vector.
6. The patient registration method of claim 5, wherein the iteratively updating the initialized optimal rotation matrix and the optimal translation vector according to the coordinate data of the points in the probe point cloud and the image point cloud comprises:
transforming the probe point cloud according to the optimal rotation matrix and the optimal translation vector, and comparing the transformed probe point cloud with the image point cloud to find out the nearest neighbor point of the point in the probe point cloud in the image point cloud;
under the condition that the nearest neighbor point of a point in the probe point cloud in the image point cloud is found out, respectively removing the mass center of the probe point cloud and the image point cloud, and determining the covariance matrix of the probe point cloud after the mass center is removed and the covariance matrix of the image point cloud after the mass center is removed;
and carrying out singular value decomposition on the covariance matrix, and updating the optimal rotation matrix and the optimal translation vector according to a left singular matrix and a right singular matrix obtained by decomposition.
7. The patient registration method of claim 5, wherein the iteration termination condition comprises at least one of:
the variation of the optimal rotation matrix obtained by the current iteration updating relative to the optimal rotation matrix obtained by the last iteration updating is smaller than a first preset value, and the variation of the optimal translation vector obtained by the current iteration updating relative to the optimal translation vector obtained by the last iteration updating is smaller than a second preset value;
and the iteration updating times of the optimal rotation matrix and the optimal translation vector reach the preset maximum iteration times.
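Claims 5-7 together describe an iterative-closest-point style refinement. A compact sketch under the usual assumptions (NumPy and SciPy available; `R`, `t` initialized from the coarse registration matrix) might look like the following; it illustrates the scheme, not the claimed implementation:

```python
import numpy as np
from scipy.spatial import cKDTree

def icp_refine(source, target, R, t, max_iter=50, eps=1e-6):
    """Fine registration: nearest neighbours + SVD update per iteration."""
    tree = cKDTree(target)
    for _ in range(max_iter):                       # claim-7 max-iteration cap
        moved = source @ R.T + t                    # transform the probe cloud
        nn = target[tree.query(moved)[1]]           # nearest neighbours (claim 6)
        mu_s, mu_t = moved.mean(0), nn.mean(0)      # centroids to remove
        H = (moved - mu_s).T @ (nn - mu_t)          # covariance, centroids removed
        U, _, Vt = np.linalg.svd(H)                 # left/right singular matrices
        if np.linalg.det(Vt.T @ U.T) < 0:           # guard against reflection
            Vt[-1] *= -1
        R_step = Vt.T @ U.T
        t_step = mu_t - R_step @ mu_s
        R_new = R_step @ R                          # compose with current estimate
        t_new = R_step @ t + t_step
        done = (np.linalg.norm(R_new - R) < eps and
                np.linalg.norm(t_new - t) < eps)    # claim-7 change thresholds
        R, t = R_new, t_new
        if done:
            break
    return R, t                                     # the fine registration matrix
```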
8. The patient registration method of claim 1, wherein after obtaining the transformation relationship between the camera coordinate system and the image coordinate system, the method further comprises:
determining registration errors of the probe point cloud and the image point cloud according to a transformation relation between the camera coordinate system and the image coordinate system, coordinate data of points in the probe point cloud, and coordinate data of points in the image point cloud;
and verifying the transformation relation between the camera coordinate system and the image coordinate system according to the registration error of the probe point cloud and the image point cloud so as to obtain a verification result of the transformation relation between the camera coordinate system and the image coordinate system.
9. The patient registration method of claim 1, wherein after obtaining the transformation relationship between the camera coordinate system and the image coordinate system, the method further comprises:
acquiring first coordinate data of a marker of the head of a patient touched by the probe tip in the camera coordinate system and second coordinate data of the marker of the head of the patient in the medical image in the image coordinate system;
determining a registration error of the probe point cloud and the image point cloud according to a transformation relation between the camera coordinate system and the image coordinate system, the first coordinate data and the second coordinate data;
and verifying the transformation relation between the camera coordinate system and the image coordinate system according to the registration error of the probe point cloud and the image point cloud so as to obtain a verification result of the transformation relation between the camera coordinate system and the image coordinate system.
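To illustrate the marker-based verification of claims 8-9, here is a small sketch computing an RMS registration error; the acceptance threshold mentioned in the comment is an assumption of this sketch, not something recited in the claims:

```python
import numpy as np

def marker_registration_error(T_cam_to_img: np.ndarray,
                              markers_cam: np.ndarray,
                              markers_img: np.ndarray) -> float:
    """RMS distance between head markers mapped from the camera frame and
    the same markers located in the medical image (the first and second
    coordinate data of claim 9)."""
    h = np.hstack([markers_cam, np.ones((len(markers_cam), 1))])
    mapped = (h @ T_cam_to_img.T)[:, :3]   # markers transformed to image frame
    return float(np.sqrt(np.mean(np.sum((mapped - markers_img) ** 2, axis=1))))

# Verification sketch: accept the transformation relation only if the
# error is below a user-chosen clinical tolerance (e.g., 2.0 mm).
```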
10. An electronic device, characterized in that the device comprises:
one or more processors;
a computer readable medium configured to store one or more programs,
which, when executed by the one or more processors, cause the one or more processors to implement the patient registration method of any one of claims 1-9.
11. A computer-readable medium, in which a computer program is stored which, when being executed by a processor, carries out the patient registration method according to any one of claims 1 to 9.
CN202011449427.1A 2020-12-09 2020-12-09 Patient registration method and device, electronic equipment and computer readable medium Active CN112382359B (en)

Priority Applications (1)

Application Number Priority Date Filing Date Title
CN202011449427.1A CN112382359B (en) 2020-12-09 2020-12-09 Patient registration method and device, electronic equipment and computer readable medium

Applications Claiming Priority (1)

Application Number Priority Date Filing Date Title
CN202011449427.1A CN112382359B (en) 2020-12-09 2020-12-09 Patient registration method and device, electronic equipment and computer readable medium

Publications (2)

Publication Number Publication Date
CN112382359A CN112382359A (en) 2021-02-19
CN112382359B true CN112382359B (en) 2022-08-26

Family

ID=74590966

Family Applications (1)

Application Number Title Priority Date Filing Date
CN202011449427.1A Active CN112382359B (en) 2020-12-09 2020-12-09 Patient registration method and device, electronic equipment and computer readable medium

Country Status (1)

Country Link
CN (1) CN112382359B (en)

Families Citing this family (4)

* Cited by examiner, † Cited by third party
Publication number Priority date Publication date Assignee Title
CN114098985A (en) * 2021-11-29 2022-03-01 北京柏惠维康科技有限公司 Method, device, equipment and medium for spatial matching of patient and medical image of patient
CN114587593B (en) * 2022-03-18 2022-11-18 华科精准(北京)医疗科技有限公司 Surgical navigation positioning system and use method thereof
CN115775266B (en) * 2023-02-13 2023-06-09 北京精准医械科技有限公司 Registration method applied to real-time puncture surgical robot
CN117576408A (en) * 2023-07-06 2024-02-20 北京优脑银河科技有限公司 Optimization method of point cloud feature extraction method and point cloud registration method

Family Cites Families (6)

* Cited by examiner, † Cited by third party
Publication number Priority date Publication date Assignee Title
US5999840A (en) * 1994-09-01 1999-12-07 Massachusetts Institute Of Technology System and method of registration of three-dimensional data sets
CN106780459A (en) * 2016-12-12 2017-05-31 华中科技大学 A kind of three dimensional point cloud autoegistration method
CN107292925A (en) * 2017-06-06 2017-10-24 哈尔滨工业大学深圳研究生院 Based on Kinect depth camera measuring methods
CN111311651B (en) * 2018-12-11 2023-10-20 北京大学 Point cloud registration method and device
CN110946659A (en) * 2019-12-25 2020-04-03 武汉中科医疗科技工业技术研究院有限公司 Registration method and system for image space and actual space
CN111724420A (en) * 2020-05-14 2020-09-29 北京天智航医疗科技股份有限公司 Intraoperative registration method and device, storage medium and server

Non-Patent Citations (2)

* Cited by examiner, † Cited by third party
Title
A registration method applied to point clouds with large-angle transformation; Li Jian et al.; 《图书学报》; 2018-12-31 (No. 6); pp. 1098-1104 *
Point cloud registration technology based on FPFH features; Chen Xuewei et al.; 《电脑知识与技术》; 2017-02-28; pp. 207-209 *

Also Published As

Publication number Publication date
CN112382359A (en) 2021-02-19

Similar Documents

Publication Publication Date Title
CN112382359B (en) Patient registration method and device, electronic equipment and computer readable medium
CN110956635B (en) Lung segment segmentation method, device, equipment and storage medium
KR102013866B1 (en) Method and apparatus for calculating camera location using surgical video
CN103793915B (en) Inexpensive unmarked registration arrangement and method for registering in neurosurgery navigation
EP1903972A2 (en) Methods and systems for mapping a virtual model of an object to the object
EP2348954A1 (en) Image-based localization method and system
CN112734776B (en) Minimally invasive surgical instrument positioning method and system
CN113397704B (en) Robot positioning method, device and system and computer equipment
CN109166177A (en) Air navigation aid in a kind of art of craniomaxillofacial surgery
CN113012230B (en) Method for placing surgical guide plate under auxiliary guidance of AR in operation
CN111260704A (en) Vascular structure 3D/2D rigid registration method and device based on heuristic tree search
US20220249174A1 (en) Surgical navigation system, information processing device and information processing method
CN117408908A (en) Preoperative and intraoperative CT image automatic fusion method based on deep neural network
CN112562070A (en) Craniosynostosis operation cutting coordinate generation system based on template matching
CN116612166A (en) Registration fusion algorithm for multi-mode images
CN116030135A (en) Real-time attitude measurement system in remote operation
CN113143459A (en) Navigation method and device for augmented reality operation of laparoscope and electronic equipment
CN114931435B (en) Three-dimensional model processing method and device and electronic equipment
CN116327362A (en) Navigation method, device, medium and electronic equipment in magnetic probe auxiliary bronchus operation
US12094061B2 (en) System and methods for updating an anatomical 3D model
CN114266831A (en) Data processing method, device, equipment, medium and system for assisting operation
CN114782537A (en) Human carotid artery positioning method and device based on 3D vision
CN115272356A (en) Multi-mode fusion method, device and equipment of CT image and readable storage medium
CN113256693A (en) Multi-view registration method based on K-means and normal distribution transformation
Giannarou et al. Tissue deformation recovery with gaussian mixture model based structure from motion

Legal Events

Date Code Title Description
PB01 Publication
SE01 Entry into force of request for substantive examination
CB02 Change of applicant information

Address after: 100191 Room 501, floor 5, building 9, No. 35 Huayuan North Road, Haidian District, Beijing

Applicant after: Beijing Baihui Weikang Technology Co.,Ltd.

Address before: 100191 Room 608, 6 / F, building 9, 35 Huayuan North Road, Haidian District, Beijing

Applicant before: Beijing Baihui Wei Kang Technology Co.,Ltd.

GR01 Patent grant