CN114098985A - Method, device, equipment and medium for spatial matching of patient and medical image of patient

Info

Publication number
CN114098985A
CN114098985A
Authority
CN
China
Prior art keywords
image
patient
line
laser
head
Prior art date
Legal status
Pending
Application number
CN202111430109.5A
Other languages
Chinese (zh)
Inventor
李腾飞 (Li Tengfei)
王琪 (Wang Qi)
谢永召 (Xie Yongzhao)
Current Assignee
Beijing Baihui Weikang Technology Co Ltd
Original Assignee
Beijing Baihui Weikang Technology Co Ltd
Priority date
Filing date
Publication date
Application filed by Beijing Baihui Weikang Technology Co Ltd filed Critical Beijing Baihui Weikang Technology Co Ltd
Priority to CN202111430109.5A priority Critical patent/CN114098985A/en
Publication of CN114098985A publication Critical patent/CN114098985A/en
Pending legal-status Critical Current

Classifications

    • A HUMAN NECESSITIES
    • A61 MEDICAL OR VETERINARY SCIENCE; HYGIENE
    • A61B DIAGNOSIS; SURGERY; IDENTIFICATION
    • A61B 34/00 Computer-aided surgery; Manipulators or robots specially adapted for use in surgery
    • A61B 34/30 Surgical robots
    • A61B 34/20 Surgical navigation systems; Devices for tracking or guiding surgical instruments, e.g. for frameless stereotaxis
    • A61B 2034/2046 Tracking techniques
    • A61B 2034/2055 Optical tracking systems
    • A61B 2034/2065 Tracking using image or pattern recognition

Abstract

The embodiments of the present application provide a method, a device, equipment and a medium for spatial matching of a patient and a medical image of the patient. The method comprises the following steps: acquiring a first image and a second image, captured by a binocular vision imaging device, of the line laser emitted by a line laser emitter scanning the head of a patient; extracting the center line of the line laser in the first image and in the second image respectively, and determining parallax data of the two images according to the two center lines; determining three-dimensional spatial data of the laser points projected by the line laser on the head of the patient according to the parallax data and the distance between the optical centers of the left and right vision imaging devices; and reconstructing a point cloud of the head of the patient from this three-dimensional spatial data and registering it with the point cloud of the medical image of the patient. The scheme can effectively improve the accuracy of spatial matching between the patient and the medical image of the patient.

Description

Method, device, equipment and medium for spatial matching of patient and medical image of patient
Technical Field
The embodiment of the application relates to the field of artificial intelligence, in particular to a method and a device for spatial matching of a patient and a medical image of the patient, electronic equipment and a computer readable medium.
Background
Since the 1990s, robot-assisted surgery has gradually become a significant trend. The successful clinical application of a large number of surgical robot systems has attracted great interest in the medical and scientific communities in China and abroad. Minimally invasive surgical robots are now at the leading edge of the international robotics field and a research hotspot. A minimally invasive surgical robot system integrates several emerging disciplines and makes surgery minimally invasive, intelligent and digital. To date, minimally invasive surgical robots have been widely used around the world, in specialties including urology, obstetrics and gynecology, cardiac surgery, thoracic surgery, hepatobiliary surgery, gastrointestinal surgery and otorhinolaryngology.
When a minimally invasive surgical robot assists an operation, the surgeon stands at a console tens of centimeters from the operating table and studies, through a stereoscopic viewer, the three-dimensional image sent back by a camera inside the patient's body. The three-dimensional image shows the surgical site and the surgical instruments attached to the ends of the instrument rods. The surgeon operates the surgical instruments with control handles located directly below the screen; when the surgeon moves a control handle, the computer sends an electronic signal to the corresponding surgical instrument, which moves in synchrony with the handle.
To realize this synchronous movement, the registration of the patient space and the vision-sensor space, that is, the spatial matching between the patient and the medical image of the patient, must first be completed, so that the relative position of the minimally invasive surgical robot and the patient can be obtained in real time. In the prior art, markers are attached to the head of the patient to achieve this spatial matching: during the operation, the minimally invasive surgical robot obtains spatial information about the patient from the markers and matches it with the spatial information in the medical image, thereby realizing its positioning or navigation function. This marker-based matching is cumbersome, and external factors can introduce errors that reduce the spatial matching accuracy between the patient and the medical image and, in turn, the surgical accuracy.
Therefore, how to simply and conveniently perform the spatial matching between the patient and the medical image of the patient and effectively improve the spatial matching precision between the patient and the medical image of the patient becomes a technical problem to be solved at present.
Disclosure of Invention
The present application aims to provide a method, an apparatus, an electronic device and a computer-readable medium for spatial matching of a patient and a medical image of the patient, so as to solve the technical problem in the prior art of how to perform this spatial matching simply and conveniently while effectively improving its accuracy.
According to a first aspect of embodiments of the present application, a method for spatial matching of a patient and a medical image of the patient is provided. The method comprises the following steps: acquiring a first image and a second image, captured by a binocular vision imaging device, of the line laser emitted by a line laser emitter scanning the head of a patient, wherein the first image is captured by the left vision imaging device of the binocular vision imaging device and the second image is captured by the right vision imaging device of the binocular vision imaging device; extracting the center line of the line laser in the first image and the center line of the line laser in the second image respectively, and determining parallax data of the first image and the second image according to the two center lines; determining three-dimensional spatial data of the laser points projected by the line laser on the head of the patient according to the parallax data of the first image and the second image and the distance between the optical center of the left vision imaging device and the optical center of the right vision imaging device; and reconstructing a point cloud of the head of the patient according to the three-dimensional spatial data of the laser points projected by the line laser on the head of the patient, and registering the point cloud of the head of the patient with the point cloud of the medical image of the patient to obtain a spatial registration result of the patient and the medical image of the patient.
According to a second aspect of embodiments of the present application, there is provided an apparatus for spatial matching of a patient and a medical image of the patient. The apparatus comprises: an acquisition module, configured to acquire a first image and a second image, captured by a binocular vision imaging device, of the line laser emitted by a line laser emitter scanning the head of a patient, wherein the first image is captured by the left vision imaging device of the binocular vision imaging device and the second image is captured by the right vision imaging device of the binocular vision imaging device; a first determining module, configured to extract the center line of the line laser in the first image and the center line of the line laser in the second image, and determine the parallax data of the first image and the second image according to the two center lines; a second determining module, configured to determine the three-dimensional spatial data of the laser points projected by the line laser on the head of the patient according to the parallax data of the first image and the second image and the distance between the optical center of the left vision imaging device and the optical center of the right vision imaging device; and a registration module, configured to reconstruct the point cloud of the head of the patient according to the three-dimensional spatial data of the laser points, and register the point cloud of the head of the patient with the point cloud of the medical image of the patient to obtain the spatial registration result of the patient and the medical image of the patient.
According to a third aspect of embodiments of the present application, there is provided an electronic device, including: one or more processors; and a storage configured to store one or more programs, which, when executed by the one or more processors, cause the one or more processors to implement the method of spatial matching of a patient and a medical image of the patient as described in the first aspect of the embodiments of the present application.
According to a fourth aspect of embodiments of the present application, there is provided a computer-readable storage medium having stored thereon a computer program which, when executed by a processor, implements a method of spatial matching of patient and patient medical images as described in the first aspect of embodiments of the present application.
According to the spatial matching scheme provided by the embodiments of the present application, a first image and a second image, captured by a binocular vision imaging device, of the line laser emitted by a line laser emitter scanning the head of the patient are acquired, the first image by the left vision imaging device and the second image by the right vision imaging device. The center line of the line laser is extracted in each image, and the parallax data of the two images is determined from the two center lines. From this parallax data and the distance between the optical centers of the left and right vision imaging devices, the three-dimensional spatial data of the laser points projected by the line laser onto the head of the patient is determined. The point cloud of the head of the patient is then reconstructed from this data and registered with the point cloud of the medical image of the patient to obtain the spatial registration result. Compared with other existing approaches, no marker needs to be attached to the head of the patient: the point cloud reconstruction of the head is completed with only the binocular vision imaging device and the line laser emitter, which improves the reconstruction accuracy of the head point cloud. Moreover, registering the reconstructed head point cloud with the medical image point cloud not only makes the spatial matching of the patient and the medical image simple and convenient, but also effectively improves its accuracy.
Drawings
Some specific embodiments of the present application will be described in detail hereinafter by way of illustration and not limitation with reference to the accompanying drawings. The same reference numbers in the drawings identify the same or similar elements or components. Those skilled in the art will appreciate that the drawings are not necessarily drawn to scale. In the drawings:
FIG. 1A is a flowchart of the steps of a method for spatial matching of a patient and a medical image of the patient according to a first embodiment of the present application;
FIG. 1B is a schematic diagram of the gray-value variation along a row of pixels according to an embodiment of the present application;
FIG. 1C is a schematic diagram of solving the three-dimensional space coordinates of a laser point according to an embodiment of the present application;
FIG. 1D is a further schematic diagram of solving the three-dimensional space coordinates of a laser point according to an embodiment of the present application;
FIG. 2 is a schematic structural diagram of an apparatus for spatial matching of a patient and a medical image of the patient according to a second embodiment of the present application;
FIG. 3 is a schematic structural diagram of an electronic device according to a third embodiment of the present application;
FIG. 4 is a hardware structure of an electronic device according to a fourth embodiment of the present application.
Detailed Description
In order to make those skilled in the art better understand the technical solutions in the embodiments of the present application, the technical solutions in the embodiments of the present application will be described clearly and completely below with reference to the drawings in the embodiments of the present application, and it is obvious that the described embodiments are only a part of the embodiments of the present application, but not all embodiments. All other embodiments obtained by a person of ordinary skill in the art based on the embodiments in the present application shall fall within the scope of the protection of the embodiments in the present application.
Referring to FIG. 1A, a flowchart of the steps of a method for spatial matching of a patient and a medical image of the patient according to a first embodiment of the present application is shown.
Specifically, the spatial matching method provided by this embodiment comprises the following steps:
in step S101, a first image and a second image of a line laser scan of a patient' S head, which is acquired by a binocular vision imaging apparatus and emitted by a line laser emitter, are acquired.
In this embodiment, the binocular vision imaging device is composed of a left vision imaging device and a right vision imaging device, and the binocular vision imaging device may be a binocular camera. The line laser transmitter may be a laser pointer. The first image is acquired by a left vision imaging device of the binocular vision imaging devices, and the second image is acquired by a right vision imaging device of the binocular vision imaging devices. In addition, the acquisition direction of the binocular vision imaging device is perpendicular to the emission direction of the line laser emitter. It should be understood that the above description is only exemplary, and the present embodiment is not limited thereto.
In one specific example, the laser pointer is fixed to the operating table, which is moved at a constant speed so that the line laser scans across the head of the patient. Meanwhile, the binocular camera photographs the head of the patient to acquire images of the line laser scanning the patient's head. During photographing, the exposure of the binocular camera can be reduced to suppress interference from ambient light with the line laser. It should be understood that the above description is only exemplary, and the present embodiment is not limited thereto.
In step S102, a central line of the line laser in the first image and a central line of the line laser in the second image are extracted, and parallax data of the first image and the second image are determined according to the central line of the line laser in the first image and the central line of the line laser in the second image.
In some optional embodiments, when extracting the center line of the line laser in the first image and in the second image, each row of pixels of the first image is screened according to the gray values of that row and a preset central gray threshold to obtain the pixels of that row that belong to the center line of the line laser, and the center line of the line laser in the first image is determined from these pixels; likewise, each row of pixels of the second image is screened according to the gray values of that row and the central gray threshold, and the center line of the line laser in the second image is determined from the resulting pixels. The central gray threshold may be set by a person skilled in the art according to actual needs, and this embodiment does not limit this. In this way, the center line of the line laser can be accurately determined in both the first image and the second image. It should be understood that the above description is only exemplary, and the present embodiment is not limited thereto.
In some optional embodiments, when each row of pixels of the first image is screened according to the gray value of each row of pixels of the first image and a preset central gray threshold, it is determined that a pixel having a gray value greater than or equal to the central gray threshold in each row of pixels of the first image is a pixel in each row of pixels of the first image that is related to the centerline of the line laser. Therefore, the pixel points related to the central line of the line laser in each row of pixel points of the first image can be accurately determined. When each line of pixel points of the second image is screened according to the gray value of each line of pixel points of the second image and the central gray threshold, determining that the pixel points of which the gray values are greater than or equal to the central gray threshold in each line of pixel points of the second image are the pixel points related to the central line of the line laser in each line of pixel points of the second image. Therefore, the pixel points related to the central line of the line laser in each row of pixel points of the second image can be accurately determined. It should be understood that the above description is only exemplary, and the present embodiment is not limited thereto.
In a specific example, since the width of the imaged line laser is more than one pixel, the way its center line is extracted has a large influence on accuracy, and the gray centroid method may be used. The gray centroid method takes the centroid of the gray values in each row as the center-line point of that row. Specifically, the center line of the line laser may be extracted using the following formula one:

$$u_i = \frac{\sum_j j \cdot f_{ij}}{\sum_j f_{ij}}, \qquad f_{ij} \ge T$$

where $f_{ij}$ denotes the gray value of the pixel with coordinates $(i, j)$ in row $i$ of the image, $T$ denotes the central gray threshold, the sums run over the pixels of row $i$ whose gray value is not less than $T$, and $u_i$ is the column coordinate of the center-line point in row $i$. It should be understood that the above description is only exemplary, and the present embodiment is not limited thereto.
In a specific example, as shown in FIG. 1B, the gray value varies along each row of pixels in the image. The plot has two threshold lines, a fixed threshold line and a central threshold line, and the curve has a gray-value extreme point. When extracting the center line of the line laser, the fixed gray threshold represented by the fixed threshold line may first be used to coarsely select the pixels in each row of the first and second images; the central gray threshold represented by the central threshold line is then used to finely select, from the coarsely selected pixels, those belonging to the center line of the line laser; finally, the center line of the line laser in each image is determined from these pixels. In addition, the pixel corresponding to the gray-value extreme point in each row is the laser point of that row on the center line of the line laser. It should be understood that the above description is only exemplary, and the present embodiment is not limited thereto.
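As an illustration of the coarse-then-fine selection and the gray centroid of formula one, here is a minimal Python sketch; the function name, the threshold values and the assumption of an 8-bit grayscale image with a roughly vertical laser line are illustrative, not taken from the patent.

```python
import numpy as np

def laser_centerline(gray, fixed_thresh=60, center_thresh=180):
    """Row-wise gray-centroid extraction of the line-laser center line.

    gray: (H, W) 8-bit grayscale image. Returns an (H,) array of sub-pixel
    column coordinates, NaN where no pixel passes the thresholds.
    """
    h = gray.shape[0]
    centers = np.full(h, np.nan)
    for i in range(h):
        row = gray[i].astype(np.float64)
        coarse = row >= fixed_thresh             # coarse selection (fixed threshold)
        fine = coarse & (row >= center_thresh)   # fine selection (central threshold T)
        if not fine.any():
            continue
        cols = np.nonzero(fine)[0]
        w = row[cols]
        centers[i] = (cols * w).sum() / w.sum()  # formula one: gray centroid of the row
    return centers
```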
In some optional embodiments, when determining the parallax data of the first image and the second image from the two center lines, the coordinates of the laser point of each row of the first image on the center line are determined from the gray values and coordinates of the center-line pixels of that row; the coordinates of the laser point of each row of the second image on the center line are determined in the same way; and the parallax data of the first image and the second image is then determined from the coordinates of the laser points of corresponding rows in the two images. In this way, the parallax data of the first image and the second image can be accurately determined. It should be understood that the above description is only exemplary, and the present embodiment is not limited thereto.
In a specific example, the gray centroid method takes the gray value of each pixel in a region as the "mass" of that point; formula two for the coordinates of the region center is:

$$\bar{u} = \frac{\sum_{(u,v) \in \Omega} u \cdot f(u,v)}{\sum_{(u,v) \in \Omega} f(u,v)}, \qquad \bar{v} = \frac{\sum_{(u,v) \in \Omega} v \cdot f(u,v)}{\sum_{(u,v) \in \Omega} f(u,v)}$$

where $f(u, v)$ denotes the gray value of the pixel with coordinates $(u, v)$, $\Omega$ is the set of pixels in the target region, and $(\bar{u}, \bar{v})$ are the coordinates of the region center extracted by the gray centroid method. When determining the coordinates of the laser point of each row of the first image on the center line of the line laser, formula two may be applied with $f(u, v)$ taken as the gray value of the pixel with coordinates $(u, v)$ among the center-line pixels of that row of the first image and $\Omega$ taken as the set of those pixels; $(\bar{u}, \bar{v})$ are then the coordinates of the laser point of that row of the first image on the center line. The coordinates of the laser point of each row of the second image on the center line are computed in the same way. It should be understood that the above description is only exemplary, and the present embodiment is not limited thereto.
In a specific example, the left and right vision imaging devices of the binocular vision imaging device may each be calibrated by Zhang's calibration method, yielding the distortion parameters of the left vision imaging device and of the right vision imaging device. Before determining the disparity data of the first image and the second image, the two images may be rectified into row alignment according to these distortion parameters. After the row-aligned first and second images are obtained, the disparity data may be determined from the coordinates of the laser points of corresponding rows on the center lines of the two images.
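A sketch of this rectification step using OpenCV; it additionally assumes the stereo extrinsics R and T are available from calibration (the patent mentions only the distortion parameters), and all variable names are illustrative.

```python
import cv2

def rectify_pair(img_l, img_r, K_l, d_l, K_r, d_r, R, T):
    """Row-align a stereo pair so corresponding laser points share a row."""
    size = (img_l.shape[1], img_l.shape[0])
    # Rectification transforms that bring both image planes into row alignment
    R1, R2, P1, P2, Q, _, _ = cv2.stereoRectify(K_l, d_l, K_r, d_r, size, R, T)
    map_lx, map_ly = cv2.initUndistortRectifyMap(K_l, d_l, R1, P1, size, cv2.CV_32FC1)
    map_rx, map_ry = cv2.initUndistortRectifyMap(K_r, d_r, R2, P2, size, cv2.CV_32FC1)
    rect_l = cv2.remap(img_l, map_lx, map_ly, cv2.INTER_LINEAR)
    rect_r = cv2.remap(img_r, map_rx, map_ry, cv2.INTER_LINEAR)
    return rect_l, rect_r, Q
```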
In a specific example, when determining the parallax data of the first image and the second image, the absolute value of the difference between the abscissa of the laser point of each row of the first image on the center line and the abscissa of the laser point of the corresponding row of the second image on the center line is computed, and this absolute value is taken as the disparity of that row. It should be understood that the above description is only exemplary, and the present embodiment is not limited thereto.
In step S103, three-dimensional space data of the laser point projected on the head of the patient by the line laser is determined according to the parallax data of the first image and the second image and the distance between the optical center of the left vision imaging device and the optical center of the right vision imaging device.
In this embodiment, the left and right vision imaging devices of the binocular vision imaging device may each be calibrated by Zhang's calibration method to obtain the intrinsic parameters of the left vision imaging device and of the right vision imaging device. The distance between the optical center of the left vision imaging device and the optical center of the right vision imaging device may then be determined from these parameters. It should be understood that the above description is only exemplary, and the present embodiment is not limited thereto.
In some optional embodiments, when determining three-dimensional spatial data of the laser spot projected on the head of the patient by the line laser according to the parallax data of the first and second images and the distance between the optical center of the left vision imaging device and the optical center of the right vision imaging device, determining Z-axis coordinates in the three-dimensional spatial coordinates of the laser spot projected on the head of the patient by the line laser according to the parallax data of the first and second images, the distance between the optical center of the left vision imaging device and the optical center of the right vision imaging device, and the focal length of the left vision imaging device or the right vision imaging device; determining an X-axis coordinate in a three-dimensional space coordinate of a laser point projected by the line laser on the head of the patient according to the parallax data of the first image and the second image, the distance between the optical center of the left vision imaging device and the optical center of the right vision imaging device, and a vertical coordinate of an imaging point of the laser point projected by the line laser on the head of the patient in the first image; and determining the Y-axis coordinate in the three-dimensional space coordinate of the laser point projected by the line laser on the head of the patient according to the parallax data of the first image and the second image, the distance between the optical center of the left vision imaging device and the optical center of the right vision imaging device and the abscissa of the imaging point of the laser point projected by the line laser on the head of the patient in the first image. Thereby, the Z-axis coordinate in the three-dimensional space coordinate of the laser point projected by the line laser on the head of the patient can be determined through the parallax data of the first image and the second image, the distance between the optical center of the left vision imaging device and the optical center of the right vision imaging device, and the focal length of the left vision imaging device or the right vision imaging device; by means of the parallax data of the first image and the second image, the distance between the optical center of the left vision imaging device and the optical center of the right vision imaging device, and the vertical coordinate of the imaging point of the laser point projected by the line laser on the head of the patient in the first image, the X-axis coordinate in the three-dimensional space coordinate of the laser point projected by the line laser on the head of the patient can be accurately determined; through the parallax data of the first image and the second image, the distance between the optical center of the left vision imaging device and the optical center of the right vision imaging device, and the abscissa of the imaging point of the laser point projected by the line laser on the head of the patient in the first image, the Y-axis coordinate in the three-dimensional space coordinate of the laser point projected by the line laser on the head of the patient can be accurately determined. It should be understood that the above description is only exemplary, and the present embodiment is not limited thereto.
In a specific example, as shown in FIG. 1C, $P$ is a laser point projected by the line laser on the patient's head, $O_R$ and $O_T$ are the optical centers of the right and left vision imaging devices respectively, the imaging points of $P$ on the right and left vision imaging devices are $p$ and $p'$ respectively (the imaging plane of each vision imaging device is drawn in front of the lens after rotation), $f$ is the focal length of the vision imaging devices, $B$ is the distance between the two optical centers, and $Z$ is the depth of point $P$ to be obtained. $X_R$ and $X_T$ are the distances of the imaging points from the leftmost side of their respective images. The disparity is defined as $d = X_R - X_T$, i.e., the horizontal position difference between corresponding points in the first image and the second image. Ideally, the right and left vision imaging devices lie in the same plane (their optical axes are parallel) and their parameters are identical, so by the principle of similar triangles the depth satisfies:

$$\frac{B - (X_R - X_T)}{B} = \frac{Z - f}{Z}$$

from which it follows that

$$Z = \frac{fB}{X_R - X_T} = \frac{fB}{d}$$

The focal length of the vision imaging device can be obtained from its calibrated intrinsic parameters. As shown in FIG. 1D, by the law of similar triangles:

$$\frac{x}{x_l} = \frac{y}{y_l} = \frac{Z}{f}$$

from which it can be derived that

$$x = \frac{x_l \cdot Z}{f}, \qquad y = \frac{y_l \cdot Z}{f}$$

where $x$ denotes the abscissa of point $P$, $y$ denotes the ordinate of point $P$, and $x_l$ and $y_l$ denote the abscissa and ordinate of the imaging point of $P$ in the first image.
In general, to obtain the three-dimensional coordinates of a point $P$ in the real world, the key is to determine its disparity value. After epipolar rectification, the difference between the abscissas of the laser points of corresponding rows in the right and left vision imaging devices is the disparity value. The point cloud data of the head can thus be obtained from the line laser scanning the head. It should be understood that the above description is only exemplary, and the present embodiment is not limited thereto.
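The triangulation just derived can be written as a short sketch; it assumes rectified images, pixel coordinates already measured relative to the principal point, and illustrative names throughout.

```python
import numpy as np

def triangulate_row(x_left, y_left, disparity, focal, baseline):
    """Recover the (X, Y, Z) of a laser point from one rectified image row.

    x_left, y_left : laser-point coordinates in the left image (relative
                     to the principal point)
    disparity      : |x_right - x_left| for the corresponding row
    focal          : focal length in pixels (from intrinsic calibration)
    baseline       : distance B between the two optical centers
    """
    Z = focal * baseline / disparity    # Z = f * B / d
    X = x_left * Z / focal              # x = x_l * Z / f
    Y = y_left * Z / focal              # y = y_l * Z / f
    return np.array([X, Y, Z])
```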
In step S104, reconstructing a point cloud of the head of the patient according to the three-dimensional spatial data of the laser point projected on the head of the patient by the line laser, and registering the point cloud of the head of the patient and the point cloud of the medical image of the patient to obtain a spatial registration result of the patient and the medical image of the patient.
In this embodiment, no marker is attached to the head of the patient; the point cloud reconstruction of the patient's head is completed with only the binocular camera and the line laser emitter, which improves the reconstruction accuracy of the head point cloud. The more positions the line laser scans and the wider the scanning range, the higher the registration accuracy, which can essentially meet surgical requirements. The point cloud of the patient medical image may be provided by the acquisition device of the patient medical image. It should be understood that the above description is only exemplary, and the present embodiment is not limited thereto.
In some optional embodiments, in registering the point cloud of the patient's head with the point cloud of the patient medical image, coarsely registering the point cloud of the patient's head with the point cloud of the patient medical image according to the three-dimensional space coordinates of the point in the point cloud of the patient's head and the three-dimensional space coordinates of the point in the point cloud of the patient medical image to obtain a coarse registration matrix for transforming the point cloud of the patient's head with the point cloud of the patient medical image; according to the rough registration matrix, performing fine registration on the point cloud of the head of the patient and the point cloud of the medical image of the patient to obtain a fine registration matrix for transforming the point cloud of the head of the patient and the point cloud of the medical image of the patient; and determining the fine registration matrix as a spatial registration result of the patient and the medical image of the patient. Therefore, the point cloud of the head of the patient and the point cloud of the medical image of the patient are roughly registered through the three-dimensional space coordinates of the points in the point cloud of the head of the patient and the point cloud of the medical image of the patient, and the rough registration matrix can be accurately obtained. In addition, the point cloud of the head of the patient and the point cloud of the medical image of the patient are precisely registered through the coarse registration matrix, so that the precise registration matrix can be accurately obtained, and the spatial registration result of the patient and the medical image of the patient can be accurately determined. It should be understood that the above description is only exemplary, and the present embodiment is not limited thereto.
In some optional embodiments, when coarsely registering the point cloud of the patient's head with the point cloud of the patient medical image according to the three-dimensional space coordinates of the points in the point cloud of the patient's head and the three-dimensional space coordinates of the points in the point cloud of the patient medical image, determining a normal vector of the points in the point cloud of the patient's head and the point cloud of the patient medical image according to the three-dimensional space coordinates of the points in the point cloud of the patient's head and the point cloud of the patient medical image; determining characteristic values of points in the point cloud of the head of the patient and the point cloud of the medical image of the patient according to normal vectors of the points in the point cloud of the head of the patient and the point cloud of the medical image of the patient; and performing coarse registration on the point cloud of the head of the patient and the point cloud of the medical image of the patient according to the characteristic values of the points in the point cloud of the head of the patient and the point cloud of the medical image of the patient to obtain a coarse registration matrix. Therefore, the point cloud of the head of the patient and the point cloud of the medical image of the patient are roughly registered through the characteristic values of the points in the point cloud of the head of the patient and the point cloud of the medical image of the patient, and the rough registration matrix can be accurately obtained. It should be understood that the above description is only exemplary, and the present embodiment is not limited thereto.
In a specific example, the feature values of the points in the point cloud of the patient's head and in the point cloud of the patient medical image may be fast point feature histograms (FPFH) of those points. When determining the fast point feature histogram of a point, the relative relationship between the point to be calculated and each of its k neighborhood points is first computed from their normal vectors to build a simplified point feature histogram; the simplified point feature histograms of the k neighborhood points are then computed; and the fast point feature histogram is finally obtained as:

$$F(p_q) = S(p_q) + \frac{1}{k} \sum_{i=1}^{k} \frac{1}{w_i} \, S(p_i)$$

where $S(p_q)$ denotes the simplified point feature histogram of the point $p_q$ to be calculated, $F(p_q)$ denotes the fast point feature histogram of $p_q$, $S(p_i)$ denotes the simplified point feature histogram of the $i$-th neighborhood point $p_i$, and $w_i$ denotes its weight. It should be understood that the above description is only exemplary, and the present embodiment is not limited thereto.
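A sketch of this coarse-registration step using Open3D (a recent version, 0.13 or later, is assumed): normals are estimated, FPFH features are computed, and the clouds are matched on those features. The patent does not name the matcher; RANSAC-based feature matching is used here as one common choice, and the voxel size and all thresholds are illustrative assumptions.

```python
import open3d as o3d

def coarse_register(source, target, voxel=2.0):
    """Coarse registration of two point clouds via FPFH feature matching."""
    src = source.voxel_down_sample(voxel)
    tgt = target.voxel_down_sample(voxel)
    for pcd in (src, tgt):
        # FPFH needs a normal vector at every point
        pcd.estimate_normals(
            o3d.geometry.KDTreeSearchParamHybrid(radius=voxel * 2, max_nn=30))
    feat = o3d.pipelines.registration.compute_fpfh_feature
    param = o3d.geometry.KDTreeSearchParamHybrid(radius=voxel * 5, max_nn=100)
    f_src, f_tgt = feat(src, param), feat(tgt, param)
    # Match the clouds on their FPFH features to obtain the coarse matrix T0
    result = o3d.pipelines.registration.registration_ransac_based_on_feature_matching(
        src, tgt, f_src, f_tgt, True, voxel * 1.5,
        o3d.pipelines.registration.TransformationEstimationPointToPoint(False),
        3, [],
        o3d.pipelines.registration.RANSACConvergenceCriteria(100000, 0.999))
    return result.transformation
```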
In some optional embodiments, when the point cloud of the head of the patient is precisely aligned with the point cloud of the medical image of the patient according to the coarse registration matrix, respectively initializing an optimal rotation matrix and an optimal translation vector according to a rotation matrix and a translation vector included in the coarse registration matrix; iteratively updating the initialized optimal rotation matrix and the optimal translation vector according to the three-dimensional space coordinates of the point cloud of the head of the patient and the points in the point cloud of the medical image of the patient; and if the iteration termination condition is met, determining the fine registration matrix according to the optimal rotation matrix and the optimal translation vector. Therefore, the initialized optimal rotation matrix and the optimal translation vector are iteratively updated through the coordinate data of the points in the point cloud of the head of the patient and the point cloud of the medical image of the patient, and the fine registration matrix can be accurately determined. It should be understood that the above description is only exemplary, and the present embodiment is not limited thereto.
In a specific example, when the initialized optimal rotation matrix and the optimal translation vector are iteratively updated according to the three-dimensional space coordinates of the point cloud of the head of the patient and the points in the point cloud of the medical image of the patient, transforming the point cloud of the head of the patient according to the optimal rotation matrix and the optimal translation vector, and comparing the transformed point cloud of the head of the patient with the point cloud of the medical image of the patient to find out the nearest neighbor point of the points in the point cloud of the head of the patient in the point cloud of the medical image of the patient; under the condition that the nearest neighbor point of the point in the point cloud of the patient head in the point cloud of the patient medical image is found, removing the center of mass of the point cloud of the patient head and the point cloud of the patient medical image respectively, and determining covariance matrixes of the point cloud of the patient head after removing the center of mass and the point cloud of the patient medical image after removing the center of mass; and carrying out singular value decomposition on the covariance matrix, and updating the optimal rotation matrix and the optimal translation vector according to a left singular matrix and a right singular matrix obtained by decomposition. Wherein the iteration termination condition comprises at least one of: the variation of the optimal rotation matrix obtained by the current iteration updating relative to the optimal rotation matrix obtained by the last iteration updating is smaller than a first preset value, and the variation of the optimal translation vector obtained by the current iteration updating relative to the optimal translation vector obtained by the last iteration updating is smaller than a second preset value; and the iteration updating times of the optimal rotation matrix and the optimal translation vector reach the preset maximum iteration times. The first preset value and the second preset value can be set by a person skilled in the art according to actual needs, and this embodiment does not limit this. It should be understood that the above description is only exemplary, and the present embodiment is not limited thereto.
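The iteration just described can be sketched compactly in NumPy as point-to-point ICP with an SVD-based update; the convergence thresholds, the reflection guard and all names are illustrative assumptions.

```python
import numpy as np
from scipy.spatial import cKDTree

def icp(src, tgt, R0, t0, max_iter=50, tol_r=1e-6, tol_t=1e-6):
    """src, tgt: (N,3)/(M,3) arrays; R0, t0: rotation/translation from coarse registration."""
    R, t = R0.copy(), t0.copy()
    tree = cKDTree(tgt)
    for _ in range(max_iter):
        moved = src @ R.T + t                     # transform the head point cloud
        _, idx = tree.query(moved)                # nearest neighbors in the image cloud
        q = tgt[idx]
        p_c, q_c = moved.mean(0), q.mean(0)       # remove the centroids
        H = (moved - p_c).T @ (q - q_c)           # covariance matrix
        U, _, Vt = np.linalg.svd(H)               # left/right singular matrices
        if np.linalg.det(Vt.T @ U.T) < 0:         # guard against a reflection
            Vt[-1] *= -1
        R_s = Vt.T @ U.T
        t_s = q_c - R_s @ p_c
        R_new, t_new = R_s @ R, R_s @ t + t_s     # compose with the running estimate
        done = (np.abs(R_new - R).max() < tol_r and
                np.abs(t_new - t).max() < tol_t)  # iteration termination condition
        R, t = R_new, t_new
        if done:
            break
    T = np.eye(4)
    T[:3, :3], T[:3, 3] = R, t                    # the fine registration matrix
    return T
```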
In one specific example, the two point clouds are registered. There are many registration methods, and this embodiment is not limited to any particular one. Registration is usually performed in two steps, coarse registration and fine registration. Coarse registration usually extracts the normal vector of each point in the two point clouds, computes a feature value (such as the fast point feature histogram, FPFH) and matches the two point clouds according to the feature values to obtain a coarse registration matrix $T_0$. Then $T_0$ is used as the initial value of a fine registration algorithm (such as ICP), and the fine registration matrix $T$ is obtained by iterative computation. It should be understood that the above description is only exemplary, and the present embodiment is not limited thereto.
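For comparison with the hand-rolled iteration above, the same two-step pipeline can be sketched with Open3D's built-in ICP refinement, reusing the coarse_register sketch given earlier; the distance threshold is an illustrative assumption.

```python
import open3d as o3d

def register(head_pcd, image_pcd, voxel=2.0):
    T0 = coarse_register(head_pcd, image_pcd, voxel)   # coarse registration matrix T0
    result = o3d.pipelines.registration.registration_icp(
        head_pcd, image_pcd, voxel * 0.8, T0,
        o3d.pipelines.registration.TransformationEstimationPointToPoint())
    return result.transformation                       # fine registration matrix T
```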
In some optional embodiments, after obtaining the spatial registration result of the patient and the patient medical image, the method further comprises: determining a registration error of the point cloud of the patient's head and the point cloud of the patient medical image according to a spatial registration result of the patient and the patient medical image, a three-dimensional space coordinate of a point in the point cloud of the patient's head, and a three-dimensional space coordinate of a point in the point cloud of the patient medical image; and verifying the spatial registration result of the patient and the medical image of the patient according to the registration error of the point cloud of the head of the patient and the point cloud of the medical image of the patient to obtain a verification result of the spatial registration result of the patient and the medical image of the patient. Therefore, the registration error of the point cloud of the head of the patient and the point cloud of the medical image of the patient is used for verifying the spatial registration result of the patient and the medical image of the patient, and the verification result of the spatial registration result of the patient and the medical image of the patient can be accurately obtained. It should be understood that the above description is only exemplary, and the present embodiment is not limited thereto.
In a specific example, if the registration error satisfies a condition, the registration is successful; otherwise, it fails. The registration error is typically measured by the root mean square error Err: the smaller Err is, the more similar the two point clouds are and the more successful the registration. Of course, some registration algorithms may have their own error metrics, which are not described in detail here.
$$Err = \sqrt{\frac{1}{N} \sum_{i=1}^{N} \left\| T p_i - q_j \right\|^2}$$

where $p_i$ is a point in the point cloud of the patient's head, $q_j$ is its nearest neighbor in the point cloud of the patient medical image after the transformation, $T$ denotes the fine registration matrix, and $N$ denotes the number of points in the point cloud of the patient's head. The choice of error metric affects the matching accuracy and also the running time. It should be understood that the above description is only exemplary, and the present embodiment is not limited thereto.
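A minimal sketch of this verification step, assuming the 4x4 fine registration matrix T from above; the acceptance threshold is an illustrative assumption, since the patent leaves the exact success condition open.

```python
import numpy as np
from scipy.spatial import cKDTree

def registration_rmse(head_pts, image_pts, T, threshold=1.0):
    """Return the RMS error Err and whether the registration is accepted."""
    moved = head_pts @ T[:3, :3].T + T[:3, 3]     # apply T to the head point cloud
    dists, _ = cKDTree(image_pts).query(moved)    # distance to the nearest neighbor q_j
    err = float(np.sqrt(np.mean(dists ** 2)))
    return err, err <= threshold                  # smaller Err means a better match
```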
With the spatial matching method provided by this embodiment, a first image and a second image, captured by the binocular vision imaging device, of the line laser emitted by the line laser emitter scanning the head of the patient are acquired, the first image by the left vision imaging device and the second image by the right vision imaging device. The center line of the line laser is extracted in each image, and the parallax data of the two images is determined from the two center lines. From this parallax data and the distance between the optical centers of the left and right vision imaging devices, the three-dimensional spatial data of the laser points projected by the line laser onto the head of the patient is determined. The point cloud of the head of the patient is reconstructed from this data and registered with the point cloud of the medical image of the patient to obtain the spatial registration result. Compared with other existing approaches, no marker needs to be attached to the head of the patient: the point cloud reconstruction is completed with only the binocular vision imaging device and the line laser emitter, which improves the reconstruction accuracy of the head point cloud. Moreover, registering the reconstructed head point cloud with the medical image point cloud not only makes the spatial matching of the patient and the medical image simple and convenient, but also effectively improves its accuracy.
The spatial matching method for patient and patient medical images provided by the present embodiment can be performed by any suitable device with data processing capability, including but not limited to: a camera, a terminal, a mobile terminal, a PC, a server, an in-vehicle device, an entertainment device, an advertising device, a Personal Digital Assistant (PDA), a tablet computer, a notebook computer, a handheld game console, smart glasses, a smart watch, a wearable device, a virtual display device, a display enhancement device, or the like.
Referring to FIG. 2, a schematic structural diagram of an apparatus for spatial matching of a patient and a medical image of the patient according to a second embodiment of the present application is shown.
The apparatus for spatial matching of a patient and a medical image of the patient comprises: an obtaining module 201, configured to obtain a first image and a second image, captured by a binocular vision imaging device, of the line laser emitted by a line laser emitter scanning the head of a patient, wherein the first image is captured by the left vision imaging device of the binocular vision imaging device and the second image is captured by the right vision imaging device of the binocular vision imaging device; a first determining module 202, configured to extract the center line of the line laser in the first image and the center line of the line laser in the second image, and determine the disparity data of the first image and the second image according to the two center lines; a second determining module 203, configured to determine the three-dimensional spatial data of the laser points projected by the line laser on the head of the patient according to the disparity data of the first image and the second image and the distance between the optical center of the left vision imaging device and the optical center of the right vision imaging device; and a registration module 204, configured to reconstruct the point cloud of the head of the patient according to the three-dimensional spatial data of the laser points projected by the line laser on the head of the patient, and register the point cloud of the head of the patient with the point cloud of the medical image of the patient to obtain the spatial registration result of the patient and the medical image of the patient.
Optionally, the first determining module 202 includes: the first determining submodule is used for screening each row of pixel points of the first image according to the gray value of each row of pixel points of the first image and a preset central gray threshold value so as to obtain pixel points, related to the central line of the line laser, in each row of pixel points of the first image, and determining the central line of the line laser in the first image according to pixel points, related to the central line of the line laser, in each row of pixel points of the first image; and the second determining submodule is used for screening each row of pixel points of the second image according to the gray value of each row of pixel points of the second image and the central gray threshold value so as to obtain pixel points related to the central line of the line laser in each row of pixel points of the second image, and determining the central line of the line laser in the second image according to pixel points related to the central line of the line laser in each row of pixel points of the second image.
Optionally, the first determining submodule is specifically configured to: determining that pixel points with gray values greater than or equal to the central gray threshold value in each row of pixel points of the first image are pixel points related to the central line of the line laser in each row of pixel points of the first image, wherein the second determining submodule is specifically configured to: and determining pixel points of which the gray values are greater than or equal to the central gray threshold value in each row of pixel points of the second image as pixel points related to the central line of the line laser in each row of pixel points of the second image.
Optionally, the first determining module 202 is specifically configured to: determine, for each row of the first image, the coordinates of the laser point on the center line of the line laser according to the gray values and the coordinates of the pixel points related to the center line of the line laser in that row; determine, for each row of the second image, the coordinates of the laser point on the center line of the line laser according to the gray values and the coordinates of the pixel points related to the center line of the line laser in that row; and determine the parallax data of the first image and the second image according to the coordinates of the laser point of each row of the first image on the center line of the line laser and the coordinates of the laser point of each row of the second image on the center line of the line laser.
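For illustration only, the following is a minimal Python/NumPy sketch of this row-wise screening and parallax computation. The gray-weighted centroid used here to obtain a sub-pixel laser-point coordinate per row is one plausible reading of determining coordinates "according to the gray values and the coordinates" of the screened pixel points; the threshold value, function names, and the assumption of rectified images are illustrative additions, not taken from the patent.

```python
import numpy as np

def line_laser_center_line(img, gray_thresh=200):
    """Per image row: keep the pixels whose gray value is at or above
    the threshold (the screening step), then take their gray-weighted
    centroid as the sub-pixel column of the laser center line.
    Rows containing no laser pixels yield NaN."""
    h, w = img.shape
    cols = np.arange(w, dtype=np.float64)
    center = np.full(h, np.nan)
    for r in range(h):
        row = img[r].astype(np.float64)
        mask = row >= gray_thresh
        if mask.any():
            center[r] = (cols[mask] * row[mask]).sum() / row[mask].sum()
    return center

def row_parallax(first_img, second_img, gray_thresh=200):
    """Parallax (disparity) per row: difference between the center-line
    columns found in the first (left) and second (right) images,
    assuming the image pair is rectified so that rows correspond."""
    u_first = line_laser_center_line(first_img, gray_thresh)
    u_second = line_laser_center_line(second_img, gray_thresh)
    return u_first - u_second, u_first
```

Because the laser line is the only bright structure in a row, the row index itself provides the correspondence between the two images, which avoids the dense stereo matching that ordinarily dominates the cost of binocular reconstruction.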
Optionally, the second determining module 203 is specifically configured to: determine the Z-axis coordinate in the three-dimensional space coordinates of a laser point projected by the line laser on the head of the patient according to the parallax data of the first image and the second image, the distance between the optical center of the left vision imaging device and the optical center of the right vision imaging device, and the focal length of the left vision imaging device or the right vision imaging device; determine the X-axis coordinate in the three-dimensional space coordinates of the laser point according to the parallax data of the first image and the second image, the distance between the two optical centers, and the abscissa, in the first image, of the imaging point of the laser point projected by the line laser on the head of the patient; and determine the Y-axis coordinate in the three-dimensional space coordinates of the laser point according to the parallax data of the first image and the second image, the distance between the two optical centers, and the ordinate, in the first image, of the imaging point of the laser point projected by the line laser on the head of the patient.
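As a worked illustration of these three relations, the sketch below applies the textbook rectified-stereo back-projection, where Z = f·b/d, X = b·(u − cx)/d and Y = b·(v − cy)/d, with d the parallax, b the optical-center distance, f the focal length in pixels, and (cx, cy) the principal point of the left camera. The principal-point offset and all symbol names are assumptions added for the example; the patent itself only names the quantities each coordinate depends on.

```python
import numpy as np

def triangulate_laser_points(parallax, u_first, rows, baseline, focal, cx, cy):
    """Back-project each row's laser point into camera coordinates:
    Z from parallax, baseline and focal length; X from the horizontal
    image coordinate; Y from the vertical one (the image row index)."""
    d = np.asarray(parallax, dtype=np.float64)
    z = focal * baseline / d
    x = baseline * (u_first - cx) / d
    y = baseline * (rows - cy) / d
    return np.stack([x, y, z], axis=-1)

# Usage sketch: keep only rows where a laser point was actually found.
# d, u_first = row_parallax(first_img, second_img)
# rows = np.arange(len(d), dtype=np.float64)
# valid = np.isfinite(d) & (d != 0)
# pts = triangulate_laser_points(d[valid], u_first[valid], rows[valid],
#                                baseline=60.0, focal=1200.0, cx=640, cy=360)
```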
Optionally, the registration module 204 is specifically configured to: perform coarse registration on the point cloud of the head of the patient and the point cloud of the medical image of the patient according to the three-dimensional space coordinates of the points in the point cloud of the head of the patient and the three-dimensional space coordinates of the points in the point cloud of the medical image of the patient, to obtain a coarse registration matrix for transforming between the point cloud of the head of the patient and the point cloud of the medical image of the patient; perform fine registration on the point cloud of the head of the patient and the point cloud of the medical image of the patient according to the coarse registration matrix, to obtain a fine registration matrix for transforming between the two point clouds; and determine the fine registration matrix as the spatial registration result of the patient and the medical image of the patient.
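The patent does not name the coarse or fine registration algorithms. As one common choice, the sketch below (using the Open3D library) aligns the point-cloud centroids as a stand-in coarse registration and refines the result with point-to-point ICP; both algorithm choices, the correspondence distance, and the function names are assumptions for illustration.

```python
import numpy as np
import open3d as o3d

def register_point_clouds(head_pts, image_pts, fine_dist=2.0):
    """Coarse step: a translation that aligns the two centroids.
    Fine step: point-to-point ICP seeded with the coarse matrix.
    Returns (coarse_matrix, fine_matrix); the fine matrix plays the
    role of the spatial registration result."""
    head_pts = np.asarray(head_pts, dtype=np.float64)
    image_pts = np.asarray(image_pts, dtype=np.float64)

    src = o3d.geometry.PointCloud()
    src.points = o3d.utility.Vector3dVector(head_pts)
    tgt = o3d.geometry.PointCloud()
    tgt.points = o3d.utility.Vector3dVector(image_pts)

    # Coarse registration: translation-only centroid alignment.
    coarse = np.eye(4)
    coarse[:3, 3] = image_pts.mean(axis=0) - head_pts.mean(axis=0)

    # Fine registration: ICP initialized with the coarse matrix.
    icp = o3d.pipelines.registration.registration_icp(
        src, tgt, fine_dist, coarse,
        o3d.pipelines.registration.TransformationEstimationPointToPoint())
    return coarse, icp.transformation
```

Seeding ICP with a coarse matrix matters because ICP only converges to the nearest local minimum; the coarse step is what brings the two clouds close enough for the fine step to lock in.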
Optionally, the apparatus further includes: a third determining module, configured to determine a registration error between the point cloud of the head of the patient and the point cloud of the medical image of the patient according to the spatial registration result of the patient and the medical image of the patient, the three-dimensional space coordinates of the points in the point cloud of the head of the patient, and the three-dimensional space coordinates of the points in the point cloud of the medical image of the patient; and a verification module, configured to verify the spatial registration result of the patient and the medical image of the patient according to the registration error, to obtain a verification result of the spatial registration result of the patient and the medical image of the patient.
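One way to realize the registration error described here is the root-mean-square nearest-neighbour residual after applying the fine registration matrix to the head point cloud, as sketched below; the use of RMS and the pass/fail tolerance are assumptions, since the patent does not specify the error metric.

```python
import numpy as np
from scipy.spatial import cKDTree

def verify_registration(head_pts, image_pts, fine_matrix, tol=1.0):
    """Transform the head point cloud with the fine registration matrix,
    measure the RMS distance from each transformed point to its nearest
    neighbour in the medical-image point cloud, and compare it against
    an assumed tolerance (e.g. in millimetres) to accept or reject."""
    pts = np.asarray(head_pts, dtype=np.float64)
    homogeneous = np.c_[pts, np.ones(len(pts))]
    moved = (np.asarray(fine_matrix) @ homogeneous.T).T[:, :3]
    dists, _ = cKDTree(np.asarray(image_pts, dtype=np.float64)).query(moved)
    rmse = float(np.sqrt(np.mean(dists ** 2)))
    return rmse, rmse <= tol
```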
The apparatus for spatial matching of a patient and a medical image of the patient provided by this embodiment is configured to implement the corresponding method for spatial matching of a patient and a medical image of the patient in the foregoing method embodiments, and has the beneficial effects of the corresponding method embodiments, which are not repeated here.
Fig. 3 is a schematic structural diagram of an electronic device in a third embodiment of the present application; the electronic device may include:
one or more processors 301;
a computer-readable medium 302, configured to store one or more programs,
which, when executed by the one or more processors, cause the one or more processors to implement the method for spatial matching of a patient and a medical image of the patient described in the first embodiment above.
Fig. 4 is a schematic diagram of a hardware structure of an electronic device according to the fourth embodiment of the present application; as shown in fig. 4, the hardware structure of the electronic device may include: a processor 401, a communication interface 402, a computer-readable medium 403, and a communication bus 404;
wherein the processor 401, the communication interface 402, and the computer-readable medium 403 communicate with each other via the communication bus 404;
optionally, the communication interface 402 may be an interface of a communication module, such as an interface of a GSM module;
the processor 401 may be specifically configured to: acquiring a first image and a second image of line laser emitted by a line laser emitter and scanned on the head of a patient, wherein the first image is acquired by a left vision imaging device in the binocular vision imaging device, and the second image is acquired by a right vision imaging device in the binocular vision imaging device; respectively extracting the center line of the line laser in the first image and the center line of the line laser in the second image, and determining parallax data of the first image and the second image according to the center line of the line laser in the first image and the center line of the line laser in the second image; determining three-dimensional space data of a laser point projected by the line laser on the head of the patient according to the parallax data of the first image and the second image and the distance between the optical center of the left vision imaging device and the optical center of the right vision imaging device; reconstructing a point cloud of the head of the patient according to the three-dimensional space data of the laser point projected by the line laser on the head of the patient, and registering the point cloud of the head of the patient and the point cloud of the medical image of the patient to obtain a space registration result of the patient and the medical image of the patient.
Processor 401 may be a general-purpose processor, including a Central Processing Unit (CPU), a Network Processor (NP), and the like; it may also be a Digital Signal Processor (DSP), an Application-Specific Integrated Circuit (ASIC), a Field-Programmable Gate Array (FPGA) or other programmable logic device, a discrete gate or transistor logic device, or discrete hardware components. The various methods, steps, and logic blocks disclosed in the embodiments of the present application may be implemented or performed. A general-purpose processor may be a microprocessor, or the processor may be any conventional processor or the like.
The computer-readable medium 403 may be, but is not limited to, a Random Access Memory (RAM), a Read-Only Memory (ROM), a Programmable Read-Only Memory (PROM), an Erasable Programmable Read-Only Memory (EPROM), an Electrically Erasable Programmable Read-Only Memory (EEPROM), and the like.
In particular, according to an embodiment of the present disclosure, the processes described above with reference to the flowcharts may be implemented as computer software programs. For example, embodiments of the present disclosure include a computer program product comprising a computer program embodied on a computer readable medium, the computer program comprising program code configured to perform the method illustrated by the flowchart. In such an embodiment, the computer program may be downloaded and installed from a network via the communication section, and/or installed from a removable medium. The computer program, when executed by a Central Processing Unit (CPU), performs the above-described functions defined in the method of the present application. It should be noted that the computer readable medium described herein can be a computer readable signal medium or a computer readable storage medium or any combination of the two. The computer readable medium can be, for example, but not limited to, an electronic, magnetic, optical, electromagnetic, infrared, or semiconductor system, apparatus, or device, or any combination of the foregoing. More specific examples of the computer readable storage medium may include, but are not limited to: an electrical connection having one or more wires, a portable computer diskette, a hard disk, a random access memory (RAM), a read-only memory (ROM), an erasable programmable read-only memory (EPROM or flash memory), an optical fiber, a portable compact disc read-only memory (CD-ROM), an optical storage device, a magnetic storage device, or any suitable combination of the foregoing. In the present application, a computer readable storage medium may be any tangible medium that can contain or store a program for use by or in connection with an instruction execution system, apparatus, or device. In the present application, however, a computer readable signal medium may include a propagated data signal with computer readable program code embodied therein, for example, in baseband or as part of a carrier wave. Such a propagated data signal may take many forms, including, but not limited to, electromagnetic, optical, or any suitable combination thereof. A computer readable signal medium may also be any computer readable medium that is not a computer readable storage medium and that can communicate, propagate, or transport a program for use by or in connection with an instruction execution system, apparatus, or device. Program code embodied on a computer readable medium may be transmitted using any appropriate medium, including but not limited to: wireless, wire, fiber optic cable, RF, etc., or any suitable combination of the foregoing.
Computer program code configured to carry out operations of the present application may be written in any combination of one or more programming languages, including object-oriented programming languages such as Java, Smalltalk, and C++, as well as conventional procedural programming languages such as the "C" programming language or similar programming languages. The program code may execute entirely on the user's computer, partly on the user's computer, as a stand-alone software package, partly on the user's computer and partly on a remote computer, or entirely on a remote computer or server. In the case of a remote computer, the remote computer may be connected to the user's computer through any type of network, including a Local Area Network (LAN) or a Wide Area Network (WAN), or the connection may be made to an external computer (for example, through the Internet using an Internet service provider).
The flowchart and block diagrams in the figures illustrate the architecture, functionality, and operation of possible implementations of systems, methods and computer program products according to various embodiments of the present application. In this regard, each block in the flowchart or block diagrams may represent a module, segment, or portion of code, which comprises one or more executable instructions configured to implement the specified logical function(s). In the above embodiments, specific precedence relationships are provided, but these precedence relationships are only exemplary, and in particular implementations, the steps may be fewer, more, or the execution order may be modified. That is, in some alternative implementations, the functions noted in the block may occur out of the order noted in the figures. For example, two blocks shown in succession may, in fact, be executed substantially concurrently, or the blocks may sometimes be executed in the reverse order, depending upon the functionality involved. It will also be noted that each block of the block diagrams and/or flowchart illustration, and combinations of blocks in the block diagrams and/or flowchart illustration, can be implemented by special purpose hardware-based systems which perform the specified functions or acts, or combinations of special purpose hardware and computer instructions.
The modules described in the embodiments of the present application may be implemented by software or hardware. The described modules may also be provided in a processor, which may be described as: a processor includes an obtaining module, a first determining module, a second determining module, and a registration module. The names of these modules do not, in some cases, limit the modules themselves; for example, the obtaining module may also be described as a "module for obtaining a first image and a second image, acquired by a binocular vision imaging device, of line laser emitted by a line laser emitter and scanned on the head of a patient".
As another aspect, the present application further provides a computer readable medium on which a computer program is stored, and the computer program, when executed by a processor, implements the method for spatial matching of a patient and a medical image of the patient described in the first embodiment.
As another aspect, the present application also provides a computer-readable medium, which may be contained in the apparatus described in the above embodiments; or may be present separately and not assembled into the device. The computer readable medium carries one or more programs which, when executed by the apparatus, cause the apparatus to: acquiring a first image and a second image of line laser emitted by a line laser emitter and scanned on the head of a patient, wherein the first image is acquired by a left vision imaging device in the binocular vision imaging device, and the second image is acquired by a right vision imaging device in the binocular vision imaging device; respectively extracting the center line of the line laser in the first image and the center line of the line laser in the second image, and determining parallax data of the first image and the second image according to the center line of the line laser in the first image and the center line of the line laser in the second image; determining three-dimensional space data of a laser point projected by the line laser on the head of the patient according to the parallax data of the first image and the second image and the distance between the optical center of the left vision imaging device and the optical center of the right vision imaging device; reconstructing a point cloud of the head of the patient according to the three-dimensional space data of the laser point projected by the line laser on the head of the patient, and registering the point cloud of the head of the patient and the point cloud of the medical image of the patient to obtain a space registration result of the patient and the medical image of the patient.
The expressions "first", "second", "said first" or "said second" used in various embodiments of the present disclosure may modify various components regardless of order and/or importance, but these expressions do not limit the respective components. The above description is only configured for the purpose of distinguishing elements from other elements. For example, the first user equipment and the second user equipment represent different user equipment, although both are user equipment. For example, a first element could be termed a second element, and, similarly, a second element could be termed a first element, without departing from the scope of the present disclosure.
When an element (e.g., a first element) is referred to as being (operably or communicatively) "coupled" or "connected" to another element (e.g., a second element), it is to be understood that the element is either directly connected to the other element or indirectly connected to the other element via yet another element (e.g., a third element). In contrast, when an element (e.g., a first element) is referred to as being "directly connected" or "directly coupled" to another element (e.g., a second element), no element (e.g., a third element) is interposed therebetween.
The above description is only a preferred embodiment of the application and is illustrative of the principles of the technology employed. It will be appreciated by those skilled in the art that the scope of the invention herein disclosed is not limited to the particular combination of features described above, but also encompasses other arrangements formed by any combination of the above features or their equivalents without departing from the spirit of the invention. For example, the above features may be replaced with (but not limited to) features having similar functions disclosed in the present application.

Claims (10)

1. A method for spatial matching of a patient to a medical image of the patient, the method comprising:
acquiring a first image and a second image, acquired by a binocular vision imaging device, of line laser emitted by a line laser emitter and scanned on the head of a patient, wherein the first image is acquired by a left vision imaging device of the binocular vision imaging device, and the second image is acquired by a right vision imaging device of the binocular vision imaging device;
respectively extracting the center line of the line laser in the first image and the center line of the line laser in the second image, and determining parallax data of the first image and the second image according to the center line of the line laser in the first image and the center line of the line laser in the second image;
determining three-dimensional space data of a laser point projected by the line laser on the head of the patient according to the parallax data of the first image and the second image and the distance between the optical center of the left vision imaging device and the optical center of the right vision imaging device;
reconstructing a point cloud of the head of the patient according to the three-dimensional space data of the laser point projected by the line laser on the head of the patient, and registering the point cloud of the head of the patient and the point cloud of the medical image of the patient to obtain a space registration result of the patient and the medical image of the patient.
2. The method of claim 1, wherein the respectively extracting the center line of the line laser in the first image and the center line of the line laser in the second image comprises:
screening each row of pixel points of the first image according to the gray value of each row of pixel points of the first image and a preset center gray threshold to obtain the pixel points related to the center line of the line laser in each row of pixel points of the first image, and determining the center line of the line laser in the first image according to the pixel points related to the center line of the line laser in each row of pixel points of the first image;
and screening each row of pixel points of the second image according to the gray value of each row of pixel points of the second image and the center gray threshold to obtain the pixel points related to the center line of the line laser in each row of pixel points of the second image, and determining the center line of the line laser in the second image according to the pixel points related to the center line of the line laser in each row of pixel points of the second image.
3. The method of claim 2, wherein the screening each row of pixel points of the first image according to the gray value of each row of pixel points of the first image and a preset center gray threshold to obtain the pixel points related to the center line of the line laser in each row of pixel points of the first image comprises:
determining the pixel points whose gray values are greater than or equal to the center gray threshold in each row of pixel points of the first image as the pixel points related to the center line of the line laser in each row of pixel points of the first image,
and wherein the screening each row of pixel points of the second image according to the gray value of each row of pixel points of the second image and the center gray threshold to obtain the pixel points related to the center line of the line laser in each row of pixel points of the second image comprises:
determining the pixel points whose gray values are greater than or equal to the center gray threshold in each row of pixel points of the second image as the pixel points related to the center line of the line laser in each row of pixel points of the second image.
4. The method of claim 1, wherein the determining parallax data of the first image and the second image according to the center line of the line laser in the first image and the center line of the line laser in the second image comprises:
determining, for each row of the first image, the coordinates of the laser point on the center line of the line laser according to the gray values and the coordinates of the pixel points related to the center line of the line laser in that row of pixel points of the first image;
determining, for each row of the second image, the coordinates of the laser point on the center line of the line laser according to the gray values and the coordinates of the pixel points related to the center line of the line laser in that row of pixel points of the second image;
and determining the parallax data of the first image and the second image according to the coordinates of the laser point of each row of the first image on the center line of the line laser and the coordinates of the laser point of each row of the second image on the center line of the line laser.
5. The method of claim 1, wherein the determining three-dimensional spatial data of the laser point projected on the head of the patient by the line laser according to the parallax data of the first image and the second image and the distance between the optical center of the left vision imaging device and the optical center of the right vision imaging device comprises:
determining the Z-axis coordinate in the three-dimensional space coordinates of a laser point projected by the line laser on the head of the patient according to the parallax data of the first image and the second image, the distance between the optical center of the left vision imaging device and the optical center of the right vision imaging device, and the focal length of the left vision imaging device or the right vision imaging device;
determining the X-axis coordinate in the three-dimensional space coordinates of the laser point projected by the line laser on the head of the patient according to the parallax data of the first image and the second image, the distance between the optical center of the left vision imaging device and the optical center of the right vision imaging device, and the abscissa, in the first image, of the imaging point of the laser point projected by the line laser on the head of the patient;
and determining the Y-axis coordinate in the three-dimensional space coordinates of the laser point projected by the line laser on the head of the patient according to the parallax data of the first image and the second image, the distance between the optical center of the left vision imaging device and the optical center of the right vision imaging device, and the ordinate, in the first image, of the imaging point of the laser point projected by the line laser on the head of the patient.
6. The method of claim 1, wherein the registering the point cloud of the head of the patient with the point cloud of the medical image of the patient to obtain the spatial registration result of the patient with the medical image of the patient comprises:
performing coarse registration on the point cloud of the head of the patient and the point cloud of the medical image of the patient according to the three-dimensional space coordinates of the points in the point cloud of the head of the patient and the three-dimensional space coordinates of the points in the point cloud of the medical image of the patient to obtain a coarse registration matrix for transforming the point cloud of the head of the patient and the point cloud of the medical image of the patient;
according to the coarse registration matrix, performing fine registration on the point cloud of the head of the patient and the point cloud of the medical image of the patient to obtain a fine registration matrix for transforming the point cloud of the head of the patient and the point cloud of the medical image of the patient;
and determining the fine registration matrix as a spatial registration result of the patient and the medical image of the patient.
7. The method of spatial matching of patient and patient medical images of claim 1, wherein after obtaining the results of spatial registration of the patient and the patient medical images, the method further comprises:
determining a registration error of the point cloud of the patient's head and the point cloud of the patient medical image according to a spatial registration result of the patient and the patient medical image, a three-dimensional space coordinate of a point in the point cloud of the patient's head, and a three-dimensional space coordinate of a point in the point cloud of the patient medical image;
and verifying the spatial registration result of the patient and the medical image of the patient according to the registration error of the point cloud of the head of the patient and the point cloud of the medical image of the patient to obtain a verification result of the spatial registration result of the patient and the medical image of the patient.
8. An apparatus for spatial matching of patient to medical images of a patient, the apparatus comprising:
the system comprises an acquisition module, a display module and a control module, wherein the acquisition module is used for acquiring a first image and a second image which are scanned by line laser emitted by a line laser emitter and acquired by binocular vision imaging equipment on the head of a patient, the first image is acquired by left vision imaging equipment in the binocular vision imaging equipment, and the second image is acquired by right vision imaging equipment in the binocular vision imaging equipment;
a first determining module, configured to extract a center line of the line laser in the first image and a center line of the line laser in the second image, and determine disparity data of the first image and the second image according to the center line of the line laser in the first image and the center line of the line laser in the second image;
a second determining module, configured to determine three-dimensional spatial data of a laser point projected by the line laser on the head of the patient according to the parallax data of the first image and the second image and the distance between the optical center of the left vision imaging device and the optical center of the right vision imaging device;
the registration module is used for reconstructing point cloud of the head of the patient according to the three-dimensional space data of the laser point projected by the line laser on the head of the patient, and registering the point cloud of the head of the patient and the point cloud of the medical image of the patient to obtain a space registration result of the patient and the medical image of the patient.
9. An electronic device, characterized in that the device comprises:
one or more processors;
a computer readable medium, configured to store one or more programs,
which, when executed by the one or more processors, cause the one or more processors to implement the method for spatial matching of a patient and a medical image of the patient according to any one of claims 1-7.
10. A computer-readable medium, on which a computer program is stored which, when being executed by a processor, carries out a method of spatial matching of a patient to a medical image of a patient according to any one of claims 1 to 7.
CN202111430109.5A 2021-11-29 2021-11-29 Method, device, equipment and medium for spatial matching of patient and medical image of patient Pending CN114098985A (en)

Priority Applications (1)

Application Number Priority Date Filing Date Title
CN202111430109.5A CN114098985A (en) 2021-11-29 2021-11-29 Method, device, equipment and medium for spatial matching of patient and medical image of patient


Publications (1)

Publication Number Publication Date
CN114098985A true CN114098985A (en) 2022-03-01

Family

ID=80371088

Family Applications (1)

Application Number Title Priority Date Filing Date
CN202111430109.5A Pending CN114098985A (en) 2021-11-29 2021-11-29 Method, device, equipment and medium for spatial matching of patient and medical image of patient



Patent Citations (7)

* Cited by examiner, † Cited by third party
Publication number Priority date Publication date Assignee Title
CN102012217A (en) * 2010-10-19 2011-04-13 南京大学 Method for measuring three-dimensional geometrical outline of large-size appearance object based on binocular vision
CN103759671A (en) * 2014-01-10 2014-04-30 西北农林科技大学 Non-contact scanning method of dental cast three-dimensional surface data
CN103940369A (en) * 2014-04-09 2014-07-23 大连理工大学 Quick morphology vision measuring method in multi-laser synergic scanning mode
CN107907048A (en) * 2017-06-30 2018-04-13 长沙湘计海盾科技有限公司 A kind of binocular stereo vision method for three-dimensional measurement based on line-structured light scanning
CN110702025A (en) * 2019-05-30 2020-01-17 北京航空航天大学 Grating type binocular stereoscopic vision three-dimensional measurement system and method
CN112241984A (en) * 2019-07-16 2021-01-19 长沙智能驾驶研究院有限公司 Binocular vision sensor calibration method and device, computer equipment and storage medium
CN112382359A (en) * 2020-12-09 2021-02-19 北京柏惠维康科技有限公司 Patient registration method and device, electronic equipment and computer readable medium

Non-Patent Citations (4)

* Cited by examiner, † Cited by third party
Title
林义忠 et al.: "Research Progress on Robot Positioning and Grasping Based on Machine Vision", 《自动化与仪器仪表》 (Automation & Instrumentation) *
郑太雄 et al.: "A Survey of Key Technologies for Vision-Based 3D Reconstruction", 《自动化学报》 (Acta Automatica Sinica) *
陈苑锋: "Research Progress on Visual Depth Estimation and Point Cloud Mapping", 《液晶与显示》 (Chinese Journal of Liquid Crystals and Displays) *
龚文超 et al.: "Research on Software and Hardware Design of a Binocular Vision Ranging System", 《舰船电子工程》 (Ship Electronic Engineering) *

Similar Documents

Publication Publication Date Title
US11310480B2 (en) Systems and methods for determining three dimensional measurements in telemedicine application
CN112382359B (en) Patient registration method and device, electronic equipment and computer readable medium
US10078906B2 (en) Device and method for image registration, and non-transitory recording medium
CN110946659A (en) Registration method and system for image space and actual space
CN115089303A (en) Robot positioning method and system
WO2001057805A2 (en) Image data processing method and apparatus
CN111493878A (en) Optical three-dimensional scanning device for orthopedic surgery and method for measuring bone surface
CN112261399B (en) Capsule endoscope image three-dimensional reconstruction method, electronic device and readable storage medium
CN111658142A (en) MR-based focus holographic navigation method and system
CN113143459B (en) Navigation method and device for laparoscopic augmented reality operation and electronic equipment
CN115457093B (en) Tooth image processing method and device, electronic equipment and storage medium
WO2022237787A1 (en) Robot positioning and pose adjustment method and system
CN115984203A (en) Eyeball protrusion measuring method, system, terminal and medium
CN114098985A (en) Method, device, equipment and medium for spatial matching of patient and medical image of patient
Detchev et al. Image matching and surface registration for 3D reconstruction of a scoliotic torso
CN114886558A (en) Endoscope projection method and system based on augmented reality
CN114298986A (en) Thoracic skeleton three-dimensional construction method and system based on multi-viewpoint disordered X-ray film
EP3655919A1 (en) Systems and methods for determining three dimensional measurements in telemedicine application
CN109872353B (en) White light data and CT data registration method based on improved iterative closest point algorithm
KR101596868B1 (en) Camera parameter computation method
Ahmad et al. 3D reconstruction of gastrointestinal regions using shape-from-focus
CN111743628A (en) Automatic puncture mechanical arm path planning method based on computer vision
CN111408066A (en) Tumor position calibration system and equipment based on magnetic resonance image
CN113674333B (en) Precision verification method and medium for calibration parameters and electronic equipment
CN115908121B (en) Endoscope registration method, device and calibration system

Legal Events

Date Code Title Description
PB01 Publication
SE01 Entry into force of request for substantive examination
CB02 Change of applicant information

Address after: 100191 Room 501, floor 5, building 9, No. 35 Huayuan North Road, Haidian District, Beijing

Applicant after: Beijing Baihui Weikang Technology Co.,Ltd.

Address before: 100191 Room 608, 6 / F, building 9, 35 Huayuan North Road, Haidian District, Beijing

Applicant before: Beijing Baihui Wei Kang Technology Co.,Ltd.

RJ01 Rejection of invention patent application after publication

Application publication date: 20220301