CN116563297B - Craniocerebral target positioning method, device and storage medium - Google Patents


Info

Publication number
CN116563297B (application CN202310850978.6A)
Authority
CN (China)
Prior art keywords
point cloud, transformation, image, target, space
Legal status
Active (assumed; not a legal conclusion)
Application number
CN202310850978.6A
Other languages
Chinese (zh)
Other versions
CN116563297A
Inventors
崔玥, 黎诚译, 余山, 韩新勇
Current and Original Assignee
Institute of Automation of Chinese Academy of Science
Application filed by Institute of Automation of Chinese Academy of Science
Priority: CN202310850978.6A
Publication of application: CN116563297A
Application granted
Publication of grant: CN116563297B
Legal status: Active


Classifications

    • G06T 7/0012 — Biomedical image inspection (G06T 7/00 Image analysis)
    • G06T 7/33 — Determination of transform parameters for the alignment of images (image registration) using feature-based methods
    • G06T 7/80 — Analysis of captured images to determine intrinsic or extrinsic camera parameters (camera calibration)
    • G06T 2207/10028 — Range image; depth image; 3D point clouds
    • G06T 2207/30201 — Face (human being; person)
    • G06T 2207/30208 — Marker matrix


Abstract

The application provides a craniocerebral target positioning method, device and storage medium in the field of medical technology, comprising the following steps: determining checkerboard marker points based on a structured light point cloud image of the subject's head captured by a structured light camera, and registering an image point cloud image of the subject's head onto the structured light point cloud image to obtain a first transformation; the first transformation characterizes the transformation of the point cloud from image space to camera space; determining a second transformation based on the coordinates of the checkerboard marker points in camera space and in physical space; the second transformation characterizes the transformation of the point cloud from camera space to physical space; and determining the craniocerebral target coordinates in physical space based on the first transformation and the second transformation. By improving the accuracy of both the image-space-to-camera-space and camera-space-to-physical-space transformations, the application improves the accuracy of craniocerebral target positioning.

Description

Craniocerebral target positioning method, device and storage medium
Technical Field
The application relates to the technical field of medical treatment, in particular to a craniocerebral target positioning method, a craniocerebral target positioning device and a storage medium.
Background
Obtaining the accurate position of a craniocerebral target in physical space is the basis of craniocerebral surgical navigation and of navigation for transcranial magnetic stimulation and electrical stimulation treatment. In general, the target can be selected manually in image space or obtained through automatic planning by an algorithm, so its real coordinates in physical space must then be obtained through a spatial transformation.
When solving for the coordinate transformation, a three-dimensional image is usually captured with multiple cameras; the camera-space coordinates of several marker points on the surface of the subject's head are obtained by probe indication or other means and registered against the coordinates of those marker points in image space, realizing the transformation from image space to camera space. The cameras simultaneously recognize marker points in physical space (whose physical-space coordinates are known), thereby obtaining the transformation from camera space to physical space. Applying the two transformations maps targets in image space to physical space.
However, in the prior art, if facial key points are selected, the marker positions are not fixed because the skin deforms, which degrades the accuracy of the spatial transformation and thus reduces target positioning accuracy; if bone-screw marker points are selected, the skull is damaged. In addition, in existing fixed-camera schemes, the camera resolution is reduced in order to obtain a large field of view, which also reduces target positioning accuracy.
Disclosure of Invention
The embodiment of the application provides a craniocerebral target positioning method, a craniocerebral target positioning device and a storage medium, which are used for solving the technical problem of low craniocerebral target positioning precision in the prior art.
In a first aspect, an embodiment of the present application provides a method for positioning a craniocerebral target, including:
determining checkerboard marker points based on a structured light point cloud image of the subject's head, and registering an image point cloud image of the subject's head onto the structured light point cloud image to obtain a first transformation; the structured light point cloud image is captured with a structured light camera; the first transformation is used for characterizing the transformation of the point cloud from image space to camera space;
determining a second transformation based on the coordinates of the checkerboard marker points in camera space and the coordinates of the checkerboard marker points in physical space; the second transformation is used for representing transformation of the point cloud from camera space to physical space;
and determining craniocerebral target coordinates in physical space based on the first transformation and the second transformation.
In some embodiments, the method further comprises:
capturing the head of the subject from a plurality of poses with a structured light camera to obtain a plurality of initial point cloud images; the structured light camera is carried and moved by a mechanical arm;
Registering the plurality of initial point cloud images to the same image to obtain a structural light point cloud image.
In some embodiments, before the determining of the second transformation based on the coordinates of the checkerboard marker points in the camera space and the coordinates of the checkerboard marker points in the physical space, the method further comprises:
and determining the coordinates of the checkerboard mark points in a physical space by using a laser ranging sensor.
In some embodiments, the determining checkerboard marker points based on the structured light point cloud image of the subject's head includes:
extracting target black squares from the structured light point cloud image; the target black squares are obtained by preprocessing the structured light point cloud image;
determining the nearest distance between every two target black squares;
and taking the midpoint of the connecting line between the two target black squares corresponding to the nearest distance as a checkerboard mark point under the condition that the nearest distance is smaller than a preset threshold value.
In some embodiments, the determining the craniocerebral target coordinates in physical space based on the first transformation and the second transformation comprises:
determining craniocerebral target coordinates in an image space based on the three-dimensional image of the head of the tested object;
And converting the craniocerebral target coordinates in the image space into craniocerebral target coordinates in the physical space based on the first transformation and the second transformation.
In some embodiments, the method further comprises:
capturing the face of the subject at different time points with a structured light camera in the same pose to obtain facial point cloud images, and taking the point cloud in the facial point cloud image captured before the surgical or treatment operation as a reference point cloud;
performing point cloud registration between the facial point cloud in the facial point cloud image and the reference point cloud to obtain a third transformation; the third transformation is used for characterizing the transformation from the space of the reference point cloud to the space of the facial point cloud;
calculating an average displacement of the facial point cloud based on the third transformation and the reference point cloud;
and performing displacement correction on the craniocerebral target based on the average displacement of the facial point cloud.
In some embodiments, in the case that the average displacement of the facial point cloud is greater than or equal to a preset threshold, the performing of displacement correction on the craniocerebral target based on the average displacement of the facial point cloud includes:
updating the second transformation based on the third transformation;
updating the craniocerebral target coordinates in physical space based on the first transformation and the updated second transformation, and updating the reference point cloud with the facial point cloud of the current time point;
and performing displacement correction on the craniocerebral target with updated coordinates based on the updated reference point cloud.
In a second aspect, embodiments of the present application provide a craniocerebral target location device comprising:
a first determining module, configured to determine checkerboard marker points based on a structured light point cloud image of the subject's head, and to register an image point cloud image of the subject's head onto the structured light point cloud image to obtain a first transformation; the structured light point cloud image is captured with a structured light camera; the first transformation is used for characterizing the transformation of the point cloud from image space to camera space;
a second determining module, configured to determine a second transformation based on the coordinates of the checkerboard marker points in camera space and the coordinates of the checkerboard marker points in physical space; the second transformation is used for characterizing the transformation of the point cloud from camera space to physical space;
and a third determining module, configured to determine the craniocerebral target coordinates in physical space based on the first transformation and the second transformation.
In a third aspect, an embodiment of the present application provides an electronic device, including a memory, a processor, and a computer program stored on the memory and executable on the processor, where the processor implements the craniocerebral target location method according to the first aspect when executing the program.
In a fourth aspect, embodiments of the present application also provide a non-transitory computer readable storage medium having stored thereon a computer program which, when executed by a processor, implements a craniocerebral target localization method according to the first aspect described above.
In a fifth aspect, embodiments of the present application also provide a computer program product comprising a computer program which, when executed by a processor, implements a craniocerebral target localization method according to the first aspect described above.
According to the craniocerebral target positioning method, device and storage medium provided by the embodiments of the application, the image point cloud image of the subject's head is registered onto the structured light point cloud image to obtain the first transformation, and the second transformation is determined based on the coordinates of the checkerboard marker points in camera space and in physical space. Through the two-step transformation from image space to camera space and then to physical space given by the first and second transformations, non-invasive and accurate positioning of the craniocerebral target in physical space is realized.
Drawings
In order to more clearly illustrate the embodiments of the present application or the technical solutions of the prior art, the following description will briefly explain the drawings used in the embodiments or the description of the prior art, and it is obvious that the drawings in the following description are some embodiments of the present application, and other drawings can be obtained according to these drawings without inventive effort for a person skilled in the art.
FIG. 1 is a flow chart of a method for locating a craniocerebral target according to an embodiment of the present application;
FIG. 2 is a schematic diagram of an installation of a device in an exemplary scenario of a method for locating a craniocerebral target according to an embodiment of the present application;
FIG. 3 is a schematic diagram of a target position monitoring and correction procedure for an exemplary scenario of a craniocerebral target location method provided by an embodiment of the present application;
FIG. 4 is a schematic diagram of a brain target positioning device according to an embodiment of the present application;
fig. 5 is a schematic structural diagram of an electronic device according to an embodiment of the present application;
reference numerals:
1: a mechanical arm; 2: a structured light camera; 3: a working platform; 4: a laser ranging sensor; 5: a computer; 6: a three-dimensional positioning headstock; 7: checkerboard markers.
Detailed Description
The key to craniocerebral target positioning is the selection of marker points. To position a craniocerebral target with high precision, the selected marker points must be locatable in camera space and locatable on the image with high precision (whether by automatic recognition, manual indication with a probe, or similar means), and their position relative to the skull must be fixed.
In the existing target positioning technology, if facial key points (such as the nose tip, mouth corners, or eye corners) are used for automatic recognition, the skin deforms easily in different states, so the facial key points may lie at different positions when the image and the camera shots are taken, which affects the accuracy of the transformation from image space to camera space. In addition, since facial key point detection technology is largely developed on human face data, its application to experimental animals (such as macaques) is limited and experimentally difficult.
If bone screws fixed to the skull are used as marker points, the screws must be visible in the image, with their tips exposed above the skin. This approach ensures that the marker points do not move relative to the skull and thus achieves higher-precision target positioning, but it inflicts a wound on the subject's skull.
In addition, existing fixed-camera schemes require a large field of view that covers both the craniocerebral marker points and the physical-space marker points. However, under the same hardware conditions, the larger the field of view, the lower the effective resolution of the camera, so the positioning accuracy of such schemes is limited by the relatively low camera resolution.
To address these technical problems, an embodiment of the application provides a craniocerebral target positioning method that captures the head point cloud with a structured light camera, improving the accuracy of point cloud registration from image space to camera space, and performs the transformation from camera space to physical space based on checkerboard marker points. This realizes an accurate two-step transformation from image space to camera space and then to physical space, ensures the accuracy of craniocerebral target positioning in physical space, and improves the safety and accuracy of surgical navigation.
For the purpose of making the objects, technical solutions and advantages of the embodiments of the present application more apparent, the technical solutions of the embodiments of the present application will be clearly and completely described below with reference to the accompanying drawings in the embodiments of the present application, and it is apparent that the described embodiments are only some embodiments of the present application, not all embodiments. All other embodiments, which can be made by those skilled in the art based on the embodiments of the application without making any inventive effort, are intended to be within the scope of the application.
Fig. 1 is a schematic flow chart of the craniocerebral target positioning method provided by an embodiment of the present application. As shown in Fig. 1, the method comprises the following steps:
Step 101: determining checkerboard marker points based on a structured light point cloud image of the subject's head, and registering an image point cloud image of the subject's head onto the structured light point cloud image to obtain a first transformation; the structured light point cloud image is captured with a structured light camera; the first transformation is used to characterize the transformation of the point cloud from image space to camera space.
Specifically, in embodiments of the present application, targets may refer to all intra-brain and brain-surface target points, including but not limited to cortical targets and deep-nucleus targets of intracranial implantation procedures.
In the embodiment of the application, the tested object may be a human or an experimental animal (such as a macaque), the pose of the human may be supine, and the pose of the experimental animal may be prone (Sphinx).
The structured light point cloud image is captured from multiple angles with a structured light camera and comprises a plurality of head point clouds. Compared with common facial marker points, the head point cloud carries much more information, is less affected by local skin movement, and yields higher registration accuracy; compared with bone-screw marker points, positioning based on the head point cloud is non-invasive, with the advantages of safety, convenience, and ease of operation.
Optionally, the image point cloud image of the subject's head can be obtained through the following steps: first, a three-dimensional image of the subject's head is acquired, such as a computed tomography (CT) image or a magnetic resonance imaging (MRI) image; a three-dimensional image mask of the subject's head is obtained by applying threshold binarization, hole filling, and other operations to the three-dimensional image; then, the coordinates of the boundary voxels of the three-dimensional image mask are extracted to obtain a skin-surface point cloud of the subject's head, thereby obtaining the image point cloud image.
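The boundary-voxel extraction step above can be sketched as follows with NumPy; this is a minimal illustration, and the function name and the 6-neighbour definition of a boundary voxel are our assumptions rather than details from the patent:

```python
import numpy as np

def mask_boundary_points(mask):
    """Return (N, 3) coordinates of boundary voxels of a binary 3-D mask.

    A voxel counts as a boundary voxel if it lies inside the mask but at
    least one of its six face neighbours lies outside.
    """
    # Pad so voxels on the array edge are compared against background.
    padded = np.pad(mask.astype(bool), 1, constant_values=False)
    core = padded[1:-1, 1:-1, 1:-1]
    # A voxel is interior if all six face neighbours are also in the mask.
    interior = (
        padded[:-2, 1:-1, 1:-1] & padded[2:, 1:-1, 1:-1] &
        padded[1:-1, :-2, 1:-1] & padded[1:-1, 2:, 1:-1] &
        padded[1:-1, 1:-1, :-2] & padded[1:-1, 1:-1, 2:]
    )
    return np.argwhere(core & ~interior)

# Tiny demo: a solid 5x5x5 cube inside a 7x7x7 volume.
head_mask = np.zeros((7, 7, 7), dtype=bool)
head_mask[1:6, 1:6, 1:6] = True
surface = mask_boundary_points(head_mask)
print(len(surface))  # 5**3 - 3**3 = 98 surface voxels
```

For a real head mask the same function would be applied after the thresholding and hole-filling steps, and the voxel indices would be scaled by the image spacing to obtain millimetre coordinates.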
After the structured light point cloud image and the image point cloud image of the subject's head are obtained, checkerboard marker points are determined based on the structured light point cloud image, and a first transformation is obtained by registering the image point cloud image onto the structured light point cloud image; the first transformation is used to characterize the transformation of the point cloud from image space to camera space. Point cloud registration methods include feature-matching-based Random Sample Consensus (RANSAC) global registration and/or Iterative Closest Point (ICP) local registration.
For example, a CT image of the subject's head is obtained; the CT image is processed by thresholding, hole filling, and the like to obtain a mask, and the coordinates of the mask's boundary voxels are extracted to generate the skin-surface point cloud of the subject's head, giving the image point cloud image. The subject's head is captured with a structured light camera to obtain the structured light point cloud image. Then, feature-matching-based RANSAC global registration is used for coarse registration of the image point cloud image to the structured light point cloud image, and ICP local registration is used for fine registration, obtaining the transformation from image space to camera space, i.e. the first transformation; checkerboard marker points are identified from the structured light point cloud image.
For another example, an MRI image of the subject's head is obtained and processed in the same way (thresholding, hole filling, boundary-voxel extraction) to generate the image point cloud image; the structured light point cloud image is captured with the structured light camera; feature-matching-based RANSAC global registration and ICP local registration are applied to register the two point clouds, yielding the first transformation, and checkerboard marker points are identified from the structured light point cloud image.
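The ICP local-registration step can be illustrated with a minimal point-to-point ICP in NumPy. This is only a sketch: the patent does not specify the ICP variant, and a production implementation would normally use a library with k-d-tree correspondence search rather than the brute-force matching shown here:

```python
import numpy as np

def best_rigid_transform(src, dst):
    """Least-squares rotation R and translation t with dst ~ src @ R.T + t (Kabsch)."""
    cs, cd = src.mean(axis=0), dst.mean(axis=0)
    H = (src - cs).T @ (dst - cd)
    U, _, Vt = np.linalg.svd(H)
    d = np.sign(np.linalg.det(Vt.T @ U.T))
    R = Vt.T @ np.diag([1.0, 1.0, d]) @ U.T  # guard against reflections
    return R, cd - R @ cs

def icp(src, dst, iters=50):
    """Point-to-point ICP: alternate nearest-neighbour matching and refitting."""
    R_tot, t_tot = np.eye(3), np.zeros(3)
    cur = src.copy()
    for _ in range(iters):
        # Brute-force nearest neighbours (fine for small demo clouds).
        d2 = ((cur[:, None, :] - dst[None, :, :]) ** 2).sum(axis=2)
        matched = dst[d2.argmin(axis=1)]
        R, t = best_rigid_transform(cur, matched)
        cur = cur @ R.T + t
        R_tot, t_tot = R @ R_tot, R @ t_tot + t
    return R_tot, t_tot

# Demo: recover a small known rigid motion between two copies of a cloud.
rng = np.random.default_rng(0)
cloud = rng.normal(size=(200, 3))
a = np.deg2rad(3.0)
R_true = np.array([[np.cos(a), -np.sin(a), 0.0],
                   [np.sin(a),  np.cos(a), 0.0],
                   [0.0, 0.0, 1.0]])
moved = cloud @ R_true.T + np.array([0.05, -0.02, 0.1])
R_est, t_est = icp(cloud, moved)
aligned = cloud @ R_est.T + t_est
print(np.abs(aligned - moved).max())  # residual shrinks towards zero
```

In the patent's pipeline the ICP step would follow a RANSAC global registration, which supplies the coarse initial alignment that local ICP needs.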
Step 102: determining a second transformation based on the coordinates of the checkerboard marker points in camera space and the coordinates of the checkerboard marker points in physical space; the second transformation is used to characterize the transformation of the point cloud from camera space to physical space.
Specifically, after the checkerboard marker points are obtained, their coordinates in camera space and in physical space are acquired, and the second transformation is determined from these two coordinate sets; the second transformation is used to characterize the transformation of the point cloud from camera space to physical space.
For example, after the checkerboard marker points are obtained, they are mapped onto the structured light point cloud image to obtain their coordinates in camera space, and their coordinates in physical space are determined with a laser ranging sensor. The camera-space-to-physical-space transformation, i.e. the second transformation, is then determined from the two coordinate sets using a landmark registration method.
For another example, after the checkerboard marker points are obtained, their camera-space coordinates are obtained in the same way and their physical-space coordinates are determined; ICP local registration is then used to determine the camera-space-to-physical-space transformation, i.e. the second transformation, from the two coordinate sets.
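The landmark registration step, which solves for a rigid transform from corresponding marker coordinates in camera space and physical space, can be sketched with the standard Kabsch/SVD solution. The patent does not name a specific solver, and the coordinates below are made up for illustration:

```python
import numpy as np

def landmark_registration(cam_pts, phy_pts):
    """4x4 rigid transform mapping camera-space points onto physical-space
    points, least-squares via Kabsch/SVD (no scaling). Requires at least
    three non-collinear corresponding point pairs."""
    cc, cp = cam_pts.mean(axis=0), phy_pts.mean(axis=0)
    H = (cam_pts - cc).T @ (phy_pts - cp)
    U, _, Vt = np.linalg.svd(H)
    d = np.sign(np.linalg.det(Vt.T @ U.T))
    R = Vt.T @ np.diag([1.0, 1.0, d]) @ U.T  # proper rotation only
    T = np.eye(4)
    T[:3, :3] = R
    T[:3, 3] = cp - R @ cc
    return T

# Demo: three asymmetric checkerboard intersections known in both spaces
# (coordinates in mm, made up for illustration).
cam = np.array([[0.0, 0.0, 0.0], [30.0, 0.0, 0.0], [0.0, 20.0, 5.0]])
a = np.deg2rad(40.0)
R_true = np.array([[np.cos(a), -np.sin(a), 0.0],
                   [np.sin(a),  np.cos(a), 0.0],
                   [0.0, 0.0, 1.0]])
phy = cam @ R_true.T + np.array([100.0, -50.0, 10.0])
T2 = landmark_registration(cam, phy)
mapped = (np.c_[cam, np.ones(3)] @ T2.T)[:, :3]
print(np.abs(mapped - phy).max())
```

This is also why the patent requires the checkerboard to have at least 3 intersection points arranged asymmetrically: three non-collinear correspondences determine the rigid transform uniquely.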
Step 103, determining the craniocerebral target coordinates in the physical space based on the first transformation and the second transformation.
Specifically, in the embodiment of the application, the positioning of the craniocerebral target points is to obtain the coordinates of the craniocerebral target points in the physical space. And realizing transformation of the craniocerebral target point from the image space to the physical space based on the determined first transformation and the second transformation, and obtaining the craniocerebral target point coordinate under the physical space.
For example, coordinates of a brain target in an image space are obtained based on a three-dimensional image of a head of a subject, coordinates of the brain target in a camera space are obtained according to the coordinates of the brain target in the image space and the first transformation, and coordinates of the brain target in a physical space are obtained according to the coordinates of the brain target in the camera space and the second transformation.
For another example, the craniocerebral target coordinates in image space are obtained based on the three-dimensional image of the subject's head and recorded in homogeneous form as P_img = (x_img, y_img, z_img, 1)^T. Based on the first transformation T1 and the second transformation T2, the conversion of the craniocerebral target coordinates from image space to physical space can be realized with the following formula:
P_phy = T2 · T1 · P_img
where P_phy = (x_phy, y_phy, z_phy, 1)^T denotes the craniocerebral target coordinates in physical space; T2 denotes the second transformation; T1 denotes the first transformation; and P_img denotes the craniocerebral target coordinates in image space.
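The two-step mapping from image space through camera space to physical space can be illustrated with homogeneous 4x4 matrices; the numeric transforms here are made up purely for illustration:

```python
import numpy as np

def translation(tx, ty, tz):
    """Homogeneous 4x4 pure-translation matrix."""
    T = np.eye(4)
    T[:3, 3] = [tx, ty, tz]
    return T

T1 = translation(5.0, 0.0, -2.0)   # image space -> camera space (made up)
T2 = translation(-1.0, 3.0, 0.5)   # camera space -> physical space (made up)

p_img = np.array([10.0, 20.0, 30.0, 1.0])  # target in image space, homogeneous
p_phy = T2 @ T1 @ p_img                    # two-step mapping: P_phy = T2 * T1 * P_img
print(p_phy[:3])  # [14.  23.  28.5]
```

In practice T1 and T2 would be full rigid transforms (rotation plus translation); the composition order, physical from camera from image, is the same.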
According to the craniocerebral target positioning method provided by the embodiment of the application, the head point cloud is captured with a structured light camera, which improves the accuracy of point cloud registration from image space to camera space while avoiding injury to the skull; and the transformation from camera space to physical space is performed based on checkerboard marker points. This realizes an accurate two-step transformation from image space to camera space and then to physical space, ensuring the accuracy of craniocerebral target positioning in physical space.
In some embodiments, the method further comprises:
capturing the head of the subject from a plurality of poses with a structured light camera to obtain a plurality of initial point cloud images; the structured light camera is carried and moved by the mechanical arm;
registering the plurality of initial point cloud images to the same image to obtain the structured light point cloud image.
Specifically, the structured light point cloud image is obtained with a structured light camera carried by the mechanical arm.
The structured light camera is mounted on the mechanical arm and captures the subject's head from a plurality of poses (positions and orientations), giving a plurality of initial point cloud images. The way the poses are chosen is not limited, but each initial point cloud image must share a certain overlap region with the other initial point cloud images, to ensure that registering the multiple initial point cloud images is feasible, and the complete checkerboard must be captured without occlusion.
Then, a point cloud registration method, such as feature-matching-based RANSAC global registration and/or ICP local registration, is used to register the initial point cloud images one by one, obtaining the structured light point cloud image; the coordinate system of the structured light point cloud image is called the camera coordinate system, or camera space.
Optionally, before registering the initial point cloud images, the known transformation between the mechanical arm's base coordinate system and its end coordinate system, together with the transformation between the end coordinate system and the original camera coordinate system, can be used to transform the point cloud in each initial point cloud image from the original camera coordinate system into the common base coordinate system of the mechanical arm, which shortens the computation time of point cloud registration and reduces the risk of registration failure.
For example, Fig. 2 is a schematic diagram of the installation of the devices in an exemplary scenario of the craniocerebral target positioning method provided by an embodiment of the present application. As shown in Fig. 2, the device for craniocerebral target positioning includes a mechanical arm 1, a high-precision structured light camera 2, a working platform 3 with a specifiable axial movement amount, a laser ranging sensor 4 with visible-light indication, an integrated-control computer 5, a stereotactic head frame 6 for fixing the subject, and a checkerboard marker 7 (with at least 3 intersection points, arranged asymmetrically). The structured light camera is attached to the end of the mechanical arm, the arm base is fixed, and the arm moves the camera to capture the subject's head from a plurality of poses, obtaining a plurality of initial point cloud images. Using the known base-to-end transformation T_base_end of the mechanical arm and the known end-to-camera transformation T_end_cam, the point cloud in each initial point cloud image is transformed from the original camera coordinate system into the common base coordinate system of the mechanical arm, and the initial point cloud images under the base coordinate system are then registered to the same image with feature-matching-based RANSAC global registration and ICP local registration, obtaining the structured light point cloud image.
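Bringing each initial point cloud into the common mechanical-arm base coordinate system chains the two known transforms; the sketch below illustrates this with made-up transform values and an illustrative function name:

```python
import numpy as np

def to_base_frame(points_cam, T_base_end, T_end_cam):
    """Map an (N, 3) point cloud from the camera frame to the mechanical-arm
    base frame via p_base = T_base_end @ T_end_cam @ p_cam (homogeneous)."""
    T = T_base_end @ T_end_cam
    hom = np.c_[points_cam, np.ones(len(points_cam))]
    return (hom @ T.T)[:, :3]

# Illustrative hand-eye chain: camera offset 0.10 m along the end-effector
# z axis; end effector rotated 90 deg about the base z axis, lifted 0.5 m.
T_end_cam = np.eye(4)
T_end_cam[2, 3] = 0.10
T_base_end = np.array([[0.0, -1.0, 0.0, 0.0],
                       [1.0,  0.0, 0.0, 0.0],
                       [0.0,  0.0, 1.0, 0.5],
                       [0.0,  0.0, 0.0, 1.0]])
cloud_cam = np.array([[0.0, 0.0, 0.0], [0.2, 0.0, 0.0]])
cloud_base = to_base_frame(cloud_cam, T_base_end, T_end_cam)
print(cloud_base)
```

Because every shot's cloud lands in the same base frame, the subsequent RANSAC/ICP registration only needs to correct small residual errors, which is what shortens computation and reduces the risk of registration failure.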
Compared with a fixed large-field-of-view camera, the structured light camera drawn by the mechanical arm can achieve higher spatial resolution, so that a lower camera-space positioning error of the checkerboard marks can be obtained; moreover, the head point clouds obtained by shooting the head of the tested object from a plurality of poses carry more information, which improves registration accuracy and thus craniocerebral target positioning accuracy.
In some embodiments, the determining checkerboard marker points based on the structured light point cloud image of the subject's head includes:
extracting target black squares from the structural light point cloud image; the target black squares are obtained by preprocessing the structural light point cloud image;
determining the nearest distance between every two target black squares;
And taking the midpoint of the connecting line between the two target black squares corresponding to the nearest distance as a checkerboard mark point under the condition that the nearest distance is smaller than a preset threshold value.
Specifically, after the structural light point cloud image is obtained, the checkerboard mark points, which can also be called checkerboard intersection points, are identified from the structural light point cloud image. First, a series of preprocessing operations, such as image segmentation, gray scale processing, thresholding, median filtering denoising, image erosion and/or polygon detection, are performed on the structural light point cloud image, thereby extracting the target black squares.
For example, the structural light point cloud image is segmented to obtain a target checkerboard plane; and threshold binarization, median filtering denoising, image erosion, polygon detection and other processing are performed on the two-dimensional gray level image of the target checkerboard plane to obtain the target black squares on the target checkerboard plane.
Then, checkerboard marker points are determined based on the distance between the target black squares: determining the nearest distance between every two target black squares, judging whether the nearest distance is smaller than a preset threshold value, and taking the midpoint of a connecting line between the two target black squares corresponding to the nearest distance as a checkerboard mark point if the nearest distance is smaller than the preset threshold value.
For example, the distances between every two target black squares are calculated so as to obtain the nearest distance between each pair; for example, six nearest distances d1, d2, d3, d4, d5 and d6 are obtained based on four target black squares. Each nearest distance is compared with a preset threshold d0; if a nearest distance is smaller than d0, the midpoint of the connecting line between the two target black squares corresponding to it is taken as a checkerboard mark point. If, among the six nearest distances, the values of d2, d4 and d5 are smaller than d0, the midpoints of the connecting lines between the two target black squares corresponding to d2, d4 and d5 respectively are taken as the checkerboard mark points.
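The pairwise-distance test above can be sketched as follows. This is an illustrative simplification: it measures the distance between square centroids, whereas the patent's "nearest distance" may be taken between the square contours themselves.

```python
import numpy as np
from itertools import combinations

def checkerboard_markers(square_centroids, d0):
    """Return candidate checkerboard mark points: for every pair of black-square
    centroids closer than the threshold d0, take the midpoint of their connecting
    line. Using centroids instead of closest contour points is a simplifying
    assumption of this sketch."""
    markers = []
    pts = [np.asarray(p, dtype=float) for p in square_centroids]
    for i, j in combinations(range(len(pts)), 2):
        if np.linalg.norm(pts[i] - pts[j]) < d0:
            markers.append((pts[i] + pts[j]) / 2.0)  # midpoint of the connecting line
    return markers
```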
According to the craniocerebral target positioning method provided by the embodiment of the application, the target black squares are obtained by dividing and gray processing the structural light point cloud image, the checkerboard mark points are determined based on the distance between the target black squares, the nearest distance between the target black squares is calculated, the preset threshold value is used as a judgment basis, and the midpoint of the connecting line between the target black squares corresponding to the nearest distance meeting the condition is used as the checkerboard mark point, so that the effectiveness and usability of the checkerboard mark points are ensured.
In some embodiments, before the determining the second transformation based on the coordinates of the tessellation marker in the camera space and the coordinates of the tessellation marker in the physical space, further comprising:
and determining the coordinates of the checkerboard mark points in a physical space by using a laser ranging sensor.
Specifically, after the checkerboard mark points are obtained, the checkerboard mark points are mapped on the structural light point cloud image, the coordinates of the checkerboard mark points in a camera space are obtained, and the coordinates of the checkerboard mark points in a physical space are determined by utilizing a laser ranging sensor. And then determining a second transformation based on the coordinates of the checkerboard mark points in the camera space and the coordinates of the checkerboard mark points in the physical space.
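Given at least three corresponding marker coordinates in the camera space and the physical space, the second transformation can be recovered with a least-squares rigid fit. The patent does not name a solver; the SVD-based Kabsch method below is one common choice, shown only as an illustrative sketch:

```python
import numpy as np

def rigid_transform(P_cam, P_phys):
    """Least-squares rigid transform (Kabsch) mapping Nx3 camera-space marker
    points onto their physical-space counterparts. Returns a 4x4 matrix."""
    Pc = np.asarray(P_cam, dtype=float)
    Pp = np.asarray(P_phys, dtype=float)
    cc, cp = Pc.mean(axis=0), Pp.mean(axis=0)
    H = (Pc - cc).T @ (Pp - cp)            # cross-covariance of centered point sets
    U, _, Vt = np.linalg.svd(H)
    R = Vt.T @ U.T
    if np.linalg.det(R) < 0:               # guard against an improper reflection
        Vt[-1] *= -1
        R = Vt.T @ U.T
    T = np.eye(4)
    T[:3, :3] = R
    T[:3, 3] = cp - R @ cc
    return T
```

The asymmetric checkerboard (at least 3 non-symmetric intersection points) ensures the correspondence between the two point sets is unambiguous, which is what makes this fit well-posed.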
The laser ranging sensor is fixed above the checkerboard, can move up and down relative to the working platform and ensures that the laser emitting direction is vertically downward.
For example, as shown in fig. 2, the checkerboard is fixed on the stereotactic head frame at a position close to the head, and the laser ranging sensor is fixed above the checkerboard and can move up and down relative to the working platform. The computer controls the movement of the working platform so that the laser spot is visually aligned with each checkerboard mark point, and after alignment the computer records the axial movement amount of the working platform and the movement distance of the laser ranging sensor. The coordinates of the checkerboard mark points in the physical space are then calculated based on the axial movement amount of the working platform and the movement distance of the laser ranging sensor.
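The patent does not give the exact bookkeeping for this calculation. One plausible sketch, with all names and sign conventions being hypothetical assumptions, treats the platform's two recorded axial movements as the marker's x and y offsets and the downward laser travel as its depth offset from a chosen physical-space origin:

```python
import numpy as np

def marker_physical_coords(platform_xy, laser_travel, origin=(0.0, 0.0, 0.0)):
    """Hypothetical reconstruction: the platform's recorded axial movements give
    the marker's x and y in the physical space, and the laser ranging sensor's
    travel (laser pointing vertically downward) gives its depth below a chosen
    origin. All names and sign conventions here are illustrative assumptions."""
    dx, dy = platform_xy
    ox, oy, oz = origin
    return np.array([ox + dx, oy + dy, oz - laser_travel])
```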
According to the craniocerebral target positioning method provided by the embodiment of the application, the checkerboard mark points in the physical space are manually positioned in a laser-assisted mode, so that the positioning error of the checkerboard mark points is reduced, and the craniocerebral target positioning accuracy is further improved.
In some embodiments, the determining the craniocerebral target coordinates in physical space based on the first transformation and the second transformation comprises:
determining craniocerebral target coordinates in an image space based on the three-dimensional image of the head of the tested object;
and converting the craniocerebral target coordinates in the image space into craniocerebral target coordinates in a physical space based on the first transformation and the second transformation.
Specifically, after the first transformation and the second transformation are obtained, the brain target coordinates in the image space are determined based on the three-dimensional image of the head of the tested object, and the brain target coordinates in the image space are converted into the brain target coordinates in the physical space based on the first transformation and the second transformation.
For example, the craniocerebral target coordinates in the image space, determined based on the three-dimensional image of the head of the tested object, are recorded in homogeneous form as P_i = (x_i, y_i, z_i, 1)^T. Based on the first transformation T_ic and the second transformation T_cw, converting the craniocerebral target coordinates in the image space into craniocerebral target coordinates in the physical space can be realized with the following formula:

P_w = T_cw · T_ic · P_i

wherein P_w = (x_w, y_w, z_w, 1)^T represents the craniocerebral target coordinates in the physical space; T_cw represents the second transformation; T_ic represents the first transformation; and P_i represents the craniocerebral target coordinates in the image space.
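The two-step image-to-physical mapping can be sketched with homogeneous coordinates as follows (a minimal illustration; T1 and T2 stand for the first and second transformations as 4x4 matrices):

```python
import numpy as np

def image_to_physical(p_image, T1, T2):
    """Two-step mapping of a craniocerebral target: image space -> camera space
    (first transformation T1), then camera space -> physical space (second
    transformation T2), using 4x4 homogeneous matrices."""
    p = np.append(np.asarray(p_image, dtype=float), 1.0)  # homogeneous point
    return (T2 @ T1 @ p)[:3]
```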
The craniocerebral target positioning method provided by the embodiment of the application adopts a mode of two-step transformation from an image space to a camera space and then to a physical space, thereby realizing noninvasive and accurate positioning of the craniocerebral target in the physical space.
In some embodiments, the method further comprises:
shooting the face of the tested object at different time points by using a structured light camera in the same pose to obtain a face point cloud image, and taking the point cloud in the face point cloud image shot before the operation or the treatment operation as a reference point cloud;
performing point cloud registration based on the face point cloud in the face point cloud image and the reference point cloud to obtain a third transformation; the third transformation is used for representing transformation from the space where the datum point cloud is located to the space where the face point cloud is located;
calculating an average displacement of the cloud of facial points based on the third transformation and the cloud of reference points;
and correcting the displacement of the craniocerebral target points based on the average displacement of the facial point cloud.
Specifically, the face of the tested object must be photographed with the structured light camera in the same pose each time. After the multi-angle shooting with the structured light camera is finished, or before the surgery or treatment operation, a facial point cloud of the tested object is shot as the reference point cloud. During the surgery or treatment operation, the face of the tested object is photographed at intervals to obtain a plurality of facial point cloud images.
And carrying out point cloud registration on the face point cloud in the face point cloud image and the reference point cloud to obtain a third transformation. And calculating the average displacement of the facial point cloud based on the third transformation and the reference point cloud, and carrying out displacement correction on the craniocerebral target points according to the average displacement.
For example, fig. 3 is a schematic diagram of the target position monitoring and correcting process in an exemplary scenario of the craniocerebral target positioning method according to an embodiment of the present application. As shown in fig. 3, after the multi-angle shooting with the structured light camera is completed, a point cloud P_f0 of the tested face is shot as the reference point cloud, and the camera pose at this time is noted as V. At intervals during the surgery or treatment operation, a facial point cloud P_f1 of the tested object is shot with the camera in pose V. P_f0 and P_f1 are registered (including but not limited to ICP registration), yielding the transformation T_f from the P_f0 space (the original camera space) to the P_f1 space. The average displacement d of the facial point cloud is calculated as:

d = (1/N) Σ_{i=1..N} ‖T_f · p_i − p_i‖

wherein d represents the average displacement of the facial point cloud P_f1; N represents the number of points in the reference point cloud P_f0; p_i = (x_i, y_i, z_i, 1)^T represents the i-th point in P_f0; and T_f represents the transformation from the P_f0 space to the P_f1 space. The craniocerebral target is then subjected to displacement correction based on d.
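The average-displacement computation can be sketched directly from its definition (mean of ‖T3·p_i − p_i‖ over the reference point cloud), with T3 as a 4x4 homogeneous matrix:

```python
import numpy as np

def mean_displacement(ref_points, T3):
    """Average displacement of the reference facial point cloud under the
    registration transform T3 (4x4): mean of ||T3 p_i - p_i|| over all N points."""
    P = np.asarray(ref_points, dtype=float)
    Ph = np.hstack([P, np.ones((len(P), 1))])      # homogeneous coordinates
    moved = (T3 @ Ph.T).T[:, :3]                   # each point after the motion
    return float(np.mean(np.linalg.norm(moved - P, axis=1)))
```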
According to the craniocerebral target positioning method provided by the embodiment of the application, the face of the tested object is shot by the structured light camera to monitor and calibrate the target position in time, so that the accuracy of craniocerebral target positioning is further ensured.
In some embodiments, in a case where the average displacement of the facial point cloud is greater than or equal to a preset threshold, the performing displacement correction on the craniocerebral target based on the average displacement of the facial point cloud includes:
updating the second transform based on the third transform;
updating the craniocerebral target coordinates in the physical space based on the first transformation and the updated second transformation, and updating the datum point cloud according to the facial point cloud of the current time point;
and carrying out displacement correction on the craniocerebral target points after updating the coordinates based on the updated datum point cloud.
Specifically, the average displacement of the facial point cloud is compared with a preset threshold. If the average displacement is smaller than the preset threshold, the face of the tested object continues to be shot with the structured light camera in the same pose at intervals to obtain a facial point cloud image, and the operation of calculating the average displacement of the facial point cloud is repeated based on the reference point cloud and the newly obtained facial point cloud image.
And if the average displacement of the facial point cloud is greater than or equal to a preset threshold, performing craniocerebral target displacement correction. Updating the second transformation based on the third transformation, and updating the craniocerebral target coordinates in physical space based on the first transformation and the updated second transformation. And then updating the datum point cloud to be the facial point cloud of the current time point, repeating the displacement correction operation, namely re-shooting the face of the tested object at intervals of the same pose to obtain a facial point cloud image, repeatedly calculating the average displacement of the facial point cloud based on the datum point cloud and the newly obtained facial point cloud image, and then executing target displacement correction based on a preset threshold.
For example, as shown in fig. 3, after the average displacement d of the facial point cloud is obtained, if d is smaller than the preset threshold, the facial point cloud image is re-acquired; otherwise, a displacement prompt is sent out and target displacement correction is performed. First, the second transformation (the camera space-physical space transformation) is updated:

T'_cw = T_cw · T_f

wherein T_cw represents the second transformation, and T_f represents the transformation from the space where the reference point cloud is located to the space where the facial point cloud is located.

The new coordinates P'_w of the target point in the physical space are:

P'_w = T'_cw · T_ic · P_i

The facial reference point cloud of the tested object is then updated to the facial point cloud at this time, and the displacement correction operation is repeated until the surgery or treatment operation is completed.
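One monitoring-and-correction step can be sketched as below. The composition order T2 @ T3 (folding the detected head motion into the camera-to-physical transform) is an inference from the update described in the text, since the patent's exact formula images are not reproduced here; T1, T2, T3 are 4x4 homogeneous matrices for the first, second, and third transformations.

```python
import numpy as np

def correction_step(T1, T2, T3, p_image, mean_disp, threshold):
    """One monitoring step: if the face moved by at least `threshold`, fold the
    motion T3 into the camera-to-physical transform before recomputing the
    target. Returns the (possibly updated) second transformation and the
    target's physical coordinates."""
    if mean_disp >= threshold:
        T2 = T2 @ T3                       # updated second transformation
    p = np.append(np.asarray(p_image, dtype=float), 1.0)
    return T2, (T2 @ T1 @ p)[:3]
```

In use, the caller would also replace the reference point cloud with the current facial point cloud after a correction, so that subsequent displacements are measured relative to the new head position.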
According to the craniocerebral target positioning method provided by the embodiment of the application, the head point cloud is shot by a structured light camera, so that the point cloud registration accuracy from the image space to the camera space is improved; the transformation from the camera space to the physical space is performed based on the checkerboard mark points, so that an accurate two-step transformation from the image space to the camera space and then to the physical space is realized, the accuracy of positioning the craniocerebral target in the physical space is ensured, and the safety and accuracy of surgical navigation are improved.
Fig. 4 is a schematic structural diagram of a craniocerebral target positioning device according to an embodiment of the present application, and as shown in fig. 4, the embodiment of the present application provides a craniocerebral target positioning device, including a first determining module 401, a second determining module 402, and a third determining module 403.
The first determining module 401 is configured to determine a checkerboard mark point based on a structural light point cloud image of a head of a subject, and register an image point cloud image of the head of the subject onto the structural light point cloud image to obtain a first transformation; the structure light point cloud image is obtained by shooting with a structure light camera; the first transformation is used to characterize a transformation of the point cloud from image space to camera space.
The second determining module 402 is configured to determine a second transformation based on the coordinates of the checkerboard marker points in the camera space and the coordinates of the checkerboard marker points in the physical space; the second transformation is used to characterize the transformation of the point cloud from camera space to physical space.
The third determining module 403 is configured to determine craniocerebral target coordinates in physical space based on the first transformation and the second transformation.
In some embodiments, further comprising:
the first acquisition module is used for shooting the head of the tested object from a plurality of poses by utilizing the structured light camera to obtain a plurality of initial point cloud images; the structured light camera is pulled by the mechanical arm to shoot;
and the second acquisition module is used for registering the plurality of initial point cloud images to the same image to obtain a structural light point cloud image.

In some embodiments, further comprising:
And the fourth determining module is used for determining the coordinates of the checkerboard mark points in the physical space by using a laser ranging sensor.
In some embodiments, the first determining module comprises:
an extracting unit, configured to extract target black squares from the structural light point cloud image; the target black squares are obtained by preprocessing the structural light point cloud image;
a first determining unit for determining a nearest distance between every two target black squares;
and the second determining unit is used for taking the midpoint of the connecting line between the two target black squares corresponding to the nearest distance as a checkerboard mark point under the condition that the nearest distance is smaller than a preset threshold value.
In some embodiments, the third determination module includes:
the third determining unit is used for determining the craniocerebral target point coordinates in the image space based on the three-dimensional image of the head of the tested object;
and the conversion unit is used for converting the craniocerebral target coordinates in the image space into craniocerebral target coordinates in the physical space based on the first conversion and the second conversion.
In some embodiments, further comprising:
the third acquisition module is used for shooting the face of the tested object at different time points by using the structured light camera in the same pose to obtain a face point cloud image, and taking the point cloud in the face point cloud image shot before the operation or the treatment operation as a reference point cloud;
A fourth obtaining module, configured to perform point cloud registration based on a face point cloud in the face point cloud image and the reference point cloud, to obtain a third transformation; the third transformation is used for representing transformation from the space where the datum point cloud is located to the space where the face point cloud is located;
a calculation module for calculating an average displacement of the cloud of facial points based on the third transformation and the cloud of reference points;
and the correction module is used for carrying out displacement correction on the craniocerebral target points based on the average displacement of the facial point cloud.
In some embodiments, in a case where the average displacement of the facial point cloud is greater than or equal to a preset threshold, the correction module includes:
a first updating unit configured to update the second transformation based on the third transformation;
the second updating unit is used for updating the craniocerebral target coordinates in the physical space based on the first transformation and the updated second transformation and updating the datum point cloud according to the facial point cloud of the current time point;
and the correction unit is used for carrying out displacement correction on the craniocerebral target spot after the coordinates are updated based on the updated datum point cloud.
Specifically, the craniocerebral target positioning device provided by the embodiment of the application can realize all the method steps realized by the embodiment of the craniocerebral target positioning method and can achieve the same technical effects, and the parts and beneficial effects which are the same as those of the embodiment of the method in the embodiment are not described in detail.
It should be noted that the division of the units/modules in the above embodiments of the present application is merely a logic function division, and other division manners may be implemented in practice. In addition, each functional unit in the embodiments of the present application may be integrated in one processing unit, or each unit may exist alone physically, or two or more units may be integrated in one unit. The integrated units may be implemented in hardware or in software functional units.
Fig. 5 is a schematic structural diagram of an electronic device according to an embodiment of the present application, as shown in fig. 5, where the electronic device may include: a processor (processor) 501, a communication interface (Communications Interface) 502, a memory (memory) 503 and a communication bus 504, wherein the processor 501, the communication interface 502, and the memory 503 communicate with each other via the communication bus 504. The processor 501 may invoke logic instructions in the memory 503 to perform a craniocerebral target location method comprising:
determining checkerboard mark points based on a structural light point cloud image of a head of a tested object, registering an image point cloud image of the head of the tested object onto the structural light point cloud image, and obtaining first transformation; the structure light point cloud image is obtained by shooting with a structure light camera; the first transformation is used for representing transformation of the point cloud from an image space to a camera space;
Determining a second transformation based on the coordinates of the checkerboard marker points in camera space and the coordinates of the checkerboard marker points in physical space; the second transformation is used for representing transformation of the point cloud from camera space to physical space;
and determining craniocerebral target coordinates in physical space based on the first transformation and the second transformation.
Specifically, the processor 501 may be a central processing unit (Central Processing Unit, CPU), an application specific integrated circuit (Application Specific Integrated Circuit, ASIC), a field programmable gate array (Field Programmable Gate Array, FPGA), or a complex programmable logic device (Complex Programmable Logic Device, CPLD), and the processor may also employ a multi-core architecture.
The logic instructions in memory 503 may be implemented in the form of software functional units and may be stored in a processor-readable storage medium when sold or used as a stand-alone product. Based on such understanding, the technical solution of the present application may be embodied in essence or a part contributing to the prior art or all or part of the technical solution in the form of a software product stored in a storage medium, including several instructions for causing a computer device (which may be a personal computer, a server, or a network device, etc.) or a processor (processor) to execute all or part of the steps of the method according to the embodiments of the present application. And the aforementioned storage medium includes: a U-disk, a removable hard disk, a Read-Only Memory (ROM), a random access Memory (Random Access Memory, RAM), a magnetic disk, or an optical disk, or other various media capable of storing program codes.
In some embodiments, there is also provided a computer program product comprising a computer program, the computer program being storable on a non-transitory computer readable storage medium, the computer program, when executed by a processor, being capable of performing the craniocerebral target localization method provided by the method embodiments described above, the method comprising:
determining checkerboard mark points based on a structural light point cloud image of a head of a tested object, registering an image point cloud image of the head of the tested object onto the structural light point cloud image, and obtaining first transformation; the structure light point cloud image is obtained by shooting with a structure light camera; the first transformation is used for representing transformation of the point cloud from an image space to a camera space;
determining a second transformation based on the coordinates of the checkerboard marker points in camera space and the coordinates of the checkerboard marker points in physical space; the second transformation is used for representing transformation of the point cloud from camera space to physical space;
and determining craniocerebral target coordinates in physical space based on the first transformation and the second transformation.
Specifically, the computer program product provided by the embodiment of the present application can implement all the method steps implemented by the above method embodiments, and can achieve the same technical effects, and the parts and beneficial effects that are the same as those of the method embodiments in this embodiment are not described in detail herein.
In some embodiments, there is also provided a computer readable storage medium storing a computer program for causing a computer to execute the craniocerebral target location method provided by the above method embodiments, the method comprising:
determining checkerboard mark points based on a structural light point cloud image of a head of a tested object, registering an image point cloud image of the head of the tested object onto the structural light point cloud image, and obtaining first transformation; the structure light point cloud image is obtained by shooting with a structure light camera; the first transformation is used for representing transformation of the point cloud from an image space to a camera space;
determining a second transformation based on the coordinates of the checkerboard marker points in camera space and the coordinates of the checkerboard marker points in physical space; the second transformation is used for representing transformation of the point cloud from camera space to physical space;
and determining craniocerebral target coordinates in physical space based on the first transformation and the second transformation.
Specifically, the computer readable storage medium provided by the embodiment of the present application can implement all the method steps implemented by the above method embodiments and achieve the same technical effects, and the parts and beneficial effects that are the same as those of the method embodiments in this embodiment are not described in detail herein.
It should be noted that: the computer readable storage medium may be any available medium or data storage device that can be accessed by a processor including, but not limited to, magnetic memory (e.g., floppy disks, hard disks, magnetic tape, magneto-optical disks (MOs), etc.), optical memory (e.g., CD, DVD, BD, HVD, etc.), and semiconductor memory (e.g., ROM, EPROM, EEPROM, nonvolatile memory (NAND FLASH), solid State Disk (SSD)), etc.
In addition, it should be noted that: the terms "first," "second," and the like in embodiments of the present application are used for distinguishing between similar objects and not necessarily for describing a particular sequential or chronological order. It is to be understood that the terms so used are interchangeable under appropriate circumstances such that the embodiments of the application are capable of operation in sequences other than those illustrated or otherwise described herein, and that the "first" and "second" distinguishing between objects generally are not limited in number to the extent that the first object may, for example, be one or more.
In the embodiment of the application, the term "and/or" describes the association relation of the association objects, which means that three relations can exist, for example, a and/or B can be expressed as follows: a exists alone, A and B exist together, and B exists alone. The character "/" generally indicates that the context-dependent object is an "or" relationship.
The term "plurality" in embodiments of the present application means two or more, and other adjectives are similar.
The term "determining B based on a" in the present application means that a is a factor to be considered in determining B. Not limited to "B can be determined based on A alone", it should also include: "B based on A and C", "B based on A, C and E", "C based on A, further B based on C", etc. Additionally, a may be included as a condition for determining B, for example, "when a satisfies a first condition, B is determined using a first method"; for another example, "when a satisfies the second condition, B" is determined, etc.; for another example, "when a satisfies the third condition, B" is determined based on the first parameter, and the like. Of course, a may be a condition in which a is a factor for determining B, for example, "when a satisfies the first condition, C is determined using the first method, and B is further determined based on C", or the like.
It will be appreciated by those skilled in the art that embodiments of the present application may be provided as a method, system, or computer program product. Accordingly, the present application may take the form of an entirely hardware embodiment, an entirely software embodiment or an embodiment combining software and hardware aspects. Furthermore, the present application may take the form of a computer program product embodied on one or more computer-usable storage media (including, but not limited to, magnetic disk storage, optical storage, and the like) having computer-usable program code embodied therein.
The present application is described with reference to flowchart illustrations and/or block diagrams of methods, apparatus (systems) and computer program products according to embodiments of the application. It will be understood that each flow and/or block of the flowchart illustrations and/or block diagrams, and combinations of flows and/or blocks in the flowchart illustrations and/or block diagrams, can be implemented by computer-executable instructions. These computer-executable instructions may be provided to a processor of a general purpose computer, special purpose computer, embedded processor, or other programmable data processing apparatus to produce a machine, such that the instructions, which execute via the processor of the computer or other programmable data processing apparatus, create means for implementing the functions specified in the flowchart flow or flows and/or block diagram block or blocks.
These processor-executable instructions may also be stored in a processor-readable memory that can direct a computer or other programmable data processing apparatus to function in a particular manner, such that the instructions stored in the processor-readable memory produce an article of manufacture including instruction means which implement the function specified in the flowchart flow or flows and/or block diagram block or blocks.
These processor-executable instructions may also be loaded onto a computer or other programmable data processing apparatus to cause a series of operational steps to be performed on the computer or other programmable apparatus to produce a computer implemented process such that the instructions which execute on the computer or other programmable apparatus provide steps for implementing the functions specified in the flowchart flow or flows and/or block diagram block or blocks.
It will be apparent to those skilled in the art that various modifications and variations can be made to the present application without departing from the spirit or scope of the application. Thus, it is intended that the present application cover such modifications and variations provided they come within the scope of the appended claims and their equivalents.

Claims (10)

1. A method for locating a craniocerebral target, comprising:
capturing the head of a subject from a plurality of poses with a structured light camera to obtain a plurality of initial point cloud images, wherein the structured light camera is mounted at the end of a robotic arm and the base of the robotic arm is fixed;
transforming the point cloud in each initial point cloud image from the original camera coordinate system to a common robotic arm base coordinate system, using the transformation between the robotic arm base coordinate system and the end coordinate system and the transformation between the robotic arm end coordinate system and the original camera coordinate system, to obtain a plurality of initial point cloud images in the robotic arm base coordinate system;
registering the initial point cloud images in the robotic arm base coordinate system to the same image to obtain a structured light point cloud image;
determining checkerboard marker points based on the structured light point cloud image, and registering an image point cloud of the subject's head to the structured light point cloud image to obtain a first transformation, wherein the first transformation represents the transformation of a point cloud from image space to camera space;
determining a second transformation based on the coordinates of the checkerboard marker points in camera space and the coordinates of the checkerboard marker points in physical space, wherein the second transformation represents the transformation of a point cloud from camera space to physical space; and
determining craniocerebral target coordinates in physical space based on the first transformation and the second transformation;
the method further comprising:
capturing the face of the subject at different time points with the structured light camera in the same pose to obtain facial point cloud images, and taking the point cloud in the facial point cloud image captured before the surgical or therapeutic procedure as a reference point cloud;
performing point cloud registration between the facial point cloud in a facial point cloud image and the reference point cloud to obtain a third transformation, wherein the third transformation represents the transformation from the space of the reference point cloud to the space of the facial point cloud;
calculating an average displacement of the facial point cloud based on the third transformation and the reference point cloud;
updating the second transformation based on the third transformation if the average displacement of the facial point cloud is greater than or equal to a preset threshold;
updating the craniocerebral target coordinates in physical space based on the first transformation and the updated second transformation, and updating the reference point cloud with the facial point cloud at the current time point; and
performing displacement correction on the craniocerebral target with updated coordinates based on the updated reference point cloud.
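The second and third transformations in claim 1 are both rigid alignments between paired 3-D point sets (checkerboard markers in camera vs. physical space; reference vs. current facial point cloud). The patent does not disclose the estimation algorithm; a common choice for paired correspondences is the SVD-based Kabsch method, sketched below together with the claimed average-displacement check. All function and variable names are illustrative, not from the patent.

```python
import numpy as np

def estimate_rigid_transform(src, dst):
    """Estimate R (3x3) and t (3,) such that dst ≈ src @ R.T + t,
    using the SVD-based Kabsch method on paired 3-D points."""
    src_c = src - src.mean(axis=0)            # center both point sets
    dst_c = dst - dst.mean(axis=0)
    H = src_c.T @ dst_c                       # 3x3 cross-covariance
    U, _, Vt = np.linalg.svd(H)
    d = np.sign(np.linalg.det(Vt.T @ U.T))    # guard against reflection
    R = Vt.T @ np.diag([1.0, 1.0, d]) @ U.T
    t = dst.mean(axis=0) - R @ src.mean(axis=0)
    return R, t

def average_displacement(ref, R, t):
    """Mean per-point displacement of the reference cloud under (R, t) —
    the quantity compared against the preset threshold in claim 1."""
    moved = ref @ R.T + t
    return float(np.linalg.norm(moved - ref, axis=1).mean())
```

In the claimed workflow, the transform returned for the facial clouds would play the role of the third transformation, and its average displacement decides whether the second transformation is updated.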
2. The craniocerebral target localization method of claim 1, wherein the determining a second transformation based on the coordinates of the checkerboard marker points in camera space and the coordinates of the checkerboard marker points in physical space further comprises:
determining the coordinates of the checkerboard marker points in physical space using a laser ranging sensor.
3. The craniocerebral target localization method of claim 1, wherein the determining checkerboard marker points based on the structured light point cloud image comprises:
extracting target black squares from the structured light point cloud image, the target black squares being obtained by preprocessing the structured light point cloud image;
determining the nearest distance between every two target black squares; and
taking the midpoint of the line connecting the two target black squares corresponding to the nearest distance as a checkerboard marker point, provided that the nearest distance is smaller than a preset threshold.
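The marker-detection step of claim 3 reduces to a nearest-neighbour search over the centroids of the detected black squares. A minimal sketch, assuming the centroids have already been extracted by preprocessing (the function name, array layout, and threshold parameter are assumptions, not from the patent):

```python
import numpy as np

def checkerboard_markers(centroids, max_dist):
    """Given centroids of detected black squares (N x 3 array), return the
    midpoints of pairs of nearest squares whose separation is below
    max_dist, as candidate checkerboard marker points."""
    markers = []
    n = len(centroids)
    for i in range(n):
        # distance from square i to every other square
        d = np.linalg.norm(centroids - centroids[i], axis=1)
        d[i] = np.inf                      # ignore self-distance
        j = int(np.argmin(d))              # nearest neighbour of square i
        if i < j and d[j] < max_dist:      # count each pair only once
            markers.append((centroids[i] + centroids[j]) / 2.0)
    return np.array(markers)
```

On a real checkerboard, two black squares meet diagonally at each inner corner, so the midpoint of the closest pair falls on that corner — which matches the claim's use of the connecting-line midpoint as the marker point.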
4. The craniocerebral target localization method of claim 1, wherein the determining craniocerebral target coordinates in physical space based on the first transformation and the second transformation comprises:
determining craniocerebral target coordinates in image space based on a three-dimensional image of the subject's head; and
converting the craniocerebral target coordinates in image space into craniocerebral target coordinates in physical space based on the first transformation and the second transformation.
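The conversion in claim 4 is a composition of the two rigid transforms: image space → camera space (first transformation), then camera space → physical space (second transformation). With both expressed as 4×4 homogeneous matrices, the chain is a single matrix product; the helper names below are illustrative:

```python
import numpy as np

def to_homogeneous(R, t):
    """Pack a rotation R (3x3) and translation t (3,) into a 4x4 matrix."""
    T = np.eye(4)
    T[:3, :3] = R
    T[:3, 3] = t
    return T

def image_to_physical(target_img, T_img2cam, T_cam2phys):
    """Map a target point from image space to physical space by composing
    the first (image -> camera) and second (camera -> physical) transforms."""
    p = np.append(target_img, 1.0)          # homogeneous coordinates
    return (T_cam2phys @ T_img2cam @ p)[:3]
```

Note the order: the first transformation is applied first, so its matrix sits rightmost in the product.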
5. A craniocerebral target localization device, comprising:
a first capture module, configured to capture the head of a subject from a plurality of poses with a structured light camera to obtain a plurality of initial point cloud images, wherein the structured light camera is mounted at the end of a robotic arm and the base of the robotic arm is fixed;
a transformation module, configured to transform the point cloud in each initial point cloud image from the original camera coordinate system to a common robotic arm base coordinate system, using the transformation between the robotic arm base coordinate system and the end coordinate system and the transformation between the robotic arm end coordinate system and the original camera coordinate system, to obtain a plurality of initial point cloud images in the robotic arm base coordinate system;
a first registration module, configured to register the initial point cloud images in the robotic arm base coordinate system to the same image to obtain a structured light point cloud image;
a first determining module, configured to determine checkerboard marker points based on the structured light point cloud image, and to register an image point cloud of the subject's head to the structured light point cloud image to obtain a first transformation, wherein the first transformation represents the transformation of a point cloud from image space to camera space;
a second determining module, configured to determine a second transformation based on the coordinates of the checkerboard marker points in camera space and the coordinates of the checkerboard marker points in physical space, wherein the second transformation represents the transformation of a point cloud from camera space to physical space;
a third determining module, configured to determine craniocerebral target coordinates in physical space based on the first transformation and the second transformation;
a second capture module, configured to capture the face of the subject at different time points with the structured light camera in the same pose to obtain facial point cloud images, and to take the point cloud in the facial point cloud image captured before the surgical or therapeutic procedure as a reference point cloud;
a second registration module, configured to perform point cloud registration between the facial point cloud in a facial point cloud image and the reference point cloud to obtain a third transformation, wherein the third transformation represents the transformation from the space of the reference point cloud to the space of the facial point cloud;
a calculation module, configured to calculate an average displacement of the facial point cloud based on the third transformation and the reference point cloud;
a first updating module, configured to update the second transformation based on the third transformation if the average displacement of the facial point cloud is greater than or equal to a preset threshold;
a second updating module, configured to update the craniocerebral target coordinates in physical space based on the first transformation and the updated second transformation, and to update the reference point cloud with the facial point cloud at the current time point; and
a correction module, configured to perform displacement correction on the craniocerebral target with updated coordinates based on the updated reference point cloud.
6. The craniocerebral target localization device of claim 5, further comprising a fourth determining module,
wherein the fourth determining module is configured to determine the coordinates of the checkerboard marker points in physical space using a laser ranging sensor.
7. The craniocerebral target localization device of claim 5, wherein the first determining module comprises:
an extraction unit, configured to extract target black squares from the structured light point cloud image, the target black squares being obtained by preprocessing the structured light point cloud image;
a first determining unit, configured to determine the nearest distance between every two target black squares; and
a second determining unit, configured to take the midpoint of the line connecting the two target black squares corresponding to the nearest distance as a checkerboard marker point, provided that the nearest distance is smaller than a preset threshold.
8. The craniocerebral target localization device of claim 5, wherein the third determining module comprises:
a third determining unit, configured to determine craniocerebral target coordinates in image space based on a three-dimensional image of the subject's head; and
a conversion unit, configured to convert the craniocerebral target coordinates in image space into craniocerebral target coordinates in physical space based on the first transformation and the second transformation.
9. An electronic device comprising a memory, a processor, and a computer program stored on the memory and executable on the processor, wherein the processor, when executing the program, implements the craniocerebral target localization method of any one of claims 1 to 4.
10. A non-transitory computer-readable storage medium having stored thereon a computer program, wherein the computer program, when executed by a processor, implements the craniocerebral target localization method of any one of claims 1 to 4.
CN202310850978.6A 2023-07-12 2023-07-12 Craniocerebral target positioning method, device and storage medium Active CN116563297B (en)

Priority Applications (1)

Application Number Priority Date Filing Date Title
CN202310850978.6A CN116563297B (en) 2023-07-12 2023-07-12 Craniocerebral target positioning method, device and storage medium


Publications (2)

Publication Number Publication Date
CN116563297A CN116563297A (en) 2023-08-08
CN116563297B true CN116563297B (en) 2023-10-31

Family

ID=87498666

Family Applications (1)

Application Number Title Priority Date Filing Date
CN202310850978.6A Active CN116563297B (en) 2023-07-12 2023-07-12 Craniocerebral target positioning method, device and storage medium

Country Status (1)

Country Link
CN (1) CN116563297B (en)

Citations (9)

* Cited by examiner, † Cited by third party
Publication number Priority date Publication date Assignee Title
CN106308946A (en) * 2016-08-17 2017-01-11 清华大学 Augmented reality device applied to stereotactic surgical robot and method of augmented reality device
CN109464196A (en) * 2019-01-07 2019-03-15 北京和华瑞博科技有限公司 Using the operation guiding system and registration signal acquisition method of structure light Image registration
CN113870329A (en) * 2021-09-30 2021-12-31 上海寻是科技有限公司 Medical image registration system and method for surgical navigation
CN113902851A (en) * 2021-10-18 2022-01-07 深圳追一科技有限公司 Face three-dimensional reconstruction method and device, electronic equipment and storage medium
CN113993582A (en) * 2019-05-31 2022-01-28 新宁研究院 System and method for reducing skull-induced thermal aberrations during transcranial ultrasound therapy procedures
JP2022039906A (en) * 2020-08-28 2022-03-10 中国計量大学 Multi-sensor combined calibration device and method
CN115439558A (en) * 2022-09-20 2022-12-06 深圳市正浩创新科技股份有限公司 Combined calibration method and device, electronic equipment and computer readable storage medium
CN115810055A (en) * 2022-12-16 2023-03-17 南京信息工程大学 Annular structure light calibration method based on planar checkerboard
CN116392246A (en) * 2023-04-07 2023-07-07 苏州派尼迩医疗科技有限公司 Method and system for registering surgical robot coordinate system and CT machine coordinate system

Family Cites Families (3)

* Cited by examiner, † Cited by third party
Publication number Priority date Publication date Assignee Title
CN110136208B (en) * 2019-05-20 2020-03-17 北京无远弗届科技有限公司 Joint automatic calibration method and device for robot vision servo system
CN115227979A (en) * 2022-07-18 2022-10-25 河南翔宇医疗设备股份有限公司 Control device, system and equipment of transcranial magnetic stimulation equipment
CN115797416A (en) * 2022-12-19 2023-03-14 河北工业大学 Image reconstruction method, device and equipment based on point cloud image and storage medium


Non-Patent Citations (1)

* Cited by examiner, † Cited by third party
Title
SSA-PointNet++: A Semantic Segmentation Network for 3D Point Clouds with Spatial Self-Attention; Cui Yue et al.; Journal of Computer-Aided Design & Computer Graphics; Vol. 34, No. 3; pp. 437-448 *

Also Published As

Publication number Publication date
CN116563297A (en) 2023-08-08

Similar Documents

Publication Publication Date Title
US8437518B2 (en) Registration of electroanatomical mapping points to corresponding image data
CN111599432B (en) Three-dimensional craniofacial image feature point marking analysis system and method
EP3788596B1 (en) Lower to higher resolution image fusion
CN114404039B (en) Tissue drift correction method and device for three-dimensional model, electronic equipment and storage medium
WO2014036222A1 (en) Systems, methods and computer readable storage media storing instructions for integrating fluoroscopy venogram and myocardial images
CN115511960A (en) Method and device for positioning central axis of femur, computer equipment and storage medium
US20210068695A1 (en) Method Providing ECG Analysis Interface and System
US11682187B2 (en) Method and apparatus to classify structures in an image
CN116563297B (en) Craniocerebral target positioning method, device and storage medium
US10182782B2 (en) Evaluation apparatus, evaluation method, and evaluation program
US11508070B2 (en) Method and apparatus to classify structures in an image
CN115116113A (en) Optical navigation method
US20210342655A1 (en) Method And Apparatus To Classify Structures In An Image
CN110772319A (en) Registration method, registration device and computer readable storage medium
US11423553B2 (en) Calibration of image-registration based tracking procedures
US9235888B2 (en) Image data determination method, image processing workstation, target object determination device, imaging device, and computer program product
CN110009666B (en) Method and device for establishing matching model in robot space registration
CN113870324A (en) Method of registering multi-modality images, registering apparatus and computer-readable storage medium thereof
Ali et al. Automated fiducial point selection for reducing registration error in the co-localisation of left atrium electroanatomic and imaging data
CN115775611B (en) Puncture operation planning system
CN116630383B (en) Evaluation method and device for image registration, electronic equipment and storage medium
CN115880469B (en) Registration method of surface point cloud data and three-dimensional image
EP4345747A1 (en) Medical image data processing technique
CN116862963A (en) Registration method and device
CN114782537A (en) Human carotid artery positioning method and device based on 3D vision

Legal Events

Date Code Title Description
PB01 Publication
SE01 Entry into force of request for substantive examination
GR01 Patent grant