CN114098980B - Camera pose adjustment method, space registration method, system and storage medium - Google Patents


Info

Publication number
CN114098980B
CN114098980B (application CN202111400891.6A)
Authority
CN
China
Prior art keywords
camera
pose
coordinate system
target object
determining
Prior art date
Legal status (The legal status is an assumption and is not a legal conclusion. Google has not performed a legal analysis and makes no representation as to the accuracy of the status listed.)
Active
Application number
CN202111400891.6A
Other languages
Chinese (zh)
Other versions
CN114098980A (en
Inventor
吴童
Current Assignee (The listed assignees may be inaccurate. Google has not performed a legal analysis and makes no representation or warranty as to the accuracy of the list.)
Wuhan United Imaging Zhirong Medical Technology Co Ltd
Original Assignee
Wuhan United Imaging Zhirong Medical Technology Co Ltd
Priority date (The priority date is an assumption and is not a legal conclusion. Google has not performed a legal analysis and makes no representation as to the accuracy of the date listed.)
Filing date
Publication date
Application filed by Wuhan United Imaging Zhirong Medical Technology Co Ltd filed Critical Wuhan United Imaging Zhirong Medical Technology Co Ltd
Priority to CN202111400891.6A priority Critical patent/CN114098980B/en
Publication of CN114098980A publication Critical patent/CN114098980A/en
Priority to EP22806750.0A priority patent/EP4321121A1/en
Priority to PCT/CN2022/092003 priority patent/WO2022237787A1/en
Priority to US18/506,980 priority patent/US20240075631A1/en
Application granted granted Critical
Publication of CN114098980B publication Critical patent/CN114098980B/en
Active legal-status Critical Current
Anticipated expiration legal-status Critical

Classifications

    • A - HUMAN NECESSITIES
    • A61 - MEDICAL OR VETERINARY SCIENCE; HYGIENE
    • A61B - DIAGNOSIS; SURGERY; IDENTIFICATION
    • A61B 34/00 - Computer-aided surgery; Manipulators or robots specially adapted for use in surgery
    • A61B 34/30 - Surgical robots
    • A - HUMAN NECESSITIES
    • A61 - MEDICAL OR VETERINARY SCIENCE; HYGIENE
    • A61B - DIAGNOSIS; SURGERY; IDENTIFICATION
    • A61B 90/00 - Instruments, implements or accessories specially adapted for surgery or diagnosis and not covered by any of the groups A61B 1/00 - A61B 50/00, e.g. for luxation treatment or for protecting wound edges
    • A61B 90/36 - Image-producing devices or illumination devices not otherwise provided for
    • A61B 90/361 - Image-producing devices, e.g. surgical cameras

Landscapes

  • Health & Medical Sciences (AREA)
  • Surgery (AREA)
  • Life Sciences & Earth Sciences (AREA)
  • Engineering & Computer Science (AREA)
  • Heart & Thoracic Surgery (AREA)
  • Biomedical Technology (AREA)
  • Nuclear Medicine, Radiotherapy & Molecular Imaging (AREA)
  • Medical Informatics (AREA)
  • Molecular Biology (AREA)
  • Animal Behavior & Ethology (AREA)
  • General Health & Medical Sciences (AREA)
  • Public Health (AREA)
  • Veterinary Medicine (AREA)
  • Pathology (AREA)
  • Oral & Maxillofacial Surgery (AREA)
  • Robotics (AREA)
  • Manipulator (AREA)

Abstract

The application relates to a camera pose adjustment method, a spatial registration method, a system, and a storage medium. The camera pose adjustment method includes: acquiring target object image data captured by a camera; determining image feature points of the target object according to the target object image data; acquiring the scanning feature points corresponding to the image feature points in a three-dimensional scanning image of a standard face model; determining, through coordinate transformation, the target pose of the camera in a base coordinate system according to the image feature points, the scanning feature points, and the camera internal parameters; and adjusting the camera pose according to the target pose of the camera in the base coordinate system. In this way, the registration camera is automatically positioned to an optimal position, where higher-precision point cloud data can be acquired, solving the problem of low precision of the acquired point cloud data and improving the precision of the facial structured-light registration scheme for neurosurgery.

Description

Camera pose adjustment method, space registration method, system and storage medium
Technical Field
The present application relates to the field of robots, and in particular, to a camera pose adjustment method, a spatial registration method, a spatial registration system, and a storage medium.
Background
In recent years, surgical robots have found wide application in medical fields such as orthopedics, neurosurgery, and thoracoabdominal interventions. Generally, a surgical robot includes a mechanical arm with a multi-degree-of-freedom structure; the mechanical arm may include a base joint at the mechanical arm base and an end joint at the mechanical arm flange, where the flange is fixedly connected with an end tool, such as an electrode needle, a puncture needle, an injector, an ablation needle, or another surgical tool.
In neurosurgical robots, spatial registration between the CT image coordinate system and the robot end-tool coordinate system is implemented using facial structured-light registration. In the registration workflow, the camera is mounted at the robot end tool; the doctor drags the camera to aim at the patient's face, and the camera begins facial structured-light registration after acquiring point cloud data of the patient's face. Because the doctor pays no attention to the imaging characteristics of the camera while dragging it, it is difficult to reach the camera pose that gives the best accuracy during facial structured-light registration, resulting in low accuracy of the acquired point cloud data.
Disclosure of Invention
The embodiments provide a camera pose adjustment method, a spatial registration method, a system, and a storage medium, to solve the problem of the low precision of point cloud data acquired in the related art.
In a first aspect, in this embodiment, there is provided a camera pose adjustment method applied to a robot system including a robot arm and a camera mounted to the robot arm, the method including:
acquiring target object image data captured by the camera;
determining image feature points of the target object according to the target object image data;
acquiring the scanning feature points corresponding to the image feature points in a three-dimensional scanning image of the standard face model;
determining the target pose of the camera in a base coordinate system through coordinate transformation according to the image feature points, the scanning feature points, and the camera internal parameters;
and adjusting the pose of the camera according to the target pose of the camera in the base coordinate system.
In some embodiments, the determining the target pose of the camera in the base coordinate system according to the image feature points, the scanning feature points and the camera internal parameters through coordinate transformation includes:
determining the initial pose of the target object in a camera coordinate system according to the image characteristic points, the scanning characteristic points and the camera internal parameters;
determining a coordinate transformation relationship between the camera and the base coordinate system;
and determining the target pose of the camera in the base coordinate system according to the initial pose and the coordinate transformation relation.
In some of these embodiments, the coordinate transformation relationship between the camera and the base coordinate system is determined by:
acquiring a first transformation matrix between the camera and the end coordinate system of the mechanical arm;
acquiring a second transformation matrix between the end coordinate system of the mechanical arm and the base coordinate system;
and determining the coordinate transformation relation between the camera and the base coordinate system according to the first transformation matrix and the second transformation matrix.
In some of these embodiments, the method further comprises:
determining a preset distance between the target object and the camera;
after the determining the initial pose of the target object in the camera coordinate system, the method further includes:
determining a third transformation matrix according to the preset distance;
and determining the target pose of the target object in the camera coordinate system at the preset distance according to the third transformation matrix and the initial pose.
In some embodiments, the mechanical arm further includes an end joint where the mechanical arm connector for an end tool is located, and the adjusting the camera pose according to the target pose of the camera in the base coordinate system includes:
determining the pose of the end joint in the base coordinate system through coordinate transformation according to the target pose of the camera in the base coordinate system;
and adjusting the pose of the camera according to the pose of the end joint in the base coordinate system.
In some of these embodiments, the determining the pose of the end joint in the base coordinate system through coordinate transformation includes:
acquiring a first transformation matrix between the camera and the end coordinate system of the mechanical arm;
and determining the pose of the end joint in the base coordinate system according to the inverse matrix of the first transformation matrix and the target pose of the camera in the base coordinate system.
In some embodiments, the acquiring the target object image data captured by the camera includes:
judging whether the complete target object is presented in the field of view of the camera;
and if not, adjusting the mechanical arm so that the complete target object is presented in the field of view of the camera.
In a second aspect, in this embodiment, there is provided a spatial registration method, including:
According to the camera pose adjustment method of the first aspect, automatic camera positioning is realized;
capturing a target object image by a camera;
Registration of the target object with the planning image is achieved through the target object image.
In a third aspect, in this embodiment, a spatial registration system is provided, including a mechanical arm, a camera, and a processor, where the camera is mounted at the end of the mechanical arm and the processor is connected to the mechanical arm and the camera respectively; when running, the processor executes the spatial registration method of the second aspect.
In a fourth aspect, in this embodiment, there is provided a storage medium having stored thereon a computer program which, when executed by a processor, implements the camera pose adjustment method described in the first aspect or the spatial registration method described in the second aspect.
Compared with the related art, the camera pose adjustment method, spatial registration method, system, and storage medium provided in the embodiments acquire target object image data captured by the camera, determine image feature points of the target object according to the image data, acquire the scanning feature points corresponding to the image feature points in a three-dimensional scanning image of a standard face model, determine the target pose of the camera in a base coordinate system through coordinate transformation according to the image feature points, the scanning feature points, and the camera internal parameters, and adjust the camera pose according to that target pose. The registration camera is thereby automatically positioned to an optimal position, where higher-precision point cloud data can be acquired, solving the problem of low precision of the acquired point cloud data and improving the precision of the facial structured-light registration scheme for neurosurgery.
The details of one or more embodiments of the application are set forth in the accompanying drawings and the description below. Other features, objects, and advantages of the application will become apparent from the description and the drawings.
Drawings
The accompanying drawings, which are included to provide a further understanding of the application and are incorporated in and constitute a part of this specification, illustrate embodiments of the application and together with the description serve to explain the application and do not constitute a limitation on the application. In the drawings:
Fig. 1 is a hardware configuration block diagram of an application terminal of a camera pose adjustment method according to an embodiment of the present application;
FIG. 2 is a flow chart of a camera pose adjustment method according to an embodiment of the present application;
FIG. 3 is a schematic front view of a standard face model according to an embodiment of the present application;
FIG. 4 is a schematic side view of a standard face model according to an embodiment of the present application;
FIG. 5 is a schematic diagram of facial contour points according to an embodiment of the present application;
FIG. 6 is a schematic illustration of a robotic arm pose according to an embodiment of the application;
FIG. 7 is a schematic illustration of another robotic arm pose according to an embodiment of the application;
FIG. 8 is a schematic diagram of a spatial registration system according to an embodiment of the application;
FIG. 9 is a schematic diagram of another spatial registration system according to an embodiment of the application;
FIG. 10 is a flowchart of another camera pose adjustment method according to an embodiment of the present application;
FIG. 11 is a flow chart of a facial structured light registration method according to an embodiment of the present application;
FIG. 12 is a schematic view of facial feature points acquired prior to camera pose adjustment according to an embodiment of the present application;
FIG. 13 is a schematic view of facial feature points obtained after camera pose adjustment according to an embodiment of the present application;
fig. 14 is a schematic diagram illustrating a pose description of the swing of a mechanical arm according to an embodiment of the present application.
Detailed Description
The present application will be described and illustrated with reference to the accompanying drawings and examples for a clearer understanding of the objects, technical solutions and advantages of the present application.
Unless defined otherwise, technical or scientific terms used herein have the same meaning as commonly understood by one of ordinary skill in the art to which this application belongs. The terms "a," "an," "the," "these," and similar terms in this application do not denote a limitation of quantity and may be singular or plural. The terms "comprising," "including," "having," and any variations thereof are intended to cover a non-exclusive inclusion; for example, a process, method, system, article, or apparatus that comprises a list of steps or modules (units) is not limited to those steps or modules (units), but may include other steps or modules (units) not listed or inherent to such process, method, article, or apparatus. The terms "connected," "coupled," and the like in this application are not limited to physical or mechanical connections, but may include electrical connections, whether direct or indirect. The term "plurality" as used herein means two or more. "And/or" describes an association relationship between associated objects, meaning that three relationships may exist; for example, "A and/or B" may mean: A exists alone, A and B exist together, or B exists alone. Generally, the character "/" indicates an "or" relationship between the associated objects. The terms "first," "second," "third," and the like merely distinguish similar objects and do not denote a particular order.
The method embodiments provided herein may be executed on a terminal, a computer, or a similar computing device. Taking execution on a terminal as an example, fig. 1 is a hardware block diagram of a terminal running the camera pose adjustment method of this embodiment. As shown in fig. 1, the terminal may include one or more processors 102 (only one is shown in fig. 1) and a memory 104 for storing data, where the processor 102 may include, but is not limited to, a microcontroller unit (MCU), a field-programmable gate array (FPGA), or the like. The terminal may also include a transmission device 106 for communication functions and an input/output device 108. Those skilled in the art will appreciate that the structure shown in fig. 1 is merely illustrative and does not limit the structure of the terminal; for example, the terminal may include more or fewer components than shown in fig. 1, or have a different configuration.
The memory 104 may be used to store a computer program, for example, a software program of application software and a module, such as a computer program corresponding to the camera pose adjustment method in the present embodiment, and the processor 102 executes the computer program stored in the memory 104 to perform various functional applications and data processing, that is, to implement the above-described method. Memory 104 may include high-speed random access memory, and may also include non-volatile memory, such as one or more magnetic storage devices, flash memory, or other non-volatile solid-state memory. In some examples, the memory 104 may further include memory remotely located relative to the processor 102, which may be connected to the terminal via a network. Examples of such networks include, but are not limited to, the internet, intranets, local area networks, mobile communication networks, and combinations thereof.
The transmission device 106 is used to receive or transmit data via a network. The network includes a wireless network provided by a communication provider of the terminal. In one example, the transmission device 106 includes a network adapter (Network Interface Controller, simply referred to as a NIC) that can connect to other network devices through a base station to communicate with the internet. In one example, the transmission device 106 may be a Radio Frequency (RF) module, which is configured to communicate with the internet wirelessly.
In this embodiment, a camera pose adjustment method is provided, and the camera pose adjustment method is applied to a robot system, where the robot system includes a mechanical arm and a camera mounted on the mechanical arm, and fig. 2 is a flowchart of a camera pose adjustment method according to an embodiment of the present application, as shown in fig. 2, and the flowchart includes the following steps:
in step S201, target object image data captured by a camera is acquired.
Step S202, determining image feature points of the target object according to the target object image data.
Step S203, corresponding scanning characteristic points of the image characteristic points in the three-dimensional scanning image of the standard face model are obtained.
In this embodiment, the standard face model may be created according to the head features of the target object, or downloaded from an open-source website; the manner of obtaining the standard face model is not limited here. The front view and the side view of the standard face model are shown in fig. 3 and fig. 4, respectively.
Step S204, determining the target pose of the camera in the base coordinate system through coordinate transformation according to the image feature points, the scanning feature points and the camera internal parameters.
In this embodiment, the base coordinate system is located on the base of the robotic arm.
Step S205, the pose of the camera is adjusted according to the target pose of the camera in the base coordinate system.
Through the above steps, the angle by which the face pose of the target object must be adjusted to reach the pose of the standard face model is determined from the scanning feature points corresponding to the image feature points in the standard face model, the current image feature points of the target object, and the camera internal parameters; that is, the target pose of the camera in the base coordinate system is determined through coordinate transformation. The camera pose can then be adjusted according to this target pose, automatically positioning the registration camera to the optimal position, where higher-precision point cloud data can be acquired. This solves the problem of low precision of the acquired point cloud data and improves the precision of the facial structured-light registration scheme for neurosurgery.
In some embodiments, step S204, determining, by coordinate transformation, a target pose of the camera in the base coordinate system according to the image feature points, the scan feature points, and the camera internal parameters, includes the steps of:
Determining the initial pose of the target object in a camera coordinate system according to the image feature points, the scanning feature points and the camera internal parameters;
determining a coordinate transformation relation between the camera and the base coordinate system;
and determining the target pose of the camera in the base coordinate system according to the initial pose and the coordinate transformation relation.
In this embodiment, determining the initial pose of the target object in the camera coordinate system is converted into solving a PnP (Perspective-n-Point) problem, which is the following object-positioning problem: assuming the camera follows the pinhole model and has been calibrated, an image of n spatial points whose coordinates are known in the object coordinate system is captured, and the pixel coordinates of the corresponding n image points are known; determine the coordinates of the n spatial points in the camera coordinate system.
For example, the above image feature points are facial contour points of the target object. After the facial contour points are identified, the pixel coordinates corresponding to them can be obtained; the n identified pixel coordinates are recorded as Ai(xi, yi) (i = 1, 2, 3, ..., n), and the n corresponding spatial coordinates on the standard face model are recorded as Bj(Xj, Yj, Zj) (j = 1, 2, 3, ..., n). In this example, the standard face model spatial coordinates are preset fixed values, and the camera internal parameters form a known matrix M. The initial pose of the target object in the camera coordinate system, namely the pose transformation matrix T0 from the face coordinate system of the target object to the camera coordinate system, can then be determined from the pinhole projection relation si · [xi, yi, 1]^T = M · [R | t] · [Xi, Yi, Zi, 1]^T (i = 1, 2, 3, ..., n), where R and t are the rotation and translation components of T0 and si is a scale factor; solving this system of equations, i.e. the PnP problem, yields T0.
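As an illustrative sketch (not part of the patent text), the pinhole projection relation underlying this PnP step can be checked numerically; the intrinsic matrix, pose, and model points below are hypothetical placeholder values, not calibrated ones:

```python
import numpy as np

def project(M, T, points_3d):
    """Project 3D model points B into pixel coordinates A via the
    pinhole relation s * [x, y, 1]^T = M @ (R @ B + t)."""
    pts_h = np.hstack([points_3d, np.ones((len(points_3d), 1))])
    cam = (T @ pts_h.T)[:3]          # points expressed in the camera frame
    uv = M @ cam                     # apply the intrinsic matrix
    return (uv[:2] / uv[2]).T        # perspective divide -> pixels

# Hypothetical intrinsics M and ground-truth pose T0 (face frame -> camera frame).
M = np.array([[800.0,   0.0, 320.0],
              [  0.0, 800.0, 240.0],
              [  0.0,   0.0,   1.0]])
T0 = np.eye(4)
T0[:3, 3] = [0.0, 0.0, 0.5]          # face 0.5 m in front of the camera

B = np.array([[0.03, 0.00, 0.0],     # two example contour points Bj
              [0.00, 0.04, 0.0]])
A = project(M, T0, B)                # pixel coordinates Ai
```

A PnP solver (for example, OpenCV's solvePnP) inverts this relation: given A, B, and M, it recovers T0.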
According to the above steps, the angle by which the face pose of the target object must be adjusted to reach the pose of the standard face model in the camera coordinate system is determined from the scanning feature points corresponding to the image feature points, the current image feature points of the target object, and the camera internal parameters; that is, the initial pose of the target object in the camera coordinate system is determined through coordinate transformation, and the target pose of the camera in the base coordinate system is then determined from this initial pose. The camera pose can therefore be adjusted according to the target pose of the camera in the base coordinate system, automatically positioning the registration camera to the optimal position, where higher-precision point cloud data can be acquired. This solves the problem of low precision of the acquired point cloud data and improves the precision of the facial structured-light registration scheme for neurosurgery.
In some of these embodiments, determining a coordinate transformation relationship between the camera and the base coordinate system includes:
acquiring a first transformation matrix between the camera and the end coordinate system of the mechanical arm;
acquiring a second transformation matrix between the end coordinate system of the mechanical arm and the base coordinate system;
and determining the coordinate transformation relation between the camera and the base coordinate system according to the first transformation matrix and the second transformation matrix.
In this embodiment, the first transformation matrix between the camera and the end coordinate system of the mechanical arm may be obtained through a hand-eye calibration algorithm, and the second transformation matrix between the end coordinate system of the mechanical arm and the base coordinate system may be fed back directly by the controller of the robot system; the manner of obtaining the first transformation matrix and the second transformation matrix is not specifically limited here. Denoting the first transformation matrix (camera to arm end) by Tce and the second transformation matrix (arm end to base) by Teb, the coordinate transformation relation T between the camera and the base coordinate system is determined by the equation T = Teb · Tce.
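The chaining of the first and second transformation matrices can be sketched numerically; the translations below are hypothetical placeholders, not calibrated values:

```python
import numpy as np

def translation(tx, ty, tz):
    """Homogeneous 4x4 matrix for a pure translation."""
    T = np.eye(4)
    T[:3, 3] = [tx, ty, tz]
    return T

# Hypothetical calibration results:
T_ce = translation(0.00, 0.05, 0.10)   # camera -> arm end (hand-eye calibration)
T_eb = translation(0.40, 0.00, 0.60)   # arm end -> base (from the controller)

# Coordinate transformation between camera and base: T = Teb * Tce
T_cb = T_eb @ T_ce
```

Mapping a point through T_cb gives the same result as mapping it through T_ce and then T_eb, which is the defining property of the chained relation.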
In this way, the coordinate transformation relation between the camera and the base coordinate system can be determined accurately, and the target pose of the camera in the base coordinate system can then be determined from it. The camera pose can thus be adjusted according to that target pose, automatically positioning the registration camera to the optimal position, where higher-precision point cloud data can be acquired; this solves the problem of low precision of the acquired point cloud data and improves the precision of the facial structured-light registration scheme for neurosurgery.
It can be appreciated that the distance between the target object and the camera affects the accuracy of the acquired point cloud data. On this basis, the application provides a method for determining the distance between the target object and the camera, including the following steps:
if the complete target object is presented in the field of view of the camera, dragging the camera along the direction of its optical axis Zc and acquiring the point cloud data of the target object captured by the camera at different distances, thereby obtaining face point cloud data at each distance, where the distance is the distance between the camera and the target face;
and determining the optimal distance between the target object and the camera according to the accuracy and quality of the face point cloud data at each distance.
By the method, the optimal distance between the target object and the camera can be accurately determined, so that the accuracy of the obtained point cloud data can be further improved.
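The selection step above reduces to picking the candidate distance whose point cloud scores best; a toy sketch with made-up quality scores (all values hypothetical):

```python
# Hypothetical quality scores of the face point cloud measured at several
# candidate camera-to-face distances (metres); higher means better accuracy.
scores = {0.35: 0.81, 0.40: 0.90, 0.45: 0.94, 0.50: 0.88, 0.55: 0.79}

# The optimal (preset) distance is the candidate with the best score.
best_distance = max(scores, key=scores.get)
```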
In some of these embodiments, the camera pose adjustment method further includes:
Determining a preset distance between a target object and a camera;
in this embodiment, when the distance between the target object and the camera is a preset distance, the preferred point cloud data can be obtained, and the preset distance in this embodiment is, for example, the optimal distance determined in the above manner.
After the step of determining the initial pose of the target object in the camera coordinate system, the method further includes:
determining a third transformation matrix according to the preset distance;
and determining the target pose of the target object in the camera coordinate system at the preset distance according to the third transformation matrix and the initial pose.
In the present embodiment, assuming that the preset distance is H, the third transformation matrix P is a pure translation of H along the camera optical axis, i.e. the 4 x 4 homogeneous matrix P = [[1, 0, 0, 0], [0, 1, 0, 0], [0, 0, 1, H], [0, 0, 0, 1]], written row by row.
Assume that the initial pose of the target object in the camera coordinate system is T0 and that the target pose of the target object in the camera coordinate system at the preset distance is T0'. The target pose at the preset distance is determined from the third transformation matrix P and the initial pose T0 by retaining the rotation component of T0 and replacing its translation component with the translation (0, 0, H) encoded in P, so that the target object lies at the distance H along the camera optical axis.
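As a sketch, assuming the third transformation matrix P encodes a pure translation of H along the optical axis Zc and the target pose retains the rotation of the initial pose (all numbers hypothetical):

```python
import numpy as np

H = 0.45                              # hypothetical preset distance in metres

# Third transformation matrix P: pure translation of H along the optical axis.
P = np.eye(4)
P[2, 3] = H

# Hypothetical initial pose T0 of the face in the camera frame:
# slightly off-axis and farther away than the preset distance.
T0 = np.eye(4)
T0[:3, 3] = [0.02, -0.01, 0.60]

# Target pose: keep the rotation of T0, take the translation (0, 0, H) from P.
T_target = T0.copy()
T_target[:3, 3] = P[:3, 3]
```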
In this way, the target pose of the target object in the camera coordinate system at the preset distance is determined, that is, the pose of the target object in the camera coordinate system when the distance between the target object and the camera is the optimal distance. After the camera pose is adjusted according to this target pose, more accurate point cloud data can be acquired, improving the precision of the facial structured-light registration scheme for neurosurgery.
In some embodiments, the mechanical arm further includes an end joint where the mechanical arm connector for the end tool is located, and step S205, adjusting the pose of the camera according to the target pose of the camera in the base coordinate system, includes:
determining the pose of the end joint in the base coordinate system through coordinate transformation according to the target pose of the camera in the base coordinate system;
and adjusting the pose of the camera according to the pose of the end joint in the base coordinate system.
In this way, the pose of the end joint in the base coordinate system can be determined accurately, so the camera pose can be adjusted precisely through the end joint; that is, by adjusting the pose of the end joint, the camera reaches the optimal target pose. The registration camera is thereby automatically positioned to the optimal position, where higher-precision point cloud data can be acquired, solving the problem of low precision of the acquired point cloud data and improving the precision of the facial structured-light registration scheme for neurosurgery.
In some of these embodiments, determining the pose of the end joint in the base coordinate system by coordinate transformation includes:
acquiring a first transformation matrix between the camera and the end coordinate system of the mechanical arm;
and determining the pose of the end joint in the base coordinate system according to the inverse matrix of the first transformation matrix and the target pose of the camera in the base coordinate system.
In the present embodiment, it is assumed that the first transformation matrix between the camera and the end coordinate system of the mechanical arm is Tce and that the target pose of the camera in the base coordinate system is Tcb. Since Tcb = Teb · Tce, where Teb is the pose of the end joint in the base coordinate system, Teb can be determined by the equation Teb = Tcb · Tce^(-1).
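The composition described here can be sketched numerically; the two matrices below are hypothetical placeholders:

```python
import numpy as np

# Hypothetical first transformation matrix Tce (camera -> arm end).
T_ce = np.eye(4)
T_ce[:3, 3] = [0.00, 0.05, 0.10]

# Hypothetical target pose of the camera in the base frame, Tcb.
T_cb = np.eye(4)
T_cb[:3, 3] = [0.40, 0.10, 0.70]

# Pose of the end joint in the base frame: Teb = Tcb * inverse(Tce)
T_eb = T_cb @ np.linalg.inv(T_ce)
```

Consistency check: composing T_eb with T_ce recovers the camera pose T_cb, which is the relation Tcb = Teb · Tce.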
As above, the pose of the end joint in the base coordinate system can be determined accurately, so the camera can be brought to the optimal target pose by adjusting the end joint. The registration camera is automatically positioned to the optimal position, where higher-precision point cloud data can be acquired, improving the precision of the facial structured-light registration scheme for neurosurgery.
It can be understood that after the doctor completes the installation of the camera, the mechanical arm is at a random initial pose, as shown in fig. 6, and the patient's complete face is not presented in the camera's field of view. The pose of the end of the mechanical arm therefore needs to be adjusted locally first, so that the patient's complete facial feature information appears in the camera's field of view.
In some of these embodiments, before the acquiring of the target object image data captured by the camera, the method further includes:
judging whether the complete target object is presented in the field of view of the camera;
if not, adjusting the mechanical arm so that the complete target object is presented in the field of view of the camera.
In this embodiment, the doctor may drag the camera at the end of the mechanical arm, or adjust the mechanical arm by other schemes, until the camera views the complete target object; this embodiment does not specifically limit how the mechanical arm is adjusted. As shown in fig. 7, after the mechanical arm is adjusted, the complete target object is presented in the camera's field of view.
In this way, the complete target object is presented in the camera's field of view before the target object image data are acquired, so the point cloud data of the target object can be acquired more accurately. This addresses the low precision of the acquired point cloud data and improves the precision of the neurosurgical facial structured-light registration scheme.
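The "complete target object in view" judgment above can be approximated by projecting known 3D facial landmarks through the camera intrinsics and testing whether they all land inside the image. This is a minimal sketch under stated assumptions: the function name, the landmark set, the pinhole model without distortion, and the pixel margin are all illustrative, not the patent's actual test.

```python
import numpy as np

def face_fully_in_view(landmarks_cam, K, width, height, margin=10):
    """Heuristic completeness check: every 3D facial landmark
    (given in camera coordinates) must project inside the image,
    with a small pixel margin to spare.

    landmarks_cam: (N, 3) points in the camera frame.
    K: 3x3 intrinsic matrix; width, height: image size in pixels.
    """
    pts = np.asarray(landmarks_cam, dtype=float)
    if np.any(pts[:, 2] <= 0):          # a landmark behind the camera
        return False
    uv = (K @ pts.T).T                  # pinhole projection
    uv = uv[:, :2] / uv[:, 2:3]         # normalize by depth
    inside = ((uv[:, 0] >= margin) & (uv[:, 0] <= width - margin) &
              (uv[:, 1] >= margin) & (uv[:, 1] <= height - margin))
    return bool(np.all(inside))
```

If the check fails, the system (or the doctor) would adjust the mechanical arm and re-test, matching the judge-then-adjust loop described above.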
In addition, the application also provides a space registration method, which comprises the following steps:
according to the camera pose adjusting method, automatic camera positioning is realized;
capturing a target object image by a camera;
Registration of the target object with the planning image is achieved through the target object image.
In this way, the camera can be positioned automatically and more accurately according to the camera pose adjustment method, which addresses the low precision of the acquired point cloud data and improves the precision of the neurosurgical facial structured-light registration scheme.
For example, the above spatial registration method may be applied to a spatial registration system as shown in fig. 7. The spatial registration system includes a camera and a mechanical arm: the camera is rigidly mounted at the tool coordinate system at the end of the mechanical arm, the rigid transformation matrix between the camera and the mechanical arm is a known parameter, and the camera is aimed at the patient's face to acquire point cloud data for spatial registration.
As shown in fig. 9, the present application further provides a spatial registration system. The spatial registration system includes a mechanical arm 91, a camera 92, and a processor 93; the camera 92 is mounted at the end of the mechanical arm, the processor 93 is connected to the mechanical arm 91 and the camera 92 respectively, and the processor 93 executes the above spatial registration method when running.
This embodiment also provides a camera pose adjustment method. Fig. 10 is a flowchart of another camera pose adjustment method according to an embodiment of the present application, applied to a robot system including a mechanical arm and a camera mounted on the mechanical arm. As shown in fig. 10, the flow includes the following steps:
in step S1001, target object image data captured by a camera is acquired.
Step S1002, determining image feature points of the target object according to the target object image data.
In step S1003, corresponding scanning feature points of the image feature points in the three-dimensional scanning image of the standard face model are acquired.
Step S1004, determining the initial pose of the target object in the camera coordinate system according to the image feature points, the scanning feature points and the camera internal parameters.
In step S1005, a coordinate transformation relationship between the camera and the base coordinate system is determined.
Step S1006, determining the target pose of the camera in the base coordinate system according to the initial pose and the coordinate transformation relation.
Step S1007, adjusting the pose of the camera according to the target pose of the camera in the base coordinate system.
According to the above steps, the angle by which the target object's face pose must be adjusted to reach the pose of the standard face model in the camera coordinate system is determined from the scanning feature points corresponding to the target object's image feature points in the standard face model, the current image feature points of the target object, and the camera internal parameters; that is, the initial pose of the target object in the camera coordinate system is determined by coordinate transformation. The target pose of the camera in the base coordinate system is then determined from this initial pose, so the camera pose can be adjusted accordingly. The registration camera is thereby positioned automatically at the optimal position, where point cloud data of higher precision can be acquired. This addresses the low precision of the acquired point cloud data and improves the precision of the neurosurgical facial structured-light registration scheme.
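Step S1004 is a perspective-n-point (PnP) problem: given 2D image feature points, their corresponding 3D scanning feature points, and the camera internal parameters, recover the pose of the face in the camera coordinate system. The text does not name a solver, so the following is a hedged sketch of one classical choice, a basic Direct Linear Transform (DLT); it assumes at least six well-distributed, noise-free correspondences, whereas a production system would use a robust iterative PnP solver.

```python
import numpy as np

def dlt_pnp(pts3d, pts2d, K):
    """Recover an object's pose in the camera frame from 2D-3D
    correspondences with a basic DLT (illustrative; not the patent's
    solver).  pts3d: (N, 3) scanning feature points, pts2d: (N, 2)
    pixel coordinates, K: 3x3 intrinsic matrix.  Needs N >= 6 points
    in general (non-coplanar) position."""
    rows = []
    for (X, Y, Z), (u, v) in zip(np.asarray(pts3d, float),
                                 np.asarray(pts2d, float)):
        rows.append([X, Y, Z, 1, 0, 0, 0, 0, -u*X, -u*Y, -u*Z, -u])
        rows.append([0, 0, 0, 0, X, Y, Z, 1, -v*X, -v*Y, -v*Z, -v])
    _, _, Vt = np.linalg.svd(np.asarray(rows))
    P = Vt[-1].reshape(3, 4)                 # projection matrix, up to scale
    M = np.linalg.inv(K) @ P                 # ~ [R | t], still up to scale
    M /= np.mean(np.linalg.norm(M[:, :3], axis=0))
    if np.linalg.det(M[:, :3]) < 0:          # resolve the global sign
        M = -M
    U, _, Vt2 = np.linalg.svd(M[:, :3])      # snap to the nearest rotation
    T = np.eye(4)
    T[:3, :3] = U @ Vt2
    T[:3, 3] = M[:, 3]
    return T                                 # initial pose of face in camera
```

The returned 4x4 matrix plays the role of the "initial pose of the target object in the camera coordinate system" of step S1004, which steps S1005-S1007 then convert into the base coordinate system.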
It should be noted that the steps illustrated in the above-described flow or flow diagrams of the figures may be performed in a computer system, such as a set of computer-executable instructions, and that, although a logical order is illustrated in the flow diagrams, in some cases, the steps illustrated or described may be performed in an order other than that illustrated herein. For example, step S1004 and step S1005 may be interchanged.
In this embodiment, a facial structured-light registration method is also provided. Fig. 11 is a flowchart of a facial structured-light registration method according to an embodiment of the present application, applied to a robot system including a mechanical arm and a camera mounted on the mechanical arm, the mechanical arm further including an end joint at which the mechanical arm connector for an end tool is located. As shown in fig. 11, the flow includes the following steps:
Step S1101, entering a facial structured light registration module.
Step S1102, adjusting the pose of the end of the mechanical arm.
In this embodiment, after the doctor completes the installation of the camera, the mechanical arm is at a random initial pose, as shown in fig. 6, and the patient's complete face is not presented in the camera's field of view. The pose of the end of the mechanical arm therefore needs to be adjusted locally first, so that the patient's complete facial feature information appears in the camera's field of view.
In step S1103, it is determined whether the complete face contour can be recognized.
In this embodiment, if a complete face contour can be identified, the process proceeds to step S1104, otherwise, the process proceeds to step S1102.
In step S1104, facial feature recognition is performed to obtain the pixel coordinates of the facial feature points.
Step S1105, obtaining corresponding scanning feature points of the facial feature points in the three-dimensional scanning image of the standard face model.
In step S1106, the target pose of the end joint in the base coordinate system is determined according to the pixel coordinates of the facial feature points and the spatial coordinates of the scanned feature points.
In this embodiment, as shown in fig. 14, the current camera coordinate system is camera_link, the face coordinate system calculated by equation (1) is face_link, and the camera coordinate system at the optimal distance H between the target object and the camera is face_link_view; the positioning motion of the camera is from camera_link to face_link_view. Denote the target pose of the end joint in the base coordinate system after the mechanical arm is positioned by T_be*, the initial pose of the current end joint in the base coordinate system by T_be, the pose of the face in the camera coordinate system at the optimal distance H by T_cf(H), and the matrix between the camera and the end coordinate system of the mechanical arm by T_ec. T_cf(H) is determined by equation (4), T_ec can be obtained by a hand-eye calibration algorithm, T_be is fed back directly by the controller in the robot system, and the target pose T_be* of the end joint in the base coordinate system is then determined by equations (5) and (6).
Step S1107, the pose of the current end joint in the base coordinate system is adjusted to the target pose of the end joint in the base coordinate system.
In this embodiment, the initial pose of the current end joint in the base coordinate system and the target pose of the end joint in the base coordinate system are determined by the camera pose adjustment method, as shown in fig. 12 and fig. 13: fig. 12 is a schematic view of the initial pose of the end joint in the base coordinate system, and fig. 13 is a schematic view of the target pose of the end joint in the base coordinate system. As can be seen from fig. 12 and fig. 13, the point cloud data acquired at the target pose are more accurate than the point cloud data acquired at the initial pose.
In step S1108, face structured light registration is started.
Through the above steps, the angle by which the target object's face pose must be adjusted to reach the pose of the standard face model in the base coordinate system is determined from the scanning feature points corresponding to the target object's image feature points in the standard face model and the current image feature points of the target object; that is, the target pose of the end joint in the base coordinate system is determined. The camera pose can then be adjusted according to this target pose, so the registration camera is positioned automatically at the optimal position, where point cloud data of higher precision can be acquired. This addresses the low precision of the acquired point cloud data and improves the precision of the neurosurgical facial structured-light registration scheme.
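The composition behind steps S1106-S1107 can be written out under one assumption: the patient's face is stationary in the base coordinate system during positioning, so the face pose expressed through the current end-joint pose must equal the one expressed through the target end-joint pose. Equations (4)-(6) themselves are not reproduced in the text above, so the matrix names and the exact factoring below are assumptions consistent with that constraint, not the patent's own formulas.

```python
import numpy as np

def target_end_joint_pose(T_be, T_ec, T_cf_now, T_cf_view):
    """Target end-joint pose in the base frame (illustrative sketch).

    T_be:      current end joint in base frame (controller feedback)
    T_ec:      camera in end-joint frame (hand-eye calibration)
    T_cf_now:  face in camera frame now (from the feature-point step)
    T_cf_view: face in camera frame at the optimal distance H

    Because the face is fixed in the base frame:
        T_be @ T_ec @ T_cf_now == T_be_target @ T_ec @ T_cf_view
    Solving for T_be_target gives the product returned below.
    """
    inv = np.linalg.inv
    return T_be @ T_ec @ T_cf_now @ inv(T_cf_view) @ inv(T_ec)
```

Driving the end joint to the returned pose moves the camera from camera_link to face_link_view, i.e. it places the face at the desired stand-off pose in the camera frame.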
There is also provided in this embodiment an electronic device comprising a memory having stored therein a computer program and a processor arranged to run the computer program to perform the steps of any of the method embodiments described above.
Optionally, the electronic apparatus may further include a transmission device and an input/output device, where the transmission device is connected to the processor, and the input/output device is connected to the processor.
Alternatively, in the present embodiment, the above-described processor may be configured to execute the following steps by a computer program:
And acquiring target object image data shot by the camera.
And determining the image characteristic points of the target object according to the image data of the target object.
And acquiring corresponding scanning characteristic points of the image characteristic points in the three-dimensional scanning image of the standard face model.
And determining the target pose of the camera in the base coordinate system through coordinate transformation according to the image feature points, the scanning feature points and the camera internal parameters.
And adjusting the pose of the camera according to the target pose of the camera in the base coordinate system.
It should be noted that, specific examples in this embodiment may refer to examples described in the foregoing embodiments and alternative implementations, and are not described in detail in this embodiment.
In addition, in combination with the camera pose adjustment method provided in the above embodiment, a storage medium may be further provided in this embodiment to implement. The storage medium has a computer program stored thereon; the computer program, when executed by a processor, implements any of the camera pose adjustment methods of the above embodiments.
It should be understood that the specific embodiments described herein are merely illustrative of this application and are not intended to be limiting. All other embodiments, which can be made by one of ordinary skill in the art without undue burden from the present disclosure, are within the scope of the present disclosure in accordance with the embodiments provided herein.
It is to be understood that the drawings are merely illustrative of some embodiments of the present application and that it is possible for those skilled in the art to adapt the present application to other similar situations without the need for inventive work. In addition, it should be appreciated that while the development effort might be complex and lengthy, it would nevertheless be a routine undertaking of design, fabrication, or manufacture for those of ordinary skill having the benefit of this disclosure, and thus should not be construed as a departure from the disclosure.
The term "embodiment" in this disclosure means that a particular feature, structure, or characteristic described in connection with the embodiment may be included in at least one embodiment of the application. The appearances of such phrases in various places in the specification are not necessarily all referring to the same embodiment, nor are separate or alternative embodiments mutually exclusive. It will be clear or implicitly understood by those of ordinary skill in the art that the embodiments described in the present application can be combined with other embodiments without conflict.
The foregoing examples illustrate only a few embodiments of the application, which are described in detail and are not to be construed as limiting the scope of the claims. It should be noted that it will be apparent to those skilled in the art that several variations and modifications can be made without departing from the spirit of the application, which are all within the scope of the application. Accordingly, the scope of the application should be assessed as that of the appended claims.

Claims (9)

1. A camera pose adjustment method, characterized by being applied to a robot system including a robot arm and a camera mounted to the robot arm, the method comprising:
acquiring target object image data shot by the camera;
Determining image feature points of the target object according to the target object image data;
Acquiring corresponding scanning characteristic points of the image characteristic points in a three-dimensional scanning image of the standard face model;
According to the image feature points, the scanning feature points and the camera internal parameters, determining an angle required to be adjusted when the face pose of the target object is adjusted to the pose of the standard face model in a camera coordinate system, namely determining the initial pose of the target object in the camera coordinate system through coordinate transformation; the image characteristic points are two-dimensional pixel coordinate points, and the scanning characteristic points are three-dimensional space coordinate points;
Determining a coordinate transformation relationship between the camera and a base coordinate system;
determining a target pose of the camera in the base coordinate system according to the initial pose and the coordinate transformation relation;
And adjusting the pose of the camera according to the target pose of the camera in the base coordinate system.
2. The camera pose adjustment method according to claim 1, characterized in that a coordinate transformation relationship between the camera and the base coordinate system is determined by:
Acquiring a first transformation matrix between the camera and a tail end coordinate system of the mechanical arm;
Acquiring a second transformation matrix between the tail end coordinate system of the mechanical arm and the base coordinate system;
and determining a coordinate transformation relation between the camera and the base coordinate system according to the first transformation matrix and the second transformation matrix.
3. The camera pose adjustment method according to claim 1, characterized in that the method further comprises:
Determining a preset distance between the target object and the camera;
after the determining the initial pose of the target object in the camera coordinate system, the method further comprises:
determining a third transformation matrix according to the preset distance;
And determining the target pose of the target object in a camera coordinate system under the preset distance according to the third transformation matrix and the initial pose.
4. The camera pose adjustment method according to claim 1, wherein the mechanical arm further comprises an end joint at which a mechanical arm connector for an end tool is located, and the adjusting the camera pose according to the target pose of the camera in a base coordinate system comprises:
According to the target pose of the camera in the base coordinate system, determining the pose of the tail end joint in the base coordinate system through coordinate transformation;
and adjusting the pose of the camera according to the pose of the tail end joint in the base coordinate system.
5. The camera pose adjustment method according to claim 4, characterized in that said determining the pose of the end joint in the base coordinate system by coordinate transformation comprises:
Acquiring a first transformation matrix between the camera and a tail end coordinate system of the mechanical arm;
and determining the pose of the tail end joint in the base coordinate system according to the inverse matrix corresponding to the first transformation matrix and the target pose of the camera in the base coordinate system.
6. The camera pose adjustment method according to claim 1, wherein the acquiring the target object image data photographed by the camera previously includes:
judging whether a complete target object is presented in the visual field of the camera;
If not, the mechanical arm is adjusted to enable the complete target object to be displayed in the visual field of the camera.
7. A method of spatial registration, comprising:
automatic camera positioning is realized according to the camera pose adjustment method of any one of claims 1 to 6;
capturing a target object image by a camera;
Registration of the target object with the planning image is achieved through the target object image.
8. A spatial registration system comprising a mechanical arm, a camera, and a processor, wherein the camera is mounted at a distal end of the mechanical arm, the processor is coupled to the mechanical arm and the camera, respectively, and the processor is operative to perform the spatial registration method of claim 7.
9. A computer-readable storage medium, on which a computer program is stored, characterized in that the computer program, when executed by a processor, implements the camera pose adjustment method according to any one of claims 1 to 6 or the spatial registration method according to claim 7.
CN202111400891.6A 2021-05-10 2021-11-19 Camera pose adjustment method, space registration method, system and storage medium Active CN114098980B (en)

Priority Applications (4)

Application Number Priority Date Filing Date Title
CN202111400891.6A CN114098980B (en) 2021-11-19 2021-11-19 Camera pose adjustment method, space registration method, system and storage medium
EP22806750.0A EP4321121A1 (en) 2021-05-10 2022-05-10 Robot positioning and pose adjustment method and system
PCT/CN2022/092003 WO2022237787A1 (en) 2021-05-10 2022-05-10 Robot positioning and pose adjustment method and system
US18/506,980 US20240075631A1 (en) 2021-05-10 2023-11-10 Methods and systems for positioning robots and adjusting postures

Applications Claiming Priority (1)

Application Number Priority Date Filing Date Title
CN202111400891.6A CN114098980B (en) 2021-11-19 2021-11-19 Camera pose adjustment method, space registration method, system and storage medium

Publications (2)

Publication Number Publication Date
CN114098980A CN114098980A (en) 2022-03-01
CN114098980B true CN114098980B (en) 2024-06-11

Family

ID=80440810

Family Applications (1)

Application Number Title Priority Date Filing Date
CN202111400891.6A Active CN114098980B (en) 2021-05-10 2021-11-19 Camera pose adjustment method, space registration method, system and storage medium

Country Status (1)

Country Link
CN (1) CN114098980B (en)

Families Citing this family (1)

* Cited by examiner, † Cited by third party
Publication number Priority date Publication date Assignee Title
EP4321121A1 (en) * 2021-05-10 2024-02-14 Wuhan United Imaging Healthcare Surgical Technology Co., Ltd. Robot positioning and pose adjustment method and system

Citations (11)

Publication number Priority date Publication date Assignee Title
WO2016187985A1 (en) * 2015-05-28 2016-12-01 中兴通讯股份有限公司 Photographing device, tracking photographing method and system, and computer storage medium
CN110215284A (en) * 2019-06-06 2019-09-10 上海木木聚枞机器人科技有限公司 A kind of visualization system and method
CN110266940A (en) * 2019-05-29 2019-09-20 昆明理工大学 A kind of face-video camera active pose collaboration face faces image acquiring method
CN110382046A (en) * 2019-02-26 2019-10-25 武汉资联虹康科技股份有限公司 A kind of transcranial magnetic stimulation diagnosis and treatment detection system based on camera
CN110377015A (en) * 2018-04-13 2019-10-25 北京三快在线科技有限公司 Robot localization method and robotic positioning device
CN111862180A (en) * 2020-07-24 2020-10-30 三一重工股份有限公司 Camera group pose acquisition method and device, storage medium and electronic equipment
CN112022355A (en) * 2020-09-27 2020-12-04 平安科技(深圳)有限公司 Hand-eye calibration method and device based on computer vision and storage medium
CN112446917A (en) * 2019-09-03 2021-03-05 北京地平线机器人技术研发有限公司 Attitude determination method and device
CN113274130A (en) * 2021-05-14 2021-08-20 上海大学 Markless surgery registration method for optical surgery navigation system
CN113379850A (en) * 2021-06-30 2021-09-10 深圳市银星智能科技股份有限公司 Mobile robot control method, mobile robot control device, mobile robot, and storage medium
CN113397704A (en) * 2021-05-10 2021-09-17 武汉联影智融医疗科技有限公司 Robot positioning method, device and system and computer equipment

Family Cites Families (3)

Publication number Priority date Publication date Assignee Title
CN105357442A (en) * 2015-11-27 2016-02-24 小米科技有限责任公司 Shooting angle adjustment method and device for camera
US20200237459A1 (en) * 2019-01-25 2020-07-30 Biosense Webster (Israel) Ltd. Flexible multi-coil tracking sensor
US20210244485A1 (en) * 2020-02-12 2021-08-12 Medtech S.A. Robotic guided 3d structured light-based camera

Patent Citations (11)

Publication number Priority date Publication date Assignee Title
WO2016187985A1 (en) * 2015-05-28 2016-12-01 中兴通讯股份有限公司 Photographing device, tracking photographing method and system, and computer storage medium
CN110377015A (en) * 2018-04-13 2019-10-25 北京三快在线科技有限公司 Robot localization method and robotic positioning device
CN110382046A (en) * 2019-02-26 2019-10-25 武汉资联虹康科技股份有限公司 A kind of transcranial magnetic stimulation diagnosis and treatment detection system based on camera
CN110266940A (en) * 2019-05-29 2019-09-20 昆明理工大学 A kind of face-video camera active pose collaboration face faces image acquiring method
CN110215284A (en) * 2019-06-06 2019-09-10 上海木木聚枞机器人科技有限公司 A kind of visualization system and method
CN112446917A (en) * 2019-09-03 2021-03-05 北京地平线机器人技术研发有限公司 Attitude determination method and device
CN111862180A (en) * 2020-07-24 2020-10-30 三一重工股份有限公司 Camera group pose acquisition method and device, storage medium and electronic equipment
CN112022355A (en) * 2020-09-27 2020-12-04 平安科技(深圳)有限公司 Hand-eye calibration method and device based on computer vision and storage medium
CN113397704A (en) * 2021-05-10 2021-09-17 武汉联影智融医疗科技有限公司 Robot positioning method, device and system and computer equipment
CN113274130A (en) * 2021-05-14 2021-08-20 上海大学 Markless surgery registration method for optical surgery navigation system
CN113379850A (en) * 2021-06-30 2021-09-10 深圳市银星智能科技股份有限公司 Mobile robot control method, mobile robot control device, mobile robot, and storage medium

Non-Patent Citations (1)

Title
Optical positioning neurosurgical robot system and its spatial registration; Chen Guodong et al.; Chinese Journal of Scientific Instrument (仪器仪表学报); 2007-03-31; Vol. 28, No. 03; pp. 499-502 *

Also Published As

Publication number Publication date
CN114098980A (en) 2022-03-01

Similar Documents

Publication Publication Date Title
CN112964196B (en) Three-dimensional scanning method, system, electronic device and computer equipment
CN110355755B (en) Robot hand-eye system calibration method, device, equipment and storage medium
CN114098980B (en) Camera pose adjustment method, space registration method, system and storage medium
CN110722558B (en) Origin correction method and device for robot, controller and storage medium
CN112659129B (en) Robot positioning method, device and system and computer equipment
CN105496556A (en) High-precision optical positioning system for surgical navigation
EP4209312A1 (en) Error detection method and robot system based on association identification
CN113610741A (en) Point cloud processing method and device based on laser line scanning
CN113172636B (en) Automatic hand-eye calibration method and device and storage medium
CN113768627A (en) Method and device for acquiring receptive field of visual navigator and surgical robot
CN116019562A (en) Robot control system and method
CN115457093B (en) Tooth image processing method and device, electronic equipment and storage medium
CN114063046A (en) Parameter calibration method and device, computer equipment and storage medium
WO2022237787A1 (en) Robot positioning and pose adjustment method and system
CN116817787A (en) Three-dimensional scanning method, three-dimensional scanning system and electronic device
CN116269763A (en) Coordinate conversion relation calibration method and device, operation navigation system and medium
CN113246145B (en) Pose compensation method and system for nuclear industry grabbing equipment and electronic device
CN113974834B (en) Method and device for determining sleeve pose of surgical robot system
CN115705621A (en) Monocular vision real-time distance measurement method and distance measurement system based on embedded platform
CN117754561A (en) Cable grabbing point positioning method, device and robot system
CN110675454A (en) Object positioning method, device and storage medium
CN117953078A (en) Parameter calibration method, processor, tracking scanning system and electronic device
CN118081735A (en) Path simulation method and device based on three-dimensional scanning system and computer equipment
CN114670199B (en) Identification positioning device, system and real-time tracking system
CN115493512B (en) Data processing method, three-dimensional scanning system, electronic device and storage medium

Legal Events

Date Code Title Description
PB01 Publication
PB01 Publication
SE01 Entry into force of request for substantive examination
SE01 Entry into force of request for substantive examination
GR01 Patent grant