CN114098980A - Camera pose adjusting method, space registration method, system and storage medium - Google Patents


Info

Publication number
CN114098980A
CN114098980A (application CN202111400891.6A)
Authority
CN
China
Prior art keywords
camera
pose
coordinate system
target object
determining
Prior art date
Legal status (The legal status is an assumption and is not a legal conclusion. Google has not performed a legal analysis and makes no representation as to the accuracy of the status listed.)
Granted
Application number
CN202111400891.6A
Other languages
Chinese (zh)
Other versions
CN114098980B (en)
Inventor
吴童 (Wu Tong)
Current Assignee (The listed assignees may be inaccurate. Google has not performed a legal analysis and makes no representation or warranty as to the accuracy of the list.)
Wuhan United Imaging Zhirong Medical Technology Co Ltd
Original Assignee
Wuhan United Imaging Zhirong Medical Technology Co Ltd
Priority date (The priority date is an assumption and is not a legal conclusion. Google has not performed a legal analysis and makes no representation as to the accuracy of the date listed.)
Filing date
Publication date
Application filed by Wuhan United Imaging Zhirong Medical Technology Co Ltd
Priority to CN202111400891.6A (granted as CN114098980B)
Priority claimed from CN202111400891.6A
Publication of CN114098980A
Priority to EP22806750.0A (EP4321121A1)
Priority to PCT/CN2022/092003 (WO2022237787A1)
Priority to US18/506,980 (US20240075631A1)
Application granted
Publication of CN114098980B
Active legal status
Anticipated expiration of legal status

Classifications

    • A: HUMAN NECESSITIES
    • A61: MEDICAL OR VETERINARY SCIENCE; HYGIENE
    • A61B: DIAGNOSIS; SURGERY; IDENTIFICATION
    • A61B34/00: Computer-aided surgery; Manipulators or robots specially adapted for use in surgery
    • A61B34/30: Surgical robots
    • A: HUMAN NECESSITIES
    • A61: MEDICAL OR VETERINARY SCIENCE; HYGIENE
    • A61B: DIAGNOSIS; SURGERY; IDENTIFICATION
    • A61B90/00: Instruments, implements or accessories specially adapted for surgery or diagnosis and not covered by any of the groups A61B1/00 - A61B50/00, e.g. for luxation treatment or for protecting wound edges
    • A61B90/36: Image-producing devices or illumination devices not otherwise provided for
    • A61B90/361: Image-producing devices, e.g. surgical cameras

Landscapes

  • Health & Medical Sciences (AREA)
  • Surgery (AREA)
  • Life Sciences & Earth Sciences (AREA)
  • Engineering & Computer Science (AREA)
  • Heart & Thoracic Surgery (AREA)
  • Biomedical Technology (AREA)
  • Nuclear Medicine, Radiotherapy & Molecular Imaging (AREA)
  • Medical Informatics (AREA)
  • Molecular Biology (AREA)
  • Animal Behavior & Ethology (AREA)
  • General Health & Medical Sciences (AREA)
  • Public Health (AREA)
  • Veterinary Medicine (AREA)
  • Pathology (AREA)
  • Oral & Maxillofacial Surgery (AREA)
  • Robotics (AREA)
  • Manipulator (AREA)

Abstract

The application relates to a camera pose adjusting method, a space registration method, a system, and a storage medium. The camera pose adjusting method includes: acquiring image data of a target object captured by a camera; determining image feature points of the target object from the image data; acquiring the scanning feature points corresponding to the image feature points in a three-dimensional scanning image of a standard human face model; determining the target pose of the camera in a base coordinate system through coordinate transformation according to the image feature points, the scanning feature points, and the camera intrinsic parameters; and adjusting the pose of the camera according to the target pose of the camera in the base coordinate system. In this way the registration camera is automatically positioned at an optimal position, where point cloud data of higher precision can be acquired, which solves the problem of low precision of the acquired point cloud data and improves the precision of the facial structured-light registration scheme for neurosurgery.

Description

Camera pose adjusting method, space registration method, system and storage medium
Technical Field
The present application relates to the field of robots, and in particular, to a camera pose adjustment method, a spatial registration method, a system, and a storage medium.
Background
In recent years, surgical robots have been widely used in medical fields such as orthopedics, neurosurgery, and thoraco-abdominal interventions. A surgical robot generally includes a robot arm with a multi-degree-of-freedom structure; the robot arm may include a base joint, where the robot arm base is located, and an end joint, where the robot arm flange is located. The flange is fixedly connected with an end tool, such as an electrode needle, a puncture needle, an injector, an ablation needle, or another surgical tool.
In neurosurgical robot scenarios, spatial registration between the CT image space and the robot end tool coordinate system is realized through facial structured-light registration. In this registration workflow, a camera is mounted on the robot end tool, the doctor drags the camera until it is aimed at the patient's face, and facial structured-light registration starts once the camera has acquired point cloud data of the face. Because the doctor cannot account for the imaging characteristics of the camera while dragging it, the camera pose yielding the best precision is difficult to reach, and the acquired point cloud data has low precision.
Disclosure of Invention
The embodiment provides a camera pose adjusting method, a space registration method, a system and a storage medium, so as to solve the problem that the accuracy of point cloud data acquired in the related art is low.
In a first aspect, in this embodiment, a camera pose adjusting method is provided, which is applied to a robot system including a robot arm and a camera mounted on the robot arm, and includes:
acquiring target object image data shot by the camera;
determining image characteristic points of the target object according to the target object image data;
acquiring scanning feature points corresponding to the image feature points in a three-dimensional scanning image of a standard human face model;
determining the target pose of the camera in a base coordinate system through coordinate transformation according to the image feature points, the scanning feature points and camera internal parameters;
and adjusting the pose of the camera according to the target pose of the camera in the base coordinate system.
In some embodiments, the determining, according to the image feature points, the scanning feature points, and the camera internal reference, the target pose of the camera in the base coordinate system through coordinate transformation includes:
determining an initial pose of the target object in a camera coordinate system according to the image feature points, the scanning feature points and camera internal parameters;
determining a coordinate transformation relation between the camera and the base coordinate system;
and determining the target pose of the camera in the base coordinate system according to the initial pose and the coordinate transformation relation.
In some of these embodiments, the coordinate transformation relationship between the camera and the base coordinate system is determined by:
acquiring a first transformation matrix between the camera and an end coordinate system of the mechanical arm;
acquiring a second transformation matrix between the end coordinate system of the mechanical arm and the base coordinate system;
and determining the coordinate transformation relation between the camera and the base coordinate system according to the first transformation matrix and the second transformation matrix.
In some of these embodiments, the method further comprises:
determining a preset distance between the target object and the camera;
the determining an initial pose of the target object in a camera coordinate system, then comprising:
determining a third transformation matrix according to the preset distance;
and determining the target pose of the target object in a camera coordinate system at the preset distance according to the third transformation matrix and the initial pose.
In some of these embodiments, the robotic arm further comprises an end joint to which an end tool is attached, and the adjusting the camera pose according to the target pose of the camera in the base coordinate system comprises:
determining the pose of the end joint in the base coordinate system through coordinate transformation according to the target pose of the camera in the base coordinate system;
and adjusting the pose of the camera according to the pose of the end joint in the base coordinate system.
In some of these embodiments, said determining the pose of said end joint in said base coordinate system by coordinate transformation comprises:
acquiring a first transformation matrix between the camera and an end coordinate system of the mechanical arm;
and determining the pose of the end joint in the base coordinate system according to the inverse matrix corresponding to the first transformation matrix and the target pose of the camera in the base coordinate system.
In some of these embodiments, before the obtaining of the target object image data captured by the camera, the method comprises:
judging whether a complete target object is presented in the visual field of the camera;
and if not, adjusting the mechanical arm to enable the complete target object to be presented in the visual field of the camera.
In a second aspect, in this embodiment, a spatial registration method is provided, including:
automatically positioning a camera by using the camera pose adjusting method according to the first aspect;
capturing a target object image by the camera;
and realizing registration of the target object with a planning image through the target object image.
In a third aspect, the present embodiment provides a spatial registration system, which includes a robot arm, a camera and a processor, wherein the camera is mounted at a distal end of the robot arm, the processor is respectively connected to the robot arm and the camera, and the processor executes the spatial registration method according to the second aspect when operating.
In a fourth aspect, in the present embodiment, there is provided a storage medium having stored thereon a computer program that, when executed by a processor, implements the camera pose adjustment method described in the first aspect above or the space registration method described in the second aspect above.
In contrast to the related art, the camera pose adjustment method, space registration method, system, and storage medium provided in this embodiment acquire target object image data taken by the camera, determine the image feature points of the target object according to that image data, acquire the corresponding scanning feature points of the image feature points in the three-dimensional scanning image of the standard human face model, determine the target pose of the camera in a base coordinate system through coordinate transformation according to the image feature points, the scanning feature points, and the camera intrinsic parameters, and adjust the pose of the camera according to that target pose. The registration camera is thereby automatically placed at the optimal position, where point cloud data of higher precision can be acquired, solving the problem of low precision of the acquired point cloud data and improving the precision of the facial structured-light registration scheme in neurosurgery.
The details of one or more embodiments of the application are set forth in the accompanying drawings and the description below to provide a more thorough understanding of the application.
Drawings
The accompanying drawings, which are included to provide a further understanding of the application and are incorporated in and constitute a part of this application, illustrate embodiment(s) of the application and together with the description serve to explain the application and not to limit the application. In the drawings:
fig. 1 is a block diagram of a hardware configuration of an application terminal of a camera pose adjustment method according to an embodiment of the present application;
fig. 2 is a flowchart of a camera pose adjustment method according to an embodiment of the present application;
FIG. 3 is a schematic diagram of a front view of a standard face model according to an embodiment of the present application;
FIG. 4 is a schematic side view of a standard face model according to an embodiment of the present application;
FIG. 5 is a schematic diagram of facial contour points according to an embodiment of the present application;
FIG. 6 is a schematic illustration of a robot arm pose according to an embodiment of the present application;
FIG. 7 is a schematic illustration of another robot arm pose according to an embodiment of the present application;
FIG. 8 is a schematic diagram of a spatial registration system according to an embodiment of the present application;
FIG. 9 is a schematic diagram of another spatial registration system according to an embodiment of the present application;
fig. 10 is a flowchart of another camera pose adjustment method according to an embodiment of the present application;
FIG. 11 is a flow chart of a method of facial structure light registration according to an embodiment of the present application;
FIG. 12 is a schematic diagram of facial feature points acquired before camera pose adjustment according to an embodiment of the present application;
FIG. 13 is a schematic diagram of facial feature points obtained after camera pose adjustment according to an embodiment of the present application;
fig. 14 is a schematic diagram illustrating a pose of a robot arm according to an embodiment of the present application.
Detailed Description
For a clearer understanding of the objects, aspects and advantages of the present application, reference is made to the following description and accompanying drawings.
Unless defined otherwise, technical or scientific terms used herein shall have the same general meaning as commonly understood by one of ordinary skill in the art to which this application belongs. The use of the terms "a" and "an" and "the" and similar referents in the context of this application do not denote a limitation of quantity, either in the singular or the plural. The terms "comprises," "comprising," "has," "having," and any variations thereof, as referred to in this application, are intended to cover non-exclusive inclusions; for example, a process, method, and system, article, or apparatus that comprises a list of steps or modules (elements) is not limited to the listed steps or modules, but may include other steps or modules (elements) not listed or inherent to such process, method, article, or apparatus. Reference throughout this application to "connected," "coupled," and the like is not limited to physical or mechanical connections, but may include electrical connections, whether direct or indirect. Reference to "a plurality" in this application means two or more. "and/or" describes an association relationship of associated objects, meaning that three relationships may exist, for example, "A and/or B" may mean: a exists alone, A and B exist simultaneously, and B exists alone. In general, the character "/" indicates a relationship in which the objects associated before and after are an "or". The terms "first," "second," "third," and the like in this application are used for distinguishing between similar items and not necessarily for describing a particular sequential or chronological order.
The method embodiments provided in the present embodiment may be executed in a terminal, a computer, or a similar computing device. For example, the method is executed on a terminal, and fig. 1 is a block diagram of a hardware structure of the terminal of the camera pose adjustment method according to the embodiment. As shown in fig. 1, the terminal may include one or more processors 102 (only one shown in fig. 1) and a memory 104 for storing data, wherein the processor 102 may include, but is not limited to, a processing device such as a microprocessor MCU or a programmable logic device FPGA. The terminal may also include a transmission device 106 for communication functions and an input-output device 108. It will be understood by those of ordinary skill in the art that the structure shown in fig. 1 is merely an illustration and is not intended to limit the structure of the terminal described above. For example, the terminal may also include more or fewer components than shown in FIG. 1, or have a different configuration than shown in FIG. 1.
The memory 104 may be used to store a computer program, for example, a software program and a module of application software, such as a computer program corresponding to the camera pose adjustment method in the present embodiment, and the processor 102 executes various functional applications and data processing by running the computer program stored in the memory 104, so as to implement the above-described method. The memory 104 may include high speed random access memory, and may also include non-volatile memory, such as one or more magnetic storage devices, flash memory, or other non-volatile solid-state memory. In some examples, the memory 104 may further include memory located remotely from the processor 102, which may be connected to the terminal over a network. Examples of such networks include, but are not limited to, the internet, intranets, local area networks, mobile communication networks, and combinations thereof.
The transmission device 106 is used to receive or transmit data via a network. The network described above includes a wireless network provided by a communication provider of the terminal. In one example, the transmission device 106 includes a Network adapter (NIC) that can be connected to other Network devices through a base station to communicate with the internet. In one example, the transmission device 106 may be a Radio Frequency (RF) module, which is used to communicate with the internet in a wireless manner.
In this embodiment, a camera pose adjusting method is provided, which is applied to a robot system, where the robot system includes a mechanical arm and a camera mounted on the mechanical arm, and fig. 2 is a flowchart of the camera pose adjusting method according to the embodiment of the present application, and as shown in fig. 2, the flowchart includes the following steps:
in step S201, target object image data captured by a camera is acquired.
Step S202, determining image characteristic points of the target object according to the image data of the target object.
Step S203, scanning feature points of the image feature points corresponding to the three-dimensional scanning image of the standard human face model are obtained.
In this embodiment, a standard face model may be created according to the head features of the target object, or it may be downloaded from an open-source website; the manner of obtaining the standard face model is not limited here. Front and side views of the standard face model are shown in fig. 3 and fig. 4, respectively.
and S204, determining the target pose of the camera in the base coordinate system through coordinate transformation according to the image feature points, the scanning feature points and the camera internal parameters.
In this embodiment, the base coordinate system is located on the base of the robot.
And S205, adjusting the pose of the camera according to the target pose of the camera in the base coordinate system.
Through the above steps, the angle by which the face pose of the target object must be adjusted to match the pose of the standard face model is determined from the scanning feature points corresponding to the image feature points of the target object in the standard face model, the current image feature points of the target object, and the camera intrinsic parameters; that is, the target pose of the camera in the base coordinate system is determined through coordinate transformation. The pose of the camera can then be adjusted according to that target pose, so that the registration camera is automatically positioned at the optimal position, where point cloud data of higher precision can be acquired. This solves the problem of low precision of the acquired point cloud data and improves the precision of the facial structured-light registration scheme for neurosurgery.
In some embodiments, the step S204 of determining the target pose of the camera in the base coordinate system through coordinate transformation according to the image feature points, the scanning feature points and the camera internal parameters includes the following steps:
determining an initial pose of the target object in a camera coordinate system according to the image feature points, the scanning feature points and the camera internal parameters;
determining a coordinate transformation relation between the camera and the base coordinate system;
and determining the target pose of the camera in the base coordinate system according to the initial pose and the coordinate transformation relation.
In this embodiment, determining the initial pose of the target object in the camera coordinate system is converted into solving a PnP (Perspective-n-Point) problem, which is the following object positioning problem: assuming the camera is a calibrated pinhole camera, given an image of N spatial points whose coordinates in the object coordinate system are known, and given the coordinates of the N image points, determine the coordinates of the N spatial points in the camera coordinate system.
For example, the image feature points are face contour points of the target object. After the face contour points are recognized, their pixel coordinates can be obtained; denote the pixel coordinates of the $n$ recognized face contour points as $A_i(x_i, y_i)$ $(i = 1, 2, 3, \dots, n)$, and the corresponding standard face model space coordinates as $B_i(X_i, Y_i, Z_i)$ $(i = 1, 2, 3, \dots, n)$, which in this example are preset fixed values. The camera intrinsic parameter is a known matrix $M$. Denote the initial pose of the target object in the camera coordinate system, i.e. the pose transformation matrix from the face coordinate system corresponding to the target object to the camera coordinate system, as $^{c}T_{f} = \begin{bmatrix} R & t \\ 0 & 1 \end{bmatrix}$. The initial pose can then be determined from the pinhole projection relation

$$s_i \begin{bmatrix} x_i \\ y_i \\ 1 \end{bmatrix} = M \begin{bmatrix} R & t \end{bmatrix} \begin{bmatrix} X_i \\ Y_i \\ Z_i \\ 1 \end{bmatrix}, \quad i = 1, 2, \dots, n,$$

where $s_i$ is a scale factor; solving these correspondences for $R$ and $t$ is exactly the PnP problem described above.
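The pinhole projection that the PnP step inverts can be sketched in a few lines of plain Python. The intrinsic parameters (fx, fy, cx, cy) and the pose below are illustrative assumptions, not values from the patent; the sketch only shows how a model point B maps to a pixel A through the intrinsics M and the pose.

```python
# Pinhole projection behind the PnP step (illustrative values throughout).

def matvec(m, v):
    """Multiply matrix m by vector v."""
    return [sum(m[r][c] * v[c] for c in range(len(v))) for r in range(len(m))]

# Camera intrinsic matrix M (assumed example focal lengths / principal point).
fx, fy, cx, cy = 800.0, 800.0, 320.0, 240.0
M = [[fx, 0.0, cx],
     [0.0, fy, cy],
     [0.0, 0.0, 1.0]]

# Pose [R | t] of the face frame in the camera frame: identity rotation,
# face 0.5 m in front of the camera (assumed).
cTf = [[1.0, 0.0, 0.0, 0.0],
       [0.0, 1.0, 0.0, 0.0],
       [0.0, 0.0, 1.0, 0.5]]

def project(B):
    """Project model point B = (X, Y, Z) to pixel coordinates (x, y)."""
    Xc = matvec(cTf, list(B) + [1.0])  # point in the camera frame
    u = matvec(M, Xc)                  # homogeneous pixel coords s*(x, y, 1)
    return (u[0] / u[2], u[1] / u[2])

# A point on the optical axis projects to the principal point:
print(project((0.0, 0.0, 0.0)))  # (320.0, 240.0)
```

A PnP solver searches for the pose that makes these projections match the detected contour pixels for all n correspondences.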
Through the above steps, the angle by which the face pose of the target object must be adjusted in the camera coordinate system to match the pose of the standard face model is determined from the scanning feature points corresponding to the image feature points in the standard face model, the current image feature points of the target object, and the camera intrinsic parameters; that is, the initial pose of the target object in the camera coordinate system is determined through a coordinate transformation. The target pose of the camera in the base coordinate system is then determined from this initial pose, and the pose of the camera is adjusted accordingly, so that the registration camera is automatically positioned at the optimal position, where point cloud data of higher precision can be acquired. This solves the problem of low precision of the acquired point cloud data and improves the precision of the facial structured-light registration scheme for neurosurgery.
In some of these embodiments, determining a coordinate transformation relationship between the camera and the base coordinate system comprises:
acquiring a first transformation matrix between the camera and the end coordinate system of the mechanical arm;
acquiring a second transformation matrix between the end coordinate system of the mechanical arm and the base coordinate system;
and determining the coordinate transformation relation between the camera and the base coordinate system according to the first transformation matrix and the second transformation matrix.
In this embodiment, the first transformation matrix between the camera and the end coordinate system of the mechanical arm may be obtained by a hand-eye calibration algorithm, and the second transformation matrix between the end coordinate system of the mechanical arm and the base coordinate system may be obtained by direct feedback from the controller in the robot system; the manner of obtaining the two matrices is not specifically limited here. Denote the first transformation matrix as $^{e}T_{c}$ and the second transformation matrix as $^{b}T_{e}$. The coordinate transformation relationship $T$ between the camera and the base coordinate system is then determined by

$$T = {}^{b}T_{e} \, {}^{e}T_{c}.$$
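As a concrete sketch of chaining the two matrices, the homogeneous transforms can be multiplied directly. Both matrices below are made-up pure translations standing in for the hand-eye calibration result and the controller-reported end pose.

```python
# Composing the camera-to-base transform from two 4x4 homogeneous matrices.

def matmul(a, b):
    """Multiply two 4x4 matrices."""
    return [[sum(a[i][k] * b[k][j] for k in range(4)) for j in range(4)]
            for i in range(4)]

# Second transformation matrix bTe: end joint 0.8 m above the base (assumed).
bTe = [[1.0, 0.0, 0.0, 0.0],
       [0.0, 1.0, 0.0, 0.0],
       [0.0, 0.0, 1.0, 0.8],
       [0.0, 0.0, 0.0, 1.0]]

# First transformation matrix eTc: camera 0.1 m along the end z-axis (assumed).
eTc = [[1.0, 0.0, 0.0, 0.0],
       [0.0, 1.0, 0.0, 0.0],
       [0.0, 0.0, 1.0, 0.1],
       [0.0, 0.0, 0.0, 1.0]]

# Coordinate transformation relation between camera and base coordinate system.
bTc = matmul(bTe, eTc)
# The camera ends up 0.8 + 0.1 = 0.9 m above the base.
```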
By the above method, the coordinate transformation relationship between the camera and the base coordinate system can be determined accurately. The target pose of the camera in the base coordinate system is then determined according to this relationship, so that the pose of the camera can be adjusted accordingly and the registration camera is automatically positioned at the optimal position, where point cloud data of higher precision can be acquired. This solves the problem of low precision of the acquired point cloud data and improves the precision of the facial structured-light registration scheme for neurosurgery.
Based on the fact that the distance between the target object and the camera affects the accuracy of the acquired point cloud data, the application provides a method for determining the distance between the target object and the camera, and the method comprises the following steps:
if a complete target object is presented in the field of view of the camera, the camera is dragged along its optical axis Zc to obtain point cloud data of the target object captured by the camera at a series of distances, the distance being measured between the camera and the target face, so that face point cloud data is obtained at each distance;
and the optimal distance between the target object and the camera is determined according to the precision and quality of the face point cloud data at each distance.
By the method, the optimal distance between the target object and the camera can be accurately determined, so that the accuracy of the obtained point cloud data can be further improved.
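The distance-selection step amounts to keeping the sampled distance whose point cloud scores best. A toy sketch follows; the distance/error pairs are fabricated placeholders, not measured data.

```python
# Selecting the optimal camera-to-face distance from sampled point clouds.
# Keys are candidate distances (m); values are a point-cloud error score (mm).
# All numbers are illustrative placeholders.
samples = {0.35: 0.62, 0.45: 0.31, 0.55: 0.40, 0.65: 0.58}

# The optimal distance is the one whose cloud has the lowest error.
optimal_distance = min(samples, key=samples.get)
print(optimal_distance)  # 0.45: the lowest-error distance in this toy data
```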
In some of these embodiments, the camera pose adjustment method further includes:
determining a preset distance between a target object and a camera;
in this embodiment, when the distance between the target object and the camera is the preset distance, the better point cloud data can be acquired, and for example, the preset distance in this embodiment is the optimal distance determined in the above manner.
Step S2040, after the initial pose of the target object in the camera coordinate system is determined, the method then includes:
determining a third transformation matrix according to the preset distance;
and determining the target pose of the target object in the camera coordinate system at the preset distance according to the third transformation matrix and the initial pose.
In this embodiment, assuming the preset distance is $H$, the third transformation matrix $P$ is the homogeneous translation of $H$ along the camera optical axis $Z_c$:

$$P = \begin{bmatrix} 1 & 0 & 0 & 0 \\ 0 & 1 & 0 & 0 \\ 0 & 0 & 1 & H \\ 0 & 0 & 0 & 1 \end{bmatrix}.$$

Writing the initial pose of the target object in the camera coordinate system as $^{c}T_{f} = \begin{bmatrix} R & t \\ 0 & 1 \end{bmatrix}$, the target pose of the target object in the camera coordinate system at the preset distance keeps the rotation component $R$ of the initial pose and places the target object on the optical axis at distance $H$ from the camera:

$$^{c}T_{f}^{\,target} = P \begin{bmatrix} R & 0 \\ 0 & 1 \end{bmatrix} = \begin{bmatrix} R & (0, 0, H)^{T} \\ 0 & 1 \end{bmatrix}.$$
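A sketch of computing this preset-distance target pose, under the assumption that the third transformation matrix P is a pure translation of the preset distance H along the camera optical axis Zc and that the rotation of the initial pose is retained. H and the initial-pose values are illustrative.

```python
import math

def matmul(a, b):
    """Multiply two 4x4 matrices."""
    return [[sum(a[i][k] * b[k][j] for k in range(4)) for j in range(4)]
            for i in range(4)]

H = 0.45  # assumed preset (optimal) camera-to-face distance in metres

# Illustrative initial pose cTf: 30-degree rotation about x, arbitrary offset.
c, s = math.cos(math.radians(30.0)), math.sin(math.radians(30.0))
cTf = [[1.0, 0.0, 0.0,  0.05],
       [0.0,   c,  -s, -0.02],
       [0.0,   s,   c,  0.60],
       [0.0, 0.0, 0.0,  1.0]]

# Third transformation matrix P: translate H along the optical axis Zc.
P = [[1.0, 0.0, 0.0, 0.0],
     [0.0, 1.0, 0.0, 0.0],
     [0.0, 0.0, 1.0, H],
     [0.0, 0.0, 0.0, 1.0]]

# Keep only the rotation block of the initial pose, then apply P.
R_only = [row[:3] + [0.0] for row in cTf[:3]] + [[0.0, 0.0, 0.0, 1.0]]
target = matmul(P, R_only)
# target has the same rotation as cTf and translation (0, 0, H).
```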
By the method, the target pose of the target object in the camera coordinate system is determined under the preset distance, namely when the distance between the target object and the camera is the optimal distance, the target pose of the target object in the camera coordinate system is determined, and after the camera pose is adjusted according to the target pose, more accurate point cloud data can be acquired, so that the precision of the registration scheme of the facial structured light in the neurosurgery is improved.
In some embodiments, the mechanical arm further includes an end joint to which an end tool is attached, and the adjusting of the camera pose according to the target pose of the camera in the base coordinate system in step S205 includes:
determining the pose of the end joint in the base coordinate system through coordinate transformation according to the target pose of the camera in the base coordinate system;
and adjusting the pose of the camera according to the pose of the end joint in the base coordinate system.
By the above method, the pose of the end joint in the base coordinate system can be determined accurately, so that the camera pose can be adjusted accurately by adjusting the pose of the end joint; that is, the camera reaches the optimal target pose through the end-joint adjustment. The registration camera is thereby automatically positioned at the optimal position, where point cloud data of higher precision can be acquired, which solves the problem of low precision of the acquired point cloud data and improves the precision of the facial structured-light registration scheme for neurosurgery.
In some of these embodiments, determining the pose of the end joint in the base coordinate system by a coordinate transformation comprises:
acquiring the first transformation matrix between the camera and the end coordinate system of the mechanical arm;
and determining the pose of the end joint in the base coordinate system according to the inverse matrix corresponding to the first transformation matrix and the target pose of the camera in the base coordinate system.
In this embodiment, denote the first transformation matrix between the camera and the end coordinate system of the mechanical arm as T_end_cam, and the target pose of the camera in the base coordinate system as T_base_cam. The pose of the end joint in the base coordinate system, T_base_end, can then be determined by:

T_base_end = T_base_cam · (T_end_cam)^(-1)
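As a concrete sketch, the relation in this embodiment reduces to a single matrix product. The function below is a minimal illustration (names and frame conventions are my own, not the patent's), assuming the first transformation matrix expresses the camera pose in the end-joint frame, so that the camera's base-frame pose is the end-joint pose composed with it:

```python
import numpy as np

def end_joint_pose(T_base_cam, T_end_cam):
    """Recover the end-joint pose in the base frame from the camera's
    target pose: T_base_end = T_base_cam @ inv(T_end_cam).
    T_end_cam is the fixed hand-eye matrix (camera pose expressed in
    the end-joint frame); both arguments are 4x4 homogeneous transforms."""
    return T_base_cam @ np.linalg.inv(T_end_cam)
```

A quick sanity check of the convention: composing the returned end-joint pose with T_end_cam must reproduce the camera's target pose in the base frame.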
As above, determining the end joint pose in this way allows the camera to be driven accurately to the optimal target pose, so that higher-precision point cloud data can be acquired and the precision of the facial structured light registration scheme in neurosurgery is improved.
It can be understood that after the doctor installs the camera, the mechanical arm is in a random initial pose, as shown in fig. 6, and the patient's complete face is not yet presented in the camera's field of view. The pose of the end of the mechanical arm therefore needs to be adjusted locally first, so that the patient's complete facial feature information appears in the field of view. In this embodiment, the mechanical arm is adjusted as follows; that is, acquiring the image data of the target object captured by the camera is preceded by:
judging whether a complete target object is presented in the visual field of the camera;
if not, the mechanical arm is adjusted to enable the complete target object to be presented in the visual field of the camera.
In this embodiment, the doctor may drag the camera at the end of the arm, or adjust the mechanical arm according to another scheme, so that the complete target object appears in the field of view of the camera.
In this way, the complete target object is presented in the field of view of the camera before the image data of the target object is acquired, so the point cloud data of the target object can be acquired more accurately, solving the problem of low precision of the acquired point cloud data and improving the precision of the facial structured light registration scheme in neurosurgery.
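The visibility judgment above can be sketched as a simple bounds test; the function and parameter names here are assumptions for illustration, and a real system would obtain the 2D feature points from a face detector:

```python
def face_fully_in_view(landmarks, img_w, img_h, margin=10):
    """Heuristic sketch (names assumed, not from the patent): the target
    object counts as fully presented when every detected 2D feature point
    lies inside the image with a small pixel margin; otherwise the arm
    end should be adjusted and the check repeated before capturing."""
    return all(margin <= u < img_w - margin and
               margin <= v < img_h - margin
               for u, v in landmarks)
```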
In addition, the present application also provides a spatial registration method, including:
the automatic positioning of the camera is realized according to the camera pose adjusting method;
capturing a target object image by a camera;
achieving registration of the target object with the planning image through the target object image.
In this way, automatic positioning of the camera is realized more accurately by the camera pose adjustment method, solving the problem of low precision of the acquired point cloud data and improving the precision of the facial structured light registration scheme in neurosurgery.
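The final registration step is not detailed here; one common approach (an assumption for illustration, not the patent's prescribed method) is a least-squares rigid alignment between corresponding points of the captured point cloud and the planning image, e.g. the Kabsch/SVD solution:

```python
import numpy as np

def rigid_register(src, dst):
    """Least-squares rigid transform (Kabsch/SVD) aligning Nx3 source
    points to Nx3 destination points with known correspondences -- one
    way to register captured target-object points to planning-image
    points. Returns (R, t) with dst ~= src @ R.T + t."""
    src_c, dst_c = src.mean(0), dst.mean(0)
    H = (src - src_c).T @ (dst - dst_c)          # cross-covariance matrix
    U, _, Vt = np.linalg.svd(H)
    # Sign correction keeps R a proper rotation (no reflection).
    D = np.diag([1.0, 1.0, np.sign(np.linalg.det(Vt.T @ U.T))])
    R = Vt.T @ D @ U.T
    t = dst_c - R @ src_c
    return R, t
```

In practice the correspondences come from matched feature points or an iterative scheme such as ICP, which uses this closed-form step internally.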
For example, the above spatial registration method may be applied to a spatial registration system as shown in fig. 7. The system includes a camera and a mechanical arm; the camera is rigidly mounted at the tool flange at the end of the mechanical arm, the rigid-body transformation matrix between the camera and the mechanical arm is a known parameter, and the camera is aimed at the patient's face to acquire point cloud data for spatial registration.
By way of example, the present application further proposes a spatial registration system, as shown in fig. 9, comprising a robot arm 91, a camera 92 and a processor 93, wherein the camera 92 is mounted at an end of the robot arm, the processor 93 is connected to the robot arm 91 and the camera 92, respectively, and the processor 93 executes the spatial registration method.
This embodiment also provides a camera pose adjustment method. Fig. 10 is a flowchart of another camera pose adjustment method according to an embodiment of the present application, applied to a robot system including a mechanical arm and a camera mounted on the mechanical arm. As shown in fig. 10, the flow includes the following steps:
in step S1001, target object image data captured by the camera is acquired.
Step S1002 determines image feature points of the target object based on the target object image data.
Step S1003, acquiring scanning feature points corresponding to the image feature points in a three-dimensional scanning image of the standard face model.
And step S1004, determining the initial pose of the target object in the camera coordinate system according to the image feature points, the scanning feature points and the camera internal parameters.
In step S1005, a coordinate transformation relationship between the camera and the base coordinate system is determined.
And step S1006, determining the target pose of the camera in the base coordinate system according to the initial pose and the coordinate transformation relation.
And step S1007, adjusting the pose of the camera according to the target pose of the camera in the base coordinate system.
Through the above steps, the angle by which the face pose of the target object must be adjusted to reach the pose of the standard face model in the camera coordinate system is determined from the scanning feature points corresponding to the image feature points of the target object in the standard face model, the current image feature points of the target object, and the camera intrinsics; that is, the initial pose of the target object in the camera coordinate system is determined by coordinate transformation. The target pose of the camera in the base coordinate system is then determined from this initial pose, and the camera pose is adjusted accordingly. This realizes automatic positioning of the registration camera to the optimal position, where higher-precision point cloud data can be acquired, solving the problem of low precision of the acquired point cloud data and improving the precision of the facial structured light registration scheme in neurosurgery.
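Step S1004 is a perspective-n-point (PnP) problem. The sketch below is a minimal DLT-based illustration under assumed conventions (a production system would typically use a dedicated solver such as OpenCV's solvePnP); it recovers the object-to-camera pose from noise-free 2D-3D feature correspondences:

```python
import numpy as np

def estimate_pose_dlt(pts3d, pts2d, K):
    """Minimal DLT sketch of the PnP step: recover the target object's
    initial pose in the camera frame from >= 6 correspondences between
    3D scan feature points (pts3d, Nx3) and 2D image feature points
    (pts2d, Nx2 pixels), given the intrinsic matrix K (3x3)."""
    A = []
    for (X, Y, Z), (u, v) in zip(pts3d, pts2d):
        A.append([X, Y, Z, 1, 0, 0, 0, 0, -u * X, -u * Y, -u * Z, -u])
        A.append([0, 0, 0, 0, X, Y, Z, 1, -v * X, -v * Y, -v * Z, -v])
    _, _, Vt = np.linalg.svd(np.asarray(A))   # null vector = projection P
    P = Vt[-1].reshape(3, 4)
    Rt = np.linalg.inv(K) @ P                 # strip intrinsics: s * [R|t]
    Rt /= np.mean(np.linalg.norm(Rt[:, :3], axis=1))   # fix |scale|
    if Rt[2] @ np.append(pts3d[0], 1.0) < 0:  # enforce positive depth
        Rt = -Rt
    U, _, Vt2 = np.linalg.svd(Rt[:, :3])      # project onto SO(3)
    R = U @ Vt2
    if np.linalg.det(R) < 0:                  # guard against reflections
        R = U @ np.diag([1.0, 1.0, -1.0]) @ Vt2
    T = np.eye(4)
    T[:3, :3], T[:3, 3] = R, Rt[:, 3]
    return T
```

With noisy real feature detections, a robust solver with RANSAC and nonlinear refinement would be preferred over this plain linear estimate.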
It should be noted that the steps illustrated in the above flowcharts may be performed in a computer system, for example as a set of computer-executable instructions, and that, although a logical order is shown in the flowcharts, in some cases the steps may be performed in an order different from the one shown here. For example, step S1004 and step S1005 may be interchanged.
A facial structured light registration method is also provided in this embodiment. Fig. 11 is a flowchart of a facial structured light registration method according to an embodiment of the present application, applied to a robot system including a mechanical arm and a camera mounted on the mechanical arm, the mechanical arm further including an end joint, i.e. the joint at which the connecting member for the end tool is located. As shown in fig. 11, the flow includes the following steps:
in step S1101, the process proceeds to the facial structure light registration module.
Step S1102, adjusting the pose of the end of the mechanical arm.
In this embodiment, after the doctor installs the camera, the mechanical arm is in a random initial pose, as shown in fig. 6, and the patient's complete face does not yet appear in the camera's field of view; the pose of the end of the mechanical arm therefore needs to be adjusted locally first so that the patient's complete facial feature information appears in the field of view.
In step S1103, it is determined whether a complete face contour can be recognized.
In this embodiment, if a complete face contour can be recognized, the process proceeds to step S1104, otherwise, the process proceeds to step S1102.
In step S1104, facial feature recognition is performed to obtain pixel coordinates of facial feature points.
Step S1105, obtaining the scanning feature points of the face feature points in the three-dimensional scanning image of the standard face model.
Step S1106, determining the target pose of the end joint in the base coordinate system according to the pixel coordinates of the facial feature points and the space coordinates of the scanning feature points.
In the present embodiment, as shown in fig. 14, the current camera coordinate system is camera_link, the face coordinate system calculated by formula (1) is face_link, and the camera coordinate system at the optimal distance H between the target object and the camera is face_link_view; the positioning motion of the camera is therefore from camera_link to face_link_view. Denote the target pose of the end joint of the mechanical arm in the base coordinate system after positioning as T_base_end', the initial pose of the current end joint in the base coordinate system as T_base_end, the pose of the face in the camera coordinate system at the optimal distance H as T_cam_face_view, and the matrix between the camera and the end coordinate system of the mechanical arm as T_end_cam. T_cam_face_view is determined by formula (4), and T_end_cam can be obtained by a hand-eye calibration algorithm. The target pose of the end joint of the mechanical arm after positioning is then determined by formula (5) and formula (6):

T_base_cam' = T_base_end · T_end_cam · T_cam_face_view (5)

T_base_end' = T_base_cam' · (T_end_cam)^(-1) (6)
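Under the shorthand T_base_end (current end-joint pose in the base frame), T_end_cam (hand-eye matrix, camera pose in the end frame), and T_cam_face_view (face-view camera frame at distance H) — all names are my own notation for the quantities in this embodiment, not the patent's — the positioning computation of formulas (5) and (6) can be sketched as:

```python
import numpy as np

def positioning_target(T_base_end, T_end_cam, T_cam_face_view):
    """The camera's pose after positioning is its current base-frame pose
    (T_base_end @ T_end_cam) composed with the motion from camera_link to
    face_link_view; the end-joint target pose then follows by removing
    the fixed hand-eye offset. All inputs are 4x4 homogeneous transforms."""
    T_base_cam_new = T_base_end @ T_end_cam @ T_cam_face_view   # formula (5)
    return T_base_cam_new @ np.linalg.inv(T_end_cam)            # formula (6)
```

By construction, composing the returned end-joint target with T_end_cam reproduces the camera's post-positioning pose in the base frame.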
Step S1107, the pose of the current end joint in the base coordinate system is adjusted to the target pose of the end joint in the base coordinate system.
In the present embodiment, the initial pose of the current end joint in the base coordinate system and the target pose of the end joint in the base coordinate system are determined by the camera pose adjustment method described above. Fig. 12 is a schematic diagram of the initial pose of the end joint in the base coordinate system, and fig. 13 is a schematic diagram of its target pose; as can be seen from figs. 12 and 13, the point cloud data acquired at the target pose is more accurate than that acquired at the initial pose.
In step S1108, facial structure light registration is started.
Through the above steps, the angle by which the face pose of the target object must be adjusted to reach the pose of the standard face model in the base coordinate system is determined from the scanning feature points corresponding to the image feature points of the target object in the standard face model and the current image feature points of the target object; that is, the target pose of the end joint in the base coordinate system is determined. The camera pose can then be adjusted according to this target pose, realizing automatic positioning of the registration camera to the optimal position, where higher-precision point cloud data can be acquired, solving the problem of low precision of the acquired point cloud data and improving the precision of the facial structured light registration scheme in neurosurgery.
There is also provided in this embodiment an electronic device comprising a memory having a computer program stored therein and a processor arranged to run the computer program to perform the steps of any of the above method embodiments.
Optionally, the electronic apparatus may further include a transmission device and an input/output device, wherein the transmission device is connected to the processor, and the input/output device is connected to the processor.
Optionally, in this embodiment, the processor may be configured to execute the following steps by a computer program:
target object image data captured by a camera is acquired.
And determining the image characteristic points of the target object according to the image data of the target object.
And acquiring scanning characteristic points of the image characteristic points corresponding to the three-dimensional scanning image of the standard human face model.
And determining the target pose of the camera in the base coordinate system through coordinate transformation according to the image feature points, the scanning feature points and the camera internal parameters.
And adjusting the pose of the camera according to the target pose of the camera in the base coordinate system.
It should be noted that, for specific examples in this embodiment, reference may be made to the examples described in the foregoing embodiments and optional implementations, and details are not described again in this embodiment.
In addition, in combination with the camera pose adjustment method provided in the above embodiments, a storage medium may also be provided in this embodiment. The storage medium has a computer program stored thereon; when executed by a processor, the computer program implements any of the camera pose adjustment methods in the above embodiments.
It should be understood that the specific embodiments described herein are merely illustrative of this application and are not intended to be limiting. All other embodiments, which can be derived by a person skilled in the art from the examples provided herein without any inventive step, shall fall within the scope of protection of the present application.
It is obvious that the drawings are only examples or embodiments of the present application, and it is obvious to those skilled in the art that the present application can be applied to other similar cases according to the drawings without creative efforts. Moreover, it should be appreciated that in the development of any such actual implementation, as in any engineering or design project, numerous implementation-specific decisions must be made to achieve the developers' specific goals, such as compliance with system-related and business-related constraints, which may vary from one implementation to another.
The term "embodiment" is used herein to mean that a particular feature, structure, or characteristic described in connection with the embodiment can be included in at least one embodiment of the present application. The appearances of such phrases in various places in the specification are not necessarily all referring to the same embodiment, nor are separate or alternative embodiments mutually exclusive of other embodiments. It is to be expressly or implicitly understood by one of ordinary skill in the art that the embodiments described in this application may be combined with other embodiments without conflict.
The above examples only express several embodiments of the present application, and the description thereof is more specific and detailed, but not construed as limiting the scope of the patent protection. It should be noted that, for a person skilled in the art, several variations and modifications can be made without departing from the concept of the present application, which falls within the scope of protection of the present application. Therefore, the protection scope of the present application shall be subject to the appended claims.

Claims (10)

1. A camera pose adjustment method applied to a robot system, the robot system comprising a mechanical arm and a camera mounted on the mechanical arm, the method comprising:
acquiring target object image data shot by the camera;
determining image characteristic points of the target object according to the target object image data;
acquiring scanning feature points corresponding to the image feature points in a three-dimensional scanning image of a standard human face model;
determining the target pose of the camera in a base coordinate system through coordinate transformation according to the image feature points, the scanning feature points and camera internal parameters;
and adjusting the pose of the camera according to the target pose of the camera in the base coordinate system.
2. The camera pose adjustment method according to claim 1, wherein the determining the target pose of the camera in the base coordinate system by coordinate transformation based on the image feature points, the scan feature points, and the camera internal parameters comprises:
determining an initial pose of the target object in a camera coordinate system according to the image feature points, the scanning feature points and camera internal parameters;
determining a coordinate transformation relation between the camera and the base coordinate system;
and determining the target pose of the camera in the base coordinate system according to the initial pose and the coordinate transformation relation.
3. The camera pose adjustment method according to claim 2, characterized in that the coordinate transformation relationship between the camera and the base coordinate system is determined by:
acquiring a first transformation matrix between the camera and an end coordinate system of the mechanical arm;
acquiring a second transformation matrix between the end coordinate system of the mechanical arm and the base coordinate system;
and determining the coordinate transformation relation between the camera and the base coordinate system according to the first transformation matrix and the second transformation matrix.
4. The camera pose adjustment method according to claim 2, characterized by further comprising:
determining a preset distance between the target object and the camera;
wherein after the determining an initial pose of the target object in a camera coordinate system, the method further comprises:
determining a third transformation matrix according to the preset distance;
and determining the target pose of the target object in a camera coordinate system at the preset distance according to the third transformation matrix and the initial pose.
5. The camera pose adjustment method according to claim 1, wherein the mechanical arm further includes an end joint at which a connecting member for an end tool is located, and the adjusting the camera pose according to the target pose of the camera in the base coordinate system includes:
determining the pose of the end joint in the base coordinate system through coordinate transformation according to the target pose of the camera in the base coordinate system;
and adjusting the pose of the camera according to the pose of the end joint in the base coordinate system.
6. The camera pose adjustment method according to claim 5, wherein the determining the pose of the end joint in the base coordinate system by coordinate transformation includes:
acquiring a first transformation matrix between the camera and an end coordinate system of the mechanical arm;
and determining the pose of the end joint in the base coordinate system according to the inverse matrix of the first transformation matrix and the target pose of the camera in the base coordinate system.
7. The camera pose adjustment method according to claim 1, wherein before the acquiring target object image data taken by the camera, the method comprises:
judging whether a complete target object is presented in the visual field of the camera;
and if not, adjusting the mechanical arm to enable the complete target object to be presented in the visual field of the camera.
8. A spatial registration method, comprising:
realizing automatic camera positioning according to the camera pose adjustment method of any one of claims 1 to 7;
capturing a target object image by a camera;
achieving registration of the target object with a planning image through the target object image.
9. A spatial registration system comprising a robotic arm, a camera and a processor, wherein the camera is mounted at an end of the robotic arm, the processor is coupled to the robotic arm and the camera, respectively, and the processor is operative to perform the spatial registration method of claim 8.
10. A computer-readable storage medium on which a computer program is stored, the computer program, when executed by a processor, implementing the camera pose adjustment method of any one of claims 1 to 7 or the space registration method of claim 8.
CN202111400891.6A 2021-05-10 2021-11-19 Camera pose adjustment method, space registration method, system and storage medium Active CN114098980B (en)

Priority Applications (4)

Application Number Priority Date Filing Date Title
CN202111400891.6A CN114098980B (en) 2021-11-19 Camera pose adjustment method, space registration method, system and storage medium
EP22806750.0A EP4321121A1 (en) 2021-05-10 2022-05-10 Robot positioning and pose adjustment method and system
PCT/CN2022/092003 WO2022237787A1 (en) 2021-05-10 2022-05-10 Robot positioning and pose adjustment method and system
US18/506,980 US20240075631A1 (en) 2021-05-10 2023-11-10 Methods and systems for positioning robots and adjusting postures

Applications Claiming Priority (1)

Application Number Priority Date Filing Date Title
CN202111400891.6A CN114098980B (en) 2021-11-19 Camera pose adjustment method, space registration method, system and storage medium

Publications (2)

Publication Number Publication Date
CN114098980A true CN114098980A (en) 2022-03-01
CN114098980B CN114098980B (en) 2024-06-11


Cited By (1)

* Cited by examiner, † Cited by third party
Publication number Priority date Publication date Assignee Title
WO2022237787A1 (en) * 2021-05-10 2022-11-17 武汉联影智融医疗科技有限公司 Robot positioning and pose adjustment method and system

Citations (14)

* Cited by examiner, † Cited by third party
Publication number Priority date Publication date Assignee Title
WO2016187985A1 (en) * 2015-05-28 2016-12-01 中兴通讯股份有限公司 Photographing device, tracking photographing method and system, and computer storage medium
US20170155829A1 (en) * 2015-11-27 2017-06-01 Xiaomi Inc. Methods, Apparatuses, and Storage Mediums for Adjusting Camera Shooting Angle
CN110215284A (en) * 2019-06-06 2019-09-10 上海木木聚枞机器人科技有限公司 A kind of visualization system and method
CN110266940A (en) * 2019-05-29 2019-09-20 昆明理工大学 A kind of face-video camera active pose collaboration face faces image acquiring method
CN110382046A (en) * 2019-02-26 2019-10-25 武汉资联虹康科技股份有限公司 A kind of transcranial magnetic stimulation diagnosis and treatment detection system based on camera
CN110377015A (en) * 2018-04-13 2019-10-25 北京三快在线科技有限公司 Robot localization method and robotic positioning device
US20200242339A1 (en) * 2019-01-25 2020-07-30 Biosense Webster (Israel) Ltd. Registration of frames of reference
CN111862180A (en) * 2020-07-24 2020-10-30 三一重工股份有限公司 Camera group pose acquisition method and device, storage medium and electronic equipment
CN112022355A (en) * 2020-09-27 2020-12-04 平安科技(深圳)有限公司 Hand-eye calibration method and device based on computer vision and storage medium
CN112446917A (en) * 2019-09-03 2021-03-05 北京地平线机器人技术研发有限公司 Attitude determination method and device
US20210244485A1 (en) * 2020-02-12 2021-08-12 Medtech S.A. Robotic guided 3d structured light-based camera
CN113274130A (en) * 2021-05-14 2021-08-20 上海大学 Markless surgery registration method for optical surgery navigation system
CN113379850A (en) * 2021-06-30 2021-09-10 深圳市银星智能科技股份有限公司 Mobile robot control method, mobile robot control device, mobile robot, and storage medium
CN113397704A (en) * 2021-05-10 2021-09-17 武汉联影智融医疗科技有限公司 Robot positioning method, device and system and computer equipment


Non-Patent Citations (1)

* Cited by examiner, † Cited by third party
Title
CHEN Guodong et al.: "Optical positioning neurosurgery robot system and its space registration", Chinese Journal of Scientific Instrument, vol. 28, no. 03, 31 March 2007 (2007-03-31), pages 499 - 502 *


Similar Documents

Publication Publication Date Title
KR102018565B1 (en) Method, apparatus and program for constructing surgical simulation information
RU2478980C2 (en) System and method for automatic calibration of tracked ultrasound
CN111513850B (en) Guide device, puncture needle adjustment method, storage medium, and electronic apparatus
CN107320124A (en) The method and medical image system of spacer scanning are set in medical image system
CN106308946A (en) Augmented reality device applied to stereotactic surgical robot and method of augmented reality device
CN112669371A (en) Moxibustion robot control device, system, equipment and storage medium
CN113601503A (en) Hand-eye calibration method and device, computer equipment and storage medium
CN114147728B (en) Universal robot eye on-hand calibration method and system
CN110332930B (en) Position determination method, device and equipment
JP2023022123A (en) System for providing determination of guidance signal and guidance for hand held ultrasonic transducer
CN112382359A (en) Patient registration method and device, electronic equipment and computer readable medium
Meng et al. An automatic markerless registration method for neurosurgical robotics based on an optical camera
CN113768627A (en) Method and device for acquiring receptive field of visual navigator and surgical robot
US20240075631A1 (en) Methods and systems for positioning robots and adjusting postures
CN112043359B (en) Mammary gland puncture method, device, equipment and storage medium
CN114098980A (en) Camera pose adjusting method, space registration method, system and storage medium
CN113344926A (en) Method, device, server and storage medium for recognizing biliary-pancreatic ultrasonic image
CN114098980B (en) Camera pose adjustment method, space registration method, system and storage medium
CN116019562A (en) Robot control system and method
CN113855240B (en) Medical image registration system and method based on magnetic navigation
WO2022183372A1 (en) Control method, control apparatus, and terminal device
CN113246145A (en) Pose compensation method and system for nuclear industry grabbing equipment and electronic device
CN117754561A (en) Cable grabbing point positioning method, device and robot system
CN112790786A (en) Point cloud data registration method and device, ultrasonic equipment and storage medium
CN118081735A (en) Path simulation method and device based on three-dimensional scanning system and computer equipment

Legal Events

Date Code Title Description
PB01 Publication
PB01 Publication
SE01 Entry into force of request for substantive examination
SE01 Entry into force of request for substantive examination
GR01 Patent grant